Method and system for an on-chip AC self-test controller
Flanagan, John D [Rhinebeck, NY]; Herring, Jay R [Poughkeepsie, NY]; Lo, Tin-Chee [Fishkill, NY]
2008-09-30
A method and system for performing AC self-test on an integrated circuit that includes a system clock for use during normal operation are provided. The method includes applying a long data capture pulse to a first test register in response to the system clock, applying an at speed data launch pulse to the first test register in response to the system clock, inputting the data from the first register to a logic path in response to applying the at speed data launch pulse to the first test register, applying an at speed data capture pulse to a second test register in response to the system clock, inputting the logic path output to the second test register in response to applying the at speed data capture pulse to the second test register, and applying a long data launch pulse to the second test register in response to the system clock.
Method and system for an on-chip AC self-test controller
Flanagan, John D.; Herring, Jay R.; Lo, Tin-Chee
2006-06-06
A method for performing AC self-test on an integrated circuit that includes a system clock for use during normal operation. The method includes applying a long data capture pulse to a first test register in response to the system clock, and further applying an at speed data launch pulse to the first test register in response to the system clock; inputting the data from the first register to a logic path in response to applying the at speed data launch pulse to the first test register; applying an at speed data capture pulse to a second test register in response to the system clock; inputting the output from the logic path to the second test register in response to applying the at speed data capture pulse to the second register; and applying a long data launch pulse to the second test register in response to the system clock.
Equating Scores from Adaptive to Linear Tests
ERIC Educational Resources Information Center
van der Linden, Wim J.
2006-01-01
Two local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test…
Restricted random search method based on taboo search in the multiple minima problem
NASA Astrophysics Data System (ADS)
Hong, Seung Do; Jhon, Mu Shik
1997-03-01
The restricted random search method is proposed as a simple Monte Carlo sampling method for quickly locating minima in the multiple-minima problem. The method is based on taboo search, recently applied to continuous test functions. The concept of a taboo region, rather than a taboo list, is used, so sampling in a region near an old configuration is restricted. The method is applied to 2-dimensional test functions and to argon clusters, and is found to be a practical and efficient way to find near-global configurations of both.
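For concreteness, a minimal sketch of the taboo-region idea is given below; the 2-D test function, taboo radius, sampling box, and sample counts are hypothetical choices for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Hypothetical 2-D multimodal test function (not one from the paper)
    return np.sum(x**2) + 10.0 * np.sum(np.cos(2.0 * np.pi * x))

def restricted_random_search(n_samples=500, taboo_radius=0.1, box=5.0, max_tries=100_000):
    visited = []                     # previously sampled configurations (taboo centres)
    best_x, best_f = None, np.inf
    tries = 0
    while len(visited) < n_samples and tries < max_tries:
        tries += 1
        x = rng.uniform(-box, box, size=2)          # random candidate configuration
        if visited and np.min(np.linalg.norm(np.array(visited) - x, axis=1)) < taboo_radius:
            continue                 # inside a taboo region near an old configuration: reject
        visited.append(x)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

print(restricted_random_search())
```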
Testing for intracycle determinism in pseudoperiodic time series.
Coelho, Mara C S; Mendes, Eduardo M A M; Aguirre, Luis A
2008-06-01
A determinism test is proposed based on the well-known method of the surrogate data. Assuming predictability to be a signature of determinism, the proposed method checks for intracycle (e.g., short-term) determinism in the pseudoperiodic time series for which standard methods of surrogate analysis do not apply. The approach presented is composed of two steps. First, the data are preprocessed to reduce the effects of seasonal and trend components. Second, standard tests of surrogate analysis can then be used. The determinism test is applied to simulated and experimental pseudoperiodic time series and the results show the applicability of the proposed test.
Methods for evaluating the biological impact of potentially toxic waste applied to soils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neuhauser, E.F.; Loehr, R.C.; Malecki, M.R.
1985-12-01
The study was designed to evaluate two methods that can be used to estimate the biological impact of organics and inorganics that may be in wastes applied to land for treatment and disposal. The two methods were the contact test and the artificial soil test. The contact test is a 48 hr test using an adult worm, a small glass vial, and filter paper to which the test chemical or waste is applied. The test is designed to provide close contact between the worm and a chemical similar to the situation in soils. The method provides a rapid estimate of the relative toxicity of chemicals and industrial wastes. The artificial soil test uses a mixture of sand, kaolin, peat, and calcium carbonate as a representative soil. Different concentrations of the test material are added to the artificial soil, adult worms are added and worm survival is evaluated after two weeks. These studies have shown that earthworms can distinguish between a wide variety of chemicals with a high degree of accuracy.
Chaurasia, Ashok; Harel, Ofer
2015-02-10
Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, several papers address tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculations with vectors and (inversion of) matrices. In this paper, we propose a simple method based on a scalar quantity, the coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.
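A minimal sketch of the general idea follows: R² is computed for the full and reduced models on each imputed dataset, the values are averaged, and a classical partial-F form is applied to the pooled R² values. This is an illustration under simplifying assumptions (plain averaging, hypothetical column names), not the authors' exact combining rule or reference distribution.

```python
import numpy as np
import statsmodels.api as sm

def pooled_r2_partial_f(imputed_dfs, y_col, full_cols, reduced_cols):
    """Illustrative R^2-based partial F statistic across imputed datasets."""
    r2_full = [sm.OLS(df[y_col], sm.add_constant(df[full_cols])).fit().rsquared
               for df in imputed_dfs]
    r2_red = [sm.OLS(df[y_col], sm.add_constant(df[reduced_cols])).fit().rsquared
              for df in imputed_dfs]
    R2f, R2r = np.mean(r2_full), np.mean(r2_red)   # simple average over imputations
    n = len(imputed_dfs[0])
    q = len(full_cols) - len(reduced_cols)         # number of restrictions tested
    k = len(full_cols)                             # predictors in the full model
    # Classical partial-F form applied to the pooled R^2 values
    return ((R2f - R2r) / q) / ((1.0 - R2f) / (n - k - 1))
```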
Method of measuring metal coating adhesion
Roper, J.R.
A method for measuring metal coating adhesion to a substrate material comprising the steps of preparing a test coupon of substrate material having the metal coating applied to one surface thereof, applying a second metal coating of gold or silver to opposite surfaces of the test coupon by hot hollow cathode process, applying a coating to one end of each of two pulling rod members, joining the coated ends of the pulling rod members to said opposite coated surfaces of the test coupon by a solid state bonding technique and finally applying instrumented static tensile loading to the pulling rod members until fracture of the metal coating adhesion to the substrate material occurs.
Method of measuring metal coating adhesion
Roper, John R.
1985-01-01
A method for measuring metal coating adhesion to a substrate material comprising the steps of preparing a test coupon of substrate material having the metal coating applied to one surface thereof, applying a second metal coating of gold or silver to opposite surfaces of the test coupon by hot hollow cathode process, applying a coating to one end of each of two pulling rod members, joining the coated ends of the pulling rod members to said opposite coated surfaces of the test coupon by a solid state bonding technique and finally applying instrumented static tensile loading to the pulling rod members until fracture of the metal coating adhesion to the substrate material occurs.
75 FR 55811 - Testing Method of Pressed and Toughened (Specially Tempered) Glassware
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-14
.... 10-31] Testing Method of Pressed and Toughened (Specially Tempered) Glassware AGENCY: U.S. Customs and Border Protection, Department of Homeland Security. ACTION: Notice of method CBP uses to test... document adopts modifications to the test method currently applied by U.S. Customs and Border Protection...
A Multivariate Randomization Test of Association Applied to Cognitive Test Results
NASA Technical Reports Server (NTRS)
Ahumada, Albert; Beard, Bettina
2009-01-01
Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
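A minimal sketch of the described procedure, assuming the data are arranged as an n-by-k NumPy array (the permutation count and seed are arbitrary illustrative choices):

```python
import numpy as np

def randomization_association_test(X, n_perm=2000, seed=0):
    """Criterion: largest eigenvalue of the correlation matrix.
    Null distribution: independently re-order k-1 of the k variables."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    stat = lambda M: np.linalg.eigvalsh(np.corrcoef(M, rowvar=False))[-1]
    observed = stat(X)
    null = np.empty(n_perm)
    for b in range(n_perm):
        Xp = X.copy()
        for j in range(1, k):          # leave variable 0 fixed, shuffle the others
            Xp[:, j] = rng.permutation(Xp[:, j])
        null[b] = stat(Xp)
    p_value = (1 + np.sum(null >= observed)) / (n_perm + 1)
    return observed, p_value
```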
NASA Astrophysics Data System (ADS)
Dörr, Dominik; Joppich, Tobias; Schirmaier, Fabian; Mosthaf, Tobias; Kärger, Luise; Henning, Frank
2016-10-01
Thermoforming of continuously fiber-reinforced thermoplastics (CFRTP) is ideally suited to thin-walled and complex-shaped products. By means of forming simulation, an initial validation of the producibility of a specific geometry, an optimization of the forming process and the prediction of fiber reorientation due to forming are possible. Nevertheless, the applied methods need to be validated. Therefore, a method is presented that enables the calculation of error measures for the mismatch between simulation results and experimental tests, based on measurements with a conventional coordinate measuring device. Since a quantitative measure describing the curvature is provided, the presented method is also suitable for numerical or experimental sensitivity studies on wrinkling behavior. The applied methods for forming simulation, implemented in Abaqus explicit, are presented and applied to a generic geometry. The same geometry is tested experimentally, and simulation and test results are compared by the proposed validation method.
A new compound control method for sine-on-random mixed vibration test
NASA Astrophysics Data System (ADS)
Zhang, Buyun; Wang, Ruochen; Zeng, Falin
2017-09-01
Vibration environmental test (VET) is one of the important and effective methods to provide support for the strength design, reliability and durability testing of mechanical products. A new separation control strategy was proposed for multiple-input multiple-output (MIMO) sine-on-random (SOR) mixed-mode vibration testing, which is an advanced and intensive test type of VET. As the key problem of the strategy, the correlation integral method was applied to separate the mixed signals, which included random and sinusoidal components. The feedback control formula of a MIMO linear random vibration system was systematically deduced in the frequency domain, and a Jacobi control algorithm was proposed in view of elements such as the auto-spectrum, coherence, and phase of the power spectral density (PSD) matrix. Because of the excessive correction of excitation in sine vibration testing, a compression factor was introduced to reduce the excitation correction, avoiding damage to the vibration table or other devices. The two methods were synthesized and applied in a MIMO SOR vibration test system. Finally, a verification test system with the vibration of a cantilever beam as the control object was established to verify the reliability and effectiveness of the proposed methods. The test results show that the exceedance values can be controlled accurately within the tolerance range of the references, and the method can provide theoretical and practical support for mechanical engineering.
NASA Technical Reports Server (NTRS)
Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David
2015-01-01
The mass properties of an aerospace vehicle are required by multiple disciplines in the analysis and prediction of flight behavior. Pendulum oscillation methods have been developed and employed for almost a century as a means to measure mass properties. However, these oscillation methods are costly, time consuming, and risky. The NASA Armstrong Flight Research Center has been investigating the Dynamic Inertia Measurement, or DIM method as a possible alternative to oscillation methods. The DIM method uses ground test techniques that are already applied to aerospace vehicles when conducting modal surveys. Ground vibration tests would require minimal additional instrumentation and time to apply the DIM method. The DIM method has been validated on smaller test articles, but has not yet been fully proven on large aerospace vehicles.
Lack of harmonization in sweat testing for cystic fibrosis - a national survey.
Christiansen, Anne Lindegaard; Nybo, Mads
2014-11-01
Sweat testing is used in the diagnosis of cystic fibrosis. Interpretation of the sweat test depends, however, on the method performed, since conductivity, osmolality and chloride concentration can all be measured as part of a sweat test. The aim of this study was to investigate how performance of the test is organized in Denmark. Departments conducting the sweat test were contacted and interviewed following a premade questionnaire. They were asked about the methods performed, the applied NPU (Nomenclature for Properties and Units) code, reference interval, recommended interpretation and referenced literature. Fourteen departments performed the sweat test. One department measured chloride and sodium concentration, while 13 departments measured conductivity. One department used a non-existing NPU code, two departments applied NPU codes inconsistent with the method performed, four departments applied no NPU code and seven applied a correct NPU code. Ten of the departments measuring conductivity applied reference intervals. Nine departments measuring conductivity had recommendations of a normal area, a grey zone and a pathological value, while four departments only applied a normal and grey zone or a pathological value. Cut-off values for the normal, grey and pathological areas were, like the reference intervals, inconsistent. There is inconsistent use of NPU codes, reference intervals and interpretation of sweat conductivity in the process of diagnosing cystic fibrosis. Because diagnosing cystic fibrosis is a combined effort between local pediatric departments, biochemical and genetic departments and cystic fibrosis centers, national harmonization is necessary to assure correct clinical use.
Zhang, Yiwei; Xu, Zhiyuan; Shen, Xiaotong; Pan, Wei
2014-08-01
There is an increasing need to develop and apply powerful statistical tests to detect associations between multiple traits and a single locus, as arising from neuroimaging genetics and other studies. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI), in addition to genome-wide single nucleotide polymorphisms (SNPs), thousands of neuroimaging and neuropsychological phenotypes, as intermediate phenotypes for Alzheimer's disease, have been collected. Although some classic methods like MANOVA and newly proposed methods may be applied, they have their own limitations. For example, MANOVA cannot be applied to binary and other discrete traits. In addition, the relationships among these methods are not well understood. Importantly, since these tests are not data adaptive, depending on the unknown association patterns among multiple traits and between multiple traits and a locus, these tests may or may not be powerful. In this paper we propose a class of data-adaptive weights and the corresponding weighted tests in the general framework of generalized estimating equations (GEE). A highly adaptive test is proposed to select the most powerful one from this class of weighted tests so that it can maintain high power across a wide range of situations. Our proposed tests are applicable to various types of traits with or without covariates. Importantly, we also analytically show relationships among some existing and our proposed tests, indicating that many existing tests are special cases of our proposed tests. Extensive simulation studies were conducted to compare and contrast the power properties of various existing and new methods. Finally, we applied the methods to an ADNI dataset to illustrate the performance of the methods. We conclude with a recommendation for the use of the GEE-based score test and our proposed adaptive test for their high and complementary performance. Copyright © 2014 Elsevier Inc. All rights reserved.
A review of methods for assessment of the rate of gastric emptying in the dog and cat: 1898-2002.
Wyse, C A; McLellan, J; Dickie, A M; Sutton, D G M; Preston, T; Yam, P S
2003-01-01
Gastric emptying is the process by which food is delivered to the small intestine at a rate and in a form that optimizes intestinal absorption of nutrients. The rate of gastric emptying is subject to alteration by physiological, pharmacological, and pathological conditions. Gastric emptying of solids is of greater clinical significance because disordered gastric emptying rarely is detectable in the liquid phase. Imaging techniques have the disadvantage of requiring restraint of the animal and access to expensive equipment. Radiographic methods require administration of test meals that are not similar to food. Scintigraphy is the gold standard method for assessment of gastric emptying but requires administration of a radioisotope. Magnetic resonance imaging has not yet been applied for assessment of gastric emptying in small animals. Ultrasonography is a potentially useful, but subjective, method for assessment of gastric emptying in dogs. Gastric tracer methods require insertion of gastric or intestinal cannulae and are rarely applied outside of the research laboratory. The paracetamol absorption test has been applied for assessment of liquid phase gastric emptying in the dog, but requires IV cannulation. The gastric emptying breath test is a noninvasive method for assessment of gastric emptying that has been applied in dogs and cats. This method can be carried out away from the veterinary hospital, but the effects of physiological and pathological abnormalities on the test are not known. Advances in technology will facilitate the development of reliable methods for assessment of gastric emptying in small animals.
A meta-analysis of in vitro antibiotic synergy against Acinetobacter baumannii.
March, Gabriel A; Bratos, Miguel A
2015-12-01
The aim of the work was to describe the different in vitro models for testing synergism of antibiotics and to gather the results of antibiotic synergy against multidrug-resistant Acinetobacter baumannii (MDR-Ab). The original articles were obtained from different web sites. In order to compare the results obtained by the different methods for synergy testing, the Pearson chi-square and Fisher tests were used. Moreover, the non-parametric chi-square test was used to compare the frequency distribution in each analysed manuscript. In the current meta-analysis 24 manuscripts, which encompassed 2016 tests of in vitro synergism of different antimicrobials against MDR-Ab, were reviewed. Checkerboard synergy testing was used in 11 studies, which encompass 1086 tests (53.9%); time-kill assays were applied in 12 studies, which encompass 359 tests (17.8%); gradient diffusion methods were used in seven studies, encompassing 293 tests (14.5%). Finally, time-kill plus checkerboard were applied in two studies, encompassing 278 tests (13.8%). By comparing these data, checkerboard and time-kill methods were significantly more frequently used than gradient diffusion methods (p<0.005). Regarding synergy rates obtained on the basis of the applied method, checkerboard provided 227 tests (20.9%) with a synergistic effect; time-kill assays yielded 222 tests (61.8%) with a synergistic effect; gradient diffusion methods only provided 29 tests (9.9%) with a synergistic effect; and, finally, time-kill plus checkerboard yielded just 15 tests (5.4%) with a synergistic effect. When comparing these percentages, synergy rates reported by time-kill methods were significantly higher than those obtained by checkerboard and gradient diffusion methods (p<0.005). On the basis of the reviewed data, combinations of a bactericidal antibiotic plus Tigecycline, Vancomycin or Teicoplanin are not recommended. The best combinations of antibiotics are those which include bactericidal antibiotics such as Carbapenems, Fosfomycin, Amikacin, Polymyxins, Rifampicin and Ampicillin/Sulbactam. Copyright © 2015. Published by Elsevier B.V.
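For illustration, the comparison of synergy rates across methods can be sketched with a chi-square test of proportions using the counts reported above (checkerboard 227/1086, time-kill 222/359, gradient diffusion 29/293); this is only an outline of the type of comparison described, not the authors' exact analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: checkerboard, time-kill, gradient diffusion
# Columns: tests showing synergy, tests not showing synergy
table = np.array([[227, 1086 - 227],
                  [222,  359 - 222],
                  [ 29,  293 -  29]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```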
Life Prediction/Reliability Data of Glass-Ceramic Material Determined for Radome Applications
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Gyekenyesi, John P.
2002-01-01
Brittle materials such as ceramics are candidate materials for a variety of structural applications over a wide range of temperatures. However, the process of slow crack growth, occurring in any loading configuration, limits the service life of structural components. Therefore, it is important to accurately determine the slow crack growth parameters required for component life prediction using an appropriate test methodology. This test methodology also should be useful in determining the influence of component processing and composition variables on the slow crack growth behavior of newly developed or existing materials, thereby allowing the component processing and composition to be tailored and optimized to specific needs. Through the American Society for Testing and Materials (ASTM), the authors recently developed two test methods to determine the life prediction parameters of ceramics. The two test standards, ASTM C 1368 for room temperature and ASTM C 1465 for elevated temperatures, were published in the 2001 Annual Book of ASTM Standards, Vol. 15.01. Briefly, the test method employs constant stress-rate (or dynamic fatigue) testing to determine flexural strengths as a function of the applied stress rate. The merit of this test method lies in its simplicity: strengths are measured in a routine manner in flexure at four or more applied stress rates with an appropriate number of test specimens at each applied stress rate. The slow crack growth parameters necessary for life prediction are then determined from a simple relationship between the strength and the applied stress rate. Extensive life prediction testing was conducted at the NASA Glenn Research Center using the developed ASTM C 1368 test method to determine the life prediction parameters of a glass-ceramic material that the Navy will use for radome applications.
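For orientation, the relationship referred to is usually written (in the notation commonly used for constant stress-rate testing; the symbols are not taken from the article) as $\log \sigma_f = \frac{1}{n+1}\log \dot{\sigma} + \log D$, where $\sigma_f$ is the measured flexural strength and $\dot{\sigma}$ the applied stress rate; the slow crack growth exponent $n$ then follows from the slope of a linear fit of $\log \sigma_f$ against $\log \dot{\sigma}$, and the parameter $D$ from the intercept.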
Noar, Seth M; Mehrotra, Purnima
2011-03-01
Traditional theory testing commonly applies cross-sectional (and occasionally longitudinal) survey research to test health behavior theory. Since such correlational research cannot demonstrate causality, a number of researchers have called for the increased use of experimental methods for theory testing. We introduce the multi-methodological theory-testing (MMTT) framework for testing health behavior theory. The MMTT framework introduces a set of principles that broaden the perspective of how we view evidence for health behavior theory. It suggests that while correlational survey research designs represent one method of testing theory, the weaknesses of this approach demand that complementary approaches be applied. Such approaches include randomized lab and field experiments, mediation analysis of theory-based interventions, and meta-analysis. These alternative approaches to theory testing can demonstrate causality in a much more robust way than is possible with correlational survey research methods. Such approaches should thus be increasingly applied in order to more completely and rigorously test health behavior theory. Greater application of research derived from the MMTT may lead researchers to refine and modify theory and ultimately make theory more valuable to practitioners. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
40 CFR 60.58b - Compliance and performance testing.
Code of Federal Regulations, 2014 CFR
2014-07-01
... monitored and record the output of the system and shall comply with the test procedures and test methods....1.1 (relative accuracy test audit) shall apply to the monitor. (6) If carbon dioxide is selected for... established during the initial performance test according to the procedures and methods specified in...
40 CFR 60.58b - Compliance and performance testing.
Code of Federal Regulations, 2012 CFR
2012-07-01
... monitored and record the output of the system and shall comply with the test procedures and test methods....1.1 (relative accuracy test audit) shall apply to the monitor. (6) If carbon dioxide is selected for... established during the initial performance test according to the procedures and methods specified in...
40 CFR 60.58b - Compliance and performance testing.
Code of Federal Regulations, 2013 CFR
2013-07-01
... monitored and record the output of the system and shall comply with the test procedures and test methods....1.1 (relative accuracy test audit) shall apply to the monitor. (6) If carbon dioxide is selected for... established during the initial performance test according to the procedures and methods specified in...
Integration of optical measurement methods with flight parameter measurement systems
NASA Astrophysics Data System (ADS)
Kopecki, Grzegorz; Rzucidlo, Pawel
2016-05-01
During the AIM (advanced in-flight measurement techniques) and AIM2 projects, innovative modern techniques were developed. The purpose of the AIM project was to develop optical measurement techniques dedicated for flight tests. Such methods give information about aircraft elements deformation, thermal loads or pressure distribution, etc. In AIM2 the development of optical methods for flight testing was continued. In particular, this project aimed at the development of methods that could be easily applied in flight tests in an industrial setting. Another equally important task was to guarantee the synchronization of the classical measuring system with cameras. The PW-6U glider used in flight tests was provided by the Rzeszów University of Technology. The glider had all the equipment necessary for testing the IPCT (image pattern correlation technique) and IRT (infrared thermometry) methods. Additionally, equipment adequate for the measurement of typical flight parameters, registration and analysis has been developed. This article describes the designed system, as well as presenting the system’s application during flight tests. Additionally, the results obtained in flight tests show certain limitations of the IRT method as applied.
40 CFR 60.1790 - What test methods must I use to stack test?
Code of Federal Regulations, 2010 CFR
2010-07-01
... methods in table 6 of this subpart for other required equations. (e) You can apply to the Administrator... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What test methods must I use to stack test? 60.1790 Section 60.1790 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...
Method and apparatus for testing surface characteristics of a material
NASA Technical Reports Server (NTRS)
Johnson, David L. (Inventor); Kersker, Karl D. (Inventor); Stratton, Troy C. (Inventor); Richardson, David E. (Inventor)
2006-01-01
A method, apparatus and system for testing characteristics of a material sample is provided. The system includes an apparatus configured to house the material test sample while defining a sealed volume against a surface of the material test sample. A source of pressurized fluid is in communication with, and configured to pressurize, the sealed volume. A load applying apparatus is configured to apply a defined load to the material sample while the sealed volume is monitored for leakage of the pressurized fluid. Thus, the inducement of surface defects such as microcracking and crazing may be detected and their effects analyzed for a given material. The material test samples may include laminar structures formed of, for example, carbon cloth phenolic, glass cloth phenolic, silica cloth phenolic materials or carbon-carbon materials. In one embodiment the system may be configured to analyze the material test sample while an across-ply loading is applied thereto.
NASA Technical Reports Server (NTRS)
Bever, R. S.
1984-01-01
Nondestructive high voltage test techniques (mostly electrical methods) are studied to prevent total or catastrophic breakdown of insulation systems under applied high voltage in space. Emphasis is on the phenomenon of partial breakdown or partial discharge (P.D.) as a symptom of insulation quality, notably partial discharge testing under D.C. applied voltage. Many of the electronic parts and high voltage instruments in space experience D.C. applied stress in service, and application of A.C. voltage to any portion thereof would be prohibited. Suggestions include: investigation of the ramp test method for D.C. partial discharge measurements; testing of actual flight-type insulation specimens; preparation of potting resin samples with controlled defects for test; evaluation of several types of potting resins and recommendation of the better ones based on their electrical characteristics, with thermal and elastic properties also considered; testing of commercial capacitors; and approximate acceptance/rejection/rerating criteria for sample test elements for space use, based on D.C. partial discharge.
Model-independent test for scale-dependent non-Gaussianities in the cosmic microwave background.
Räth, C; Morfill, G E; Rossmanith, G; Banday, A J; Górski, K M
2009-04-03
We present a model-independent method to test for scale-dependent non-Gaussianities in combination with scaling indices as test statistics. Therefore, surrogate data sets are generated, in which the power spectrum of the original data is preserved, while the higher order correlations are partly randomized by applying a scale-dependent shuffling procedure to the Fourier phases. We apply this method to the Wilkinson Microwave Anisotropy Probe data of the cosmic microwave background and find signatures for non-Gaussianities on large scales. Further tests are required to elucidate the origin of the detected anomalies.
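A minimal one-dimensional sketch of the surrogate construction is given below; the actual analysis operates on CMB maps, and the toy signal, scale band, and seed here are arbitrary illustrative choices.

```python
import numpy as np

def scale_dependent_surrogate(x, k_lo, k_hi, seed=0):
    """Surrogate preserving the power spectrum of x while shuffling the
    Fourier phases only inside the wavenumber band [k_lo, k_hi)."""
    rng = np.random.default_rng(seed)
    X = np.fft.rfft(x)
    amp, phase = np.abs(X), np.angle(X)
    band = np.arange(k_lo, k_hi)
    phase[band] = rng.permutation(phase[band])   # randomize higher-order correlations on these scales
    return np.fft.irfft(amp * np.exp(1j * phase), n=len(x))

x = np.cumsum(np.random.default_rng(1).normal(size=1024))  # toy signal
surrogate = scale_dependent_surrogate(x, k_lo=2, k_hi=64)
```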
The Effect of Schooling and Ability on Achievement Test Scores. NBER Working Paper Series.
ERIC Educational Resources Information Center
Hansen, Karsten; Heckman, James J.; Mullen, Kathleen J.
This study developed two methods for estimating the effect of schooling on achievement test scores that control for the endogeneity of schooling by postulating that both schooling and test scores are generated by a common unobserved latent ability. The methods were applied to data on schooling and test scores. Estimates from the two methods are in…
Optimal Stratification of Item Pools in a-Stratified Computerized Adaptive Testing.
ERIC Educational Resources Information Center
Chang, Hua-Hua; van der Linden, Wim J.
2003-01-01
Developed a method based on 0-1 linear programming to stratify an item pool optimally for use in alpha-stratified adaptive testing. Applied the method to a previous item pool from the computerized adaptive test of the Graduate Record Examinations. Results show the new method performs well in practical situations. (SLD)
Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David
2015-01-01
New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc. 2015.
Development of a REBCO HTS magnet for Maglev - repeated bending tests of HTS pancake coils -
NASA Astrophysics Data System (ADS)
Sugino, Motohikoa; Mizuno, Katsutoshi; Tanaka, Minoru; Ogata, Masafumi
2018-01-01
In a previous study, two manufacturing methods were developed for producing pancake coils from REBCO coated conductors, and it was confirmed that the conductors suffer no electrical degradation caused by the manufacturing methods. In this study, durability evaluation tests of the pancake coils were conducted as the final evaluation of the coil manufacturing methods. Repeated bending deformation was applied to the manufactured pancake coils in the tests. As a result of these tests, it was confirmed that the pancake coils manufactured by the two methods withstand the repeated bending deformation and maintain appropriate mechanical and electrical performance. We adopted the fusion bonding method as the coil manufacturing method for the HTS magnet. Furthermore, using a prototype pancake coil manufactured by the fusion bonding method as a test sample, a repeated bending test under the excited condition was conducted. It was thus confirmed that a coil manufactured by the fusion bonding method shows no degradation of electrical performance or mechanical properties even when repeated bending deformation is applied under the excited condition.
Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong
2013-01-01
As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for a binary outcome was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, the likelihood function under the null hypothesis is an alternative and indirect means of identifying the potential cluster, and the test statistic is the extreme value of the likelihood function. As with Kulldorff's methods, we adopt a Monte Carlo test for significance. Both methods are applied to detecting spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. Through a simulation with independent benchmark data, it is indicated that the test statistic based on the hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise, Kulldorff's statistics are superior.
NASA Astrophysics Data System (ADS)
Pardimin, H.; Arcana, N.
2018-01-01
Many types of research in the field of mathematics education apply the quasi-experimental method with statistical analysis using the t-test. The quasi-experiment has a weakness in that it is difficult to fulfil "the law of a single independent variable". The t-test also has a weakness in that the generalization of the conclusions obtained is less powerful. This research aimed to find ways to reduce the weaknesses of the quasi-experimental method and to improve the generalization of the research results. The method applied in the research was a non-interactive qualitative method of the concept-analysis type. The concepts analysed were the concepts of statistics, educational research methods, and research reports. The result was a way to overcome the weaknesses of quasi-experiments and the t-test: applying a combination of factorial design and balanced design, which the authors refer to as a Factorial-Balanced Design. The advantages of this design are: (1) it almost fulfils "the law of a single independent variable", so there is no need to test the similarity of academic ability; (2) the sample sizes of the experimental and control groups become larger and equal, making the design robust to violations of the assumptions of the ANOVA test.
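As a rough illustration of analysing data from a factorial layout of this kind, a generic two-way ANOVA sketch is shown below with hypothetical factors and simulated scores; it does not reproduce the authors' design or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "method": np.repeat(["experimental", "control"], 40),        # hypothetical factor A
    "group":  np.tile(np.repeat(["classA", "classB"], 20), 2),   # hypothetical factor B
    "score":  rng.normal(70, 10, 80),                            # simulated test scores
})

model = smf.ols("score ~ C(method) * C(group)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects and the interaction term
```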
Detection of fatigue cracks by nondestructive testing methods
NASA Technical Reports Server (NTRS)
Anderson, R. T.; Delacy, T. J.; Stewart, R. C.
1973-01-01
The effectiveness of various NDT methods in detecting small, tight cracks was assessed by randomly introducing fatigue cracks into aluminum sheets. The study included optimizing the NDT methods, calibrating NDT equipment with fatigue-cracked standards, and evaluating a number of cracked specimens by the optimized NDT methods. The evaluations were conducted by highly trained personnel, provided with detailed procedures, in order to minimize the effects of human variability. These personnel performed the NDT on the test specimens without knowledge of the flaw locations and reported on the flaws detected. The performance of these tests was measured by comparing the flaws detected against the flaws present. The principal NDT methods utilized were radiographic, ultrasonic, penetrant, and eddy current. Holographic interferometry, acoustic emission monitoring, and replication methods were also applied to a reduced number of specimens. Generally, the best performance was shown by eddy current, ultrasonic, penetrant and holographic tests. Etching provided no measurable improvement, while proof loading improved flaw detectability. Data are shown that quantify the performance of the NDT methods applied.
Applying Quantum Monte Carlo to the Electronic Structure Problem
NASA Astrophysics Data System (ADS)
Powell, Andrew D.; Dawes, Richard
2016-06-01
Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC) were tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error-bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).
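For readers unfamiliar with VMC, a toy example (hydrogen atom, trial wavefunction exp(-alpha*r), atomic units) illustrates the basic Metropolis sampling and the statistical error bar mentioned above; it does not use CASINO or NECI and is unrelated to the production calculations described.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_energy(r, alpha):
    # E_L = -alpha^2/2 + (alpha - 1)/r for the trial wavefunction exp(-alpha*r)
    return -0.5 * alpha**2 + (alpha - 1.0) / r

def vmc_energy(alpha, n_steps=50_000, n_equil=5_000, step=0.5):
    pos = np.array([1.0, 0.0, 0.0])
    energies = []
    for i in range(n_steps):
        trial = pos + rng.uniform(-step, step, 3)
        r_old, r_new = np.linalg.norm(pos), np.linalg.norm(trial)
        # Metropolis acceptance with |psi|^2 = exp(-2*alpha*r)
        if rng.random() < np.exp(-2.0 * alpha * (r_new - r_old)):
            pos, r_old = trial, r_new
        if i >= n_equil:
            energies.append(local_energy(r_old, alpha))
    e = np.array(energies)
    # Naive error bar; a production calculation would account for autocorrelation
    return e.mean(), e.std(ddof=1) / np.sqrt(len(e))

print(vmc_energy(1.0))  # exact ground state: alpha = 1 gives E = -0.5 hartree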
Flexible thermal cycle test equipment for concentrator solar cells
Hebert, Peter H [Glendale, CA]; Brandt, Randolph J [Palmdale, CA]
2012-06-19
A system and method for performing thermal stress testing of photovoltaic solar cells are presented. The system and method allow rapid testing of photovoltaic solar cells under controllable thermal conditions and provide a means of rapidly applying thermal stresses to one or more photovoltaic solar cells in a consistent and repeatable manner.
16 CFR 1610.3 - Summary of test method.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Summary of test method. 1610.3 Section 1610... FOR THE FLAMMABILITY OF CLOTHING TEXTILES The Standard § 1610.3 Summary of test method. The Standard... surface, and held in a special apparatus at an angle of 45°. A standardized flame shall be applied to the...
16 CFR 1610.3 - Summary of test method.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Summary of test method. 1610.3 Section 1610... FOR THE FLAMMABILITY OF CLOTHING TEXTILES The Standard § 1610.3 Summary of test method. The Standard... surface, and held in a special apparatus at an angle of 45°. A standardized flame shall be applied to the...
16 CFR 1610.3 - Summary of test method.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Summary of test method. 1610.3 Section 1610... FOR THE FLAMMABILITY OF CLOTHING TEXTILES The Standard § 1610.3 Summary of test method. The Standard... surface, and held in a special apparatus at an angle of 45°. A standardized flame shall be applied to the...
Experimental design methodologies in the optimization of chiral CE or CEC separations: an overview.
Dejaegher, Bieke; Mangelings, Debby; Vander Heyden, Yvan
2013-01-01
In this chapter, an overview of experimental designs to develop chiral capillary electrophoresis (CE) and capillary electrochromatographic (CEC) methods is presented. Method development is generally divided into technique selection, method optimization, and method validation. In the method optimization part, often two phases can be distinguished, i.e., a screening and an optimization phase. In method validation, the method is evaluated on its fit for purpose. A validation item, also applying experimental designs, is robustness testing. In the screening phase and in robustness testing, screening designs are applied. During the optimization phase, response surface designs are used. The different design types and their application steps are discussed in this chapter and illustrated by examples of chiral CE and CEC methods.
The potential for increased power from combining P-values testing the same hypothesis.
Ganju, Jitendra; Julie Ma, Guoguang
2017-02-01
The conventional approach to hypothesis testing for formal inference is to prespecify a single test statistic thought to be optimal. However, we usually have more than one test statistic in mind for testing the null hypothesis of no treatment effect but we do not know which one is the most powerful. Rather than relying on a single p-value, combining p-values from prespecified multiple test statistics can be used for inference. Combining functions include Fisher's combination test and the minimum p-value. Using randomization-based tests, the increase in power can be remarkable when compared with a single test and Simes's method. The versatility of the method is that it also applies when the number of covariates exceeds the number of observations. The increase in power is large enough to prefer combined p-values over a single p-value. The limitation is that the method does not provide an unbiased estimator of the treatment effect and does not apply to situations when the model includes treatment by covariate interaction.
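A small sketch of the combining idea in a randomization framework is given below; the two prespecified statistics (difference in means and difference in medians), sample sizes, and permutation count are hypothetical, and the chi-square reference for Fisher's combination is only an approximation here rather than the authors' randomization-based assessment.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

def perm_pvalues(x, y, stats, n_perm=5000):
    """Randomization p-values for each prespecified test statistic in `stats`."""
    pooled = np.concatenate([x, y])
    n = len(x)
    obs = np.array([s(x, y) for s in stats])
    exceed = np.zeros(len(stats))
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        px, py = perm[:n], perm[n:]
        exceed += np.abs(np.array([s(px, py) for s in stats])) >= np.abs(obs)
    return (exceed + 1) / (n_perm + 1)

stats = [lambda a, b: a.mean() - b.mean(),          # prespecified statistic 1
         lambda a, b: np.median(a) - np.median(b)]  # prespecified statistic 2

x = rng.normal(0.5, 1, 30)   # hypothetical treatment arm
y = rng.normal(0.0, 1, 30)   # hypothetical control arm
p = perm_pvalues(x, y, stats)

fisher_stat = -2 * np.sum(np.log(p))              # Fisher's combination function
p_fisher = chi2.sf(fisher_stat, df=2 * len(p))    # chi-square reference (approximate here)
p_min = min(p)                                    # minimum p-value combining function
print(p, p_fisher, p_min)
```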
Isentropic Bulk Modulus: Development of a Federal Test Method
2016-01-01
ranging from 30-80 °C and applied pressures of 1,000-18,000 psi. This method has been applied successfully to aviation turbine fuels and diesel fuels composed of petroleum, synthetic…
An Ultrasonic Guided Wave Method to Estimate Applied Biaxial Loads (Preprint)
2011-11-01
A fatigue test was performed with an array of six surface-bonded PZT transducers on a 6061 aluminum plate. The specimen… direct paths of propagation are oriented at different angles. This method is applied to experimental sparse array data recorded during a fatigue test… and the additional complication of the resulting fatigue cracks interfering with some of the direct arrivals is addressed via proper selection of…
Theory of chromatic noise masking applied to testing linearity of S-cone detection mechanisms.
Giulianini, Franco; Eskew, Rhea T
2007-09-01
A method for testing the linearity of cone combination of chromatic detection mechanisms is applied to S-cone detection. This approach uses the concept of mechanism noise, the noise as seen by a postreceptoral neural mechanism, to represent the effects of superposing chromatic noise components in elevating thresholds and leads to a parameter-free prediction for a linear mechanism. The method also provides a test for the presence of multiple linear detectors and off-axis looking. No evidence for multiple linear mechanisms was found when using either S-cone increment or decrement tests. The results for both S-cone test polarities demonstrate that these mechanisms combine their cone inputs nonlinearly.
NASA Technical Reports Server (NTRS)
Mark, W. D.
1982-01-01
A transfer function method for predicting the dynamic responses of gear systems with more than one gear mesh is developed and applied to the NASA Lewis four-square gear fatigue test apparatus. Methods for computing bearing-support force spectra and temporal histories of the total force transmitted by a gear mesh, the force transmitted by a single pair of teeth, and the maximum root stress in a single tooth are developed. Dynamic effects arising from other gear meshes in the system are included. A profile modification design method to minimize the vibration excitation arising from a pair of meshing gears is reviewed and extended. Families of tooth loading functions required for such designs are developed and examined for potential excitation of individual tooth vibrations. The profile modification design method is applied to a pair of test gears.
Instrument for the application of controlled mechanical loads to tissues in sterile culture
Lintilhac, Phillip M.; Vesecky, Thompson B.
1995-01-01
Apparatus and methods are disclosed facilitating the application of forces and measurement of dimensions of a test subject. In one arrangement, the test subject is coupled to a forcing frame and controlled forces are applied thereto by a series of guideways and sliders. The sliders, which contact the test subject, are in force-transmitting relation to the forcing frame. Tension, compression and bending forces can be applied to the test subject. Force applied to the test subject is measured and controlled. A dimensional characteristic of the test subject, such as growth, is measured by a linear variable differential transformer. The growth measurement data can be used to control the force applied. Substantially uniaxial stretching is achieved by placing the test subject on an elastic membrane stretched by an arrangement of members securing the elastic member to the forcing frame.
16 CFR § 1610.3 - Summary of test method.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Summary of test method. § 1610.3 Section 1610... STANDARD FOR THE FLAMMABILITY OF CLOTHING TEXTILES The Standard § 1610.3 Summary of test method. The... surface, and held in a special apparatus at an angle of 45°. A standardized flame shall be applied to the...
Selective structural source identification
NASA Astrophysics Data System (ADS)
Totaro, Nicolas
2018-04-01
In the field of acoustic source reconstruction, the inverse Patch Transfer Function (iPTF) method has recently been proposed and has shown satisfactory results whatever the shape of the vibrating surface and whatever the acoustic environment. These two interesting features are due to the virtual acoustic volume concept underlying the iPTF methods. The aim of the present article is to show how this concept of a virtual subsystem can be used in structures to reconstruct the applied force distribution. Virtual boundary conditions can be applied on a part of the structure, called the virtual testing structure, to identify the force distribution applied in that zone regardless of the presence of other sources outside the zone under consideration. In the present article, the applicability of the method is only demonstrated on planar structures. However, the final example shows how the method can be applied to a complex-shaped planar structure with point-welded stiffeners, even in the tested zone. In that case, if the virtual testing structure includes the stiffeners, the identified force distribution only exhibits the positions of externally applied forces. If the virtual testing structure does not include the stiffeners, the identified force distribution makes it possible to localize the forces due to the coupling between the structure and the stiffeners through the welded points, as well as those due to the external forces. This is why the approach is considered here a selective structural source identification method. It is demonstrated that this approach clearly falls in the same framework as the Force Analysis Technique, the Virtual Fields Method or the 2D spatial Fourier transform. Even though this approach has a lot in common with the latter methods, it has some interesting particularities, such as its low sensitivity to measurement noise.
40 CFR 62.15245 - What test methods must I use to stack test?
Code of Federal Regulations, 2010 CFR
2010-07-01
... other required equations. (e) You can apply to the Administrator for approval under § 60.8(b) of subpart... 40 Protection of Environment 8 2010-07-01 2010-07-01 false What test methods must I use to stack test? 62.15245 Section 62.15245 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...
EMI / EMC Design for Class D Payloads (Resource Prospector / NIRVSS)
NASA Technical Reports Server (NTRS)
Forgione, Josh; Benton, Joshua Eric; Thompson, Sarah; Colaprete, Anthony
2015-01-01
EMI/EMC techniques are applied to a Class D instrument (NIRVSS) to achieve low noise performance and reduce risk of EMI/EMC testing failures and/or issues during system integration and test. Basic techniques are not terribly expensive or complex, but do require close coordination between electrical and mechanical staff early in the design process. Low-cost methods to test subsystems on the bench without renting an EMI chamber are discussed. This method was applied to the NIRVSS instrument and achieved improvements up to 59dB on conducted emissions measurements between hardware revisions.
Borycki, E; Kushniruk, A; Nohr, C; Takeda, H; Kuwata, S; Carvalho, C; Bainbridge, M; Kannry, J
2013-01-01
Issues related to lack of system usability and potential safety hazards continue to be reported in the health information technology (HIT) literature. Usability engineering methods are increasingly used to ensure improved system usability and they are also beginning to be applied more widely for ensuring the safety of HIT applications. These methods are being used in the design and implementation of many HIT systems. In this paper we describe evidence-based approaches to applying usability engineering methods. A multi-phased approach to ensuring system usability and safety in healthcare is described. Usability inspection methods are first described including the development of evidence-based safety heuristics for HIT. Laboratory-based usability testing is then conducted under artificial conditions to test if a system has any base level usability problems that need to be corrected. Usability problems that are detected are corrected and then a new phase is initiated where the system is tested under more realistic conditions using clinical simulations. This phase may involve testing the system with simulated patients. Finally, an additional phase may be conducted, involving a naturalistic study of system use under real-world clinical conditions. The methods described have been employed in the analysis of the usability and safety of a wide range of HIT applications, including electronic health record systems, decision support systems and consumer health applications. It has been found that at least usability inspection and usability testing should be applied prior to the widespread release of HIT. However, wherever possible, additional layers of testing involving clinical simulations and a naturalistic evaluation will likely detect usability and safety issues that may not otherwise be detected prior to widespread system release. The framework presented in the paper can be applied in order to develop more usable and safer HIT, based on multiple layers of evidence.
Yanagawa, T; Tokudome, S
1990-01-01
We developed methods to assess the cancer risks by screening tests. These methods estimate the size of the high risk group adjusted for the characteristics of screening tests and estimate the incidence rates of cancer among the high risk group adjusted for the characteristics of the tests. A method was also developed for selecting the cut-off point of a screening test. Finally, the methods were applied to estimate the risk of the adult T-cell leukemia/lymphoma. PMID:2269244
NASA Astrophysics Data System (ADS)
Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.
2017-06-01
In this paper, an online fault detection and classification method is proposed for thermocouples used in nuclear power plants. In the proposed method, fault data are detected by the classification method, which separates the fault data from the normal data. A deep belief network (DBN), a deep learning technique, is applied to classify the fault data. The DBN has a multilayer feature extraction scheme, which is highly sensitive to small variations in the data. Since the classification method alone is unable to identify the faulty sensor, a technique is proposed to identify the faulty sensor from the fault data. Finally, a composite statistical hypothesis test, namely the generalized likelihood ratio test, is applied to compute the fault pattern of the faulty sensor signal based on the magnitude of the fault. The performance of the proposed method is validated with field data obtained from thermocouple sensors of the fast breeder test reactor.
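As a simple illustration of the last step, a generalized likelihood ratio test for a mean shift in a Gaussian sensor signal with known noise level can be sketched as follows; this is a textbook special case, not the authors' full DBN-plus-GLRT pipeline, and the function and parameter names are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def glrt_mean_shift(x, mu0, sigma):
    """GLRT for H0: mean = mu0 vs H1: mean free, Gaussian noise with known std sigma."""
    n = len(x)
    stat = n * (np.mean(x) - mu0) ** 2 / sigma ** 2   # 2*log likelihood ratio, ~chi2(1) under H0
    p_value = chi2.sf(stat, df=1)
    fault_magnitude = np.mean(x) - mu0                # maximum likelihood estimate of the bias
    return stat, p_value, fault_magnitude
```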
ERIC Educational Resources Information Center
Wang, Tianyou
2008-01-01
Von Davier, Holland, and Thayer (2004) laid out a five-step framework of test equating that can be applied to various data collection designs and equating methods. In the continuization step, they presented an adjusted Gaussian kernel method that preserves the first two moments. This article proposes an alternative continuization method that…
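For orientation, a sketch of the Gaussian kernel continuization as it is usually presented in the kernel equating literature is given below; the moment-preserving constant a is the standard one, and the article's alternative adjustment is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def gaussian_kernel_cdf(x, scores, probs, h):
    """Continuized CDF of a discrete score distribution (standard KE-style sketch)."""
    mu = np.sum(probs * scores)
    var = np.sum(probs * (scores - mu) ** 2)
    a = np.sqrt(var / (var + h ** 2))   # chosen so the first two moments are preserved
    z = (x - a * scores - (1.0 - a) * mu) / (a * h)
    return np.sum(probs * norm.cdf(z))
```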
Understanding Genetic Toxicity Through Data Mining: The ...
This paper demonstrates the usefulness of representing a chemical by its structural features and the use of these features to profile a battery of tests rather than relying on a single toxicity test of a given chemical. This paper presents data mining/profiling methods applied in a weight-of-evidence approach to assess potential for genetic toxicity, and to guide the development of intelligent testing strategies.
Instrument for the application of controlled mechanical loads to tissues in sterile culture
Lintilhac, P.M.; Vesecky, T.B.
1995-04-18
Apparatus and methods are disclosed facilitating the application of forces and measurement of dimensions of a test subject. In one arrangement, the test subject is coupled to a forcing frame and controlled forces are applied thereto by a series of guideways and sliders. The sliders, which contact the test subject, are in force-transmitting relation to the forcing frame. Tension, compression and bending forces can be applied to the test subject. Force applied to the test subject is measured and controlled. A dimensional characteristic of the test subject, such as growth, is measured by a linear variable differential transformer. The growth measurement data can be used to control the force applied. Substantially uniaxial stretching is achieved by placing the test subject on an elastic membrane stretched by an arrangement of members securing the elastic member to the forcing frame. 8 figs.
Implementation of a Biaxial Resonant Fatigue Test Method on a Large Wind Turbine Blade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snowberg, D.; Dana, S.; Hughes, S.
2014-09-01
A biaxial resonant test method was utilized to simultaneously fatigue test a wind turbine blade in the flap and edge (lead-lag) direction. Biaxial resonant blade fatigue testing is an accelerated life test method utilizing oscillating masses on the blade; each mass is independently oscillated at the respective flap and edge blade resonant frequency. The flap and edge resonant frequency were not controlled, nor were they constant for this demonstrated test method. This biaxial resonant test method presented surmountable challenges in test setup simulation, control and data processing. Biaxial resonant testing has the potential to complete test projects faster than single-axis testing. The load modulation during a biaxial resonant test may necessitate periodic load application above targets or higher applied test cycles.
NASA Technical Reports Server (NTRS)
Willsky, A. S.; Deyst, J. J.; Crawford, B. S.
1975-01-01
The paper describes two self-test procedures applied to the problem of estimating the biases in accelerometers and gyroscopes on an inertial platform. The first technique is the weighted sum-squared residual (WSSR) test, with which accelerometer bias jumps are easily isolated, but gyro bias jumps are difficult to isolate. The WSSR method does not take full advantage of the knowledge of system dynamics. The other technique is a multiple hypothesis method developed by Buxbaum and Haddad (1969). It has the advantage of directly providing jump isolation information, but suffers from computational problems. It might be possible to use the WSSR to detect state jumps and then switch to the Buxbaum-Haddad (BH) system for jump isolation and estimate compensation.
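A minimal sketch of the WSSR idea, assuming Kalman-filter innovations with a known, constant innovation covariance: the windowed sum of weighted squared residuals is compared against a chi-square threshold. The window length, significance level and injected bias below are illustrative assumptions, not values from the paper.

import numpy as np
from scipy.stats import chi2

def wssr_alarm(innovations, S, window=20, alpha=0.01):
    """Weighted sum-squared residual (WSSR) test over a sliding window.
    innovations : (N, m) filter innovation sequence
    S           : (m, m) innovation covariance (assumed constant here)
    Returns a boolean alarm flag for each window end."""
    nu = np.atleast_2d(innovations)
    S_inv = np.linalg.inv(S)
    # per-sample weighted squared residual nu' S^-1 nu
    q = np.einsum('ij,jk,ik->i', nu, S_inv, nu)
    m = nu.shape[1]
    thresh = chi2.ppf(1 - alpha, df=window * m)
    alarms = []
    for k in range(window, len(q) + 1):
        alarms.append(q[k - window:k].sum() > thresh)
    return np.array(alarms)

# Example: inject an accelerometer-like bias halfway through the record
rng = np.random.default_rng(1)
nu = rng.normal(0, 1, size=(200, 2))
nu[100:, 0] += 1.5                      # bias jump shows up in the residuals
print(wssr_alarm(nu, S=np.eye(2)).any())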
Using a fuzzy comprehensive evaluation method to determine product usability: A test case
Zhou, Ronggang; Chan, Alan H. S.
2016-01-01
BACKGROUND: In order to take into account the inherent uncertainties during product usability evaluation, Zhou and Chan [1] proposed a comprehensive method of usability evaluation for products by combining the analytic hierarchy process (AHP) and fuzzy evaluation methods for synthesizing performance data and subjective response data. This method was designed to provide an integrated framework combining the inevitable vague judgments from the multiple stages of the product evaluation process. OBJECTIVE AND METHODS: In order to illustrate the effectiveness of the model, this study used a summative usability test case to assess the application and strength of the general fuzzy usability framework. To test the proposed fuzzy usability evaluation framework [1], a standard summative usability test was conducted to benchmark the overall usability of a specific network management software. Based on the test data, the fuzzy method was applied to incorporate both the usability scores and uncertainties involved in the multiple components of the evaluation. Then, with Monte Carlo simulation procedures, confidence intervals were used to compare the reliabilities among the fuzzy approach and two typical conventional methods combining metrics based on percentages. RESULTS AND CONCLUSIONS: This case study showed that the fuzzy evaluation technique can be applied successfully for combining summative usability testing data to achieve an overall usability quality for the network software evaluated. Greater confidence interval widths for the two conventional percentage-based methods, the equally weighted percentage average and the weighted percentage average, verified the strength of the fuzzy method. PMID:28035942
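The synthesis step of a fuzzy comprehensive evaluation can be illustrated with a generic weighted-average operator; this is not Zhou and Chan's full AHP-plus-fuzzy model, and the criterion weights, membership degrees and rating-level scores below are invented for illustration.

import numpy as np

# Hypothetical example: 3 usability criteria rated on 4 fuzzy levels
# (poor, fair, good, excellent). Weights would normally come from AHP
# pairwise comparisons; here they are simply assumed.
w = np.array([0.5, 0.3, 0.2])            # criterion weights (sum to 1)
R = np.array([[0.0, 0.2, 0.5, 0.3],      # membership degrees for criterion 1
              [0.1, 0.3, 0.4, 0.2],      # criterion 2
              [0.0, 0.1, 0.6, 0.3]])     # criterion 3

B = w @ R                                 # fuzzy synthesis (weighted-average operator)
B = B / B.sum()                           # normalize the rating vector
level_scores = np.array([25, 50, 75, 100])
overall = B @ level_scores                # defuzzified overall usability score
print(B.round(3), round(float(overall), 1))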
NASA Technical Reports Server (NTRS)
Pines, S.
1981-01-01
The methods used to compute the mass, structural stiffness, and aerodynamic forces in the form of influence coefficient matrices are described as applied to a flutter analysis of the Drones for Aerodynamic and Structural Testing (DAST) Aeroelastic Research Wing. The DAST wing was chosen because wind tunnel flutter test data and zero-speed vibration data of the modes and frequencies exist and are available for comparison. A derivation of the equations of motion that can be used to apply the modal method for flutter suppression is included. A comparison of the open-loop flutter predictions with both wind tunnel data and other analytical methods is presented.
Accelerated Stress-Corrosion Testing
NASA Technical Reports Server (NTRS)
1986-01-01
Test procedures for accelerated stress-corrosion testing of high-strength aluminum alloys are faster and provide more quantitative information than traditional pass/fail tests. Method uses data from tests on specimen sets exposed to corrosive environment at several levels of applied static tensile stress for selected exposure times, then subsequently tensile tested to failure. Method potentially applicable to other degrading phenomena (such as fatigue, corrosion fatigue, fretting, wear, and creep) that promote development and growth of cracklike flaws within material.
Simple Test Functions in Meshless Local Petrov-Galerkin Methods
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.
2016-01-01
Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but using a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. The two methods were tested on various patch test problems, and both passed the patch tests successfully. The methods were then applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing effort, as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive, as it is simple, accurate, and robust.
16 CFR 1509.6 - Component-spacing test method.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Component-spacing test method. 1509.6 Section 1509.6 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT... applied to the wedge perpendicular to the plane of the crib side. ...
NASA Astrophysics Data System (ADS)
Majidi, Omid; Jahazi, Mohammad; Bombardier, Nicolas; Samuel, Ehab
2017-10-01
The strain rate sensitivity index, the m-value, is commonly applied as a tool to evaluate the impact of strain rate on the viscoplastic behaviour of materials. The m-value, treated as a constant, has frequently been used for modeling material behaviour in the numerical simulation of superplastic forming processes. However, the impact of the testing variables on the measured m-values has not been investigated comprehensively. In this study, the m-value for a superplastic grade of an aluminum alloy (AA5083) has been investigated. The conditions and the parameters that influence the strain rate sensitivity of the material are compared for three different testing methods, i.e., the monotonic uniaxial tension test, the strain rate jump test and the stress relaxation test. All tests were conducted at elevated temperature (470 °C) and at strain rates up to 0.1 s^-1. The results show that the m-value is not constant and is highly dependent on the applied strain rate, strain level and testing method.
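As a reference for the quantity being measured, here is a minimal sketch of estimating the m-value as the slope of ln(flow stress) versus ln(strain rate); with only two points this reduces to the usual jump-test formula m = ln(s2/s1)/ln(e2/e1). The stress and strain-rate pairs below are invented, not the AA5083 data from the study.

import numpy as np

def strain_rate_sensitivity(strain_rates, flow_stresses):
    """Estimate the m-value as the least-squares slope of
    ln(stress) vs ln(strain rate)."""
    x = np.log(np.asarray(strain_rates, float))
    y = np.log(np.asarray(flow_stresses, float))
    m, _ = np.polyfit(x, y, 1)
    return m

# Hypothetical jump-test data for a superplastic alloy at temperature
rates = [1e-3, 3e-3, 1e-2, 3e-2, 1e-1]       # 1/s
stress = [4.0, 5.6, 8.1, 11.0, 14.5]          # MPa
print(round(strain_rate_sensitivity(rates, stress), 3))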
Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua
2018-01-01
Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
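The resampling idea behind the bootstrap comparator can be sketched as follows for uncensored survival times; real RPSFT analyses involve censoring and counterfactual survival reconstruction, which are not shown, and all data here are simulated.

import numpy as np

def bootstrap_incremental_le(surv_treat, surv_control, n_boot=2000, seed=0):
    """Percentile bootstrap CI for incremental life expectancy
    (difference in mean survival), given per-patient survival times."""
    rng = np.random.default_rng(seed)
    t = np.asarray(surv_treat, float)
    c = np.asarray(surv_control, float)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        diffs[b] = (rng.choice(t, size=t.size, replace=True).mean()
                    - rng.choice(c, size=c.size, replace=True).mean())
    return np.percentile(diffs, [2.5, 97.5])

# Hypothetical (uncensored) survival times in years
rng = np.random.default_rng(42)
treat = rng.exponential(3.0, size=120)
control = rng.exponential(2.2, size=120)
print(bootstrap_incremental_le(treat, control).round(2))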
A scoping review of spatial cluster analysis techniques for point-event data.
Fritz, Charles E; Schuurman, Nadine; Robertson, Colin; Lear, Scott
2013-05-01
Spatial cluster analysis is a uniquely interdisciplinary endeavour, and so it is important to communicate and disseminate ideas, innovations, best practices and challenges across practitioners, applied epidemiology researchers and spatial statisticians. In this research we conducted a scoping review to systematically search peer-reviewed journal databases for research that has employed spatial cluster analysis methods on individual-level, address location, or x and y coordinate derived data. To illustrate the thematic issues raised by our results, methods were tested using a dataset where known clusters existed. Point pattern methods, spatial clustering and cluster detection tests, and a locally weighted spatial regression model were most commonly used for individual-level, address location data (n = 29). The spatial scan statistic was the most popular method for address location data (n = 19). Six themes were identified relating to the application of spatial cluster analysis methods and subsequent analyses, which we recommend researchers consider: exploratory analysis, visualization, spatial resolution, aetiology, scale and spatial weights. It is our intention that researchers seeking direction for using spatial cluster analysis methods consider the caveats and strengths of each approach, but also explore the numerous other methods available for this type of analysis. Applied spatial epidemiology researchers and practitioners should give special consideration to applying multiple tests to a dataset. Future research should focus on developing frameworks for selecting appropriate methods and the corresponding spatial weighting schemes.
Aghdam, Rosa; Baghfalaki, Taban; Khosravi, Pegah; Saberi Ansari, Elnaz
2017-12-01
Deciphering important genes and pathways from incomplete gene expression data could facilitate a better understanding of cancer. Different imputation methods can be applied to estimate the missing values. In our study, we evaluated various imputation methods for their performance in preserving significant genes and pathways. In the first step, 5% of genes are selected at random under two types of missingness mechanisms, ignorable and non-ignorable, with various missing rates. Next, 10 well-known imputation methods were applied to the complete datasets. The significance analysis of microarrays (SAM) method was applied to detect the significant genes in rectal and lung cancers to showcase the utility of imputation approaches in preserving significant genes. To determine the impact of different imputation methods on the identification of important genes, the chi-squared test was used to compare the proportions of overlap between significant genes detected from the original data and those detected from the imputed datasets. Additionally, the significant genes were tested for their enrichment in important pathways using ConsensusPathDB. Our results showed that almost all the significant genes and pathways of the original dataset can be detected in all imputed datasets, indicating that there is no significant difference in the performance of the various imputation methods tested. The source code and selected datasets are available on http://profiles.bs.ipm.ir/softwares/imputation_methods/. Copyright © 2017. Production and hosting by Elsevier B.V.
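A toy illustration of the final comparison step only: a chi-squared test on the proportions of originally significant genes recovered after two hypothetical imputation strategies. The counts are invented, and the SAM analysis and the ten imputation methods themselves are not reproduced.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: of 400 genes called significant in the complete data,
# how many are recovered after two different imputation methods.
recovered = np.array([372, 355])                      # recovered by method A, method B
total = np.array([400, 400])
table = np.vstack([recovered, total - recovered]).T   # 2x2 contingency table

chi2_stat, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2_stat:.2f}, p = {p_value:.3f}")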
Using a fuzzy comprehensive evaluation method to determine product usability: A test case.
Zhou, Ronggang; Chan, Alan H S
2017-01-01
In order to take into account the inherent uncertainties during product usability evaluation, Zhou and Chan [1] proposed a comprehensive method of usability evaluation for products by combining the analytic hierarchy process (AHP) and fuzzy evaluation methods for synthesizing performance data and subjective response data. This method was designed to provide an integrated framework combining the inevitable vague judgments from the multiple stages of the product evaluation process. In order to illustrate the effectiveness of the model, this study used a summative usability test case to assess the application and strength of the general fuzzy usability framework. To test the proposed fuzzy usability evaluation framework [1], a standard summative usability test was conducted to benchmark the overall usability of a specific network management software. Based on the test data, the fuzzy method was applied to incorporate both the usability scores and uncertainties involved in the multiple components of the evaluation. Then, with Monte Carlo simulation procedures, confidence intervals were used to compare the reliabilities among the fuzzy approach and two typical conventional methods combining metrics based on percentages. This case study showed that the fuzzy evaluation technique can be applied successfully for combining summative usability testing data to achieve an overall usability quality for the network software evaluated. Greater confidence interval widths for the two conventional percentage-based methods, the equally weighted percentage average and the weighted percentage average, verified the strength of the fuzzy method.
The Use of Time Series Analysis and t Tests with Serially Correlated Data Tests.
ERIC Educational Resources Information Center
Nicolich, Mark J.; Weinstein, Carol S.
1981-01-01
Results of three methods of analysis applied to simulated autocorrelated data sets with an intervention point (varying in degree of autocorrelation, variance of the error term, and magnitude of intervention effect) are compared and presented. The three methods are: t tests; maximum likelihood Box-Jenkins (ARIMA); and Bayesian Box-Jenkins. (Author/AEF)
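The core caution, that serial correlation inflates the type I error of a naive t test applied at an intervention point, can be reproduced with a short simulation; the AR(1) parameters and sample sizes below are arbitrary choices, not those of the cited study.

import numpy as np
from scipy.stats import ttest_ind

def ar1_series(n, phi, sigma, rng):
    """Simulate a zero-mean AR(1) process x[t] = phi*x[t-1] + e[t]."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x

rng = np.random.default_rng(7)
n_pre, n_post, reps = 30, 30, 2000
false_pos = 0
for _ in range(reps):
    x = ar1_series(n_pre + n_post, phi=0.6, sigma=1.0, rng=rng)
    # no true intervention effect, yet the naive t test often rejects
    _, p = ttest_ind(x[:n_pre], x[n_pre:])
    false_pos += p < 0.05
print(f"empirical type I error: {false_pos / reps:.2f} (nominal 0.05)")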
Testing the Stability of 2-D Recursive QP, NSHP and General Digital Filters of Second Order
NASA Astrophysics Data System (ADS)
Rathinam, Ananthanarayanan; Ramesh, Rengaswamy; Reddy, P. Subbarami; Ramaswami, Ramaswamy
Several methods for testing the stability of first-quadrant quarter-plane two-dimensional (2-D) recursive digital filters were suggested in the 1970s and 80s. Though Jury's row and column algorithms and the row and column concatenation stability tests have been considered highly efficient mapping methods, they still fall short of accuracy, as they need an infinite number of steps to decide the exact stability of the filters, and the computational time required is enormous. In this paper, we present a procedurally simple algebraic method requiring only two steps when applied to the second-order 2-D quarter-plane filter. We extend the same method to second-order non-symmetric half-plane (NSHP) filters. Enough examples are given for both these types of filters as well as some lower-order general recursive 2-D digital filters. We applied our method to barely stable or barely unstable filter examples available in the literature and obtained the same decisions, showing that our method is accurate enough.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nyholm, N.; Kristensen, P.
1992-04-01
An international ring test involving 14 laboratories was organized on behalf of the Commission of the European Economic Communities (EEC) with the purpose of evaluating two proposed screening methods for assessment of biodegradability in seawater: (a) a shake flask die-away test based primarily on analysis of dissolved organic carbon and (b) a closed bottle test based on determination of dissolved oxygen. Both tests are performed with nutrient-enriched natural seawater as the test medium and with no inoculum added other than the natural seawater microflora. The test methods are seawater versions of the modified OECD screening test and the closed bottle test, respectively, adopted by the Organization for Economic Cooperation and Development (OECD) and by the EEC as tests for ready biodegradability. The following five chemicals were examined: sodium benzoate, aniline, diethylene glycol, pentaerythritol, and 4-nitrophenol. Sodium benzoate and aniline, which are known to be generally readily biodegradable, consistently degraded in practically all tests, thus demonstrating the technical feasibility of the methods. As in previous ring tests with freshwater screening methods, variable results were obtained with the other three compounds, which is believed primarily to be due to site-specific differences between the microflora of the different seawater samples used and to some extent also to differences in the applied concentrations of test material. A positive result with the screening methods indicates that the test substance will most likely degrade relatively rapidly in seawater from the site of collection, while a negative test result does not preclude biodegradability under environmental conditions where the concentrations of chemicals are much lower than the concentrations applied for analytical reasons in screening tests.
Pyrogen tests of infusions, blood anticoagulant solutions, plastic materials and rubber products.
Pintér, J; Zsdánszky, C; Györffy, G
1977-01-01
The methods of the pyrogen test in rabbits as adopted by the authors are presented. The test includes positive and negative controls. The conditions for using the same rabbits on two consecutive days are discussed. Methods of sampling sterile infusions and the preparation for pyrogen testing of anticoagulant solutions containing citrate, phosphate and/or edetate ions are presented. The necessity of pyrogen control of distilled water is stressed. Attention is called to the importance of testing for pyrogenicity the plastic materials and rubber wares to be applied during the production of anticoagulant solutions and infusions. A pyrogen test highly sensitive to traces of detergent is applied for washed glassware. It is emphasized that sensitive pyrogen tests are indispensable not only when new derivatives are being introduced, but also during routine control, because occasional changes in the manufacturer's technology may sometimes be demonstrable in this way.
Code of Federal Regulations, 2010 CFR
2010-07-01
... applying existing technology to new products and processes in a general way. Advanced research is most... Category 6.3A) programs within Research, Development, Test and Evaluation (RDT&E). Applied research... technology such as new materials, devices, methods and processes. It typically is funded in Applied Research...
An approach to operating system testing
NASA Technical Reports Server (NTRS)
Sum, R. N., Jr.; Campbell, R. H.; Kubitz, W. J.
1984-01-01
To ensure the reliability and performance of a new system, it must be verified or validated in some manner. Currently, testing is the only reasonable technique available for doing this. Part of this testing process is the high-level system test. System testing is considered with respect to operating systems, and in particular UNIX. This consideration results in the development and presentation of a good method for performing the system test. The method includes derivations from the system specifications and ideas for management of the system testing project. Results of applying the method to the IBM System/9000 XENIX operating system test and the development of a UNIX test suite are presented.
Code of Federal Regulations, 2010 CFR
2010-07-01
... emissions limitations? (a) You must conduct each performance test that applies to your iron and steel...) Method 18 to determine the VOHAP concentration. Alternatively, you may use Method 25 to determine the... of total hydrocarbons (as hexane) for 180 continuous operating minutes. You must measure emissions at...
Lintilhac, Phillip M.; Vesecky, Thompson B.
1995-01-01
Apparatus and methods are disclosed facilitating the application of forces and measurement of dimensions of a test subject. In one arrangement the test subject is coupled to a forcing frame and controlled forces applied thereto. Force applied to the test subject is measured and controlled. A dimensional characteristic of the test subject, such as growth, is measured by a linear variable differential transformer. The growth measurement data can be used to control the force applied. The transducer module receives force and dimensional data from the forcing frame. The transducer module is a separate, microprocessor-based unit that communicates the test data to a controller unit that controls the application of force to the test subject and receives the test data from the transducer module for force control, storage, and/or communication to the user.
Lintilhac, P.M.; Vesecky, T.B.
1995-09-19
An apparatus and methods are disclosed facilitating the application of forces and measurement of dimensions of a test subject. In one arrangement the test subject is coupled to a forcing frame and controlled forces applied thereto. Force applied to the test subject is measured and controlled. A dimensional characteristic of the test subject, such as growth, is measured by a linear variable differential transformer. The growth measurement data can be used to control the force applied. The transducer module receives force and dimensional data from the forcing frame. The transducer module is a separate, microprocessor-based unit that communicates the test data to a controller unit that controls the application of force to the test subject and receives the test data from the transducer module for force control, storage, and/or communication to the user. 8 figs.
Analysis and computation of a least-squares method for consistent mesh tying
Day, David; Bochev, Pavel
2007-07-10
In the finite element method, a standard approach to mesh tying is to apply Lagrange multipliers. If the interface is curved, however, discretization generally leads to adjoining surfaces that do not coincide spatially. Straightforward Lagrange multiplier methods lead to discrete formulations failing a first-order patch test [T.A. Laursen, M.W. Heinstein, Consistent mesh-tying methods for topologically distinct discretized surfaces in non-linear solid mechanics, Internat. J. Numer. Methods Eng. 57 (2003) 1197–1242]. This paper presents a theoretical and computational study of a least-squares method for mesh tying [P. Bochev, D.M. Day, A least-squares method for consistent mesh tying, Internat. J. Numer. Anal. Modeling 4 (2007) 342–352], applied to the partial differential equation -∇²φ + αφ = f. We prove optimal convergence rates for domains represented as overlapping subdomains and show that the least-squares method passes a patch test of the order of the finite element space by construction. To apply the method to subdomain configurations with gaps and overlaps, we use interface perturbations to eliminate the gaps. Finally, theoretical error estimates are illustrated by numerical experiments.
ERIC Educational Resources Information Center
Baillie, Ray
1980-01-01
Outlines methods for including skill testing in teacher-made history tests. Focuses on distinguishing fact and fiction, evaluating the reliability of a source, distinguishing between primary and secondary sources, recognizing statements which support generalizations, testing with media, mapping geo-politics, and applying knowledge to new…
The Effect of Estrogen Usage on Eccentric Exercise-Induced Damage in Rat Testes
Can, Serpil; Selli, Jale; Buyuk, Basak; Aydin, Sergulen; Kocaaslan, Ramazan; Guvendi, Gulname Findik
2015-01-01
Background: In recent years, many scientific studies have focused on the possible mechanisms of inflammatory response and oxidative stress, mechanisms related to tissue damage and exercise fatigue. It is well known that free oxygen radicals and oxidative stress may be induced by exhaustive physical exercise as well as under in vitro conditions. Objectives: The aim of this study was to investigate the effects of anabolic steroids in conjunction with exercise on the process of spermatogenesis in the testes, using histological and stereological methods. Materials and Methods: Thirty-six male Sprague Dawley rats were divided into six groups: the control group, the eccentric exercise group, the estrogen-applied group, the estrogen-applied group dissected one hour after eccentric exercise, the group without estrogen dissected 48 hours after eccentric exercise, and the estrogen-applied group dissected 48 hours after eccentric exercise. Eccentric exercise was performed on a motorized rodent treadmill, and the estrogen-applied groups received daily physiological doses by subcutaneous injection. Testicular tissues were examined using specific histopathological, immunohistochemical and stereological methods. Sections of the testes tissue were stained using the TUNEL method to identify apoptotic cells. Apoptosis was calculated as the percentage of positive cells, using stereological analysis. A statistical analysis of the data was carried out with one-way analysis of variance (ANOVA) for the data obtained from the stereological analysis. Results: Conventional light microscopy revealed that testes tissues of the eccentric exercise group and the estrogen-supplemented group exhibited slight impairment. In groups that were both eccentrically exercised and estrogen-supplemented, more deterioration was detected in testes tissues. Likewise, immunohistochemistry findings were more prominent in the eccentrically exercised and estrogen-supplemented groups. Conclusions: The findings suggest that estrogen supplementation increases damage in testicular tissue due to eccentric exercise. PMID:26023337
NASA Technical Reports Server (NTRS)
Miller, Eric J.; Manalo, Russel; Tessler, Alexander
2016-01-01
A study was undertaken to investigate the measurement of wing deformation and internal loads using measured strain data. Future aerospace vehicle research depends on the ability to accurately measure the deformation and internal loads during ground testing and in flight. The approach uses the inverse Finite Element Method (iFEM). The iFEM is a robust, computationally efficient method that is well suited for real-time measurement of structural deformation and loads. The method has been validated in previous work, but has yet to be applied to a large-scale test article. This work is in preparation for an upcoming loads test of a half-span test wing in the Flight Loads Laboratory at the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California). The method has been implemented into an efficient MATLAB® (The MathWorks, Inc., Natick, Massachusetts) code for testing different sensor configurations. This report discusses formulation and implementation along with the preliminary results from a representative aerospace structure. The end goal is to investigate the modeling and sensor placement approach so that the best practices can be applied to future aerospace projects.
A Finite Difference Method for Modeling Migration of Impurities in Multilayer Systems
NASA Astrophysics Data System (ADS)
Tosa, V.; Kovacs, Katalin; Mercea, P.; Piringer, O.
2008-09-01
A finite difference method to solve the one-dimensional diffusion of impurities in a multilayer system was developed for the special case in which a partition coefficient K imposes a ratio of the concentrations at the interface between two adjacent layers. The fictitious point method was applied to derive the algebraic equations for the mesh points at the interface, while a combined method was used for the non-uniform mesh points within the layers. The method was tested and then applied to calculate the migration of impurities from multilayer systems into liquid or solid samples, in migration experiments performed for quality-testing purposes. An application was developed in the field of impurity migration from multilayer plastic packaging into food, a problem of increasing importance in the food industry.
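A minimal explicit finite-difference sketch of the underlying problem, one-dimensional diffusion through two layers with a partition coefficient K fixing the concentration ratio at the interface, is given below; here the interface is handled through an equivalent two-half-cell flux expression rather than the paper's fictitious-point formulation, and all material parameters are assumed.

import numpy as np

def diffuse_two_layers(n1=50, n2=50, dx=2e-6, D1=1e-10, D2=5e-11, K=4.0,
                       c0=1.0, t_end=600.0):
    """Explicit 1-D diffusion through two layers with a partition
    coefficient K at the interface (c_right = K * c_left at equilibrium).
    Layer 1 starts loaded at concentration c0, layer 2 empty; the outer
    boundaries are impermeable. Returns the final concentration profile."""
    c = np.concatenate([np.full(n1, c0), np.zeros(n2)])
    D = np.concatenate([np.full(n1, D1), np.full(n2, D2)])
    dt = 0.4 * dx * dx / max(D1, D2)                # explicit stability limit
    for _ in range(int(t_end / dt)):
        J = D[:-1] * (c[:-1] - c[1:]) / dx          # cell-to-cell fluxes, left to right
        i = n1 - 1                                  # flux across the layer interface
        J[i] = 2.0 * D1 * D2 * (K * c[i] - c[i + 1]) / (dx * (D1 + K * D2))
        c[1:-1] += dt / dx * (J[:-1] - J[1:])
        c[0] -= dt / dx * J[0]
        c[-1] += dt / dx * J[-1]
    return c

profile = diffuse_two_layers()
print(profile[48:52].round(2))   # concentration jump of roughly a factor K at the interface

Run long enough, the profile approaches a uniform concentration within each layer with a ratio of K across the interface, which is a convenient check on the interface treatment.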
How to detect carbapenemase producers? A literature review of phenotypic and molecular methods.
Hammoudi, D; Moubareck, C Ayoub; Sarkis, D Karam
2014-12-01
This review describes the current state of the art of carbapenemase detection methods. Identification of carbapenemases is first based on conventional phenotypic tests, including antimicrobial susceptibility testing, the modified Hodge test and carbapenemase-inhibitor culture tests. Second, molecular characterization of carbapenemase genes by PCR sequencing is essential. Third, innovative biochemical and spectrometric detection may be applied. Copyright © 2014 Elsevier B.V. All rights reserved.
Development of the GPM Observatory Thermal Vacuum Test Model
NASA Technical Reports Server (NTRS)
Yang, Kan; Peabody, Hume
2012-01-01
A software-based thermal modeling process was documented for generating the thermal panel settings necessary to simulate worst-case on-orbit flight environments in an observatory-level thermal vacuum test setup. The method for creating such a thermal model involved four major steps: (1) determining the major thermal zones for test as indicated by the major dissipating components on the spacecraft, then mapping the major heat flows between these components; (2) finding the flight equivalent sink temperatures for these test thermal zones; (3) determining the thermal test ground support equipment (GSE) design and initial thermal panel settings based on the equivalent sink temperatures; and (4) adjusting the panel settings in the test model to match heat flows and temperatures with the flight model. The observatory test thermal model developed from this process allows quick predictions of the performance of the thermal vacuum test design. In this work, the method described above was applied to the Global Precipitation Measurement (GPM) core observatory spacecraft, a joint project between NASA and the Japan Aerospace Exploration Agency (JAXA) which is currently being integrated at NASA Goddard Space Flight Center for launch in early 2014. From preliminary results, the thermal test model generated from this process shows that the heat flows and temperatures match fairly well with the flight thermal model, indicating that the test model can simulate fairly accurately the conditions on-orbit. However, further analysis is needed to determine the best test configuration possible to validate the GPM thermal design before the start of environmental testing later this year. Also, while this analysis method has been applied solely to GPM, it should be emphasized that the same process can be applied to any mission to develop an effective test setup and panel settings which accurately simulate on-orbit thermal environments.
49 CFR Appendix D to Part 173 - Test Methods for Dynamite (Explosive, Blasting, Type A)
Code of Federal Regulations, 2011 CFR
2011-10-01
... weighed to determine the percent of weight loss. 3. Test method D-3—Compression Exudation Test The entire... from the glass tube and weighed to determine the percent of weight loss. EC02MR91.067 ... assembly is placed under the compression rod, and compression is applied by means of the weight on the...
HF Surface Wave Radar Tests at the Eastern China Sea
NASA Astrophysics Data System (ADS)
Wu, Xiong Bin; Cheng, Feng; Wu, Shi Cai; Yang, Zi Jie; Wen, Biyang; Shi, Zhen Hua; Tian, Jiansheng; Ke, Hengyu; Gao, Huotao
2005-01-01
The HF surface wave radar system OSMAR2000 adopts a frequency-modulated interrupted continuous waveform (FMICW), and its 120-m antenna array is used for both transmitting and receiving. MUSIC and MVM are applied to obtain the direction of arrival (DOA) of sea echoes when extracting current information. Verification tests of OSMAR2000 ocean surface dynamics detection against in-situ measurements were accomplished on Oct. 23-29, 2000. A ship detection test was carried out on Dec. 24, 2001. It shows that OSMAR2000 is capable of detecting 1000-ton ships with a wide beam out to 70 km. This paper first introduces the radar system and the applied DOA estimation methods, and then presents ship detection results and some sea state measurement results of surface currents and waves. The results indicate the validity of the developed radar system and the effectiveness of the applied signal processing methods.
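For reference, a generic narrowband MUSIC direction-of-arrival sketch for a uniform linear array is given below; it does not reproduce the OSMAR2000 array geometry, the FMICW processing chain or the MVM variant, and the signal scenario is simulated.

import numpy as np
from scipy.signal import find_peaks

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array.
    X: (n_antennas, n_snapshots) complex snapshots; d: spacing in wavelengths."""
    R = X @ X.conj().T / X.shape[1]                 # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = eigvecs[:, :-n_sources]                    # noise subspace
    n = X.shape[0]
    p = np.empty(angles.size)
    for i, theta in enumerate(np.deg2rad(angles)):
        a = np.exp(2j * np.pi * d * np.arange(n) * np.sin(theta))  # steering vector
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles, p

# Example: two plane waves at -20 and 35 degrees plus noise
rng = np.random.default_rng(3)
n_ant, n_snap = 8, 200
doas = np.deg2rad([-20, 35])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(n_ant), np.sin(doas)))
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
N = rng.standard_normal((n_ant, n_snap)) + 1j * rng.standard_normal((n_ant, n_snap))
X = A @ S + 0.1 * N
ang, p = music_spectrum(X, n_sources=2)
peaks, _ = find_peaks(p)
top = peaks[np.argsort(p[peaks])[-2:]]
print(np.sort(ang[top]))                            # approximately [-20, 35]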
Cylinder surface test with Chebyshev polynomial fitting method
NASA Astrophysics Data System (ADS)
Yu, Kui-bang; Guo, Pei-ji; Chen, Xi
2017-10-01
The Zernike polynomial fitting method is often applied in the testing of optical components and systems to represent the wavefront and surface error over a circular domain. Zernike polynomials are not orthogonal over a rectangular region, which makes them unsuitable for testing optical elements with rectangular apertures, such as cylinder surfaces. Applying Chebyshev polynomials, which are orthogonal over a rectangular area, as a substitute in the fitting method can solve this problem. For a cylinder surface with a diameter of 50 mm and an F-number of 1/7, a measuring system based on Fizeau interferometry has been designed in Zemax. The expressions of the two-dimensional Chebyshev polynomials are given and their relationship with the aberrations is presented. Furthermore, Chebyshev polynomials are used as basis terms to analyze the rectangular-aperture test data. The coefficients of the different terms are obtained from the test data through the method of least squares. Comparing the Chebyshev spectra under different misalignments shows that each misalignment is independent and has a definite relationship with certain Chebyshev terms. The simulation results show that the Chebyshev polynomial fitting method yields a great improvement in the efficiency of the detection and adjustment of the cylinder surface test.
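The least-squares fitting step can be sketched with NumPy's two-dimensional Chebyshev basis; this is only a generic rectangular-aperture fit on synthetic data, not the paper's Fizeau measurement reduction, and the surface terms and noise level are assumed.

import numpy as np
from numpy.polynomial import chebyshev as C

def fit_chebyshev_2d(x, y, z, deg=(4, 4)):
    """Least-squares fit of 2-D Chebyshev polynomials to surface data
    sampled on a rectangular aperture. x, y must be scaled to [-1, 1]."""
    V = C.chebvander2d(x.ravel(), y.ravel(), deg)        # design matrix
    coef, *_ = np.linalg.lstsq(V, z.ravel(), rcond=None)
    return coef.reshape(deg[0] + 1, deg[1] + 1)

# Synthetic rectangular-aperture surface with tilt and a second-order term
x, y = np.meshgrid(np.linspace(-1, 1, 80), np.linspace(-1, 1, 40), indexing='ij')
z = (0.2 * x + 0.1 * y + 0.05 * (2 * x**2 - 1)
     + 0.002 * np.random.default_rng(0).standard_normal(x.shape))
coef = fit_chebyshev_2d(x, y, z)
print(coef[1, 0], coef[0, 1], coef[2, 0])   # recovered tilt-x, tilt-y, T2(x) terms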
A new MUSIC electromagnetic imaging method with enhanced resolution for small inclusions
NASA Astrophysics Data System (ADS)
Zhong, Yu; Chen, Xudong
2008-11-01
This paper investigates the influence of test dipole on the resolution of the multiple signal classification (MUSIC) imaging method applied to the electromagnetic inverse scattering problem of determining the locations of a collection of small objects embedded in a known background medium. Based on the analysis of the induced electric dipoles in eigenstates, an algorithm is proposed to determine the test dipole that generates a pseudo-spectrum with enhanced resolution. The amplitudes in three directions of the optimal test dipole are not necessarily in phase, i.e., the optimal test dipole may not correspond to a physical direction in the real three-dimensional space. In addition, the proposed test-dipole-searching algorithm is able to deal with some special scenarios, due to the shapes and materials of objects, to which the standard MUSIC doesn't apply.
40 CFR 63.5460 - What definitions apply to this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
... combination of smaller leather pieces and leather fibers, which when joined together, form an integral..., thus, cannot withstand 5,000 Maeser Flexes with a Maeser Flex Testing Machine or a method approved by... Maeser Flex Testing Machine or a method approved by the Administrator prior to initial water penetration...
40 CFR 63.5460 - What definitions apply to this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... combination of smaller leather pieces and leather fibers, which when joined together, form an integral..., thus, cannot withstand 5,000 Maeser Flexes with a Maeser Flex Testing Machine or a method approved by... Maeser Flex Testing Machine or a method approved by the Administrator prior to initial water penetration...
40 CFR 63.5460 - What definitions apply to this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... combination of smaller leather pieces and leather fibers, which when joined together, form an integral..., thus, cannot withstand 5,000 Maeser Flexes with a Maeser Flex Testing Machine or a method approved by... Maeser Flex Testing Machine or a method approved by the Administrator prior to initial water penetration...
Method and apparatus for using magneto-acoustic remanence to determine embrittlement
NASA Technical Reports Server (NTRS)
Allison, Sidney G. (Inventor); Namkung, Min (Inventor); Yost, William T. (Inventor); Cantrell, John H. (Inventor)
1992-01-01
A method and apparatus for testing steel components for temperature embrittlement, using magneto-acoustic emission to nondestructively evaluate the component, are presented. Acoustic emission signals occur more frequently and at higher levels in embrittled components. A pair of electromagnets is used to create magnetic induction in the test component. Magneto-acoustic emission signals may be generated by applying an AC current to the electromagnets. The acoustic emission signals are analyzed to provide a comparison between a component known to be unembrittled and a test component. Magnetic remanence is determined by applying a DC current to the electromagnets and then turning the magnets off and observing the residual magnetic induction.
49 CFR 178.604 - Leakproofness test.
Code of Federal Regulations, 2012 CFR
2012-10-01
... packaging must be restrained under water while an internal air pressure is applied; the method of restraint... accordance with appendix B of this part. (e) Pressure applied. An internal air pressure (gauge) must be applied to the packaging as indicated for the following packing groups: (1) Packing Group I: Not less than...
49 CFR 178.604 - Leakproofness test.
Code of Federal Regulations, 2014 CFR
2014-10-01
... packaging must be restrained under water while an internal air pressure is applied; the method of restraint... accordance with appendix B of this part. (e) Pressure applied. An internal air pressure (gauge) must be applied to the packaging as indicated for the following packing groups: (1) Packing Group I: Not less than...
49 CFR 178.604 - Leakproofness test.
Code of Federal Regulations, 2011 CFR
2011-10-01
... packaging must be restrained under water while an internal air pressure is applied; the method of restraint... accordance with appendix B of this part. (e) Pressure applied. An internal air pressure (gauge) must be applied to the packaging as indicated for the following packing groups: (1) Packing Group I: Not less than...
NASA Astrophysics Data System (ADS)
Pacheco-Sanchez, Anibal; Claus, Martin; Mothes, Sven; Schröter, Michael
2016-11-01
Three different methods for the extraction of the contact resistance based on both the well-known transfer length method (TLM) and two variants of the Y-function method have been applied to simulation and experimental data of short- and long-channel CNTFETs. While for TLM special CNT test structures are mandatory, standard electrical device characteristics are sufficient for the Y-function methods. The methods have been applied to CNTFETs with low and high channel resistance. It turned out that the standard Y-function method fails to deliver the correct contact resistance in case of a relatively high channel resistance compared to the contact resistances. A physics-based validation is also given for the application of these methods based on applying traditional Si MOSFET theory to quasi-ballistic CNTFETs.
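The TLM extraction itself reduces to a linear regression of total resistance against channel length, with the intercept giving twice the contact resistance; the sketch below uses invented test-structure numbers, not the devices measured in the paper.

import numpy as np

def tlm_contact_resistance(lengths_um, resistances_ohm):
    """Transfer length method: linear fit of total resistance vs. channel
    length; the intercept at L = 0 is twice the contact resistance."""
    slope, intercept = np.polyfit(lengths_um, resistances_ohm, 1)
    r_contact = intercept / 2.0          # per contact
    r_channel_per_um = slope             # channel resistance per unit length
    return r_contact, r_channel_per_um

# Hypothetical CNTFET test-structure data
L = np.array([0.2, 0.5, 1.0, 2.0, 4.0])           # channel lengths (um)
R = np.array([36e3, 43e3, 55e3, 79e3, 127e3])      # total resistance (ohm)
rc, r_per_um = tlm_contact_resistance(L, R)
print(f"Rc ~ {rc/1e3:.1f} kOhm per contact, channel ~ {r_per_um/1e3:.1f} kOhm/um")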
The Effect of Coordinated Teaching Method Practices on Some Motor Skills of 6-Year-Old Children
ERIC Educational Resources Information Center
Altinkok, Mustafa
2017-01-01
Purpose: This study was designed to examine the effects of Coordinated Teaching Method activities applied for 10 weeks on 6-year-old children, and to examine the effects of these activities on the development of some motor skills in children. Research Methods: The "Experimental Research Model with Pre-test and Post-test Control Group"…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, D.; Fertitta, E.; Paulus, B.
Due to the importance of both static and dynamical correlation in the bond formation, low-dimensional beryllium systems constitute interesting case studies to test correlation methods. Aiming to describe the whole dissociation curve of extended Be systems we chose to apply the method of increments (MoI) in its multireference (MR) formalism. To gain insight into the main characteristics of the wave function, we started by focusing on the description of small Be chains using standard quantum chemical methods. In a next step we applied the MoI to larger beryllium systems, starting from the Be6 ring. The complete active space formalism was employed and the results were used as reference for local MR calculations of the whole dissociation curve. Although this is a well-established approach for systems with limited multireference character, its application regarding the description of whole dissociation curves requires further testing. Subsequent to the discussion of the role of the basis set, the method was finally applied to larger rings and extrapolated to an infinite chain.
Analysis of separation test for automatic brake adjuster based on linear radon transformation
NASA Astrophysics Data System (ADS)
Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi
2015-01-01
The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has strong anti-noise and anti-interference ability, obtained by fitting the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to determine the separating clearance of an automatic brake adjuster. The experimental results show that the feature point extraction error of the gradient maximum optimal method is approximately ±0.100, while the feature point extraction error of the linear Radon transformation method can reach ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.
Ice Shape Scaling for Aircraft in SLD Conditions
NASA Technical Reports Server (NTRS)
Anderson, David N.; Tsao, Jen-Ching
2008-01-01
This paper summarizes recent NASA research into scaling of SLD conditions with data from both SLD and Appendix C tests. Scaling results obtained by applying existing scaling methods for size and test-condition scaling are reviewed. Large feather growth issues, including scaling approaches, are discussed briefly. The material included applies only to unprotected, unswept geometries. Within the limits of the conditions tested to date, the results show that the similarity parameters needed for Appendix C scaling can also be used for SLD scaling, and no additional parameters are required. These results were based on visual comparisons of reference and scale ice shapes. Nearly all of the experimental results presented were obtained in sea-level tunnels. The currently recommended methods to scale model size, icing limit and test conditions are described.
International Standards on stability of digital prints
NASA Astrophysics Data System (ADS)
Adelstein, Peter Z.
2010-06-01
The International Organization for Standardization (ISO) is a worldwide recognized standardizing body with responsibility for standards on the permanence of digital prints. This paper is an update on the progress made to date by ISO in writing test methods in this area. Three technologies are involved, namely ink jet, dye diffusion thermal transfer (dye sublimation) and electrophotography. Two types of test methods are possible, namely comparative tests and predictive tests. To date, a comparative test on water fastness has been published and final balloting is underway on a comparative test on humidity fastness. Predictive tests are being finalized on thermal stability and pollution susceptibility. The test method on thermal stability is intended to predict the print life during normal aging. One testing concern is that some prints do not show significant image change within practical testing times. The test method on pollution susceptibility deals only with ozone and assumes that the reciprocity law applies. This law assumes that a long time under a low pollutant concentration is equivalent to a short time under the high concentration used in the test procedure. Longer-term studies include a predictive test for light stability and the preparation of a material specification. The latter requires a decision about the proper colour target to be used and what constitutes an unacceptable colour change. Moreover, a specification which gives a predictive life is very dependent upon the conditions the print encounters and will only apply to specific levels of temperature, ozone and light.
NASA Astrophysics Data System (ADS)
Cheema, Tabinda Shahid
This study of laboratory-based instruction at the higher secondary school level was an attempt to gain some insight into the effectiveness of three laboratory instruction methods (cooperative group instruction, individualised instruction and lecture demonstration) on biology achievement and retention. A randomised-subjects, pre-test/post-test comparative methods design was applied. Three groups of students from a year 11 class in Pakistan conducted experiments using the different laboratory instruction methods. Pre-tests, achievement tests after the experiments and retention tests one month later were administered. Results showed no significant difference between the groups on total achievement and retention, nor was there any significant difference on knowledge and comprehension test scores or skills performance. Future research investigating a similar problem is suggested.
Mirza, Tahseen; Liu, Qian Julie; Vivilecchia, Richard; Joshi, Yatindra
2009-03-01
There has been a growing interest during the past decade in the use of fiber optics dissolution testing. Use of this novel technology is mainly confined to research and development laboratories. It has not yet emerged as a tool for end product release testing despite its ability to generate in situ results and efficiency improvement. One potential reason may be the lack of clear validation guidelines that can be applied for the assessment of suitability of fiber optics. This article describes a comprehensive validation scheme and development of a reliable, robust, reproducible and cost-effective dissolution test using fiber optics technology. The test was successfully applied for characterizing the dissolution behavior of a 40-mg immediate-release tablet dosage form that is under development at Novartis Pharmaceuticals, East Hanover, New Jersey. The method was validated for the following parameters: linearity, precision, accuracy, specificity, and robustness. In particular, robustness was evaluated in terms of probe sampling depth and probe orientation. The in situ fiber optic method was found to be comparable to the existing manual sampling dissolution method. Finally, the fiber optic dissolution test was successfully performed by different operators on different days, to further enhance the validity of the method. The results demonstrate that the fiber optics technology can be successfully validated for end product dissolution/release testing. (c) 2008 Wiley-Liss, Inc. and the American Pharmacists Association
Illustrating Newton's Second Law with the Automobile Coast-Down Test.
ERIC Educational Resources Information Center
Bryan, Ronald A.; And Others
1988-01-01
Describes a run test of automobiles for applying Newton's second law of motion and the concept of power. Explains some automobile thought-experiments and provides the method and data of an actual coast-down test. (YP)
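A coast-down analysis along these lines can be sketched by fitting m dv/dt = -(c0 + c2 v^2) to the recorded speed history, separating a constant rolling-resistance term from a quadratic aerodynamic term; the vehicle mass and coefficients below are invented for illustration.

import numpy as np

def coast_down_fit(t, v, mass):
    """Fit m*dv/dt = -(c0 + c2*v^2) to coast-down data.
    Returns (c0, c2): constant (rolling) and quadratic (aero) drag terms."""
    dvdt = np.gradient(v, t)
    A = np.column_stack([np.ones_like(v), v ** 2])
    coef, *_ = np.linalg.lstsq(A, -mass * dvdt, rcond=None)
    return coef[0], coef[1]

# Synthetic coast-down: a 1200 kg car slowing from 30 m/s
m, c0_true, c2_true = 1200.0, 150.0, 0.40
t = np.linspace(0, 60, 121)
v = np.empty_like(t)
v[0] = 30.0
for k in range(1, t.size):                       # simple Euler integration
    a = -(c0_true + c2_true * v[k - 1] ** 2) / m
    v[k] = v[k - 1] + a * (t[k] - t[k - 1])
c0, c2 = coast_down_fit(t, v, m)
print(round(c0, 1), round(c2, 3))                # close to 150 and 0.40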
Improved wheal detection from skin prick test images
NASA Astrophysics Data System (ADS)
Bulan, Orhan
2014-03-01
The skin prick test is a commonly used method for the diagnosis of allergic diseases (e.g., pollen allergy, food allergy) in allergy clinics. The results of this test are erythema and a wheal provoked on the skin where the test is applied. The sensitivity of the patient to a specific allergen is determined by the physical size of the wheal, which can be estimated from images captured by digital cameras. Accurate wheal detection from these images is an important step for precise estimation of wheal size. In this paper, we propose a method for improved wheal detection on prick test images captured by digital cameras. Our method operates by first localizing the test region by detecting calibration marks drawn on the skin. The luminance variation across the localized region is eliminated by applying a color transformation from RGB to YCbCr and discarding the luminance channel. We enhance the contrast of the captured images for the purpose of wheal detection by performing principal component analysis on the blue-difference (Cb) and red-difference (Cr) color channels. We finally perform morphological operations on the contrast-enhanced image to detect the wheal on the image plane. Our experiments performed on images acquired from 36 different patients show the efficiency of the proposed method for wheal detection from skin prick test images captured in an uncontrolled environment.
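A rough sketch of the described pipeline (drop luminance after an RGB-to-YCbCr conversion, enhance contrast with PCA on the chroma channels, then threshold and clean up morphologically) is given below using OpenCV and scikit-learn; the calibration-mark localization and wheal-size calibration steps are omitted, the thresholding choice is an assumption, and the file name is hypothetical.

import cv2
import numpy as np
from sklearn.decomposition import PCA

def detect_wheal_mask(bgr_image):
    """Rough wheal segmentation: discard luminance, enhance contrast with PCA
    on the chroma channels, then threshold and clean up morphologically."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    cr, cb = ycrcb[:, :, 1], ycrcb[:, :, 2]          # keep chroma, drop Y (luminance)
    chroma = np.column_stack([cr.ravel(), cb.ravel()]).astype(np.float32)
    pc1 = PCA(n_components=1).fit_transform(chroma)  # direction of maximum chroma variance
    pc1 = pc1.reshape(cr.shape)
    pc1 = cv2.normalize(pc1, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(pc1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

# Usage (hypothetical file name):
# mask = detect_wheal_mask(cv2.imread("prick_test_photo.jpg"))
# wheal_area_px = int(np.count_nonzero(mask))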
Test method development for structural characterization of fiber composites at high temperatures
NASA Technical Reports Server (NTRS)
Mandell, J. F.; Grande, D. H.; Edwards, B.
1985-01-01
Test methods used for structural characterization of polymer matrix composites can be applied to glass and ceramic matrix composites only at low temperatures. New test methods are required for tensile, compressive, and shear properties of fiber composites at high temperatures. A tensile test which should be useful to at least 1000 C has been developed and used to characterize the properties of a Nicalon/glass composite up to the matrix limiting temperature of 600 C. Longitudinal and transverse unidirectional composite data are presented and discussed.
Optimal Test Design with Rule-Based Item Generation
ERIC Educational Resources Information Center
Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.
2013-01-01
Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed
2016-04-01
Three simple, specific, accurate and precise spectrophotometric methods were developed for the determination of cefprozil (CZ) in the presence of its alkaline-induced degradation product (DCZ). The first method was the bivariate method, while the two other multivariate methods were partial least squares (PLS) and spectral residual augmented classical least squares (SRACLS). The multivariate methods were applied with and without a variable selection procedure (genetic algorithm, GA). These methods were tested by analyzing laboratory-prepared mixtures of the drug with its alkaline-induced degradation product, and they were applied to its commercial pharmaceutical products.
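The PLS calibration step alone can be sketched as follows on synthetic spectra of two-component mixtures; the band shapes, concentration ranges and noise level are assumed, and the bivariate, SRACLS and genetic-algorithm variable selection procedures are not shown.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic calibration set: spectrum = cz*spec_cz + dcz*spec_dcz + noise
rng = np.random.default_rng(0)
wavelengths = np.linspace(220, 320, 101)
spec_cz = np.exp(-0.5 * ((wavelengths - 260) / 12) ** 2)   # assumed band shapes
spec_dcz = np.exp(-0.5 * ((wavelengths - 285) / 15) ** 2)
conc = rng.uniform(2, 20, size=(25, 2))                    # [CZ, DCZ] in ug/mL
X = conc @ np.vstack([spec_cz, spec_dcz]) + 0.002 * rng.standard_normal((25, 101))
y = conc[:, 0]                                             # quantify CZ only

pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
print(f"cross-validated RMSE: {np.sqrt(np.mean((y_cv - y) ** 2)):.3f} ug/mL")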
NASA Astrophysics Data System (ADS)
Ishizawa, Y.; Abe, K.; Shirako, G.; Takai, T.; Kato, H.
The electromagnetic compatibility (EMC) control method, system EMC analysis method, and system test method which have been applied to test the components of the MOS-1 satellite are described. The merits and demerits of the problem-solving, specification, and system approaches to EMC control are summarized, and the data requirements of the SEMCAP (specification and electromagnetic compatibility analysis program) computer program for verifying the EMI safety margin of the components are outlined. Examples of EMC design are mentioned, and the EMC design process and selection method for EMC critical points are shown along with sample EMC test results.
Kastle-Meyer blood test reagents are deleterious to DNA.
Sloots, James; Lalonde, Wendy; Reid, Barbara; Millman, Jonathan
2017-12-01
The Kastle-Meyer (KM) test is a quick and easy chemical test for blood used in forensic analyses. Two practical variations of this test are the KM-rub (indirect) test and the more sensitive KM-direct test, the latter of which is performed by applying reagents directly to a suspected blood stain. This study found that sodium hydroxide present in the KM reagents eliminated the potential to generate a DNA profile when applied directly to small quantities of blood. A modified approach to the KM-rub test that increases its sensitivity is presented as a method to replace destructive KM-direct testing. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
40 CFR 799.6784 - TSCA water solubility: Column elution method; shake flask method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... reaction quality should be used to apply the test substance to the carrier material. Double distilled water... this section. (i) With this apparatus, the microcolumn must be modified. A stopcock with 2-way action... particles invalidates the results, and the test should be repeated with improvements in the filtering action...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raskin, Cody; Owen, J. Michael, E-mail: raskin1@llnl.gov, E-mail: mikeowen@llnl.gov
2016-11-01
We discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.
Multiple Hypothesis Testing for Experimental Gingivitis Based on Wilcoxon Signed Rank Statistics
Preisser, John S.; Sen, Pranab K.; Offenbacher, Steven
2011-01-01
Dental research often involves repeated multivariate outcomes on a small number of subjects for which there is interest in identifying outcomes that exhibit change in their levels over time as well as to characterize the nature of that change. In particular, periodontal research often involves the analysis of molecular mediators of inflammation for which multivariate parametric methods are highly sensitive to outliers and deviations from Gaussian assumptions. In such settings, nonparametric methods may be favored over parametric ones. Additionally, there is a need for statistical methods that control an overall error rate for multiple hypothesis testing. We review univariate and multivariate nonparametric hypothesis tests and apply them to longitudinal data to assess changes over time in 31 biomarkers measured from the gingival crevicular fluid in 22 subjects whereby gingivitis was induced by temporarily withholding tooth brushing. To identify biomarkers that can be induced to change, multivariate Wilcoxon signed rank tests for a set of four summary measures based upon area under the curve are applied for each biomarker and compared to their univariate counterparts. Multiple hypothesis testing methods with choice of control of the false discovery rate or strong control of the family-wise error rate are examined. PMID:21984957
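The univariate counterpart of this analysis, a Wilcoxon signed-rank test per biomarker on a paired summary measure with Benjamini-Hochberg control of the false discovery rate, can be sketched as follows; the data are simulated and the multivariate signed-rank test over the four AUC-based summaries is not shown.

import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(5)
n_subjects, n_markers = 22, 31
# Hypothetical paired summaries (e.g., AUC at induction minus baseline);
# make a handful of biomarkers truly shift upward.
diffs = rng.normal(0, 1, size=(n_subjects, n_markers))
diffs[:, :6] += 1.2

p_values = np.array([wilcoxon(diffs[:, j]).pvalue for j in range(n_markers)])
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method='fdr_bh')
print("biomarkers flagged after FDR control:", np.flatnonzero(reject))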
Lotfy, Hayam M; Fayez, Yasmin M; Michael, Adel M; Nessim, Christine K
2016-02-15
Smart, sensitive, simple and accurate spectrophotometric methods were developed and validated for the quantitative determination of a binary mixture of mebeverine hydrochloride (MVH) and chlordiazepoxide (CDZ) without prior separation steps via different manipulating pathways. These pathways were applied either on zero order absorption spectra namely, absorbance subtraction (AS) or based on the recovered zero order absorption spectra via a decoding technique namely, derivative transformation (DT) or via ratio spectra namely, ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), spectrum subtraction (SS), constant multiplication (CM) and constant value (CV) methods. The manipulation steps applied on the ratio spectra are namely, ratio difference (RD) and amplitude modulation (AM) methods or applying a derivative to these ratio spectra namely, derivative ratio (DD(1)) or second derivative (D(2)). Finally, the pathway based on the ratio spectra of derivative spectra is namely, derivative subtraction (DS). The specificity of the developed methods was investigated by analyzing the laboratory mixtures and was successfully applied for their combined dosage form. The proposed methods were validated according to ICH guidelines. These methods exhibited linearity in the range of 2-28μg/mL for mebeverine hydrochloride and 1-12μg/mL for chlordiazepoxide. The obtained results were statistically compared with those of the official methods using Student t-test, F-test, and one way ANOVA, showing no significant difference with respect to accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Fayez, Yasmin M.; Michael, Adel M.; Nessim, Christine K.
2016-02-01
Smart, sensitive, simple and accurate spectrophotometric methods were developed and validated for the quantitative determination of a binary mixture of mebeverine hydrochloride (MVH) and chlordiazepoxide (CDZ) without prior separation steps via different manipulating pathways. These pathways were applied either on zero order absorption spectra namely, absorbance subtraction (AS) or based on the recovered zero order absorption spectra via a decoding technique namely, derivative transformation (DT) or via ratio spectra namely, ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), spectrum subtraction (SS), constant multiplication (CM) and constant value (CV) methods. The manipulation steps applied on the ratio spectra are namely, ratio difference (RD) and amplitude modulation (AM) methods or applying a derivative to these ratio spectra namely, derivative ratio (DD1) or second derivative (D2). Finally, the pathway based on the ratio spectra of derivative spectra is namely, derivative subtraction (DS). The specificity of the developed methods was investigated by analyzing the laboratory mixtures and was successfully applied for their combined dosage form. The proposed methods were validated according to ICH guidelines. These methods exhibited linearity in the range of 2-28 μg/mL for mebeverine hydrochloride and 1-12 μg/mL for chlordiazepoxide. The obtained results were statistically compared with those of the official methods using Student t-test, F-test, and one way ANOVA, showing no significant difference with respect to accuracy and precision.
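The ratio subtraction (RS) pathway named in the two records above lends itself to a small synthetic illustration: divide the mixture spectrum by a standard spectrum of the interfering component, read the constant in the plateau where the analyte does not absorb, subtract it, and multiply back. The sketch below is a minimal numpy rendering of that idea with invented Gaussian bands and concentrations, not the validated MVH/CDZ procedure.

```python
# Minimal sketch of the ratio subtraction (RS) idea for a binary mixture, using
# synthetic Gaussian "spectra". Bands, wavelengths and concentrations are invented
# for illustration; this is not the paper's validated MVH/CDZ method.
import numpy as np

wl = np.linspace(200, 400, 801)                    # wavelength grid (nm)

def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

X = band(260, 15)              # analyte of interest (unit-concentration spectrum)
Y = band(330, 25)              # interfering component (divisor standard Y')
mix = 0.7 * X + 0.4 * Y        # measured mixture spectrum

ratio = mix / Y                # = 0.7*X/Y' + 0.4 (constant where X does not absorb)
plateau = ratio[wl > 380].mean()        # read the constant in the plateau region
recovered_X = (ratio - plateau) * Y     # back to 0.7 * X(lambda)

print("estimated constant (expected ~0.4):", round(float(plateau), 3))
print("max abs error in recovered analyte spectrum:",
      round(float(np.abs(recovered_X - 0.7 * X).max()), 6))
```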
Proposed test method for and evaluation of wheelchair seating system (WCSS) crashworthiness.
van Roosmalen, L; Bertocci, G; Ha, D R; Karg, P; Szobota, S
2000-01-01
Safety of motor vehicle seats is of great importance in providing crash protection to the occupant. An increasing number of wheelchair users use their wheelchairs as motor vehicle seats when traveling. A voluntary standard requires that compliant wheelchairs be dynamically sled impact tested. However, testing to evaluate the crashworthiness of add-on wheelchair seating systems (WCSS) independent of their wheelchair frame is not addressed by this standard. To address this need, this study developed a method to evaluate the crash-worthiness of WCSS with independent frames. Federal Motor Vehicle Safety Standards (FMVSS) 207 test protocols, used to test the strength of motor vehicle seats, were modified and used to test the strength of three WCSS. Forward and rearward loads were applied at the WCSS center of gravity (CGSS), and a moment was applied at the uppermost point of the seat back. Each of the three tested WCSS met the strength requirements of FMVSS 207. Wheelchair seat-back stiffness was also investigated and compared to motor vehicle seat-back stiffness.
Attia, Khalid A M; Nassar, Mohammed W I; El-Zeiny, Mohamed B; Serag, Ahmed
2016-05-15
Three different spectrophotometric methods were applied for the quantitative analysis of flucloxacillin and amoxicillin in their binary mixture, namely, ratio subtraction, absorbance subtraction and amplitude modulation. A comparative study was done listing the advantages and the disadvantages of each method. All the methods were validated according to the ICH guidelines and the obtained accuracy, precision and repeatability were found to be within the acceptable limits. The selectivity of the proposed methods was tested using laboratory prepared mixtures and assessed by applying the standard addition technique. So, they can be used for the routine analysis of flucloxacillin and amoxicillin in their binary mixtures. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Song, Jun Hee; Kim, Hak Kun; Kim, Sam Yeon
2014-07-01
Laminated fiber-reinforced composites can be applied to an insulating structure of a nuclear fusion device. It is necessary to investigate the interlaminar fracture characteristics of the laminated composites for the assurance of design and structural integrity. The three methods used to prepare the glass fiber reinforced plastic composites tested in this study were vacuum pressure impregnation, high pressure laminate (HPL), and prepreg laminate. We discuss the design criteria for safe application of composites and the shear-compressive test methods for evaluating mechanical properties of the material. Shear-compressive tests could be performed successfully using series-type test jigs that were inclined 0°, 30°, 45°, 60°, and 75° to the normal axis. Shear strength depends strongly on the applied compressive stress. The design range of allowable shear stress was extended by use of the appropriate composite fabrication method. HPL had the largest design range, and the allowable interlaminar shear stress was 0.254 times the compressive stress.
Microwave and Millimeter Wave Imaging Using Synthetic Aperture Focusing and Holographical Techniques
NASA Technical Reports Server (NTRS)
Case, Joseph Tobias
2005-01-01
Microwave and millimeter wave nondestructive testing and evaluation (NDT&E) methods have shown great potential for determining material composition in composite structures, determining material thickness or debond thickness between two layers, and determining the location and size of flaws, defects, and anomalies. The same testing methods have also shown great potential to produce relatively high-resolution images of voids inside Spray On Foam Insulation (SOFI) test panels using real focused methods employing lens antennas. An alternative to real focusing methods is synthetic focusing. The essence of synthetic focusing is to match the phase of the scattered signal to measured points spaced regularly on a plane. Many variations of synthetic focusing methods have already been developed for radars, ultrasonic testing applications, and microwave concealed weapon detection. Two synthetic focusing methods were investigated; namely, a) frequency-domain synthetic aperture focusing technique (FDSAFT), and b) wide-band microwave holography. These methods were applied to materials whose defects have low dielectric contrast, such as air voids in SOFI. It is important to note that this investigation used relatively low frequencies, from 8.2 GHz to 26.5 GHz, which are not conducive to direct imaging of the SOFI. The ultimate goal of this work has been to demonstrate the capability of these methods before they are applied at much higher frequencies, such as the millimeter wave frequency spectrum (e.g., 30-300 GHz).
Experiences Using Lightweight Formal Methods for Requirements Modeling
NASA Technical Reports Server (NTRS)
Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David
1997-01-01
This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, formal methods enhanced the existing verification and validation processes, by testing key properties of the evolving requirements, and helping to identify weaknesses. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.
Proof test methodology for composites
NASA Technical Reports Server (NTRS)
Wu, Edward M.; Bell, David K.
1992-01-01
The special requirements for proof test of composites are identified based on the underlying failure process of composites. Two proof test methods are developed to eliminate the inevitable weak fiber sites without also causing flaw clustering which weakens the post-proof-test composite. Significant reliability enhancement by these proof test methods has been experimentally demonstrated for composite strength and composite life in tension. This basic proof test methodology is relevant to the certification and acceptance of critical composite structures. It can also be applied to the manufacturing process development to achieve zero-reject for very large composite structures.
Antenna reconfiguration verification and validation
NASA Technical Reports Server (NTRS)
Becker, Robert C. (Inventor); Meyers, David W. (Inventor); Muldoon, Kelly P. (Inventor); Carlson, Douglas R. (Inventor); Drexler, Jerome P. (Inventor)
2009-01-01
A method of testing the electrical functionality of an optically controlled switch in a reconfigurable antenna is provided. The method includes configuring one or more conductive paths between one or more feed points and one or more test points with switches in the reconfigurable antenna. Applying one or more test signals to the one or more feed points. Monitoring the one or more test points in response to the one or more test signals and determining the functionality of the switch based upon the monitoring of the one or more test points.
NASA Astrophysics Data System (ADS)
Abdellatef, Hisham E.
2007-04-01
Picric acid, bromocresol green, bromothymol blue, cobalt thiocyanate and molybdenum(V) thiocyanate have been tested as spectrophotometric reagents for the determination of disopyramide and irbesartan. Reaction conditions have been optimized to obtain coloured complexes of higher sensitivity and longer stability. The absorbance of the ion-pair complexes formed was found to increase linearly with increasing concentrations of disopyramide and irbesartan, as corroborated by the correlation coefficient values. The developed methods have been successfully applied for the determination of disopyramide and irbesartan in bulk drugs and pharmaceutical formulations. The common excipients and additives did not interfere in their determination. The results obtained by the proposed methods have been statistically compared by means of the Student t-test and the variance ratio F-test. The validity was assessed by applying the standard addition technique. The results were compared statistically with the official or reference methods, showing good agreement with high precision and accuracy.
NASA Astrophysics Data System (ADS)
Fischer, J.; Doolan, C.
2017-12-01
A method to improve the quality of acoustic beamforming in reverberant environments is proposed in this paper. The processing is based on a filtering of the cross-correlation matrix of the microphone signals obtained using a microphone array. The main advantage of the proposed method is that it does not require information about the geometry of the reverberant environment and thus it can be applied to any configuration. The method is applied to the particular example of aeroacoustic testing in a hard-walled low-speed wind tunnel; however, the technique can be used in any reverberant environment. Two test cases demonstrate the technique. The first uses a speaker placed in the hard-walled working section with no wind tunnel flow. In the second test case, an airfoil is placed in a flow and acoustic beamforming maps are obtained. The acoustic maps have been improved, as the reflections observed in the conventional maps have been removed after application of the proposed method.
The Use of Empirical Methods for Testing Granular Materials in Analogue Modelling
Montanari, Domenico; Agostini, Andrea; Bonini, Marco; Corti, Giacomo; Del Ventisette, Chiara
2017-01-01
The behaviour of a granular material is mainly dependent on its frictional properties, angle of internal friction, and cohesion, which, together with material density, are the key factors to be considered during the scaling procedure of analogue models. The frictional properties of a granular material are usually investigated by means of technical instruments such as a Hubbert-type apparatus and ring shear testers, which allow for investigating the response of the tested material to a wide range of applied stresses. Here we explore the possibility to determine material properties by means of different empirical methods applied to mixtures of quartz and K-feldspar sand. Empirical methods exhibit the great advantage of measuring the properties of a certain analogue material under the experimental conditions, which are strongly sensitive to the handling techniques. Finally, the results obtained from the empirical methods have been compared with ring shear tests carried out on the same materials, which show a satisfactory agreement with those determined empirically. PMID:28772993
NASA Astrophysics Data System (ADS)
Jin, Zhenyu; Lin, Jing; Liu, Zhong
2008-07-01
Based on a study of the classical testing techniques (such as the Shack-Hartmann wave-front sensor) adopted for testing the aberrations of ground-based astronomical optical telescopes, we put forward two testing methods founded on high-resolution image reconstruction technology. One is based on the averaged short-exposure OTF and the other on the speckle interferometric OTF of Antoine Labeyrie. Research by J. Ohtsubo, F. Roddier, Richard Barakat and J.-Y. Zhang indicated that the SITF statistics would be affected by the telescope optical aberrations, which means the SITF statistics are a function of the optical system aberration and the atmospheric Fried parameter (seeing). Telescope diffraction-limited information can be obtained through two statistical treatments of abundant speckle images: with the first method, we can extract low-frequency information such as the full width at half maximum (FWHM) of the telescope PSF to estimate the optical quality; with the second method, we can obtain a more precise description of the telescope PSF, including high-frequency information. We will apply the two testing methods to the 2.4 m optical telescope of the GMG Observatory in China to validate their repeatability and correctness, and compare the results with those obtained by the Shack-Hartmann wave-front sensor. This part will be described in detail in our paper.
Efficient forced vibration reanalysis method for rotating electric machines
NASA Astrophysics Data System (ADS)
Saito, Akira; Suzuki, Hiromitsu; Kuroishi, Masakatsu; Nakai, Hideo
2015-01-01
Rotating electric machines are subject to forced vibration by magnetic force excitation with a wide-band frequency spectrum that is dependent on the operating conditions. Therefore, when designing the electric machines, it is essential to compute the vibration response of the machines at various operating conditions efficiently and accurately. This paper presents an efficient frequency-domain vibration analysis method for the electric machines. The method enables the efficient re-analysis of the vibration response of electric machines at various operating conditions without the necessity to re-compute the harmonic response by finite element analyses. The theoretical background of the proposed method is provided, which is based on the modal reduction of the magnetic force excitation by a set of amplitude-modulated standing waves. The method is applied to the forced response vibration of the interior permanent magnet motor at a fixed operating condition. The results computed by the proposed method agree very well with those computed by the conventional harmonic response analysis by the FEA. The proposed method is then applied to the spin-up test condition to demonstrate its applicability to various operating conditions. It is observed that the proposed method can successfully be applied to the spin-up test conditions, and the measured dominant frequency peaks in the frequency response can be well captured by the proposed approach.
Methods for Scaling Icing Test Conditions
NASA Technical Reports Server (NTRS)
Anderson, David N.
1995-01-01
This report presents the results of tests at NASA Lewis to evaluate several methods to establish suitable alternative test conditions when the test facility limits the model size or operating conditions. The first method was proposed by Olsen. It can be applied when full-size models are tested and all the desired test conditions except liquid-water content can be obtained in the facility. The other two methods discussed are: a modification of the French scaling law and the AEDC scaling method. Icing tests were made with cylinders at both reference and scaled conditions representing mixed and glaze ice in the NASA Lewis Icing Research Tunnel. Reference and scale ice shapes were compared to evaluate each method. The Olsen method was tested with liquid-water content varying from 1.3 to 0.8 g/m³. Over this range, ice shapes produced using the Olsen method were unchanged. The modified French and AEDC methods produced scaled ice shapes which approximated the reference shapes when model size was reduced to half the reference size for the glaze-ice cases tested.
Step-control of electromechanical systems
Lewis, Robert N.
1979-01-01
The response of an automatic control system to a general input signal is improved by applying a test input signal, observing the response to the test input signal and determining correctional constants necessary to provide a modified input signal to be added to the input to the system. A method is disclosed for determining correctional constants. The modified input signal, when applied in conjunction with an operating signal, provides a total system output exhibiting an improved response. This method is applicable to open-loop or closed-loop control systems. The method is also applicable to unstable systems, thus allowing controlled shut-down before dangerous or destructive response is achieved and to systems whose characteristics vary with time, thus resulting in improved adaptive systems.
NASA Astrophysics Data System (ADS)
Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan
2016-12-01
This study investigated the multiple-choice test of understanding of vectors (TUV), by applying item response theory (IRT). The difficulty, discriminatory, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the parscale program. The TUV ability is an ability parameter, here estimated assuming unidimensionality and local independence. Moreover, all distractors of the TUV were analyzed from item response curves (IRC) that represent simplified IRT. Data were gathered on 2392 science and engineering freshmen, from three universities in Thailand. The results revealed IRT analysis to be useful in assessing the test since its item parameters are independent of the ability parameters. The IRT framework reveals item-level information, and indicates appropriate ability ranges for the test. Moreover, the IRC analysis can be used to assess the effectiveness of the test's distractors. Both IRT and IRC approaches reveal test characteristics beyond those revealed by the classical analysis methods of tests. Test developers can apply these methods to diagnose and evaluate the features of items at various ability levels of test takers.
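For reference, the three-parameter logistic model fitted in the study has a closed-form item response function; the sketch below evaluates it at a few ability levels using illustrative item parameters (the fitted TUV values are not reproduced, and the D = 1.7 scaling constant is the conventional choice assumed here).

```python
# Minimal sketch of the three-parameter logistic (3PL) item response function used
# in the abstract's IRT analysis. Item parameters are illustrative, not the fitted
# TUV values; the actual calibration done with the parscale program is not reproduced.
import numpy as np

def p_correct(theta, a, b, c, D=1.7):
    """3PL: P(theta) = c + (1 - c) / (1 + exp(-D * a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

theta = np.linspace(-3, 3, 7)          # ability levels of test takers
a, b, c = 1.2, 0.5, 0.20               # hypothetical discrimination, difficulty, guessing
for t, p in zip(theta, p_correct(theta, a, b, c)):
    print(f"theta = {t:+.1f}  P(correct) = {p:.3f}")
```

Plotting the proportion choosing each response option against ability, rather than only the keyed answer, gives the item response curves (IRC) the abstract uses to judge the distractors.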
NASA Astrophysics Data System (ADS)
Zhang, Yu; Pan, Peng; Gong, Runhua; Wang, Tao; Xue, Weichen
2017-10-01
An online hybrid test was carried out on a 40-story 120-m high concrete shear wall structure. The structure was divided into two substructures whereby a physical model of the bottom three stories was tested in the laboratory and the upper 37 stories were simulated numerically using ABAQUS. An overlapping domain method was employed for the bottom three stories to ensure the validity of the boundary conditions of the superstructure. Mixed control was adopted in the test. Displacement control was used to apply the horizontal displacement, while two controlled force actuators were applied to simulate the overturning moment, which is very large and cannot be ignored in the substructure hybrid test of high-rise buildings. A series of tests with earthquake sources of sequentially increasing intensities were carried out. The test results indicate that the proposed hybrid test method is a solution to reproduce the seismic response of high-rise concrete shear wall buildings. The seismic performance of the tested precast high-rise building satisfies the requirements of the Chinese seismic design code.
Link, Manuela; Schmid, Christina; Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Haug, Cornelia; Freckmann, Guido
2015-04-14
The standard ISO (International Organization for Standardization) 15197 is widely accepted for the accuracy evaluation of systems for self-monitoring of blood glucose (SMBG). Accuracy evaluation was performed for 4 SMBG systems (Accu-Chek Aviva, ContourXT, GlucoCheck XL, GlucoMen LX PLUS) with 3 test strip lots each. To investigate a possible impact of the comparison method on system accuracy data, 2 different established methods were used. The evaluation was performed in a standardized manner following test procedures described in ISO 15197:2003 (section 7.3). System accuracy was assessed by applying ISO 15197:2003 and in addition ISO 15197:2013 criteria (section 6.3.3). For each system, comparison measurements were performed with a glucose oxidase (YSI 2300 STAT Plus glucose analyzer) and a hexokinase (cobas c111) method. All 4 systems fulfilled the accuracy requirements of ISO 15197:2003 with the tested lots. The more stringent accuracy criteria of ISO 15197:2013 were fulfilled by 3 systems (Accu-Chek Aviva, ContourXT, GlucoMen LX PLUS) when compared to the manufacturer's comparison method and by 2 systems (Accu-Chek Aviva, ContourXT) when compared to the alternative comparison method. All systems showed lot-to-lot variability to a certain degree; 2 systems (Accu-Chek Aviva, ContourXT), however, showed only minimal differences in relative bias between the 3 evaluated lots. In this study, all 4 systems, with the evaluated test strip lots, complied with the accuracy criteria of ISO 15197:2003. Applying ISO 15197:2013 accuracy limits, differences in the accuracy of the tested systems were observed, also demonstrating that the applied comparison method/system and the lot-to-lot variability can have a decisive influence on the accuracy data obtained for an SMBG system. © 2015 Diabetes Technology Society.
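A hedged sketch of how paired meter/comparison readings can be screened against the ISO 15197:2013 system accuracy limits as they are commonly quoted (95% of results within ±15 mg/dL of the comparison result below 100 mg/dL, or within ±15% at or above 100 mg/dL); the thresholds reflect my reading of the standard and the data are simulated, so this is an illustration rather than the study's evaluation protocol.

```python
# Hedged sketch: checking simulated SMBG readings against the system accuracy limits
# commonly quoted for ISO 15197:2013 (95% of results within +/-15 mg/dL below
# 100 mg/dL, or within +/-15% at >=100 mg/dL). Thresholds are my reading of the
# standard; the paired data below are simulated, not the study's measurements.
import numpy as np

def within_iso_2013_limits(meter, reference):
    ref = np.asarray(reference, dtype=float)
    dev = np.asarray(meter, dtype=float) - ref
    return np.where(ref < 100.0, np.abs(dev) <= 15.0, np.abs(dev) <= 0.15 * ref)

rng = np.random.default_rng(1)
reference = rng.uniform(40, 400, size=200)            # comparison-method values (mg/dL)
meter = reference * (1 + rng.normal(0, 0.05, 200))    # simulated meter readings

ok = within_iso_2013_limits(meter, reference)
print(f"{ok.mean():.1%} of results within limits (criterion: at least 95%)")
```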
Valeriani, Federica; Agodi, Antonella; Casini, Beatrice; Cristina, Maria Luisa; D'Errico, Marcello Mario; Gianfranceschi, Gianluca; Liguori, Giorgio; Liguori, Renato; Mucci, Nicolina; Mura, Ida; Pasquarella, Cesira; Piana, Andrea; Sotgiu, Giovanni; Privitera, Gaetano; Protano, Carmela; Quattrocchi, Annalisa; Ripabelli, Giancarlo; Rossini, Angelo; Spagnolo, Anna Maria; Tamburro, Manuela; Tardivo, Stefano; Veronesi, Licia; Vitali, Matteo; Romano Spica, Vincenzo
2018-02-01
Reprocessing of endoscopes is key to preventing cross-infection after colonoscopy. Culture-based methods are recommended for monitoring, but alternative and rapid approaches are needed to improve surveillance and reduce turnover times. A molecular strategy based on detection of residual traces from the gut microbiota was developed and tested using a multicenter survey. A simplified sampling and DNA extraction protocol using nylon-tipped flocked swabs was optimized. A multiplex real-time polymerase chain reaction (PCR) test was developed that targeted 6 bacterial genes amplified in 3 mixes. The method was validated by interlaboratory tests involving 5 reference laboratories. Colonoscopy devices (n = 111) were sampled in 10 Italian hospitals. Culture-based microbiology and metagenomic tests were performed to verify the PCR data. The sampling method was easily applied in all 10 endoscopy units and the optimized DNA extraction and amplification protocol was successfully performed by all of the involved laboratories. This PCR-based method allowed identification of both contaminated (n = 59) and fully reprocessed endoscopes (n = 52) with high sensitivity (98%) and specificity (98%), within 3-4 hours, in contrast to the 24-72 hours needed for a classic microbiology test. Results were confirmed by next-generation sequencing and classic microbiology. A novel approach for monitoring the reprocessing of colonoscopy devices was developed and successfully applied in a multicenter survey. The general principle of tracing biological fluids through microflora DNA amplification was successfully applied and may represent a promising approach for hospital hygiene. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Comparison of Pressures Applied by Digital Tourniquets in the Emergency Department
Lahham, Shadi; Tu, Khoa; Ni, Mickey; Tran, Viet; Lotfipour, Shahram; Anderson, Craig L.; Fox, J Christian
2011-01-01
Background: Digital tourniquets used in the emergency department have been scrutinized due to complications associated with their use, including neurovascular injury secondary to excessive tourniquet pressure and digital ischemia caused by a forgotten tourniquet. To minimize these risks, a conspicuous tourniquet that applies the least amount of pressure necessary to maintain hemostasis is recommended. Objective: To evaluate the commonly used tourniquet methods, the Penrose drain, rolled glove, the Tourni-cot and the T-Ring, to determine which applies the lowest pressure while consistently preventing digital perfusion. Methods: We measured the circumference of selected digits of 200 adult males and 200 adult females to determine the adult finger size range. We then measured the pressure applied to four representative finger sizes using a pressure monitor and assessed the ability of each method to prevent digital blood flow with a pulse oximeter. Results: We selected four representative finger sizes: 45mm, 65mm, 70mm, and 85mm to test the different tourniquet methods. All methods consistently prevented digital perfusion. The highest pressure recorded for the Penrose drain was 727 mmHg, the clamped rolled glove 439, the unclamped rolled glove 267, Tourni-cot 246, while the T-Ring had the lowest at 151 mmHg and least variable pressures of all methods. Conclusion: All tested methods provided adequate hemostasis. Only the Tourni-cot and T-Ring provided hemostasis at safe pressures across all digit sizes with the T-Ring having a lower overall average pressure. PMID:21691536
Using a Linear Regression Method to Detect Outliers in IRT Common Item Equating
ERIC Educational Resources Information Center
He, Yong; Cui, Zhongmin; Fang, Yu; Chen, Hanwei
2013-01-01
Common test items play an important role in equating alternate test forms under the common item nonequivalent groups design. When the item response theory (IRT) method is applied in equating, inconsistent item parameter estimates among common items can lead to large bias in equated scores. It is prudent to evaluate inconsistency in parameter…
The Effect of Montessori Method on Cognitive Tempo of Kindergarten Children
ERIC Educational Resources Information Center
Kayili, Gökhan
2018-01-01
This study was undertaken to discover the effect of the Montessori Method on the cognitive tempo of 4-5-year-old children. Using an experimental pre-test-post-test paired control group design, the study sample included 60 children attending Ihsan Dogramaci Applied Nursery School (affiliated to Selcuk University, Department of Health Sciences) in…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@riken.jp
We improve the ellipticity of re-smeared artificial image (ERA) method of point-spread function (PSF) correction in a weak lensing shear analysis in order to treat the realistic shape of galaxies and the PSF. This is done by re-smearing the PSF and the observed galaxy image using a re-smearing function (RSF) and allows us to use a new PSF with a simple shape and to correct the PSF effect without any approximations or assumptions. We perform a numerical test to show that the method applied for galaxies and PSF with some complicated shapes can correct the PSF effect with a systematic error of less than 0.1%. We also apply the ERA method for real data of the Abell 1689 cluster to confirm that it is able to detect the systematic weak lensing shear pattern. The ERA method requires less than 0.1 or 1 s to correct the PSF for each object in a numerical test and a real data analysis, respectively.
46 CFR 160.062-4 - Inspections and tests.
Code of Federal Regulations, 2010 CFR
2010-10-01
... approved hydraulic releases are manufactured or reconditioned to observe production methods and to conduct... place of manufacture by the marine inspector. (b) Classification of tests. The sampling, inspections... shall be tested by applying buoyant loads of its designed capacity to its spring-tensioned gripe as...
40 CFR 796.2750 - Sediment and soil adsorption isotherm.
Code of Federal Regulations, 2014 CFR
2014-07-01
... are highly reproducible. The test provides excellent quantitative data readily amenable to statistical... combination of methods suitable for the identification and quantitative detection of the parent test chemical... quantitative analysis of the parent chemical. (3) Amount of parent test chemical applied, the amount recovered...
40 CFR 796.2750 - Sediment and soil adsorption isotherm.
Code of Federal Regulations, 2013 CFR
2013-07-01
... highly reproducible. The test provides excellent quantitative data readily amenable to statistical... combination of methods suitable for the identification and quantitative detection of the parent test chemical... quantitative analysis of the parent chemical. (3) Amount of parent test chemical applied, the amount recovered...
40 CFR 796.2750 - Sediment and soil adsorption isotherm.
Code of Federal Regulations, 2012 CFR
2012-07-01
... highly reproducible. The test provides excellent quantitative data readily amenable to statistical... combination of methods suitable for the identification and quantitative detection of the parent test chemical... quantitative analysis of the parent chemical. (3) Amount of parent test chemical applied, the amount recovered...
Comparing the field and laboratory emission cell (FLEC) with traditional emissions testing chambers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roache, N.F.; Guo, Z.; Fortmann, R.
1996-12-31
A series of tests was designed to evaluate the performance of the field and laboratory emission cell (FLEC) as applied to the testing of emissions from two indoor coating materials, floor wax and latex paint. These tests included validation of the repeatability of the test method, evaluation of the effect of different air velocities on source emissions, and a comparison of FLEC versus small chamber characterization of emissions. The FLEC exhibited good repeatability in characterization of emissions when applied to both sources under identical conditions. Tests with different air velocities showed significant effects on the emissions from latex paint, yet little effect on emissions from the floor wax. Comparisons of data from the FLEC and small chamber show good correlation for measurements involving floor wax, but less favorable results for emissions from latex paint. The procedures and findings are discussed; conclusions are limited and include emphasis on the need for additional study and development of a standard method.
NASA Astrophysics Data System (ADS)
Krstulović-Opara, Lovre; Surjak, Martin; Vesenjak, Matej; Tonković, Zdenko; Kodvanj, Janoš; Domazet, Željko
2015-11-01
To investigate the applicability of infrared thermography as a tool for acquiring dynamic yielding in metals, a comparison of infrared thermography with three dimensional digital image correlation has been made. Dynamic tension tests and three point bending tests of aluminum alloys have been performed to evaluate the results obtained by IR thermography and to identify the capabilities and limits of the two methods. Both approaches detect plastification zone migrations during the yielding process. The results of the tension test and three point bending test proved the validity of the IR approach as a method for evaluating the dynamic yielding process when used on complex structures such as cellular porous materials. The stability of the yielding process in the three point bending test, in contrast to the fluctuation of the plastification front in the tension test, is of great importance for the validation of numerical constitutive models. The research proved the strong performance, robustness and reliability of the IR approach when used to evaluate yielding during dynamic loading processes, while the 3D DIC method proved to be superior in the low velocity loading regimes. This research, based on two basic tests, confirmed the conclusions and suggestions presented in our previous research on porous materials where middle wave infrared thermography was applied.
Berthels, Nele; Matthijs, Gert; Van Overwalle, Geertrui
2011-01-01
Recent reports in Europe and the United States raise concern about the potential negative impact of gene patents on the freedom to operate of diagnosticians and on the access of patients to genetic diagnostic services. Patents, historically seen as legal instruments to trigger innovation, could cause undesired side effects in the public health domain. Clear empirical evidence on the alleged hindering effect of gene patents is still scarce. We therefore developed a patent categorization method to determine which gene patents could indeed be problematic. The method is applied to patents relevant for genetic testing of spinocerebellar ataxia (SCA). The SCA test is probably the most widely used DNA test in (adult) neurology, as well as one of the most challenging due to the heterogeneity of the disease. Typically tested as a gene panel covering the five common SCA subtypes, we show that the patenting of SCA genes and testing methods and the associated licensing conditions could have far-reaching consequences on legitimate access to this gene panel. Moreover, with genetic testing being increasingly standardized, simply ignoring patents is unlikely to hold out indefinitely. This paper aims to differentiate among so-called ‘gene patents' by lifting out the truly problematic ones. In doing so, awareness is raised among all stakeholders in the genetic diagnostics field who are not necessarily familiar with the ins and outs of patenting and licensing. PMID:21811306
NASA Astrophysics Data System (ADS)
Secmen, Mustafa
2011-10-01
This paper presents the performance of an electromagnetic target recognition method in the resonance scattering region, which combines the pseudo-spectrum Multiple Signal Classification (MUSIC) algorithm with the principal component analysis (PCA) technique. The aim of this method is to classify an "unknown" target as one of the "known" targets in an aspect-independent manner. The suggested method initially collects the late-time portion of noise-free time-scattered signals obtained from different reference aspect angles of known targets. Afterward, these signals are used to obtain MUSIC spectra in the real frequency domain, which have super-resolution ability and are resistant to noise. In the final step, the PCA technique is applied to these spectra in order to reduce dimensionality and obtain only one feature vector per known target. In the decision stage, a noise-free or noisy scattered signal of an unknown (test) target from an unknown aspect angle is initially obtained. Subsequently, the MUSIC algorithm is applied to this test signal and the resulting test vector is compared with the feature vectors of the known targets one by one. Finally, the highest correlation gives the type of the test target. The method is applied to wire models of airplane targets, and it is shown that it can tolerate considerable noise levels even though it uses only a few reference aspect angles. Moreover, the runtime of the method for a test target is sufficiently low, which makes the method suitable for real-time applications.
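The processing chain described above (late-time signals, MUSIC pseudo-spectra, one PCA-style feature vector per known target, correlation-based decision) can be sketched compactly; everything below, from the snapshot covariance construction and subspace dimension to the toy damped-resonance "targets", is an assumption made for illustration and is not the authors' implementation.

```python
# Hedged sketch of MUSIC pseudo-spectra plus PCA-style feature extraction and a
# correlation decision, mirroring the pipeline described in the abstract. The signal
# model, array sizes, subspace dimension and toy "targets" are all illustrative.
import numpy as np

def music_spectrum(x, n_sig=6, m=40, freqs=np.linspace(0.01, 0.49, 200)):
    """MUSIC pseudo-spectrum of a real 1-D signal via overlapping length-m snapshots."""
    snaps = np.array([x[i:i + m] for i in range(len(x) - m)])
    R = snaps.T @ snaps / len(snaps)            # sample covariance (m x m)
    _, eigvec = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = eigvec[:, : m - n_sig]                 # noise subspace
    k = np.arange(m)
    P = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * k)          # steering vector at normalized frequency f
        P.append(1.0 / np.real(np.conj(a) @ En @ En.T @ a))
    return np.array(P)

def feature_vector(aspect_signals):
    """Dominant singular vector of the MUSIC spectra from several reference aspects."""
    S = np.array([music_spectrum(s) for s in aspect_signals])
    _, _, vt = np.linalg.svd(S, full_matrices=False)
    return vt[0]

# --- toy usage: two "known" targets, each modeled as a pair of damped resonances ---
rng = np.random.default_rng(2)
t = np.arange(256)

def late_time_echo(f1, f2, phase):
    return (np.exp(-0.005 * t) * np.cos(2 * np.pi * f1 * t + phase)
            + np.exp(-0.008 * t) * np.cos(2 * np.pi * f2 * t)
            + 0.05 * rng.standard_normal(t.size))

known = {"target_A": [late_time_echo(0.12, 0.21, p) for p in (0.0, 0.5, 1.0)],
         "target_B": [late_time_echo(0.15, 0.30, p) for p in (0.0, 0.5, 1.0)]}
features = {name: feature_vector(sigs) for name, sigs in known.items()}

test_spectrum = music_spectrum(late_time_echo(0.15, 0.30, 0.8))   # unknown aspect of B
scores = {name: abs(np.corrcoef(test_spectrum, f)[0, 1]) for name, f in features.items()}
print("decision:", max(scores, key=scores.get), {k: round(v, 3) for k, v in scores.items()})
```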
40 CFR 60.547 - Test methods and procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...
40 CFR 60.547 - Test methods and procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...
40 CFR 60.547 - Test methods and procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...
Neuro-parity pattern recognition system and method
Gross, Kenneth C.; Singer, Ralph M.; Van Alstine, Rollin G.; Wegerich, Stephan W.; Yue, Yong
2000-01-01
A method and system for monitoring a process and determining its condition. Initial data is sensed, a first set of virtual data is produced by applying a system state analyzation to the initial data, a second set of virtual data is produced by applying a neural network analyzation to the initial data and a parity space analyzation is applied to the first and second set of virtual data and also to the initial data to provide a parity space decision about the condition of the process. A logic test can further be applied to produce a further system decision about the state of the process.
Accelerated Testing Methodology Developed for Determining the Slow Crack Growth of Advanced Ceramics
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Gyekenyesi, John P.
1998-01-01
Constant stress-rate ("dynamic fatigue") testing has been used for several decades to characterize the slow crack growth behavior of glass and structural ceramics at both ambient and elevated temperatures. The advantage of such testing over other methods lies in its simplicity: strengths are measured in a routine manner at four or more stress rates by applying a constant displacement or loading rate. The slow crack growth parameters required for component design can be estimated from a relationship between strength and stress rate. With the proper use of preloading in constant stress-rate testing, test time can be reduced appreciably. If a preload corresponding to 50 percent of the strength is applied to the specimen prior to testing, 50 percent of the test time can be saved as long as the applied preload does not change the strength. In fact, it has been a common, empirical practice in the strength testing of ceramics or optical fibers to apply some preloading (<40 percent). The purpose of this work at the NASA Lewis Research Center is to study the effect of preloading on measured strength in order to add a theoretical foundation to the empirical practice.
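The 50-percent figure quoted above follows directly from the linear stress ramp used in constant stress-rate testing; the short derivation below is a back-of-the-envelope restatement (not the NASA analysis), assuming the preload is applied essentially instantaneously and, as stated, does not alter the measured strength.

```latex
% Constant stress rate: stress grows linearly, so the time to failure is
\sigma(t) = \dot{\sigma}\, t
\quad\Longrightarrow\quad
t_f = \frac{\sigma_f}{\dot{\sigma}},
\qquad
t_{\mathrm{test}} = \frac{\sigma_f - \sigma_p}{\dot{\sigma}} = (1 - \alpha)\, t_f
\quad \text{for a preload } \sigma_p = \alpha\, \sigma_f .
```

With α = 0.5, half of the ramp time disappears, which is the saving quoted for a 50 percent preload.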
Kulikov, A M; Lazebnyĭ, O E; Chekunova, A I; Mitrofanov, V G
2010-01-01
The steadiness of the molecular clock was estimated in 11 Drosophila species of the virilis group from the sequences of five genes by applying Tajima's Simple Method. The main characteristic of this method is that it is independent of phylogenetic reconstruction. The results fully confirmed the conclusions drawn from the application of the two-cluster test and the Takezaki branch-length test. In addition, the deviation from the molecular clock was confirmed in the D. virilis evolutionary lineages.
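Tajima's simple method reduces, in its one-degree-of-freedom relative-rate form, to a chi-square comparison of lineage-specific site counts against an outgroup; the sketch below encodes that form as I understand it, with invented toy sequences, and is not the authors' analysis of the virilis-group data.

```python
# Hedged sketch of the one-degree-of-freedom form of Tajima's relative-rate test:
# with ingroup sequences A and B and outgroup C, count sites where only A (or only B)
# differs and compare the two counts with a chi-square statistic. The sequences are
# invented; this is not the published virilis-group analysis.
from scipy.stats import chi2

def tajima_1d(seq_a, seq_b, seq_out):
    m_a = sum(a != b and b == c for a, b, c in zip(seq_a, seq_b, seq_out))  # change on A lineage
    m_b = sum(a != b and a == c for a, b, c in zip(seq_a, seq_b, seq_out))  # change on B lineage
    stat = (m_a - m_b) ** 2 / (m_a + m_b) if (m_a + m_b) else 0.0
    return stat, chi2.sf(stat, df=1), m_a, m_b

a   = "ACGTTACGGATCCGATACGTTAGC"
b   = "ACGTAACGGATCCGTTACGTAAGC"
out = "ACGTAACGGATCCGATACGTAAGC"
stat, pval, m_a, m_b = tajima_1d(a, b, out)
print(f"lineage-specific changes: A={m_a}, B={m_b}; chi2={stat:.2f}, p={pval:.3f}")
```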
Wootton, Roy E.
1979-01-01
A method of testing a gas insulated system for the presence of conducting particles. The method includes inserting a gaseous mixture comprising about 98 volume percent nitrogen and about 2 volume percent sulfur hexafluoride into the gas insulated system at a pressure greater than 60 lb./sq. in. gauge, and then applying a test voltage to the system. If particles are present within the system, the gaseous mixture will break down, providing an indicator of the presence of the particles.
ERIC Educational Resources Information Center
Dodd, Carol Ann
This study explores a technique for evaluating teacher education programs in terms of teaching competencies, as applied to the Indiana University Mathematics Methods Program (MMP). The evaluation procedures formulated for the study include a process product design in combination with a modification of Popham's performance test paradigm and Gage's…
Watertight cataract incision closure using fibrin tissue adhesive.
Hovanesian, John A; Karageozian, Vicken H
2007-08-01
To determine whether a simple method for applying fibrin tissue adhesive to a clear corneal cataract incision can create a watertight seal. Laboratory investigation. Clear corneal cataract incisions were simulated in 8 eye-bank eyes. In 4 eyes, fibrin adhesive was applied to the incision in a simple manner; the other 4 eyes were controls with no adhesive. Each eye was tested under low pressure conditions to detect fluid ingress of India Ink on the eye's surface. The eyes were tested again with external compression to distort the incision to detect fluid egress. In the eyes with fibrin adhesive, there was no egress of fluid with incision distortion and no ingress of India Ink. In the 4 eyes without adhesive, there was ingress and egress of fluid. A simple method of applying fibrin adhesive to cataract incisions created a watertight seal.
NASA Astrophysics Data System (ADS)
Minatour, Yasser; Bonakdari, Hossein; Zarghami, Mahdi; Bakhshi, Maryam Ali
2015-09-01
The purpose of this study was to develop a group fuzzy multi-criteria decision-making method to be applied in rating problems associated with water resources management. Here, Chen's group fuzzy TOPSIS method is extended by a difference technique to handle the uncertainties of applying a group decision making. The extended group fuzzy TOPSIS method is then combined with a consistency check. In the presented method, linguistic judgments are first screened via a consistency-checking process, and these judgments are then used in the extended Chen's fuzzy TOPSIS method. Each expert's opinion is converted into precise mathematical numbers and, to incorporate uncertainties, the group opinions are converted into fuzzy numbers using three mathematical operators. The proposed method is applied to select the optimal strategy for the rural water supply of Nohoor village in north-eastern Iran, as a case study and illustrative example. Sensitivity analyses of the results and a comparison of the results with the project reality showed that the proposed method offers good results for water resources projects.
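As background for the fuzzy group extension described above, the sketch below implements classical crisp TOPSIS (vector normalization, weighting, ideal and anti-ideal points, closeness coefficient). The decision matrix, weights and criterion directions are invented for illustration and are not the Nohoor case-study data; the fuzzy and consistency-checking layers of the proposed method are not reproduced.

```python
# Minimal sketch of classical (crisp) TOPSIS ranking, the baseline that the group
# fuzzy extension in the abstract builds on. Alternatives, weights and criterion
# directions are illustrative only.
import numpy as np

def topsis(matrix, weights, benefit):
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    V = (X / np.linalg.norm(X, axis=0)) * w       # weighted, vector-normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus  = np.linalg.norm(V - ideal, axis=1)   # distance to the ideal solution
    d_minus = np.linalg.norm(V - anti,  axis=1)   # distance to the anti-ideal solution
    return d_minus / (d_plus + d_minus)           # closeness coefficient (higher is better)

# four hypothetical water-supply strategies rated on cost, reliability and water quality
matrix  = [[250, 7, 8], [180, 5, 6], [300, 9, 7], [220, 6, 9]]
weights = [0.5, 0.3, 0.2]
benefit = [False, True, True]                     # cost is to be minimized
scores = topsis(matrix, weights, benefit)
print("closeness coefficients:", np.round(scores, 3),
      "-> best strategy index:", int(np.argmax(scores)))
```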
Drawbar Pull (DP) Procedures for Off-Road Vehicle Testing
NASA Technical Reports Server (NTRS)
Creager, Colin; Asnani, Vivake; Oravec, Heather; Woodward, Adam
2017-01-01
As NASA strives to explore the surface of the Moon and Mars, there is a continued need for improved tire and vehicle development. When tires or vehicles are being designed for off-road conditions where significant thrust generation is required, such as climbing out of craters on the Moon, it is important to use a standard test method for evaluating their tractive performance. The drawbar pull (DP) test is a way of measuring the net thrust generated by tires or a vehicle with respect to performance metrics such as travel reduction, sinkage, or power efficiency. DP testing may be done using a single tire on a traction rig, or with a set of tires on a vehicle; this report focuses on vehicle DP tests. Though vehicle DP tests have been used for decades, there are no standard procedures that apply to exploration vehicles. This report summarizes previous methods employed, shows the sensitivity of certain test parameters, and provides a body of knowledge for developing standard testing procedures. The focus of this work is on lunar applications, but these test methods can be applied to terrestrial and planetary conditions as well. Section 1.0 of this report discusses the utility of DP testing for off-road vehicle evaluation and the metrics used. Section 2.0 focuses on test-terrain preparation, using the example case of lunar terrain. There is a review of lunar terrain analogs implemented in the past and a discussion on the lunar terrain conditions created at the NASA Glenn Research Center, including methods of evaluating the terrain strength variation and consistency from test to test. Section 3.0 provides details of the vehicle test procedures. These consist of a review of past methods, a comprehensive study on the sensitivity of test parameters, and a summary of the procedures used for DP testing at Glenn.
Effects of Multiple Intelligences Activities on Writing Skill Development in an EFL Context
ERIC Educational Resources Information Center
Gündüz, Zennure Elgün; Ünal, Ismail Dogan
2016-01-01
This study aims at exploring the effects of multiple intelligences activities versus traditional method on English writing development of the sixth grade students in Turkey. A quasi-experimental research method with a pre-test post-test design was applied. The participants were 50 sixth grade students at a state school in Ardahan in Turkey. The…
ERIC Educational Resources Information Center
Liu, Boquan; Polce, Evan; Sprott, Julien C.; Jiang, Jack J.
2018-01-01
Purpose: The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Study Design: Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100…
Choi, Bongkyoo; Ko, Sangbaek; Ostergren, Per-Olof
2015-01-01
This study aims to test the validity of the IPD-Work Consortium approach for creating comparable job strain groups between the Job Content Questionnaire (JCQ) and the Demand-Control Questionnaire (DCQ). A random population sample (N = 682) of all middle-aged Malmö males and females was given a questionnaire with the 14-item JCQ and 11-item DCQ for the job control and job demands. The JCQ job control and job demands scores were calculated in 3 different ways: using the 14-item JCQ standard scale formulas (method 1); dropping 3 job control items and using the 11-item JCQ standard scale formulas with additional scale weights (method 2); and the approach of the IPD Group (method 3), dropping 3 job control items, but using the simple 11-item summation-based scale formulas. The high job strain was defined as a combination of high demands and low control. Between the 2 questionnaires, false negatives for the high job strain were much greater than false positives (37-49% vs. 7-13%). When the method 3 was applied, the sensitivity of the JCQ for the high job strain against the DCQ was lowest (0.51 vs. 0.60-0.63 when the methods 1 and 2 were applied), although the specificity was highest (0.93 vs. 0.87-0.89 when the methods 1 and 2 were applied). The prevalence of the high job strain with the JCQ (the method 3 was applied) was considerably lower (4-7%) than with the JCQ (the methods 1 and 2 were applied) and the DCQ. The number of congruent cases for the high job strain between the 2 questionnaires was smallest when the method 3 was applied. The IPD-Work Consortium approach showed 2 major weaknesses to be used for epidemiological studies on the high job strain and health outcomes as compared to the standard JCQ methods: the greater misclassification of the high job strain and lower prevalence of the high job strain. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
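The quadrant logic behind "high job strain" (high demands combined with low control) can be illustrated with a median-split sketch; the published JCQ and DCQ scale formulas and weights are not reproduced here, the cut-points and item counts are assumptions, and the data are simulated.

```python
# Hedged illustration of the quadrant definition of high job strain: high demands AND
# low control relative to the sample medians. Item counts, cut-points and data are
# assumptions for illustration; the published JCQ/DCQ scale formulas are not used.
import numpy as np

rng = np.random.default_rng(3)
n = 682                                                 # sample size as in the abstract
demands = rng.integers(1, 5, size=(n, 5)).sum(axis=1)   # hypothetical 5 demand items (1-4)
control = rng.integers(1, 5, size=(n, 6)).sum(axis=1)   # hypothetical 6 control items (1-4)

high_strain = (demands > np.median(demands)) & (control < np.median(control))
print(f"high-strain prevalence in the simulated sample: {high_strain.mean():.1%}")
```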
Lemmer, K; Howaldt, S; Heinrich, R; Roder, A; Pauli, G; Dorner, B G; Pauly, D; Mielke, M; Schwebke, I; Grunow, R
2017-11-01
The work aimed at developing and evaluating practically relevant methods for testing disinfectants on contaminated personal protective equipment (PPE). Carriers were prepared from PPE fabrics and contaminated with Bacillus subtilis spores. Peracetic acid (PAA) was applied as a suitable disinfectant. In method 1, the contaminated carrier was submerged in PAA solution; in method 2, the contaminated area was covered with PAA; and in method 3, PAA, preferentially combined with a surfactant, was dispersed as a thin layer. In each method, 0·5-1% PAA reduced the viability of spores by a factor of ≥6 log10 within 3 min. The technique of the most realistic method 3 proved to be effective at low temperatures and also with a high organic load. Vaccinia virus and Adenovirus were inactivated with 0·05-0·1% PAA by up to ≥6 log10 within 1 min. The cytotoxicity of ricin was considerably reduced by 2% PAA within 15 min of exposure. The PAA/detergent mixture made it possible to cover hydrophobic PPE surfaces with a thin yet effective disinfectant layer. The test methods are objective tools for estimating the biocidal efficacy of disinfectants on hydrophobic flexible surfaces. © 2017 The Society for Applied Microbiology.
Buhr, T L; Young, A A; Minter, Z A; Wells, C M; McPherson, D C; Hooban, C L; Johnson, C A; Prokop, E J; Crigler, J R
2012-11-01
To develop test methods and evaluate the survival of Bacillus anthracis ∆Sterne and Bacillus thuringiensis Al Hakam spores after exposure to hot, humid air. Spores (>7 logs) of both strains were dried on six different test materials. Response surface methodology was employed to identify the limits of spore survival at optimal test combinations of temperature (60, 68, 77°C), relative humidity (60, 75, 90%) and time (1, 4, 7 days). No spores survived the harshest test run (77°C, 90% r.h., 7 days), while > 6·5 logs of spores survived the mildest test run (60°C, 60% r.h., 1 day). Spores of both strains inoculated on nylon webbing and polypropylene had greater survival rates at 68°C, 75% r.h., 4 days than spores on other materials. Electron microscopy showed no obvious physical damage to spores using hot, humid air, which contrasted with pH-adjusted bleach decontamination. Test methods were developed to show that hot, humid air effectively inactivates B. anthracis ∆Sterne and B. thuringiensis Al Hakam spores with similar kinetics. Hot, humid air is a potential alternative to conventional chemical decontamination. © 2012 The Authors Journal of Applied Microbiology © 2012 The Society for Applied Microbiology.
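A hedged sketch of the kind of second-order response-surface fit implied by the factorial design above: a quadratic model of log spore survival in temperature, relative humidity and time, fitted by ordinary least squares to simulated data. The synthetic response and the resulting coefficients are assumptions, not the authors' fitted model.

```python
# Hedged sketch of a second-order (quadratic) response-surface fit of log10 spore
# survival over temperature, relative humidity and exposure time. The response values
# are simulated to mimic the qualitative pattern in the abstract; the authors' actual
# fitted model is not reproduced.
import numpy as np
from itertools import combinations_with_replacement

def quadratic_design(X):
    """Design matrix with intercept, linear terms and all second-order terms x_i*x_j."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

# factor settings: temperature (C), relative humidity (%), exposure time (days)
levels = np.array([[t, rh, d] for t in (60, 68, 77) for rh in (60, 75, 90) for d in (1, 4, 7)],
                  dtype=float)
rng = np.random.default_rng(6)
log_survival = (7.0 - 0.002 * (levels[:, 0] - 60) * (levels[:, 1] - 60) * levels[:, 2]
                + rng.normal(0, 0.3, len(levels)))        # synthetic log10 CFU response
log_survival = np.clip(log_survival, 0.0, None)           # no survival below detection

A = quadratic_design(levels)
coef, *_ = np.linalg.lstsq(A, log_survival, rcond=None)
pred = A @ coef
print("fitted coefficients:", np.round(coef, 4))
print("max abs residual:", round(float(np.abs(pred - log_survival).max()), 3))
```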
Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2010-01-01
When facing a conjunction between space objects, decision makers must choose whether to maneuver for collision avoidance or not. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method, and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve the desired missed detection rates, but the frequentist method's false alarm performance is inferior to the Bayesian method's.
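Wald's sequential probability ratio test itself is standard; the sketch below applies it to a simple Gaussian-mean hypothesis pair to show the accumulate-and-threshold decision logic, using the usual Wald threshold approximations. The conjunction-specific likelihoods and priors used by the authors are not reproduced, and the error rates and data are illustrative.

```python
# Hedged sketch of Wald's sequential probability ratio test (SPRT) for a Gaussian-mean
# hypothesis pair, illustrating the accumulate-and-threshold logic the abstract applies
# to maneuver decisions. Likelihoods, error rates and data are illustrative only.
import numpy as np
from scipy.stats import norm

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    upper = np.log((1 - beta) / alpha)      # cross above: accept H1 (e.g., "maneuver")
    lower = np.log(beta / (1 - alpha))      # cross below: accept H0 (e.g., "no maneuver")
    llr = 0.0
    for k, x in enumerate(samples, start=1):
        llr += norm.logpdf(x, mu1, sigma) - norm.logpdf(x, mu0, sigma)
        if llr >= upper:
            return "accept H1", k
        if llr <= lower:
            return "accept H0", k
    return "undecided", len(samples)

rng = np.random.default_rng(4)
data = rng.normal(1.0, 1.0, size=200)       # truth is H1 (mean 1.0)
print(sprt(data, mu0=0.0, mu1=1.0, sigma=1.0))
```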
Automatic test comes to focal plane array production
NASA Astrophysics Data System (ADS)
Skaggs, Frank L.; Barton, T. D.
1992-08-01
To meet the needs of military and commercial markets, the infrared focal plane array industry must develop new, effective and low cost methods of fabricating and testing imaging detectors. This paper describes Texas Instruments' new concepts in automated testing and cold probe technology as they apply to volume production.
Coaching for Tests. ERIC Digest.
ERIC Educational Resources Information Center
Wildemuth, Barbara
The term "coaching" applies to a variety of types of test preparation programs which vary in length, instructional method, and content. Most research on the effectiveness of coaching has examined the Scholastic Aptitude Test (SAT), a measure of academic abilities used to predict college performance. This ERIC Digest reviews studies of…
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric method?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating the reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
[Application of Stata software to test heterogeneity in meta-analysis method].
Wang, Dan; Mou, Zhen-yun; Zhai, Jun-xia; Zong, Hong-xia; Zhao, Xiao-dong
2008-07-01
To introduce the application of Stata software to heterogeneity test in meta-analysis. A data set was set up according to the example in the study, and the corresponding commands of the methods in Stata 9 software were applied to test the example. The methods used were Q-test and I2 statistic attached to the fixed effect model forest plot, H statistic and Galbraith plot. The existence of the heterogeneity among studies could be detected by Q-test and H statistic and the degree of the heterogeneity could be detected by I2 statistic. The outliers which were the sources of the heterogeneity could be spotted from the Galbraith plot. Heterogeneity test in meta-analysis can be completed by the four methods in Stata software simply and quickly. H and I2 statistics are more robust, and the outliers of the heterogeneity can be clearly seen in the Galbraith plot among the four methods.
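The heterogeneity quantities inspected in the abstract (Cochran's Q, I² and H) have simple closed forms under the fixed-effect model; the sketch below computes them directly from invented study effects and standard errors, without reproducing any Stata command syntax.

```python
# Hedged sketch of the heterogeneity statistics discussed in the abstract (Cochran's Q,
# I^2 and H), computed directly from study effect sizes and standard errors. The study
# data are invented; Stata command syntax is deliberately not reproduced here.
import numpy as np
from scipy.stats import chi2

effects = np.array([0.30, 0.10, 0.45, 0.25, 0.60, 0.05])   # per-study effect sizes
se      = np.array([0.12, 0.15, 0.10, 0.20, 0.14, 0.18])   # their standard errors

w = 1.0 / se**2                                 # inverse-variance (fixed-effect) weights
pooled = np.sum(w * effects) / np.sum(w)        # fixed-effect pooled estimate
Q = np.sum(w * (effects - pooled) ** 2)         # Cochran's Q
df = len(effects) - 1
p_het = chi2.sf(Q, df)                          # Q-test p-value
I2 = max(0.0, (Q - df) / Q)                     # I^2: share of variation beyond chance
H = np.sqrt(Q / df)                             # H statistic

print(f"Q = {Q:.2f} (df = {df}, p = {p_het:.3f}), I^2 = {I2:.1%}, H = {H:.2f}")
```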
Southwest electronic one-stop shopping, motor carrier test report
DOT National Transportation Integrated Search
1997-12-22
The Electronic One-Stop System (EOSS) used in this credential test was designed to replace current normal credentialling procedures with a personal computer-based electronic method that allows users to prepare, apply for, and obtain certain types of ...
Southwest electronic one-stop shopping, state agency test report
DOT National Transportation Integrated Search
1997-12-22
The Electronic One-Stop System (EOSS) used in this credential test was designed to replace current normal credentialling procedures with a personal computer-based electronic method that allows users to prepare, apply for, and obtain certain types of ...
EPA's National Center for Computational Toxicology is developing methods that apply computational chemistry, high-throughput screening (HTS) and genomic technologies to predict potential toxicity and prioritize the use of limited testing resources.
NASA Astrophysics Data System (ADS)
Ghannadpour, Seyyed Saeed; Hezarkhani, Ardeshir
2016-03-01
The U-statistic method is one of the most important structural methods to separate anomaly from background. It considers the location of samples and carries out the statistical analysis of the data without judging from a geochemical point of view, trying to separate subpopulations and determine anomalous areas. In the present study, to use the U-statistic method under three-dimensional (3D) conditions, the U-statistic is applied to the grades of two ideal test examples, taking the samples' Z values (elevation) into account. This is the first time that the method has been applied under a 3D condition. To evaluate the performance of the 3D U-statistic method and to compare it with a non-structural method, the threshold assessment method based on median and standard deviation (MSD method) is applied to the same two test examples. Results show that the samples indicated as anomalous by the U-statistic method are more regular and involve less dispersion than those indicated by the MSD method, so that, based on the locations of the anomalous samples, their denser areas can be delineated as promising zones. Moreover, results show that at a threshold of U = 0, the total misclassification error of the U-statistic method is much smaller than that of the x̄ + n × s criterion. Finally, a 3D model of the two test examples, separating anomaly from background using the 3D U-statistic method, is provided. The source code of a software program, developed in the MATLAB programming language to perform the calculations of the 3D U-spatial statistic method, is additionally provided. This software is compatible with all geochemical varieties and can be used in similar exploration projects.
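A minimal Python sketch of the non-structural benchmark mentioned above, the mean-plus-n-standard-deviations (x̄ + n × s) threshold, applied to synthetic grade data; the lognormal grade distribution and the choice n = 2 are assumptions made only for illustration, not the study's data or settings.

    # Sketch of an MSD-style (mean + n * std) threshold for flagging anomalous samples.
    import numpy as np

    grades = np.random.lognormal(mean=0.0, sigma=0.5, size=500)  # synthetic grade data
    n = 2
    threshold = grades.mean() + n * grades.std(ddof=1)           # x̄ + n·s criterion

    anomalous = grades > threshold
    print(f"threshold = {threshold:.3f}, anomalous samples: {anomalous.sum()} of {grades.size}")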
Hartmann, Nanna B; Jensen, Keld Alstrup; Baun, Anders; Rasmussen, Kirsten; Rauscher, Hubert; Tantra, Ratna; Cupi, Denisa; Gilliland, Douglas; Pianella, Francesca; Riego Sintes, Juan M
2015-01-01
Selecting appropriate ways of bringing engineered nanoparticles (ENP) into aqueous dispersion is a main obstacle for testing, and thus for understanding and evaluating, their potential adverse effects to the environment and human health. Using different methods to prepare (stock) dispersions of the same ENP may be a source of variation in the toxicity measured. Harmonization and standardization of dispersion methods applied in mammalian and ecotoxicity testing are needed to ensure a comparable data quality and to minimize test artifacts produced by modifications of ENP during the dispersion preparation process. Such harmonization and standardization will also enhance comparability among tests, labs, and studies on different types of ENP. The scope of this review was to critically discuss the essential parameters in dispersion protocols for ENP. The parameters are identified from individual scientific studies and from consensus reached in larger scale research projects and international organizations. A step-wise approach is proposed to develop tailored dispersion protocols for ecotoxicological and mammalian toxicological testing of ENP. The recommendations of this analysis may serve as a guide to researchers, companies, and regulators when selecting, developing, and evaluating the appropriateness of dispersion methods applied in mammalian and ecotoxicity testing. However, additional experimentation is needed to further document the protocol parameters and investigate to what extent different stock dispersion methods affect ecotoxicological and mammalian toxicological responses of ENP.
Liu, Boquan; Polce, Evan; Sprott, Julien C; Jiang, Jack J
2018-05-17
The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100 Monte Carlo experiments were applied to analyze the output of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos. A diffusive behavior detection-based chaos level test was used to investigate the performances of different voice classification methods. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions. Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals with all chaos levels investigated in this study. The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the validation test method. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis methods and establish the most appropriate methodology for objective voice analysis in clinical practice.
Fatigue analysis and testing of wind turbine blades
NASA Astrophysics Data System (ADS)
Greaves, Peter Robert
This thesis focuses on fatigue analysis and testing of large, multi-MW wind turbine blades. The blades are one of the most expensive components of a wind turbine, and their mass has cost implications for the hub, nacelle, tower and foundations of the turbine, so it is important that they are not unnecessarily strong. Fatigue is often an important design driver, but fatigue of composites is poorly understood and so large safety factors are often applied to the loads. This has implications for the weight of the blade. Full-scale fatigue testing of blades is required by the design standards, and provides manufacturers with confidence that the blade will be able to survive its service life. This testing is usually performed by resonating the blade in the flapwise and edgewise directions separately, but in service these two loads occur at the same time. A fatigue testing method developed at Narec (the National Renewable Energy Centre) in the UK, in which the flapwise and edgewise directions are excited simultaneously, has been evaluated by comparing the Palmgren-Miner damage sum around the blade cross section after testing with the damage distribution caused by the service life. A method to obtain the resonant test configuration that will result in the optimum mode shapes for the flapwise and edgewise directions was then developed, and simulation software was designed to allow the blade test to be simulated so that realistic comparisons between the damage distributions after different test types could be obtained. During the course of this work the shortcomings of conventional fatigue analysis methods became apparent, and a novel method of fatigue analysis based on multi-continuum theory and the kinetic theory of fracture was developed. This method was benchmarked using physical test data from the OPTIDAT database and was applied to the analysis of a complete blade. A full-scale fatigue test method based on this new analysis approach is also discussed.
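Since the comparison above rests on the Palmgren-Miner damage sum, a minimal Python sketch of that calculation is given below; the S-N curve constants and the load spectrum are hypothetical placeholders, not blade test data.

    # Palmgren-Miner linear damage accumulation: D = sum(n_i / N_i) over load bins.
    import numpy as np

    def cycles_to_failure(stress_range, C=1e20, m=10.0):
        """Hypothetical S-N curve of the common power-law form N = C * S**(-m)."""
        return C * stress_range ** (-m)

    stress_ranges = np.array([20.0, 30.0, 40.0])   # MPa, binned load spectrum (invented)
    applied_cycles = np.array([1e6, 2e5, 1e4])     # cycles counted in each bin (invented)

    damage = np.sum(applied_cycles / cycles_to_failure(stress_ranges))
    print(f"Palmgren-Miner damage sum D = {damage:.2f}  (D >= 1 implies predicted failure)")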
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Tawakkol, Shereen M.; Fahmy, Nesma M.; Shehata, Mostafa A.
2015-02-01
Simultaneous determination of mixtures of lidocaine hydrochloride (LH) and flucortolone pivalate (FCP) in the presence of chlorquinaldol (CQ), without prior separation steps, was carried out using either successive or progressive resolution techniques. The extent of spectral overlap changes with the concentration of CQ, so CQ can be eliminated from the mixture to obtain the binary mixture of LH and FCP using the ratio subtraction method for partially overlapped spectra, or constant value via amplitude difference followed by ratio subtraction, or constant center followed by spectrum subtraction, for severely overlapped spectra. Successive ratio subtraction coupled with extended ratio subtraction, constant multiplication, derivative subtraction coupled with constant multiplication, and spectrum subtraction can be applied for the analysis of partially overlapped spectra. On the other hand, severely overlapped spectra can be analyzed by constant center and by the novel methods, namely differential dual wavelength (D1 DWL) for CQ and ratio difference and differential derivative ratio (D1 DR) for FCP, while LH was determined by applying constant value via amplitude difference followed by successive ratio subtraction and successive derivative subtraction. The spectra of the cited drugs can be resolved and their concentrations determined progressively from the same ratio spectrum using the amplitude modulation method. The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures, and the methods were successfully applied to the analysis of pharmaceutical formulations containing the cited drugs with no interference from additives. The proposed methods were validated according to the ICH guidelines. The obtained results were statistically compared with those of the official or reported methods using Student's t-test, F-test, and one-way ANOVA, showing no significant difference with respect to accuracy and precision.
Local lymph node assay: how testing laboratories apply OECD TG 429 for REACH purposes.
Rovida, Costanza
2011-01-01
The Local Lymph Node Assay (LLNA) is the official method for assessing the allergic contact dermatitis potential of chemicals for the purposes of REACH regulation. The LLNA went through a validation process that allowed the delineation of a robust protocol for performing new tests. The OECD accepted this method in 2002 and published OECD TG 429. The European Chemical Agency (ECHA) recently published data that were submitted in the registration dossiers of chemicals. This database was analysed to determine how testing laboratories apply OECD TG 429. This analysis comes after a detailed analysis of four full study reports that were also prepared for REACH purposes. Although the majority of the tests are fully compliant with OECD TG 429, some showed major deviations, and a number of others used more animals than necessary. This suggests that in vivo tests need to be planned more carefully and consciously to obtain meaningful results with the minimum animal number necessary.
Raskin, Cody; Owen, J. Michael
2016-10-24
Here, we discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.
Testing large aspheric surfaces with complementary annular subaperture interferometric method
NASA Astrophysics Data System (ADS)
Hou, Xi; Wu, Fan; Lei, Baiping; Fan, Bin; Chen, Qiang
2008-07-01
Annular subaperture interferometric method has provided an alternative solution to testing rotationally symmetric aspheric surfaces with low cost and flexibility. However, some new challenges, particularly in the motion and algorithm components, appear when applied to large aspheric surfaces with large departure in the practical engineering. Based on our previously reported annular subaperture reconstruction algorithm with Zernike annular polynomials and matrix method, and the experimental results for an approximate 130-mm diameter and f/2 parabolic mirror, an experimental investigation by testing an approximate 302-mm diameter and f/1.7 parabolic mirror with the complementary annular subaperture interferometric method is presented. We have focused on full-aperture reconstruction accuracy, and discuss some error effects and limitations of testing larger aspheric surfaces with the annular subaperture method. Some considerations about testing sector segment with complementary sector subapertures are provided.
NASA Astrophysics Data System (ADS)
Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.
2018-01-01
Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim at developing an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at other wavelengths than Ca II K.
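The core assumption above, a time-invariant quiet-Sun centre-to-limb variation, can be illustrated with a toy flat-fielding step: estimate an azimuthally averaged radial intensity profile and divide it out to obtain a contrast image. The Python sketch below uses a synthetic limb-darkened disk; the disk size, limb-darkening shape, and binning are placeholders, and this is not the authors' processing pipeline.

    # Toy centre-to-limb-variation removal on a synthetic solar disk image.
    import numpy as np

    size, radius = 256, 110
    y, x = np.indices((size, size))
    r = np.hypot(x - size / 2, y - size / 2)
    mu = np.sqrt(np.clip(1.0 - (r / radius) ** 2, 0.0, None))   # cos(heliocentric angle)
    disk = (0.4 + 0.6 * mu) * (r <= radius)                     # toy limb-darkened disk
    disk += np.random.default_rng(0).normal(0.0, 0.01, disk.shape) * (r <= radius)

    # Azimuthally averaged radial profile of the "quiet Sun".
    bins = np.linspace(0.0, radius, 30)
    idx = np.digitize(r[r <= radius], bins)
    profile = np.array([disk[r <= radius][idx == i].mean() for i in range(1, bins.size)])

    # Divide each pixel by the profile value at its radius to get a contrast image.
    pix_profile = np.interp(r, 0.5 * (bins[:-1] + bins[1:]), profile)
    contrast = np.where(r <= radius, disk / pix_profile, 0.0)
    print(f"contrast image mean on the disk: {contrast[r <= radius].mean():.3f}")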
NDT evaluation of long-term bond durability of CFRP-structural systems applied to RC highway bridges
NASA Astrophysics Data System (ADS)
Crawford, Kenneth C.
2016-06-01
The long-term durability of CFRP structural systems applied to reinforced-concrete (RC) highway bridges is a function of the system bond behavior over time. The sustained structural load performance of strengthened bridges depends on the carbon fiber-reinforced polymer (CFRP) laminates remaining 100 % bonded to concrete bridge members. Periodic testing of the CFRP-concrete bond condition is necessary to sustain load performance. The objective of this paper is to present a non-destructive testing (NDT) method designed to evaluate the bond condition and long-term durability of CFRP laminate (plate) systems applied to RC highway bridges. Using the impact-echo principle, a mobile mechanical device using light impact hammers moving along the length of a bonded CFRP plate produces unique acoustic frequencies which are a function of existing CFRP plate-concrete bond conditions. The purpose of this method is to test and locate CFRP plates de-bonded from bridge structural members to identify associated deterioration in bridge load performance. Laboratory tests of this NDT device on a CFRP plate bonded to concrete with staged voids (de-laminations) produced different frequencies for bonded and de-bonded areas of the plate. The spectra (bands) of frequencies obtained in these tests show a correlation to the CFRP-concrete bond condition and identify bonded and de-bonded areas of the plate. The results of these tests indicate that this NDT impact machine, with design improvements, can potentially provide bridge engineers a means to rapidly evaluate long lengths of CFRP laminates applied to multiple highway bridges within a national transportation infrastructure.
Magneto acoustic emission apparatus for testing materials for embrittlement
NASA Technical Reports Server (NTRS)
Allison, Sidney G. (Inventor); Min, Namkung (Inventor); Yost, William T. (Inventor); Cantrell, John H. (Inventor)
1990-01-01
A method and apparatus for testing steel components for temper embrittlement uses magneto-acoustic emission to nondestructively evaluate the component. Acoustic emission signals occur more frequently at higher levels in embrittled components. A pair of electromagnets are used to create magnetic induction in the test component. Magneto-acoustic emission signals may be generated by applying an ac current to the electromagnets. The acoustic emission signals are analyzed to provide a comparison between a component known to be unembrittled and a test component. Magnetic remanence is determined by applying a dc current to the electromagnets, then turning the magnets off and observing the residual magnetic induction.
ERIC Educational Resources Information Center
Garcia-Quintana, Roan A.; Johnson, Lynne M.
Three different computational procedures for equating two forms of a test were applied to a pair of mathematics tests to compare the results of the three procedures. The tests that were being equated were two forms of the SRA Mastery Mathematics Tests. The common, linking test used for equating was the Comprehensive Tests of Basic Skills, Form S,…
Validation of catchment models for predicting land-use and climate change impacts. 1. Method
NASA Astrophysics Data System (ADS)
Ewen, J.; Parkin, G.
1996-02-01
Computer simulation models are increasingly being proposed as tools capable of giving water resource managers accurate predictions of the impact of changes in land-use and climate. Previous validation testing of catchment models is reviewed, and it is concluded that the methods used do not clearly test a model's fitness for such a purpose. A new generally applicable method is proposed. This involves the direct testing of fitness for purpose, uses established scientific techniques, and may be implemented within a quality assured programme of work. The new method is applied in Part 2 of this study (Parkin et al., J. Hydrol., 175:595-613, 1996).
7 CFR 58.133 - Methods for quality and wholesomeness determination.
Code of Federal Regulations, 2013 CFR
2013-01-01
... FOR GRADES OF DAIRY PRODUCTS 1 General Specifications for Dairy Plants Approved for USDA Inspection... any other method approved by Standard Methods for the Examination of Dairy Products (confirmatory test... and Applied Nutrition, 200 C Street SW., Washington, DC 20204. (2) Individual producer milk samples...
7 CFR 58.133 - Methods for quality and wholesomeness determination.
Code of Federal Regulations, 2012 CFR
2012-01-01
... FOR GRADES OF DAIRY PRODUCTS 1 General Specifications for Dairy Plants Approved for USDA Inspection... any other method approved by Standard Methods for the Examination of Dairy Products (confirmatory test... and Applied Nutrition, 200 C Street SW., Washington, DC 20204. (2) Individual producer milk samples...
7 CFR 58.133 - Methods for quality and wholesomeness determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... FOR GRADES OF DAIRY PRODUCTS 1 General Specifications for Dairy Plants Approved for USDA Inspection... any other method approved by Standard Methods for the Examination of Dairy Products (confirmatory test... and Applied Nutrition, 200 C Street SW., Washington, DC 20204. (2) Individual producer milk samples...
Forest Herbicide Washoff From Foliar Applications
J.L. Michael; Kevin L. Talley; H.C. Fishburn
1992-01-01
Field and laboratory experiments were conducted to develop and test methods for determining washoff of foliar-applied herbicides typically used in forestry in the South. Preliminary results show good agreement between results of the laboratory methods used and observations from field experiments on actual precipitation events. Methods included application of...
Fatigue crack identification method based on strain amplitude changing
NASA Astrophysics Data System (ADS)
Guo, Tiancai; Gao, Jun; Wang, Yonghong; Xu, Youliang
2017-09-01
To address the difficulty of identifying the location and time of crack initiation in castings of a helicopter transmission system during fatigue tests, an engineering method and quantitative criterion for detecting fatigue cracks based on strain amplitude changes is proposed, introducing classification diagnostic criteria for similar failure modes to establish the similarity of fatigue crack initiation among castings. The method was applied to the fatigue test of a gearbox housing: during the test, the system raised an alarm when the SC strain meter reached the quantitative criterion, and a subsequent inspection found a fatigue crack less than 5 mm long at the corresponding location of the SC strain meter. The test result shows that the method can provide accurate test data for strength life analysis.
Comparative Study of Impedance Eduction Methods. Part 1; DLR Tests and Methodology
NASA Technical Reports Server (NTRS)
Busse-Gerstengarbe, Stefan; Bake, Friedrich; Enghardt, Lars; Jones, Michael G.
2013-01-01
The absorption efficiency of acoustic liners used in aircraft engines is characterized by the acoustic impedance. Worldwide, many grazing flow test rigs and eduction methods are available that provide values for that impedance. However, a direct comparison and assessment of the data from the different rigs and methods is often not possible because the test objects and test conditions are quite different. Only a few papers provide a direct comparison. Therefore, this paper, together with a companion paper, presents data measured with a reference test object under similar conditions in the DLR and NASA grazing flow test rigs. Additionally, by applying the in-house methods Liner Impedance Non-Uniform flow Solving algorithm (LINUS, DLR) and Convected Helmholtz Equation approach (CHE, NASA) to the data sets, similarities and differences due to the underlying theory are identified and discussed.
Compare diagnostic tests using transformation-invariant smoothed ROC curves⋆
Tang, Liansheng; Du, Pang; Wu, Chengqing
2012-01-01
The receiver operating characteristic (ROC) curve, which plots true positive rates against false positive rates as the threshold varies, is an important tool for evaluating biomarkers in diagnostic medicine studies. By definition, the ROC curve is monotone increasing from 0 to 1 and is invariant to any monotone transformation of the test results, and it is often a curve with a certain level of smoothness when test results from the diseased and non-diseased subjects follow continuous distributions. Most existing ROC curve estimation methods do not guarantee all of these properties. One exception is Du and Tang (2009), which applies a monotone spline regression procedure to empirical ROC estimates. However, their method does not consider the inherent correlations between empirical ROC estimates. This makes the derivation of the asymptotic properties very difficult. In this paper we propose a penalized weighted least-squares estimation method, which incorporates the covariance between empirical ROC estimates as a weight matrix. The resulting estimator satisfies all the aforementioned properties, and we show that it is also consistent. Then a resampling approach is used to extend our method for comparisons of two or more diagnostic tests. Our simulations show a significantly improved performance over the existing method, especially for steep ROC curves. We then apply the proposed method to a cancer diagnostic study that compares several newly developed diagnostic biomarkers to a traditional one.
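As background for the smoothing method above, the following Python sketch builds the empirical ROC curve (and its AUC via the rank formula) from simulated biomarker scores; the two normal score distributions and sample sizes are assumptions for illustration only.

    # Empirical ROC curve from simulated diseased / non-diseased biomarker scores.
    import numpy as np

    rng = np.random.default_rng(0)
    diseased = rng.normal(1.0, 1.0, 200)     # biomarker scores, diseased group
    healthy = rng.normal(0.0, 1.0, 200)      # biomarker scores, non-diseased group

    # Sweep the threshold over all observed scores.
    thresholds = np.sort(np.concatenate([diseased, healthy]))[::-1]
    tpr = np.array([(diseased >= t).mean() for t in thresholds])   # sensitivity
    fpr = np.array([(healthy >= t).mean() for t in thresholds])    # 1 - specificity

    # Area under the curve via the Mann-Whitney (rank) formula.
    auc = ((diseased[:, None] > healthy[None, :]).mean()
           + 0.5 * (diseased[:, None] == healthy[None, :]).mean())
    print(f"empirical ROC has {thresholds.size} points, AUC ≈ {auc:.3f}")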
A Lagrangian meshfree method applied to linear and nonlinear elasticity.
Walker, Wade A
2017-01-01
The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.
Landsiedel, Robert; Ma-Hock, Lan; Van Ravenzwaay, Ben; Schulz, Markus; Wiench, Karin; Champ, Samantha; Schulte, Stefan; Wohlleben, Wendel; Oesch, Franz
2010-12-01
Titanium dioxide and zinc oxide nanomaterials, used as UV protecting agents in sunscreens, were investigated for their potential genotoxicity in in vitro and in vivo test systems. Since standard OECD test methods are designed for soluble materials and genotoxicity testing for nanomaterials is still under revision, a battery of standard tests was used, covering different endpoints. Additionally, a procedure to disperse the nanomaterials in the test media and careful characterization of the dispersed test item was added to the testing methods. No genotoxicity was observed in vitro (Ames' Salmonella gene mutation test and V79 micronucleus chromosome mutation test) or in vivo (mouse bone marrow micronucleus test and Comet DNA damage assay in lung cells from rats exposed by inhalation). These results add to the still limited data base on genotoxicity test results with nanomaterials and provide congruent results of a battery of standard OECD test methods applied to nanomaterials.
Stephanson, N N; Signell, P; Helander, A; Beck, O
2017-08-01
The influx of new psychoactive substances (NPS) has created a need for improved methods for drug testing in toxicology laboratories. The aim of this work was to design, validate and apply a multi-analyte liquid chromatography-high-resolution mass spectrometry (LC-HRMS) method for screening of 148 target analytes belonging to the NPS class, plant alkaloids and new psychoactive therapeutic drugs. The analytical method used a fivefold dilution of urine with nine deuterated internal standards and injection of 2 μl. The LC system involved a 2.0 μm 100 × 2.0 mm YMC-UltraHT Hydrosphere-C18 column and gradient elution with a flow rate of 0.5 ml/min and a total analysis time of 6.0 min. Solvent A consisted of 10 mmol/l ammonium formate and 0.005% formic acid, pH 4.8, and Solvent B was methanol with 10 mmol/l ammonium formate and 0.005% formic acid. The HRMS (Q Exactive, Thermo Scientific) used a heated electrospray interface and was operated in positive mode with 70 000 resolution. The scan range was 100-650 Da, and data for extracted ion chromatograms used ± 10 ppm tolerance. Product ion monitoring was applied for confirmation analysis and, for some selected analytes, also for screening. Method validation demonstrated limited influence from urine matrix, linear response within the measuring range (typically 0.1-1.0 μg/ml) and acceptable imprecision in quantification (CV <15%). A few analytes were found to be unstable in urine upon storage. The method was successfully applied for routine drug testing of 17 936 unknown samples, of which 2715 (15%) contained 52 of the 148 analytes. It is concluded that the method design, based on simple dilution of urine and using LC-HRMS in extracted ion chromatogram mode, may offer an analytical system for urine drug testing that fulfils the requirement of a 'black box' solution and can replace immunochemical screening applied on autoanalyzers. Copyright © 2017 John Wiley & Sons, Ltd.
Patterson, Fiona; Lievens, Filip; Kerrin, Máire; Munro, Neil; Irish, Bill
2013-01-01
Background: The selection methodology for UK general practice is designed to accommodate several thousand applicants per year and targets six core attributes identified in a multi-method job-analysis study. Aim: To evaluate the predictive validity of selection methods for entry into postgraduate training, comprising a clinical problem-solving test, a situational judgement test, and a selection centre. Design and setting: A three-part longitudinal predictive validity study of selection into training for UK general practice. Method: In sample 1, participants were junior doctors applying for training in general practice (n = 6824). In sample 2, participants were GP registrars 1 year into training (n = 196). In sample 3, participants were GP registrars sitting the licensing examination after 3 years, at the end of training (n = 2292). The outcome measures include: assessor ratings of performance in a selection centre comprising job simulation exercises (sample 1); supervisor ratings of trainee job performance 1 year into training (sample 2); and licensing examination results, including an applied knowledge examination and a 12-station clinical skills objective structured clinical examination (OSCE; sample 3). Results: Performance ratings at selection predicted subsequent supervisor ratings of job performance 1 year later. Selection results also significantly predicted performance on both the clinical skills OSCE and the applied knowledge examination for licensing at the end of training. Conclusion: In combination, these longitudinal findings provide good evidence of the predictive validity of the selection methods, and are the first reported for entry into postgraduate training. Results show that the best predictor of work performance and training outcomes is a combination of a clinical problem-solving test, a situational judgement test, and a selection centre. Implications for selection methods for all postgraduate specialties are considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiang; Sokolov, Mikhail A; Nanstad, Randy K
Material fracture toughness in the fully ductile region can be described by a J-integral vs. crack growth resistance curve (J-R curve). As a conventional J-R curve measurement method, the elastic unloading compliance (EUC) method becomes impractical for elevated temperature testing due to relaxation of the material and a friction-induced back-up shape of the J-R curve. One alternative solution for J-R curve testing applies the Direct Current Potential Drop (DCPD) technique for measuring crack extension. However, besides crack growth, potential drop can also be influenced by plastic deformation, crack tip blunting, etc., and uncertainties exist in the current DCPD methodology, especially in differentiating potential drop due to stable crack growth from that due to material deformation. Thus, using DCPD for J-R curve determination remains a challenging task. In this study, a new adjustment procedure for applying DCPD to derive the J-R curve has been developed for conventional fracture toughness specimens, including compact tension, three-point bend, and disk-shaped compact specimens. Data analysis has been performed on Oak Ridge National Laboratory (ORNL) and American Society for Testing and Materials (ASTM) interlaboratory results covering different specimen thicknesses, test temperatures, and materials, to evaluate the applicability of the new DCPD adjustment procedure for J-R curve characterization. After applying the newly developed procedure, direct comparison between the DCPD method and the normalization method on the same specimens indicated close agreement for the overall J-R curves, as well as for the provisional values of fracture toughness near the onset of ductile crack extension, Jq, and of tearing modulus.
Stability of fragrance patch test preparations applied in test chambers.
Mowitz, M; Zimerson, E; Svedman, C; Bruze, M
2012-10-01
Petrolatum patch test preparations are for practical reasons often applied in test chambers in advance, several hours or even days before the patient is tested. As many fragrance compounds are volatile it may be suspected that petrolatum preparations applied in test chambers are not stable over time. To investigate the stability of petrolatum preparations of the seven chemically defined components in the fragrance mix (FM I) when stored in test chambers. Samples of petrolatum preparations applied in test chambers stored at room temperature and in a refrigerator for between 4 and 144 h were analysed using liquid chromatographic methods. The concentration decreased by ≥ 20% within 8 h in four of seven preparations stored in Finn chambers at room temperature. When stored in a refrigerator only the preparation of cinnamal had decreased by ≥ 20% within 24 h. The stability of preparations of cinnamal stored in IQ chambers with a plastic cover was slightly better, but like the preparations applied in Finn chambers, the concentration decreased by ≥ 20% within 4 h at room temperature and within 24 h in a refrigerator. Cinnamal and cinnamyl alcohol were found to be more stable when analysed as ingredients in FM I compared with when analysed in individual preparations. Within a couple of hours several fragrance allergens evaporate from test chambers to an extent that may affect the outcome of the patch test. Application to the test chambers should be performed as close to the patch test occasion as possible and storage in a refrigerator is recommended. © 2012 The Authors. BJD © 2012 British Association of Dermatologists.
Yamashita, Kunihiko; Shinoda, Shinsuke; Hagiwara, Saori; Itagaki, Hiroshi
2015-04-01
To date, there has been no well-established local lymph node assay (LLNA) that includes an elicitation phase. Therefore, we developed a modified local lymph node assay with an elicitation phase (LLNA:DAE) to discriminate true skin sensitizers from chemicals that give borderline positive results, and we previously reported this assay. To develop the LLNA:DAE method as a useful stand-alone testing method, we investigated the complete procedure for the LLNA:DAE method using hexyl cinnamic aldehyde (HCA), isoeugenol, and 2,4-dinitrochlorobenzene (DNCB) as test compounds. We defined the LLNA:DAE procedure as follows: in the dose-finding test, four concentrations of chemical are applied to the dorsum of the right ear on days 1, 2, and 3, and to the dorsum of both ears on day 10. Ear thickness and skin irritation score are measured on days 1, 3, 5, 10, and 12. Local lymph nodes are excised and weighed on day 12. The test dose for the primary LLNA:DAE study was selected as the dose that gave the highest left-ear lymph node weight in the dose-finding study, or the lowest dose that produced a left-ear lymph node of over 4 mg. This procedure was validated using nine different chemicals. Furthermore, a qualitative relationship was observed between the degree of elicitation response in the left-ear lymph node and the skin-sensitizing potency of the 32 chemicals tested in this study and the previous study. These results indicate that the LLNA:DAE method is the first LLNA method able to evaluate skin-sensitizing potential and potency through the elicitation response.
Using hybrid implicit Monte Carlo diffusion to simulate gray radiation hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Gentile, Nick
This work describes how to couple a hybrid Implicit Monte Carlo Diffusion (HIMCD) method with a Lagrangian hydrodynamics code to evaluate the coupled radiation hydrodynamics equations. This HIMCD method dynamically applies Implicit Monte Carlo Diffusion (IMD) [1] to regions of a problem that are opaque and diffusive while applying standard Implicit Monte Carlo (IMC) [2] to regions where the diffusion approximation is invalid. We show that this method significantly improves the computational efficiency as compared to a standard IMC/Hydrodynamics solver, when optically thick diffusive material is present, while maintaining accuracy. Two test cases are used to demonstrate the accuracy and performance of HIMCD as compared to IMC and IMD. The first is the Lowrie semi-analytic diffusive shock [3]. The second is a simple test case where the source radiation streams through optically thin material and heats a thick diffusive region of material, causing it to rapidly expand. We found that HIMCD proves to be accurate, robust, and computationally efficient for these test problems.
NASA Technical Reports Server (NTRS)
Jordan, F. L., Jr.
1980-01-01
As part of basic research to improve aerial applications technology, methods were developed at the Langley Vortex Research Facility to simulate and measure deposition patterns of aerially-applied sprays and granular materials by means of tests with small-scale models of agricultural aircraft and dynamically-scaled test particles. Interactions between the aircraft wake and the dispersed particles are being studied with the objective of modifying wake characteristics and dispersal techniques to increase swath width, improve deposition pattern uniformity, and minimize drift. The particle scaling analysis, test methods for particle dispersal from the model aircraft, visualization of particle trajectories, and measurement and computer analysis of test deposition patterns are described. An experimental validation of the scaling analysis and test results that indicate improved control of chemical drift by use of winglets are presented to demonstrate test methods.
Extended time-interval analysis
NASA Astrophysics Data System (ADS)
Fynbo, H. O. U.; Riisager, K.
2014-01-01
Several extensions of the half-life analysis method recently suggested by Horvat and Hardy are put forward. Goodness-of-fit testing is included, and the method is extended to cases where more information is available for each decay event, which allows applications also to, e.g., γ-decay data. The results are tested with Monte Carlo simulations and are applied to the decays of 64Cu and 56Mn.
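A minimal Python sketch in the spirit of the event-by-event analysis above: a maximum-likelihood half-life estimate from simulated decay times followed by a simple goodness-of-fit check against the fitted exponential. The half-life value and sample size are illustrative assumptions, and this is not the method of the paper.

    # Maximum-likelihood half-life estimate plus a simple goodness-of-fit check.
    import numpy as np
    from scipy import stats

    true_half_life = 12.7 * 3600.0                   # seconds (roughly 64Cu, for flavour)
    lam = np.log(2.0) / true_half_life
    times = np.random.default_rng(1).exponential(1.0 / lam, size=5000)

    lam_hat = 1.0 / times.mean()                     # maximum-likelihood decay constant
    half_life_hat = np.log(2.0) / lam_hat
    print(f"estimated half-life: {half_life_hat / 3600.0:.2f} h")

    # Kolmogorov-Smirnov check against the fitted exponential (the p-value is only
    # approximate because the scale parameter was estimated from the same data).
    ks_stat, p_value = stats.kstest(times, "expon", args=(0.0, 1.0 / lam_hat))
    print(f"KS statistic = {ks_stat:.4f}, approximate p-value = {p_value:.2f}")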
ERIC Educational Resources Information Center
Eckes, Thomas
2017-01-01
This paper presents an approach to standard setting that combines the prototype group method (PGM; Eckes, 2012) with a receiver operating characteristic (ROC) analysis. The combined PGM-ROC approach is applied to setting cut scores on a placement test of English as a foreign language (EFL). To implement the PGM, experts first named learners whom…
Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis
NASA Technical Reports Server (NTRS)
Mcanelly, W. B.; Young, C. T. K.
1973-01-01
Methods for predicting the base heating characteristics of a multiple rocket engine installation are discussed. The environmental data is applied to the design of adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.
It's Time for a Conceptual Change.
ERIC Educational Resources Information Center
Hausfather, Samuel J.
1992-01-01
Conceptual change teaching is an instructional method that helps students modify, extend, or exchange their alternative conceptions for the appropriate scientific conceptions. Provides activities and a diagnostic test to apply the method to the concepts of heat and temperature. (MDH)
Applying site-index curves to northern hardwoods in New Hampshire
Dale S. Solomon
1968-01-01
Describes a new method for testing site-index curves. Study results indicate that Vermont site-index curves for yellow birch, paper birch, white ash, and sugar maple, and New York-Connecticut curves for red maple, can be applied satisfactorily in New Hampshire when used with certain precautions and corrections.
Rajan, Sekar; Colaco, Socorrina; Ramesh, N; Meyyanathan, Subramania Nainar; Elango, K
2014-02-01
This study describes the development and validation of dissolution tests for sustained-release Dextromethorphan hydrobromide tablets using an HPLC method. Chromatographic separation was achieved on a C18 column utilizing 0.5% triethylamine (pH 7.5) and acetonitrile in the ratio of 50:50. The detection wavelength was 280 nm. Suitable conditions were established after testing sink conditions, dissolution medium, and agitation intensity, and the best dissolution conditions tested for Dextromethorphan hydrobromide were applied to evaluate the dissolution profiles. The method was validated, and the response was found to be linear in the drug concentration range of 10-80 microg mL(-1). The method was established to have sufficient intermediate precision, as similar separation was achieved on another instrument handled by different operators. Mean recovery was 101.82%. Intra-run precisions (% RSD) for three different concentrations were 1.23, 1.10, 0.72 and 1.57, 1.69, 0.95, and inter-run precisions were 0.83, 1.36, and 1.57%, respectively. The method was successfully applied to the dissolution study of the developed Dextromethorphan hydrobromide tablets.
Narita, Kazuto; Ishii, Yuuki; Vo, Phuc Thi Hong; Nakagawa, Fumiko; Ogata, Shinichi; Yamashita, Kunihiko; Kojima, Hajime; Itagaki, Hiroshi
2018-01-01
Recently, animal testing has been affected by increasing ethical, social, and political concerns regarding animal welfare. Several in vitro safety tests for evaluating skin sensitization, such as the human cell line activation test (h-CLAT), have been proposed. However, similar to other tests, the h-CLAT has produced false-negative results, including in tests for acid anhydride and water-insoluble chemicals. In a previous study, we demonstrated that the cause of false-negative results from phthalic anhydride was hydrolysis by an aqueous vehicle, with IL-8 release from THP-1 cells, and that short-time exposure to liquid paraffin (LP) dispersion medium could reduce false-negative results from acid anhydrides. In the present study, we modified the h-CLAT by applying this exposure method. We found that the modified h-CLAT is a promising method for reducing false-negative results obtained from acid anhydrides and chemicals with octanol-water partition coefficients (LogKow) greater than 3.5. Based on the outcomes from the present study, a combination of the original and the modified h-CLAT is suggested for reducing false-negative results. Notably, the combination method provided a sensitivity of 95% (overall chemicals) or 93% (chemicals with LogKow > 2.0), and an accuracy of 88% (overall chemicals) or 81% (chemicals with LogKow > 2.0). We found that the combined method is a promising evaluation scheme for reducing false-negative results seen in existing in vitro skin-sensitization tests. In the future, we expect a combination of original and modified h-CLAT to be applied in a newly developed in vitro test for evaluating skin sensitization.
Stepwise Regression Analysis of MDOE Balance Calibration Data Acquired at DNW
NASA Technical Reports Server (NTRS)
DeLoach, RIchard; Philipsen, Iwan
2007-01-01
This paper reports a comparison of two experiment design methods applied in the calibration of a strain-gage balance. One features a 734-point test matrix in which loads are varied systematically according to a method commonly applied in aerospace research and known in the literature of experiment design as One Factor At a Time (OFAT) testing. Two variations of an alternative experiment design were also executed on the same balance, each with different features of an MDOE experiment design. The Modern Design of Experiments (MDOE) is an integrated process of experiment design, execution, and analysis applied at NASA's Langley Research Center to achieve significant reductions in cycle time, direct operating cost, and experimental uncertainty in aerospace research generally and in balance calibration experiments specifically. Personnel in the Instrumentation and Controls Department of the German Dutch Wind Tunnels (DNW) have applied MDOE methods to evaluate them in the calibration of a balance using an automated calibration machine. The data have been sent to Langley Research Center for analysis and comparison. This paper reports key findings from this analysis. The chief result is that a 100-point calibration exploiting MDOE principles delivered quality comparable to a 700+ point OFAT calibration with significantly reduced cycle time and attendant savings in direct and indirect costs. While the DNW test matrices implemented key MDOE principles and produced excellent results, additional MDOE concepts implemented in balance calibrations at Langley Research Center are also identified and described.
A Real-time Evaluation of Human-based Approaches to Safety Testing: What We Can Do Now (TDS)
Despite ever-increasing efforts in early safety assessment in all industries, there are still many chemicals that prove toxic in humans. While greater use of human in vitro test methods may serve to reduce this problem, the formal validation process applied to such tests represen...
A Guide to Computer Adaptive Testing Systems
ERIC Educational Resources Information Center
Davey, Tim
2011-01-01
Some brand names are used generically to describe an entire class of products that perform the same function. "Kleenex," "Xerox," "Thermos," and "Band-Aid" are good examples. The term "computerized adaptive testing" (CAT) is similar in that it is often applied uniformly across a diverse family of testing methods. Although the various members of…
Statistical Method to Overcome Overfitting Issue in Rational Function Models
NASA Astrophysics Data System (ADS)
Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.
2017-09-01
Rational function models (RFMs) are known as one of the most appealing models, extensively applied in geometric correction of satellite images and map production. Overfitting is a common issue in the case of terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, a frequently used solution to RFM overfitting. In the proposed method, a statistical test, namely a significance test, is applied to search for the RFM parameters that are resistant against the overfitting issue. The performance of the proposed method was evaluated on two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy. Indeed, this technique shows an improvement of 50-80% over TR.
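To make the contrast above concrete, the Python sketch below applies both remedies to a generic over-parameterized least-squares fit: Tikhonov (ridge) regularization and a t-statistic significance screen that keeps only resistant parameters. The polynomial design matrix, noise level, and the |t| > 2 cutoff are assumptions for illustration; this is not the authors' RFM implementation.

    # Ridge regularization vs. significance screening for an ill-posed linear fit.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=40)
    A = np.vander(x, N=10, increasing=True)          # deliberately over-parameterized design
    y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0.0, 0.01, size=x.size)

    # Remedy 1: Tikhonov (ridge) regularization, (A^T A + lambda*I) p = A^T y.
    lam = 1e-3
    p_ridge = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

    # Remedy 2: significance screening -- keep only coefficients with a large |t|.
    p_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ p_ls
    sigma2 = resid @ resid / (x.size - A.shape[1])
    cov = sigma2 * np.linalg.inv(A.T @ A)
    t_stat = p_ls / np.sqrt(np.diag(cov))
    keep = np.abs(t_stat) > 2.0                      # roughly a 5% significance level
    print("ridge solution norm:", np.linalg.norm(p_ridge).round(3))
    print("terms retained by the significance screen:", np.flatnonzero(keep))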
An IMU-to-Body Alignment Method Applied to Human Gait Analysis.
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-12-10
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
NASA Astrophysics Data System (ADS)
Qiu, Feng; Dai, Guang; Zhang, Ying
According to the acoustic emission information and the appearance inspection information from tank bottom online testing, the external factors associated with tank bottom corrosion status are confirmed. Applying an artificial neural network intelligent evaluation method, three tank bottom corrosion status evaluation models are established, based on appearance inspection information, acoustic emission information, and online testing information, respectively. Compared with the results of acoustic emission online testing in the evaluation of the test sample, the accuracy of the evaluation model based on online testing information is 94%. The evaluation model can evaluate tank bottom corrosion accurately and realize intelligent evaluation of acoustic emission online testing of tank bottoms.
Micro Dot Patterning on the Light Guide Panel Using Powder Blasting
Jang, Ho Su; Cho, Myeong Woo; Park, Dong Sam
2008-01-01
This study develops a micromachining technology for a light guide panel (LGP) mold, whereby micro dot patterns are formed on an LGP surface by a single injection process instead of existing screen printing processes. The micro powder blasting technique is applied to form micro dot patterns on the LGP mold surface. The optimal conditions for the masking, laminating, exposure, and developing processes to form the micro dot patterns are first experimentally investigated. An LGP mold with masked micro patterns is then machined using the micro powder blasting method, and the machinability of the micro dot patterns is verified. A prototype LGP is test-injected using the developed LGP mold, and a shape analysis of the patterns and performance testing of the injected LGP are carried out. As an additional approach, matte finishing, a special surface treatment method, is applied to the mold surface to improve the light diffusion characteristics, uniformity, and brightness of the LGP. The results of this study show that the applied powder blasting method can be successfully used to manufacture LGPs with micro patterns by a single injection using the developed mold, thereby replacing existing screen printing methods.
ERIC Educational Resources Information Center
Groseclose, Richard
This third in a series of six modules for a course titled Nondestructive Examination (NDE) Techniques II explains the principles of magnets and magnetic fields and how they are applied in magnetic particle testing, describes the theory and methods of magnetizing test specimens, describes the test equipment used, discusses the principles and…
Accelerated life testing of spacecraft subsystems
NASA Technical Reports Server (NTRS)
Wiksten, D.; Swanson, J.
1972-01-01
The rationale and requirements for conducting accelerated life tests on electronic subsystems of spacecraft are presented. A method for applying data on the reliability and temperature sensitivity of the parts contained in a subsystem to the selection of accelerated life test parameters is described. Additional considerations affecting the formulation of test requirements are identified, and practical limitations of accelerated aging are described.
On sample size of the kruskal-wallis test with application to a mouse peritoneal cavity study.
Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui
2011-03-01
As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.
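A simple Monte Carlo sketch of the kind of power and sample size assessment discussed above, using scipy's Kruskal-Wallis test on three simulated groups; the group distributions, location shifts, and sample sizes are invented for illustration and are not the article's proposed calculation.

    # Monte Carlo power estimate for the Kruskal-Wallis test at several sample sizes.
    import numpy as np
    from scipy import stats

    def kw_power(n_per_group, shifts=(0.0, 0.5, 1.0), n_sim=2000, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(n_sim):
            groups = [rng.normal(loc=s, scale=1.0, size=n_per_group) for s in shifts]
            _, p = stats.kruskal(*groups)
            rejections += p < alpha
        return rejections / n_sim

    for n in (10, 20, 30):
        print(f"n per group = {n}: estimated power = {kw_power(n):.2f}")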
Variable Acceleration Force Calibration System (VACS)
NASA Technical Reports Server (NTRS)
Rhew, Ray D.; Parker, Peter A.; Johnson, Thomas H.; Landman, Drew
2014-01-01
Conventionally, force balances have been calibrated manually, using a complex system of free-hanging precision weights, bell cranks, and/or other mechanical components. Conventional methods may provide sufficient accuracy in some instances, but are often quite complex and labor-intensive, requiring three to four man-weeks to complete each full calibration. To ensure accuracy, gravity-based loading is typically utilized. However, this often causes difficulty when applying loads in three simultaneous, orthogonal axes. A complex system of levers, cranks, and cables must be used, introducing increased sources of systematic error, and significantly increasing the time and labor intensity required to complete the calibration. One aspect of the VACS is a method wherein the mass utilized for calibration is held constant, and the acceleration is changed to thereby generate relatively large forces with relatively small test masses. Multiple forces can be applied to a force balance without changing the test mass, and dynamic forces can be applied by rotation or oscillating acceleration. If rotational motion is utilized, a mass is rigidly attached to a force balance, and the mass is exposed to a rotational field. A large force can be applied by utilizing a large rotational velocity. A centrifuge or rotating table can be used to create the rotational field, and fixtures can be utilized to position the force balance. The acceleration may also be linear. For example, a table that moves linearly and accelerates in a sinusoidal manner may also be utilized. The test mass does not have to move in a path that is parallel to the ground, and no re-leveling is therefore required. Balance deflection corrections may be applied passively by monitoring the orientation of the force balance with a three-axis accelerometer package. Deflections are measured during each test run, and adjustments with respect to the true applied load can be made during the post-processing stage. This paper will present the development and testing of the VACS concept.
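As a back-of-the-envelope illustration of the rotational loading idea described above, the sketch below solves F = m·ω²·r for the spin rate needed to produce a target calibration force with a fixed test mass; the mass, radius, and target force are arbitrary examples, not VACS parameters.

    # Required spin rate for a target centrifugal calibration load, F = m * omega^2 * r.
    import math

    m = 0.5            # kg, fixed calibration test mass (assumed)
    r = 1.2            # m, radius from the rotation axis to the mass (assumed)
    F_target = 500.0   # N, desired calibration load (assumed)

    omega = math.sqrt(F_target / (m * r))      # required angular rate, rad/s
    rpm = omega * 60.0 / (2.0 * math.pi)
    print(f"required spin rate: {omega:.1f} rad/s ({rpm:.0f} rpm)")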
Description and evaluation of an interference assessment for a slotted-wall wind tunnel
NASA Technical Reports Server (NTRS)
Kemp, William B., Jr.
1991-01-01
A wind-tunnel interference assessment method applicable to test sections with discrete finite-length wall slots is described. The method is based on high order panel method technology and uses mixed boundary conditions to satisfy both the tunnel geometry and wall pressure distributions measured in the slotted-wall region. Both the test model and its sting support system are represented by distributed singularities. The method yields interference corrections to the model test data as well as surveys through the interference field at arbitrary locations. These results include the equivalent of tunnel Mach calibration, longitudinal pressure gradient, tunnel flow angularity, wall interference, and an inviscid form of sting interference. Alternative results which omit the direct contribution of the sting are also produced. The method was applied to the National Transonic Facility at NASA Langley Research Center for both tunnel calibration tests and tests of two models of subsonic transport configurations.
DOT National Transportation Integrated Search
2012-04-01
This paper presents a description of efforts to disseminate findings from the Phase I study (SPR-2244), provides examples of applied maturity testing and temperature monitoring in Connecticut, reviews several State Highway Agency protocols for using ...
Alternate Test Procedures to Perform Clean Water Act Monitoring for Region 9
When performing Clean Water Act monitoring, parties interested in using a method not approved in 40 CFR Part 136 must apply to use the alternate test procedure (ATP) in the Region in which the discharging facility is located.
Testing independence of fragment lengths within VNTR loci
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geisser, S.; Johnson, W.
1993-11-01
Methods that were devised to test independence of the bivariate fragment lengths obtained from VNTR loci are applied to several population databases. It is shown that for many of the probes independence (Hardy-Weinberg equilibrium) cannot be sustained. 3 refs., 3 tabs.
Speciation of mercury in sludge solids: washed sludge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bannochie, C. J.; Lourie, A. P.
2017-10-24
The objective of this applied research task was to study the type and concentration of mercury compounds found within the contaminated Savannah River Site Liquid Waste System (SRS LWS). A method of selective sequential extraction (SSE), developed by Eurofins Frontier Global Sciences [1,2] and adapted by SRNL, utilizes an extraction procedure divided into seven separate tests for different species of mercury. In SRNL's modified procedure, four of these tests were applied to a washed sample of high level radioactive waste sludge.
Weber, Benjamin; Lee, Sau L; Delvadia, Renishkumar; Lionberger, Robert; Li, Bing V; Tsong, Yi; Hochhaus, Guenther
2015-03-01
Equivalence testing of aerodynamic particle size distribution (APSD) through multi-stage cascade impactors (CIs) is important for establishing bioequivalence of orally inhaled drug products. Recent work demonstrated that the median of the modified chi-square ratio statistic (MmCSRS) is a promising metric for APSD equivalence testing of test (T) and reference (R) products as it can be applied to a reduced number of CI sites that are more relevant for lung deposition. This metric is also less sensitive to the increased variability often observed for low-deposition sites. A method to establish critical values for the MmCSRS is described here. This method considers the variability of the R product by employing a reference variance scaling approach that allows definition of critical values as a function of the observed variability of the R product. A stepwise CI equivalence test is proposed that integrates the MmCSRS as a method for comparing the relative shapes of CI profiles and incorporates statistical tests for assessing equivalence of single actuation content and impactor sized mass. This stepwise CI equivalence test was applied to 55 published CI profile scenarios, which were classified as equivalent or inequivalent by members of the Product Quality Research Institute working group (PQRI WG). The results of the stepwise CI equivalence test using a 25% difference in MmCSRS as an acceptance criterion provided the best matching with those of the PQRI WG as decisions of both methods agreed in 75% of the 55 CI profile scenarios.
Contact sponge water absorption test implemented for in situ measures
NASA Astrophysics Data System (ADS)
Gaggero, Laura; Scrivano, Simona
2016-04-01
The contact sponge method is a non-destructive in-situ methodology used to estimate a water uptake coefficient. The procedure, unlike other in-situ measurements, has been proven to be directly comparable to laboratory water uptake measurements, and was registered as UNI 11432:2011. The UNI Normal procedure requires the use of a sponge of known density, soaked in water, weighed, placed on the material for 1 minute (UNI 11432, 2011; Pardini & Tiano, 2004), then weighed again. Difficulties arise when operating on test samples or on materials whose porosity has been altered by decay. While carrying out the test, fluctuations in the environmental parameters were negligible, but the pressure applied to the surface was not, and it induced the release of different amounts of water towards the material. For this reason we designed a metal piece of the same diameter as the plate carrying the sponge, to be screwed onto the tip of a pocket penetrometer. With this instrument the sponge was kept in contact with the surface for 1 minute applying two different loads: at first it was pushed with 0.3 kg/cm2 in order to press the sponge, but not its holder, against the surface; then a load of 1.1 kg/cm2 was applied, still avoiding diverting the load to the sponge holder. We applied both the current and our implemented method to determine the water absorption by contact sponge on 5 fresh rock types (4 limestones: fine- and coarse-grained Pietra di Vicenza, Rosso Verona, Breccia Aurora, and the siliciclastic Macigno sandstone). The results show that 1) the current methodology requires manual skill and experience to produce a coherent set of data, since the variables involved include not only the imposed pressure but also the compression mechanics; 2) the control of the applied pressure allowed reproducible measurements; moreover, 3) the use of a thicker sponge made it possible to apply the method even on rougher surfaces, as the device holding the sponge is not in contact with the tested object; finally, 4) the implemented measurements allowed a direct comparison with the capillary water absorption method. Pardini C. & Tiano P. 2004. Valutazione in situ dei trattamenti protettivi per il materiale lapideo, proposta di una nuova semplice metodologia. ARKOS, 5, 30-36. UNI 11432. 2011. Beni culturali - Materiali lapidei naturali ed artificiali - Misura della capacità di assorbimento di acqua mediante spugna di contatto. P. 6.
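For orientation, a minimal sketch of the water uptake computation underlying the test is given below; variable names and the example reading are illustrative, and the exact reporting conventions should be taken from UNI 11432 itself.

```python
# A minimal sketch of the contact sponge water uptake computation: the water
# absorbed by the stone is taken as the sponge mass loss over the contact area
# and contact time (illustrative values, not a standardized report).
def water_absorption(m_sponge_before_g, m_sponge_after_g,
                     contact_area_cm2, contact_time_min=1.0):
    """Return water absorption Wa in g / (cm^2 * min)."""
    absorbed = m_sponge_before_g - m_sponge_after_g
    return absorbed / (contact_area_cm2 * contact_time_min)

# Hypothetical reading: 0.32 g released over a 23.75 cm^2 sponge plate in 1 minute
print(water_absorption(10.85, 10.53, 23.75))
```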
A risk-based classification scheme for genetically modified foods. II: Graded testing.
Chao, Eunice; Krewski, Daniel
2008-12-01
This paper presents a graded approach to the testing of crop-derived genetically modified (GM) foods based on concern levels in a proposed risk-based classification scheme (RBCS) and currently available testing methods. A graded approach offers the potential for more efficient use of testing resources by focusing less on lower concern GM foods, and more on higher concern foods. In this proposed approach to graded testing, products that are classified as Level I would have met baseline testing requirements that are comparable to what is widely applied to premarket assessment of GM foods at present. In most cases, Level I products would require no further testing, or very limited confirmatory analyses. For products classified as Level II or higher, additional testing would be required, depending on the type of the substance, prior dietary history, estimated exposure level, prior knowledge of toxicity of the substance, and the nature of the concern related to unintended changes in the modified food. Level III testing applies only to the assessment of toxic and antinutritional effects from intended changes and is tailored to the nature of the substance in question. Since appropriate test methods are not currently available for all effects of concern, future research to strengthen the testing of GM foods is discussed.
NASA Astrophysics Data System (ADS)
Masera, D.; Bocca, P.; Grazzini, A.
2011-07-01
In this experimental program the main goal is to monitor the damage evolution in masonry and concrete structures by Acoustic Emission (AE) signal analysis, applying a well-known seismic method. For this reason the concept of coda wave interferometry is applied to the AE signals recorded during the tests. Acoustic Emission (AE) is a very effective non-destructive technique applied to identify micro- and macro-defects and their temporal evolution in several materials. This technique permits estimation of the velocity of ultrasound wave propagation and the amount of energy released during fracture propagation, to obtain information on the criticality of the ongoing process. By means of AE monitoring, an experimental analysis of a set of reinforced masonry walls under variable amplitude loading and strengthened reinforced concrete (RC) beams under monotonic static load has been carried out. In the reinforced masonry walls, cyclic fatigue stress has been applied to accelerate the static creep and to forecast the corresponding creep behaviour of masonry under static long-time loading. During the tests, the evolution of fracture growth is monitored by coda wave interferometry, which represents a novel approach in structural monitoring based on the relative change of velocity of the AE coda signal. In general, the sensitivity of coda waves has been used to estimate velocity changes in fault zones, in volcanoes, in a mining environment, and in ultrasound experiments. This method uses multiple scattered waves, which travelled through the material along numerous paths, to infer tiny temporal changes in the wave velocity. The applied method has the potential to be used as a "damage-gauge" for monitoring velocity changes as a sign of damage evolution in masonry and concrete structures.
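One common way to turn coda sensitivity into a relative velocity change is the stretching technique; the sketch below is a generic implementation of that idea under stated assumptions (synthetic signal, arbitrary window, simple grid search) and is not the authors' processing chain.

```python
# A minimal stretching-method sketch for coda wave interferometry: the perturbed
# coda is resampled at trial dv/v values and the value maximizing correlation with
# the reference coda in a late-time window is kept.
import numpy as np

def stretching_dvv(reference, current, t, window, trial_dvv):
    i0, i1 = np.searchsorted(t, window[0]), np.searchsorted(t, window[1])
    ref_win = reference[i0:i1]
    best_cc, best_eps = -np.inf, 0.0
    for eps in trial_dvv:
        # A velocity increase dv/v = eps maps a reference arrival time tau to
        # tau*(1-eps) in the current trace; evaluating current at t*(1-eps) undoes it.
        stretched = np.interp(t * (1.0 - eps), t, current)
        cc = np.corrcoef(ref_win, stretched[i0:i1])[0, 1]
        if cc > best_cc:
            best_cc, best_eps = cc, eps
    return best_eps, best_cc

# Synthetic check: a coda-like random superposition of sinusoids with a 0.5% velocity increase
t = np.linspace(0.0, 1.0, 4000)
rng = np.random.default_rng(1)
amps = rng.standard_normal(64)
reference = sum(a * np.sin(2 * np.pi * f * t)
                for a, f in zip(amps, np.linspace(50, 500, 64)))
current = np.interp(t / (1.0 - 0.005), t, reference)   # arrivals shifted earlier
eps, cc = stretching_dvv(reference, current, t, window=(0.5, 0.9),
                         trial_dvv=np.linspace(-0.01, 0.01, 201))
print(eps, cc)   # expected eps close to +0.005
```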
Micro Dot Patterning on the Light Guide Panel Using Powder Blasting.
Jang, Ho Su; Cho, Myeong Woo; Park, Dong Sam
2008-02-08
The aim of this study is to develop a micromachining technology for a light guide panel (LGP) mold, whereby micro dot patterns are formed on a LGP surface by a single injection process instead of existing screen printing processes. The micro powder blasting technique is applied to form micro dot patterns on the LGP mold surface. The optimal conditions for the masking, laminating, exposure, and developing processes to form the micro dot patterns are first experimentally investigated. A LGP mold with masked micro patterns is then machined using the micro powder blasting method and the machinability of the micro dot patterns is verified. A prototype LGP is test-injected using the developed LGP mold, and a shape analysis of the patterns and performance testing of the injected LGP are carried out. As an additional approach, matte finishing, a special surface treatment method, is applied to the mold surface to improve the light diffusion characteristics, uniformity and brightness of the LGP. The results of this study show that the applied powder blasting method can be successfully used to manufacture LGPs with micro patterns by just a single injection using the developed mold and thereby replace existing screen printing methods.
A survey of methods for the evaluation of tissue engineering scaffold permeability.
Pennella, F; Cerino, G; Massai, D; Gallo, D; Falvo D'Urso Labate, G; Schiavi, A; Deriu, M A; Audenino, A; Morbiducci, Umberto
2013-10-01
The performance of porous scaffolds for tissue engineering (TE) applications is evaluated, in general, in terms of porosity, pore size and distribution, and pore tortuosity. These descriptors are often confounding when they are applied to characterize transport phenomena within porous scaffolds. On the contrary, permeability is a more effective parameter in (1) estimating mass and species transport through the scaffold and (2) describing its topological features, thus allowing a better evaluation of the overall scaffold performance. However, the evaluation of TE scaffold permeability suffers from a lack of uniformity and standards in measurement and testing procedures, which makes the comparison of results obtained in different laboratories unfeasible. In this review paper we summarize the most important features influencing TE scaffold permeability, linking them to the theoretical background. An overview of methods applied for TE scaffold permeability evaluation is given, presenting experimental test benches and computational methods applied (1) to integrate experimental measurements and (2) to support the TE scaffold design process. Both experimental and computational limitations in the permeability evaluation process are also discussed.
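As a point of reference for the theoretical background mentioned above, permeability is commonly reduced from a constant-flow-rate test through Darcy's law; the snippet below is a minimal sketch with hypothetical bench values, not a standardized TE protocol.

```python
# A minimal Darcy's-law reduction: intrinsic permeability k of a scaffold of
# length L and cross-section A from the pressure drop dP at an imposed flow
# rate Q of a fluid with viscosity mu (all example values are hypothetical).
import math

def darcy_permeability(Q_m3_s, mu_pa_s, length_m, area_m2, dP_pa):
    """k = Q * mu * L / (A * dP), in m^2."""
    return Q_m3_s * mu_pa_s * length_m / (area_m2 * dP_pa)

# 1 mL/min of water through a 5 mm thick, 10 mm diameter scaffold at 50 Pa drop
Q = 1e-6 / 60.0                      # m^3/s
A = math.pi * 0.005 ** 2             # m^2
print(darcy_permeability(Q, 1e-3, 0.005, A, dP_pa=50.0))
```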
NASA Astrophysics Data System (ADS)
Ohtsuka, N.; Shindo, Y.; Makita, A.
2010-06-01
An instrumented Charpy test was conducted on small-sized specimens of 2 1/4 Cr-1Mo steel. In the test, the single-specimen key curve method was applied to determine the fracture toughness for the initiation of crack extension in the hydrogen-free condition, KIC, and for hydrogen embrittlement cracking, KIH. The tearing modulus, as a parameter for resistance to crack extension, was also determined. The role of these parameters was discussed at an upper shelf temperature and at a transition temperature. The key curve method combined with the instrumented Charpy test was thereby shown to be usable for evaluating not only temper embrittlement but also hydrogen embrittlement.
Comparison of Modal Analysis Methods Applied to a Vibro-Acoustic Test Article
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn; Pappa, Richard; Buehrle, Ralph; Grosveld, Ferdinand
2001-01-01
Modal testing of a vibro-acoustic test article referred to as the Aluminum Testbed Cylinder (ATC) has provided frequency response data for the development of validated numerical models of complex structures for interior noise prediction and control. The ATC is an all aluminum, ring and stringer stiffened cylinder, 12 feet in length and 4 feet in diameter. The cylinder was designed to represent typical aircraft construction. Modal tests were conducted for several different configurations of the cylinder assembly under ambient and pressurized conditions. The purpose of this paper is to present results from dynamic testing of different ATC configurations using two modal analysis software methods: Eigensystem Realization Algorithm (ERA) and MTS IDEAS Polyreference method. The paper compares results from the two analysis methods as well as the results from various test configurations. The effects of pressurization on the modal characteristics are discussed.
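For readers unfamiliar with ERA, the following compact sketch illustrates the class of algorithm being compared; the single-output impulse response, Hankel block sizes, and synthetic one-mode check are simplifying assumptions, not the ATC analysis itself.

```python
# A compact Eigensystem Realization Algorithm (ERA) sketch: Hankel matrices from a
# sampled impulse/free-decay response, SVD-based realization of the discrete state
# matrix, and conversion of its eigenvalues to frequencies and damping ratios.
import numpy as np

def era_modes(h, dt, order, rows=60, cols=60):
    """Identify natural frequencies (Hz) and damping ratios from response h."""
    H0 = np.array([[h[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[h[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    U, s, Vt = U[:, :order], s[:order], Vt[:order]
    S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
    A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt     # discrete-time state matrix
    z = np.linalg.eigvals(A)
    lam = np.log(z) / dt                              # continuous-time poles
    freqs = np.abs(lam) / (2.0 * np.pi)
    zetas = -np.real(lam) / np.abs(lam)
    keep = np.imag(lam) > 0                           # one pole per conjugate pair
    return freqs[keep], zetas[keep]

# Synthetic check: a single 12 Hz mode with 2% damping
dt = 1.0 / 512.0
t = np.arange(0.0, 4.0, dt)
wn = 2 * np.pi * 12.0
h = np.exp(-0.02 * wn * t) * np.sin(wn * np.sqrt(1 - 0.02**2) * t)
print(era_modes(h, dt, order=2))
```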
Effectiveness of Jigsaw learning compared to lecture-based learning in dental education.
Sagsoz, O; Karatas, O; Turel, V; Yildiz, M; Kaya, E
2017-02-01
The objective of this study was to evaluate the success levels of students using the Jigsaw learning method in dental education. Fifty students with similar grade point average (GPA) scores were selected and randomly assigned into one of two groups (n = 25). A pretest concerning 'adhesion and bonding agents in dentistry' was administered to all students before classes. The Jigsaw learning method was applied to the experimental group for 3 weeks. At the same time, the control group was taking classes using the lecture-based learning method. At the end of the 3 weeks, all students were retested (post-test) on the subject. A retention test was administered 3 weeks after the post-test. Mean scores were calculated for each test for the experimental and control groups, and the data obtained were analysed using the independent samples t-test. No significant difference was determined between the Jigsaw and lecture-based methods at pretest or post-test. The highest mean test score was observed in the post-test with the Jigsaw method. In the retention test, success with the Jigsaw method was significantly higher than that with the lecture-based method. The Jigsaw method is as effective as the lecture-based method. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Somatic and gastrointestinal in vivo biotransformation rates of hydrophobic chemicals in fish.
Lo, Justin C; Campbell, David A; Kennedy, Christopher J; Gobas, Frank A P C
2015-10-01
To improve current bioaccumulation assessment methods, a methodology is developed, applied, and investigated for measuring in vivo biotransformation rates of hydrophobic organic substances in the body (soma) and gastrointestinal tract of the fish. The method resembles the Organisation for Economic Co-operation and Development (OECD) 305 dietary bioaccumulation test but includes reference chemicals to determine both somatic and gastrointestinal biotransformation rates of test chemicals. Somatic biotransformation rate constants for the test chemicals ranged between 0 d(-1) and 0.38 (standard error [SE] 0.03) d(-1). Gastrointestinal biotransformation rate constants varied from 0 d(-1) to 46 (SE 7) d(-1). Gastrointestinal biotransformation contributed more to the overall biotransformation in fish than somatic biotransformation for all test substances but one. Results suggest that biomagnification tests can reveal the full extent of biotransformation in fish. The common presumption that the liver is the main site of biotransformation may not apply to many substances exposed through the diet. The results suggest that the application of quantitative structure-activity relationships (QSARs) for somatic biotransformation rates and hepatic in vitro models to assess the effect of biotransformation on bioaccumulation can underestimate biotransformation rates and overestimate the biomagnification potential of chemicals that are biotransformed in the gastrointestinal tract. With some modifications, the OECD 305 test can generate somatic and gastrointestinal biotransformation data to develop biotransformation QSARs and test in vitro-in vivo biotransformation extrapolation methods. © 2015 SETAC.
Analysis of pressure distortion testing
NASA Technical Reports Server (NTRS)
Koch, K. E.; Rees, R. L.
1976-01-01
The development of a distortion methodology, method D, was documented, and its application to steady state and unsteady data was demonstrated. Three methodologies based upon DIDENT, a NASA-LeRC distortion methodology based upon the parallel compressor model, were investigated by applying them to a set of steady state data. The best formulation was then applied to an independent data set. The good correlation achieved with this data set showed that method E, one of the above methodologies, is a viable concept. Unsteady data were analyzed by using the method E methodology. This analysis pointed out that the method E sensitivities are functions of pressure defect level as well as corrected speed and pattern.
An analytic data analysis method for oscillatory slug tests.
Chen, Chia-Shyun
2006-01-01
An analytical data analysis method is developed for slug tests in partially penetrating wells in confined or unconfined aquifers of high hydraulic conductivity. As adapted from the van der Kamp method, the determination of the hydraulic conductivity is based on the occurrence times and the displacements of the extreme points measured from the oscillatory data and their theoretical counterparts available in the literature. This method is applied to two sets of slug test response data presented by Butler et al.: one set shows slow damping with seven discernable extremities, and the other shows rapid damping with three extreme points. The estimates of the hydraulic conductivity obtained by the analytic method are in good agreement with those determined by an available curve-matching technique.
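The first step described above, recovering the oscillation frequency and damping from the occurrence times and displacements of the extrema, can be sketched as follows; the record is hypothetical, and the subsequent conversion to hydraulic conductivity via the van der Kamp-type expressions in the paper is not reproduced here.

```python
# A minimal sketch: angular frequency from the spacing of successive extreme points
# (half a period apart) and damping from the log-linear decay of their magnitudes.
import numpy as np

def oscillation_parameters(t_extrema, w_extrema):
    """t_extrema: times of successive extreme points (s);
    w_extrema: signed displacements at those times (m)."""
    t = np.asarray(t_extrema, dtype=float)
    w = np.asarray(w_extrema, dtype=float)
    period = 2.0 * np.mean(np.diff(t))          # successive extrema are T/2 apart
    omega = 2.0 * np.pi / period
    gamma = -np.polyfit(t, np.log(np.abs(w)), 1)[0]   # damping coefficient
    return omega, gamma

# Hypothetical oscillatory slug-test record with 5 extrema
t_ext = [2.1, 4.0, 5.9, 7.8, 9.7]
w_ext = [0.31, -0.22, 0.16, -0.11, 0.08]
print(oscillation_parameters(t_ext, w_ext))
```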
Settivari, Raja S; Ball, Nicholas; Murphy, Lynea; Rasoulpour, Reza; Boverhof, Darrell R; Carney, Edward W
2015-03-01
Interest in applying 21st-century toxicity testing tools for safety assessment of industrial chemicals is growing. Whereas conventional toxicology uses mainly animal-based, descriptive methods, a paradigm shift is emerging in which computational approaches, systems biology, high-throughput in vitro toxicity assays, and high-throughput exposure assessments are beginning to be applied to mechanism-based risk assessments in a time- and resource-efficient fashion. Here we describe recent advances in predictive safety assessment, with a focus on their strategic application to meet the changing demands of the chemical industry and its stakeholders. The opportunities to apply these new approaches are extensive and include screening of new chemicals, informing the design of safer and more sustainable chemical alternatives, filling information gaps on data-poor chemicals already in commerce, strengthening read-across methodology for categories of chemicals sharing similar modes of action, and optimizing the design of reduced-risk product formulations. Finally, we discuss how these predictive approaches dovetail with in vivo integrated testing strategies within repeated-dose regulatory toxicity studies, which are in line with 3Rs principles to refine, reduce, and replace animal testing. Strategic application of these tools is the foundation for informed and efficient safety assessment testing strategies that can be applied at all stages of the product-development process.
Standardization of Laboratory Methods for the PERCH Study
Karron, Ruth A.; Morpeth, Susan C.; Bhat, Niranjan; Levine, Orin S.; Baggett, Henry C.; Brooks, W. Abdullah; Feikin, Daniel R.; Hammitt, Laura L.; Howie, Stephen R. C.; Knoll, Maria Deloria; Kotloff, Karen L.; Madhi, Shabir A.; Scott, J. Anthony G.; Thea, Donald M.; Adrian, Peter V.; Ahmed, Dilruba; Alam, Muntasir; Anderson, Trevor P.; Antonio, Martin; Baillie, Vicky L.; Dione, Michel; Endtz, Hubert P.; Gitahi, Caroline; Karani, Angela; Kwenda, Geoffrey; Maiga, Abdoul Aziz; McClellan, Jessica; Mitchell, Joanne L.; Morailane, Palesa; Mugo, Daisy; Mwaba, John; Mwansa, James; Mwarumba, Salim; Nyongesa, Sammy; Panchalingam, Sandra; Rahman, Mustafizur; Sawatwong, Pongpun; Tamboura, Boubou; Toure, Aliou; Whistler, Toni; O’Brien, Katherine L.; Murdoch, David R.
2017-01-01
The Pneumonia Etiology Research for Child Health study was conducted across 7 diverse research sites and relied on standardized clinical and laboratory methods for the accurate and meaningful interpretation of pneumonia etiology data. Blood, respiratory specimens, and urine were collected from children aged 1–59 months hospitalized with severe or very severe pneumonia and community controls of the same age without severe pneumonia and were tested with an extensive array of laboratory diagnostic tests. A standardized testing algorithm and standard operating procedures were applied across all study sites. Site laboratories received uniform training, equipment, and reagents for core testing methods. Standardization was further assured by routine teleconferences, in-person meetings, site monitoring visits, and internal and external quality assurance testing. Targeted confirmatory testing and testing by specialized assays were done at a central reference laboratory. PMID:28575358
Chen, Qi; Chen, Quan; Luo, Xiaobing
2014-09-01
In recent years, due to the fast development of high-power light-emitting diodes (LEDs), their lifetime prediction and assessment have become a crucial issue. Although in situ measurement has been widely used for reliability testing in the laser diode community, it has not been applied commonly in the LED community. In this paper, an online testing method for LED life projection under accelerated reliability test was proposed and a prototype was built. The optical parametric data were collected. The systematic error and the measuring uncertainty were calculated to be within 0.2% and within 2%, respectively. With this online testing method, experimental data can be acquired continuously and a sufficient amount of data can be gathered. Thus, the projection fitting accuracy can be improved (r(2) = 0.954) and the testing duration can be shortened.
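A hedged illustration of the projection step is given below, assuming a simple exponential lumen-maintenance fit (a common practice-inspired choice); the paper's actual fitting model and data are not reproduced, and the readings shown are invented.

```python
# A minimal sketch of life projection from continuously logged optical data:
# fit phi(t) = B * exp(-alpha * t) to normalized flux and project the time to 70%.
import numpy as np

def project_l70(hours, relative_flux):
    hours = np.asarray(hours, dtype=float)
    rel = np.asarray(relative_flux, dtype=float)
    slope, intercept = np.polyfit(hours, np.log(rel), 1)
    alpha, B = -slope, np.exp(intercept)
    return np.log(B / 0.70) / alpha

# Hypothetical accelerated-test readings (hours, normalized luminous flux)
hours = [0, 500, 1000, 1500, 2000, 2500, 3000]
flux = [1.00, 0.985, 0.972, 0.955, 0.942, 0.930, 0.915]
print(f"projected L70: {project_l70(hours, flux):.0f} h")
```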
Context-Dependent Upper Limb Prosthesis Control for Natural and Robust Use.
Amsuess, Sebastian; Vujaklija, Ivan; Goebel, Peter; Roche, Aidan D; Graimann, Bernhard; Aszmann, Oskar C; Farina, Dario
2016-07-01
Pattern recognition and regression methods applied to the surface EMG have been used for estimating the user intended motor tasks across multiple degrees of freedom (DOF), for prosthetic control. While these methods are effective in several conditions, they are still characterized by some shortcomings. In this study we propose a methodology that combines these two approaches for mutually alleviating their limitations. This resulted in a control method capable of context-dependent movement estimation that switched automatically between sequential (one DOF at a time) or simultaneous (multiple DOF) prosthesis control, based on an online estimation of signal dimensionality. The proposed method was evaluated in scenarios close to real-life situations, with the control of a physical prosthesis in applied tasks of varying difficulties. Test prostheses were individually manufactured for both able-bodied and transradial amputee subjects. With these prostheses, two amputees performed the Southampton Hand Assessment Procedure test with scores of 58 and 71 points. The five able-bodied individuals performed standardized tests, such as the box&block and clothes pin test, reducing the completion times by up to 30%, with respect to using a state-of-the-art pure sequential control algorithm. Apart from facilitating fast simultaneous movements, the proposed control scheme was also more intuitive to use, since human movements are predominated by simultaneous activations across joints. The proposed method thus represents a significant step towards intelligent, intuitive and natural control of upper limb prostheses.
Method for Smoke Spread Testing of Large Premises
NASA Astrophysics Data System (ADS)
Walmerdahl, P.; Werling, P.
2001-11-01
A method for performing non-destructive smoke spread tests has been developed, tested and applied to several existing buildings. The heat source is generated by burning methanol in water-cooled steel trays of different sizes. Several tray sizes are available to cover fire sources up to nearly 1 MW. The smoke is supplied by means of a suitable number of smoke generators that produce a smoke which can be described as a non-toxic aerosol. The advantage of the method is that it provides a means for performing non-destructive tests in already existing buildings and other installations for the purpose of evaluating the functionality and design of active fire protection measures such as smoke extraction systems, etc. In the report, the method is described in detail and experimental data from the try-out of the method are also presented, in addition to a discussion on the applicability and flexibility of the method.
Improvement of sampling plans for Salmonella detection in pooled table eggs by use of real-time PCR.
Pasquali, Frédérique; De Cesare, Alessandra; Valero, Antonio; Olsen, John Emerdhal; Manfreda, Gerardo
2014-08-01
Eggs and egg products have been described as the most critical food vehicles of salmonellosis. The prevalence and level of contamination of Salmonella on table eggs are low, which severely affects the sensitivity of sampling plans applied voluntarily in some European countries, where one to five pools of 10 eggs are tested by the culture-based reference method ISO 6579:2004. In the current study we have compared the testing sensitivity of the reference culture method ISO 6579:2004 and an alternative real-time PCR method on Salmonella-contaminated egg pools of different sizes (4-9 uninfected eggs mixed with one contaminated egg) and contamination levels (10(0)-10(1), 10(1)-10(2), 10(2)-10(3) CFU/eggshell). Two hundred and seventy samples, corresponding to 15 replicates per pool size and inoculum level, were tested. At the lowest contamination level, real-time PCR detected Salmonella in 40% of contaminated pools vs 12% using ISO 6579. The results were used to estimate, by Monte Carlo simulation, the lowest number of sample units that need to be tested in order to have 95% certainty of not falsely accepting a contaminated lot. According to this simulation, at least 16 pools of 10 eggs each need to be tested by ISO 6579 in order to obtain this confidence level, while the minimum number of pools to be tested was reduced to 8 pools of 9 eggs each when real-time PCR was applied as the analytical method. This result underlines the importance of including analytical methods with higher sensitivity in order to improve the efficiency of sampling and reduce the number of samples to be tested. Copyright © 2013 Elsevier B.V. All rights reserved.
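For intuition only, a much-simplified version of the pool-number calculation is sketched below; it assumes every pool is contaminated and detected independently with a fixed sensitivity, so it does not reproduce the paper's richer Monte Carlo (which also models prevalence and contamination levels) and yields different counts.

```python
# Simplified back-of-the-envelope: smallest number of pools n such that at least
# one positive is detected with >= 95% probability, given per-pool sensitivity p
# and assuming every pool is contaminated and detected independently.
import math

def min_pools(per_pool_sensitivity, confidence=0.95):
    p = per_pool_sensitivity
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

print(min_pools(0.40))  # using the 40% real-time PCR detection rate at the lowest inoculum
print(min_pools(0.12))  # using the 12% ISO 6579 detection rate at the lowest inoculum
```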
Preparing Lessons, Exercises and Tests for M-Learning of IT Fundamentals
ERIC Educational Resources Information Center
Djenic, S.; Vasiljevic, V.; Mitic, J.; Petkovic, V.; Miletic, A.
2014-01-01
This paper represents a result of studying the efficiency of applying mobile learning technologies, as well as the accompanying advanced teaching methods in the area of Information Technologies, at the School of Electrical and Computer Engineering of Applied Studies in Belgrade, Serbia. It contains a brief description of the form of application…
Applying Item Response Theory Methods to Examine the Impact of Different Response Formats
ERIC Educational Resources Information Center
Hohensinn, Christine; Kubinger, Klaus D.
2011-01-01
In aptitude and achievement tests, different response formats are usually used. A fundamental distinction must be made between the class of multiple-choice formats and the constructed response formats. Previous studies have examined the impact of different response formats applying traditional statistical approaches, but these influences can also…
Reliability of tanoak volume equations when applied to different areas
Norman H. Pillsbury; Philip M. McDonald; Victor Simon
1995-01-01
Tree volume equations for tanoak (Lithocarpus densiflorus) were developed for seven stands throughout its natural range and compared by a volume prediction and a parameter difference method. The objective was to test if volume estimates from a species growing in a local, relatively uniform habitat could be applied more widely. Results indicated...
Mower, Timothy E.; Higgins, Jerry D.; Yang, In C.; Peters, Charles A.
1994-01-01
Study of the hydrologic system at Yucca Mountain, Nevada, requires the extraction of pore-water samples from welded and nonwelded, unsaturated tuffs. Two compression methods (triaxial compression and one-dimensional compression) were examined to develop a repeatable extraction technique and to investigate the effects of the extraction method on the original pore-fluid composition. A commercially available triaxial cell was modified to collect pore water expelled from tuff cores. The triaxial cell applied a maximum axial stress of 193 MPa and a maximum confining stress of 68 MPa. Results obtained from triaxial compression testing indicated that pore-water samples could be obtained from nonwelded tuff cores that had initial moisture contents as small as 13 percent (by weight of dry soil). Injection of nitrogen gas while the test core was held at the maximum axial stress caused expulsion of additional pore water and reduced the required initial moisture content from 13 to 11 percent. Experimental calculations, together with experience gained from testing moderately welded tuff cores, indicated that the triaxial cell used in this study could not apply adequate axial or confining stress to expel pore water from cores of densely welded tuffs. This concern led to the design, fabrication, and testing of a one-dimensional compression cell. The one-dimensional compression cell used in this study was constructed from hardened 4340-alloy and nickel-alloy steels and could apply a maximum axial stress of 552 MPa. The major components of the device include a corpus ring and sample sleeve to confine the sample, a piston and base platen to apply axial load, and drainage plates to transmit expelled water from the test core out of the cell. One-dimensional compression extracted pore water from nonwelded tuff cores that had initial moisture contents as small as 7.6 percent; pore water was expelled from densely welded tuff cores that had initial moisture contents as small as 7.7 percent. Injection of nitrogen gas at the maximum axial stress did not produce additional pore water from nonwelded tuff cores, but was critical to recovery of pore water from densely welded tuff cores. Gas injection reduced the required initial moisture content in welded tuff cores from 7.7 to 6.5 percent. Based on the mechanical ability of a pore-water extraction method to remove water from welded and nonwelded tuff cores, one-dimensional compression is a more effective extraction method than triaxial compression. However, because the effects that one-dimensional compression has on pore-water chemistry are not completely understood, additional testing will be needed to verify that this method is suitable for pore-water extraction from Yucca Mountain tuffs.
Specific Yields Estimated from Gravity Change during Pumping Test
NASA Astrophysics Data System (ADS)
Chen, K. H.; Hwang, C.; Chang, L. C.
2017-12-01
Specific yield (Sy) is the most important parameter for describing the available groundwater capacity in an unconfined aquifer. When Sy is estimated from a field pumping test, aquifer heterogeneity and well performance cause large uncertainty. In this study, we use a gravity-based method to estimate Sy. During a pumping test, a known amount of mass (groundwater) is removed from the aquifer. If the drawdown cone is large enough and close enough to a high-precision gravimeter, the resulting gravity change can be detected. The gravity-based method uses gravity observations that are independent of traditional flow computations; only the drawdown cone needs to be modeled with observed head and hydrogeological data. The gravity method can be used in most groundwater field tests, such as local pumping or injection tests driven by man-made activity or annual variations due to natural sources. We apply our gravity method at a few sites in Taiwan situated over different unconfined aquifers, where pumping tests for Sy determination were also carried out. We discuss why the gravity method produces results different from traditional pumping tests, as well as field designs and limitations of the gravity method.
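One rough way to see how a gravity change constrains Sy is the infinite (Bouguer) slab approximation shown below; the study models the full drawdown cone, so this flat-slab form and the example numbers are simplifying assumptions for illustration only.

```python
# Bouguer slab approximation: delta_g = 2*pi*G*rho_w * Sy * delta_h, so
# Sy = delta_g / (2*pi*G*rho_w * delta_h). 1 microGal = 1e-8 m/s^2.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
RHO_W = 1000.0         # density of water, kg m^-3

def specific_yield_slab(delta_g_microgal, water_table_decline_m):
    delta_g = delta_g_microgal * 1e-8   # convert microGal to m/s^2
    return delta_g / (2.0 * math.pi * G * RHO_W * water_table_decline_m)

# Hypothetical observation: 8 microGal decrease for a 1.5 m water-table decline
print(specific_yield_slab(8.0, 1.5))
```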
Validity of a new feedback method for the VEMP test.
Vanspauwen, R; Wuyts, F L; Van De Heyning, P H
2006-08-01
We used a feedback method, based on a blood pressure manometer with inflatable cuff, to control the sternocleidomastoid muscle (SCM) contraction. To obtain comparable left-right VEMP responses, it is necessary (1) to determine which cuff pressures on both sides yield identical mean rectified voltage (MRV) values of the SCM contraction and (2) to apply these cuff pressures during the VEMP test. The aim was to investigate the effect of SCM muscle contraction variability on the VEMP variables when applying the feedback method. Subjects pushed with their jaw against the hand-held inflated cuff to generate cuff pressures of successively 30, 40 and 50 mmHg during an MRV and VEMP measurement. When analyzing the relationship between the applied cuff pressures and the MRV values/VEMP amplitudes, we showed that (1) there was a linear relationship, (2) there was no side effect and (3) there was an interaction effect between 'side' and 'subject'. There was neither a side effect, nor an effect of the applied cuff pressure, when considering the p13 latencies. As for the n23 values, there was no side effect but there was a significant difference when comparing the n23 latencies at cuff pressures of 30 vs 40 mmHg/50 mmHg.
Further evaluation of traditional icing scaling methods
NASA Technical Reports Server (NTRS)
Anderson, David N.
1996-01-01
This report provides additional evaluations of two methods to scale icing test conditions; it also describes a hybrid technique for use when scaled conditions are outside the operating envelope of the test facility. The first evaluation is of the Olsen method which can be used to scale the liquid-water content in icing tests, and the second is the AEDC (Ruff) method which is used when the test model is less than full size. Equations for both scaling methods are presented in the paper, and the methods were evaluated by performing icing tests in the NASA Lewis Icing Research Tunnel (IRT). The Olsen method was tested using 53 cm diameter NACA 0012 airfoils. Tests covered liquid-water-contents which varied by as much as a factor of 1.8. The Olsen method was generally effective in giving scale ice shapes which matched the reference shapes for these tests. The AEDC method was tested with NACA 0012 airfoils with chords from 18 cm to 53 cm. The 53 cm chord airfoils were used in reference tests, and 1/2 and 1/3 scale tests were made at conditions determined by applying the AEDC scaling method. The scale and reference airspeeds were matched in these tests. The AEDC method was found to provide fairly effective scaling for 1/2 size tests, but for 1/3 size models, scaling was generally less effective. In addition to these two scaling methods, a hybrid approach was also tested in which the Olsen method was used to adjust the LWC after size was scaled using the constant Weber number method. This approach was found to be an effective way to test when scaled conditions would otherwise be outside the capability of the test facility.
Ballesteros Martín, M M; Casas López, J L; Oller, I; Malato, S; Sánchez Pérez, J A
2010-09-01
Four biodegradability tests (Pseudomonas putida bioassay, Zahn-Wellens test, BOD5/COD ratio and respirometry assay) have been used to determine the biodegradability enhancement during the treatment of wastewater containing 200 mg L(-1) of dissolved organic carbon (DOC) of a mixture of five commercial pesticides (Vydate, Metomur, Couraze, Ditumur and Scala) by an advanced oxidation process (AOP). A comparative study was carried out taking into account the repeatability and precision of each biodegradability test. Solar photo-Fenton was the AOP selected for pesticide degradation up to three levels of mineralization: 20%, 40% and 60% of initial DOC. Intra- and inter-day precision were evaluated by conducting each biodegradability test in triplicate; the tests were applied three times on different dates over a period of three months. Fisher's least significant difference method was applied to the means, with the P. putida and Zahn-Wellens tests giving higher repeatability and precision. The P. putida test requires a shorter time to obtain reliable results using a standardized inoculum and constitutes a worthwhile alternative for estimating biodegradability in contrast to other less accurate or more time consuming methods. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Measurement of external forces and torques on a large pointing system
NASA Technical Reports Server (NTRS)
Morenus, R. C.
1980-01-01
Methods of measuring external forces and torques are discussed, in general and as applied to the Large Pointing System wind tunnel tests. The LPS tests were in two phases. The first test was a preliminary test of three models representing coelostat, heliostat, and on-gimbal telescope configurations. The second test explored the coelostat configuration in more detail. The second test used a different setup for measuring external loads. Some results are given from both tests.
NASA Technical Reports Server (NTRS)
Thomas, F. P.
2006-01-01
Aerospace structures utilize innovative, lightweight composite materials for exploration activities. These structural components, due to various reasons including size limitations, manufacturing facilities, contractual obligations, or particular design requirements, will have to be joined. The common methodologies for joining composite components are adhesively bonded and mechanically fastened joints and, in certain instances, both methods are simultaneously incorporated into the design. Guidelines and recommendations exist for engineers to develop design criteria and analyze and test composites. However, there are no guidelines or recommendations based on analysis or test data to specify a torque or torque range to apply to metallic mechanical fasteners used to join composite components. Utilizing the torque tension machine at NASA's Marshall Space Flight Center, an initial series of tests was conducted to determine the maximum torque that could be applied to a composite specimen. Acoustic emissions were used to nondestructively assess the specimens during the tests, and thermographic imaging was used after the tests.
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
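The final comparison reduces to an ordinary likelihood-ratio test between the nested models; a generic sketch is shown below with placeholder log-likelihoods and degrees of freedom, not results from the paper.

```python
# Generic likelihood-ratio test between a restricted (standard Gumbel) model nested
# in an unrestricted (semi-nonparametric) model: 2*(log-likelihood gain) compared
# with a chi-square distribution whose df equals the number of extra parameters.
from scipy.stats import chi2

def likelihood_ratio_test(loglik_restricted, loglik_unrestricted, extra_params):
    lr_stat = 2.0 * (loglik_unrestricted - loglik_restricted)
    p_value = chi2.sf(lr_stat, df=extra_params)
    return lr_stat, p_value

# Placeholder log-likelihood values (hypothetical)
lr, p = likelihood_ratio_test(-2541.7, -2520.3, extra_params=4)
print(f"LR = {lr:.1f}, p = {p:.4g}")
```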
Transmitted wavefront testing with large dynamic range based on computer-aided deflectometry
NASA Astrophysics Data System (ADS)
Wang, Daodang; Xu, Ping; Gong, Zhidong; Xie, Zhongmin; Liang, Rongguang; Xu, Xinke; Kong, Ming; Zhao, Jun
2018-06-01
The transmitted wavefront testing technique is demanded for the performance evaluation of transmission optics and transparent glass, in which the achievable dynamic range is a key issue. A computer-aided deflectometric testing method with fringe projection is proposed for the accurate testing of transmitted wavefronts with a large dynamic range. Ray tracing of the modeled testing system is carried out to achieve the virtual ‘null’ testing of transmitted wavefront aberrations. The ray aberration is obtained from the ray tracing result and measured slope, with which the test wavefront aberration can be reconstructed. To eliminate testing system modeling errors, a system geometry calibration based on computer-aided reverse optimization is applied to realize accurate testing. Both numerical simulation and experiments have been carried out to demonstrate the feasibility and high accuracy of the proposed testing method. The proposed testing method can achieve a large dynamic range compared with the interferometric method, providing a simple, low-cost and accurate way for the testing of transmitted wavefronts from various kinds of optics and a large amount of industrial transmission elements.
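Independently of the deflectometric hardware, the slope-to-wavefront step can be illustrated by a zonal least-squares reconstruction; the sketch below is a generic implementation on a synthetic quadratic wavefront and is not the paper's ray-traced 'null' procedure.

```python
# Zonal least-squares reconstruction of a wavefront W from x/y slope maps on a
# regular grid: each pair of neighboring points gives one finite-difference
# equation, and the overdetermined system is solved in the least-squares sense.
import numpy as np

def reconstruct_wavefront(sx, sy, spacing):
    """sx, sy: (n x n) slope maps; returns W (n x n) up to an additive constant."""
    n = sx.shape[0]
    rows, cols, vals, b = [], [], [], []
    eq = 0
    for i in range(n):
        for j in range(n - 1):   # x-direction differences
            rows += [eq, eq]; cols += [i * n + j + 1, i * n + j]; vals += [1.0, -1.0]
            b.append(0.5 * (sx[i, j] + sx[i, j + 1]) * spacing); eq += 1
    for i in range(n - 1):
        for j in range(n):       # y-direction differences
            rows += [eq, eq]; cols += [(i + 1) * n + j, i * n + j]; vals += [1.0, -1.0]
            b.append(0.5 * (sy[i, j] + sy[i + 1, j]) * spacing); eq += 1
    A = np.zeros((eq, n * n))
    A[rows, cols] = vals
    W, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
    W = W.reshape(n, n)
    return W - W.mean()

# Synthetic check against a known quadratic (defocus-like) wavefront
n, spacing = 32, 1.0
y, x = np.mgrid[0:n, 0:n] * spacing
W_true = 0.01 * ((x - x.mean()) ** 2 + (y - y.mean()) ** 2)
sx = np.gradient(W_true, spacing, axis=1)
sy = np.gradient(W_true, spacing, axis=0)
W_rec = reconstruct_wavefront(sx, sy, spacing)
print(np.max(np.abs(W_rec - (W_true - W_true.mean()))))   # small residual expected
```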
Detection of dependence patterns with delay.
Chevallier, Julien; Laloë, Thomas
2015-11-01
The Unitary Events (UE) method is a popular and efficient method, used over the last decade to detect dependence patterns of joint spike activity among simultaneously recorded neurons. The first introduced method is based on binned coincidence counts (Grün, 1996) and can be applied to two or more simultaneously recorded neurons. Among the improvements of the method, a transposition to the continuous framework has recently been proposed by Muiño and Borgelt (2014) and fully investigated by Tuleau-Malot et al. (2014) for two neurons. The goal of the present paper is to extend this study to more than two neurons. The main result is the determination of the limit distribution of the coincidence count. This leads to the construction of an independence test between L≥2 neurons. Finally, we propose a multiple test procedure via a Benjamini and Hochberg approach (Benjamini and Hochberg, 1995). All the theoretical results are illustrated by a simulation study, and compared to the UE method proposed by Grün et al. (2002). Furthermore our method is applied to real data. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
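A minimal binned coincidence count with a Benjamini-Hochberg correction, in the spirit of the original UE idea, is sketched below; the binomial p-value used here is a simplifying assumption and does not reproduce the delayed-coincidence limit distribution derived in the paper.

```python
# Binned coincidence count between two binary spike trains, a binomial p-value
# under independence, and a Benjamini-Hochberg step-up correction across pairs.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(2)

def coincidence_pvalue(spikes_a, spikes_b):
    """spikes_a, spikes_b: binary arrays of binned spike trains (same length)."""
    n_bins = len(spikes_a)
    observed = int(np.sum(spikes_a * spikes_b))
    p_joint = spikes_a.mean() * spikes_b.mean()     # expected rate under independence
    return binom.sf(observed - 1, n_bins, p_joint)  # P(count >= observed)

def benjamini_hochberg(pvals, q=0.05):
    pvals = np.asarray(pvals)
    order = np.argsort(pvals)
    m = len(pvals)
    passed = pvals[order] <= q * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

# Three independent synthetic pairs (no true dependence expected)
trains = [rng.binomial(1, 0.05, size=10000) for _ in range(6)]
pvals = [coincidence_pvalue(trains[2 * i], trains[2 * i + 1]) for i in range(3)]
print(pvals, benjamini_hochberg(pvals))
```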
The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Finelli, George B.
1991-01-01
This paper affirms that the quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The classical methods of estimating reliability are shown to lead to exorbitant amounts of testing when applied to life-critical software. Reliability growth models are examined and also shown to be incapable of overcoming the need for excessive amounts of testing. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultrareliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multiversion software experiments support this affirmation.
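The scale of the testing problem can be illustrated with the standard zero-failure demonstration-test calculation (a textbook relation, not a computation from the paper).

```python
# Test time needed to demonstrate a failure-rate bound at a given confidence when
# no failures are observed: t = -ln(1 - confidence) / lambda_max.
import math

def required_test_hours(lambda_max_per_hour, confidence=0.95):
    return -math.log(1.0 - confidence) / lambda_max_per_hour

# Demonstrating 1e-9 failures/hour (a typical ultrareliability target) at 95% confidence
print(f"{required_test_hours(1e-9):.3e} hours")   # roughly 3e9 hours, i.e. ~342,000 years
```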
Aerospace reliability applied to biomedicine.
NASA Technical Reports Server (NTRS)
Lalli, V. R.; Vargo, D. J.
1972-01-01
An analysis is presented that indicates that the reliability and quality assurance methodology selected by NASA to minimize failures in aerospace equipment can be applied directly to biomedical devices to improve hospital equipment reliability. The Space Electric Rocket Test project is used as an example of NASA application of reliability and quality assurance (R&QA) methods. By analogy a comparison is made to show how these same methods can be used in the development of transducers, instrumentation, and complex systems for use in medicine.
REMARKS ON THE MAXIMUM ENTROPY METHOD APPLIED TO FINITE TEMPERATURE LATTICE QCD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
UMEDA, T.; MATSUFURU, H.
2005-07-25
We make remarks on the Maximum Entropy Method (MEM) for studies of the spectral function of hadronic correlators in finite temperature lattice QCD. We discuss the virtues and subtleties of MEM in cases where one does not have a sufficient number of data points, such as at finite temperature. Taking these points into account, we suggest several tests which one should examine to maintain the reliability of the results, and also apply them using mock and lattice QCD data.
A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem
NASA Technical Reports Server (NTRS)
Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.
2003-01-01
In this paper we present a comparison of trajectory optimization approaches for the minimum fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), Quasi-Newton, and Nelder-Mead Simplex. Several cost function parameterizations are considered for the direct approach. We choose one direct approach that appears to be the most flexible. Both the direct and indirect methods are applied to a variety of test cases which are chosen to demonstrate the performance of each method in different flight regimes. The first test case is a simple circular-to-circular coplanar rendezvous. The second test case is an elliptic-to-elliptic line of apsides rotation. The final test case is an orbit phasing maneuver sequence in a highly elliptic orbit. For each test case we present a comparison of the performance of all methods we consider in this paper.
Wafer level reliability testing: An idea whose time has come
NASA Technical Reports Server (NTRS)
Trapp, O. D.
1987-01-01
Wafer level reliability testing has been nurtured in the DARPA supported workshops, held each autumn since 1982. The seeds planted in 1982 have produced an active crop of very large scale integration manufacturers applying wafer level reliability test methods. Computer Aided Reliability (CAR) is a new seed being nurtured. Users are now being awakened by the huge economic value of the wafer reliability testing technology.
On the prediction of far field computational aeroacoustics of advanced propellers
NASA Technical Reports Server (NTRS)
Jaeger, Stephen M.; Korkan, Kenneth D.
1990-01-01
A numerical method for determining the acoustic far field generated by a high-speed subsonic aircraft propeller was developed. The approach used in this method was to generate the entire three-dimensional pressure field about the propeller (using an Euler flowfield solver) and then to apply a solution of the wave equation on a cylindrical surface enveloping the propeller. The method is applied to generate the three-dimensional flowfield between two blades of an advanced propeller. The results are compared with experimental data obtained in a wind-tunnel test at a Mach number of 0.6.
Bivariate sub-Gaussian model for stock index returns
NASA Astrophysics Data System (ADS)
Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka
2017-11-01
Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, also not having an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed form densities, but use the characteristic functions for comparison. The approach applied to a pair of stock index returns demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.
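A minimal sketch of the characteristic-function comparison idea is given below; the grid of arguments, the simple L2 distance, and the Gaussian placeholder model are all assumptions for illustration, not the estimators proposed in the paper.

```python
# Empirical characteristic function of bivariate return data, compared on a grid of
# argument vectors with a model characteristic function (Gaussian used as placeholder).
import numpy as np

def empirical_cf(data, t_grid):
    """data: (n, 2) array of observations; t_grid: (m, 2) array of CF arguments."""
    return np.mean(np.exp(1j * data @ t_grid.T), axis=0)   # shape (m,)

def gaussian_cf(t_grid, cov):
    """CF of a zero-mean bivariate Gaussian: exp(-0.5 * t' Sigma t)."""
    return np.exp(-0.5 * np.einsum('ij,jk,ik->i', t_grid, cov, t_grid))

rng = np.random.default_rng(4)
returns = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=5000)
t_grid = rng.uniform(-1, 1, size=(200, 2))
dist = np.mean(np.abs(empirical_cf(returns, t_grid) -
                      gaussian_cf(t_grid, np.cov(returns.T))) ** 2)
print(dist)   # small for data consistent with the placeholder model
```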
Nondestructive online testing method for friction stir welding using acoustic emission
NASA Astrophysics Data System (ADS)
Levikhina, Anastasiya
2017-12-01
The paper reviews the possibility of applying the method of acoustic emission for online monitoring of the friction stir welding process. It is shown that acoustic emission allows the detection of weld defects and their location in real time. The energy of an acoustic signal and the median frequency are suggested to be used as informative parameters. The method of calculating the median frequency with the use of a short time Fourier transform is applied for the identification of correlations between the defective weld structure and properties of the acoustic emission signals received during welding.
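A minimal sketch of a median-frequency track computed from a short-time Fourier transform is shown below; the window length, sampling rate, and synthetic burst are arbitrary assumptions, not parameters from the welding experiments.

```python
# Median frequency per time frame from a spectrogram: the frequency at which the
# cumulative power in each frame reaches 50% of its total.
import numpy as np
from scipy.signal import spectrogram

def median_frequency_track(signal, fs, nperseg=1024):
    f, t, Sxx = spectrogram(signal, fs=fs, nperseg=nperseg)
    cumulative = np.cumsum(Sxx, axis=0)
    total = cumulative[-1, :]
    idx = np.argmax(cumulative >= 0.5 * total, axis=0)
    return t, f[idx]

# Synthetic AE-like burst: a chirp plus noise, sampled at 1 MHz
fs = 1_000_000
tt = np.arange(0, 0.01, 1.0 / fs)
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * (50_000 + 5_000_000 * tt) * tt) + 0.2 * rng.standard_normal(tt.size)
t_frames, fmed = median_frequency_track(x, fs)
print(fmed[:5])
```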
Cloke, Jonathan; Arizanova, Julia; Crabtree, David; Simpson, Helen; Evans, Katharine; Vaahtoranta, Laura; Palomäki, Jukka-Pekka; Artimo, Paulus; Huang, Feng; Liikanen, Maria; Koskela, Suvi; Chen, Yi
2016-01-01
The Thermo Scientific™ SureTect™ Listeria species Real-Time PCR Assay was certified during 2013 by the AOAC Research Institute (RI) Performance Tested Methods(SM) program as a rapid method for the detection of Listeria species from a wide range of food matrixes and surface samples. A method modification study was conducted in 2015 to extend the matrix claims of the product to a wider range of food matrixes. This report details the method modification study undertaken to extend the use of this PCR kit to the Applied Biosystems™ 7500 Fast PCR Instrument and Applied Biosystems RapidFinder™ Express 2.0 software, allowing use of the assay on a 96-well format PCR cycler in addition to the current workflow, which uses the 24-well Thermo Scientific PikoReal™ PCR Instrument and Thermo Scientific SureTect software. The method modification study presented in this report was assessed by the AOAC-RI as being a level 2 method modification study, necessitating a method developer study on a representative range of food matrixes covering raw ground turkey, 2% fat pasteurized milk, and bagged lettuce, as well as stainless steel surface samples. All testing was conducted in comparison to the reference method detailed in International Organization for Standardization (ISO) 6579:2002. No significant difference by probability of detection statistical analysis was found between the SureTect Listeria species PCR Assay and the ISO reference method for any of the three food matrixes and the surface samples analyzed during the study.
Thermal barrier coating life-prediction model development
NASA Technical Reports Server (NTRS)
Strangman, T. E.; Neumann, J.; Liu, A.
1986-01-01
The program focuses on predicting the lives of two types of strain-tolerant and oxidation-resistant thermal barrier coating (TBC) systems that are produced by commercial coating suppliers to the gas turbine industry. The plasma-sprayed TBC system, composed of a low-pressure plasma-spray (LPPS) or argon shrouded plasma-spray (ASPS) applied oxidation-resistant NiCrAlY (or CoNiCrAlY) bond coating and an air-plasma-sprayed yttria partially stabilized zirconia insulative layer, is applied by Chromalloy, Klock, and Union Carbide. The second type of TBC is applied by the electron beam-physical vapor deposition (EB-PVD) process by Temescal. The second year of the program was focused on specimen procurement, TBC system characterization, nondestructive evaluation methods, life prediction model development, and TFE731 engine testing of thermal barrier coated blades. Materials testing is approaching completion. Thermomechanical characterization of the TBC systems, including toughness and spalling strain tests, was completed. Thermochemical testing is approximately two-thirds complete. Preliminary materials life models for the bond coating oxidation and zirconia sintering failure modes were developed. Integration of these life models with airfoil component analysis methods is in progress. Testing of high pressure turbine blades coated with the program TBC systems is in progress in a TFE731 turbofan engine. Eddy current technology feasibility was established with respect to nondestructively measuring the zirconia layer thickness of a TBC system.
NASA Technical Reports Server (NTRS)
Bond, Thomas H. (Technical Monitor); Anderson, David N.
2004-01-01
This manual reviews the derivation of the similitude relationships believed to be important to ice accretion and examines ice-accretion data to evaluate their importance. Both size scaling and test-condition scaling methods employing the resulting similarity parameters are described, and experimental icing tests performed to evaluate scaling methods are reviewed with results. The material included applies primarily to unprotected, unswept geometries, but some discussion of how to approach other situations is included as well. The studies given here and scaling methods considered are applicable only to Appendix-C icing conditions. Nearly all of the experimental results presented have been obtained in sea-level tunnels. Recommendations are given regarding which scaling methods to use for both size scaling and test-condition scaling, and icing test results are described to support those recommendations. Facility limitations and size-scaling restrictions are discussed. Finally, appendices summarize the air, water and ice properties used in NASA scaling studies, give expressions for each of the similarity parameters used and provide sample calculations for the size-scaling and test-condition scaling methods advocated.
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
NASA Technical Reports Server (NTRS)
Hozman, Aron D.; Hughes, William O.
2014-01-01
It is important to realize that some test-articles may have significant sound absorption that may challenge the acoustic power capabilities of a test facility. Therefore, to mitigate the risk of not being able to meet the customer's target spectrum, it is prudent to demonstrate early on an increased acoustic power capability that compensates for this test-article absorption. This paper describes a concise method to reduce this risk when testing aerospace test-articles that have significant absorption. This method was successfully applied during the SpaceX Falcon 9 Payload Fairing acoustic test program at the NASA Glenn Research Center Plum Brook Station's RATF.
Janssen, Ellen M; Marshall, Deborah A; Hauber, A Brett; Bridges, John F P
2017-12-01
The recent endorsement of discrete-choice experiments (DCEs) and other stated-preference methods by regulatory and health technology assessment (HTA) agencies has placed a greater focus on demonstrating the validity and reliability of preference results. Areas covered: We present a practical overview of tests of validity and reliability that have been applied in the health DCE literature and explore other study qualities of DCEs. From the published literature, we identify a variety of methods to assess the validity and reliability of DCEs. We conceptualize these methods to create a conceptual model with four domains: measurement validity, measurement reliability, choice validity, and choice reliability. Each domain consists of three categories that can be assessed using one to four procedures (for a total of 24 tests). We present how these tests have been applied in the literature and direct readers to applications of these tests in the health DCE literature. Based on a stakeholder engagement exercise, we consider the importance of study characteristics beyond traditional concepts of validity and reliability. Expert commentary: We discuss study design considerations to assess the validity and reliability of a DCE, consider limitations to the current application of tests, and discuss future work to consider the quality of DCEs in healthcare.
ERIC Educational Resources Information Center
Choi, Sae Il
2009-01-01
This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…
Research of carbon composite material for nonlinear finite element method
NASA Astrophysics Data System (ADS)
Kim, Jung Ho; Garg, Mohit; Kim, Ji Hoon
2012-04-01
Work on the absorption of collision energy in structural members is being carried out widely for various materials and cross-sections and, with ever-increasing safety concerns, the results are applied in many fields including railroad trains, aircraft and automobiles. In addition, lightening structural members has become an important subject because of exhaust gas emission control, fuel economy and energy efficiency. CFRP (carbon fiber reinforced plastic) is not commonly applied to primary structural members because its behavior changes with each design parameter, such as stacking thickness, stacking angle and moisture absorption. Material data must therefore be secured before CFRP can be applied to primary structural members, but testing every design parameter individually costs considerable money and time. Both can be reduced if the CFRP material properties can be determined for each set of design parameters. In this study, coupon tests in tension, compression and shear were carried out on CFRP prepreg sheet, and nonlinear analyses were performed from the test results, the carbon longitudinal modulus and the matrix Poisson's ratio using GENOA-MCQ, a code specialized for composite analysis. The predictions were then compared against specimens manufactured with different stacking angles and tested by the same methods.
Efficient Iterative Methods Applied to the Solution of Transonic Flows
NASA Astrophysics Data System (ADS)
Wissink, Andrew M.; Lyrintzis, Anastasios S.; Chronopoulos, Anthony T.
1996-02-01
We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian, with preconditioned conjugate-gradient-like iterative solvers for the solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. The efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and to a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes; Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems.
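A minimal sketch of the Newton/preconditioned-Krylov structure described, applied to a simple 1-D nonlinear boundary-value problem as a stand-in for the transonic small-disturbance equation (scipy assumed; OSOmin is not available in scipy, so GMRES with an ILU preconditioner is shown):

```python
# Minimal sketch: Newton's method with an analytic Jacobian and ILU-preconditioned
# GMRES for the linear solve in each Newton step. The 1-D Bratu-type problem below
# is a simple stand-in, not the transonic small-disturbance equation of the paper.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n, lam = 200, 1.0
h = 1.0 / (n + 1)
u = np.zeros(n)                                   # initial guess, zero boundary values

def residual(u):
    # F_i(u) = (-u_{i-1} + 2 u_i - u_{i+1}) / h^2 - lam * exp(u_i), zero Dirichlet ends
    lap = -np.roll(u, 1) + 2 * u - np.roll(u, -1)
    lap[0] = 2 * u[0] - u[1]
    lap[-1] = 2 * u[-1] - u[-2]
    return lap / h**2 - lam * np.exp(u)

def jacobian(u):
    main = 2.0 / h**2 - lam * np.exp(u)
    off = -np.ones(n - 1) / h**2
    return sp.diags([off, main, off], [-1, 0, 1], format="csc")

for it in range(20):
    F = residual(u)
    if np.linalg.norm(F) < 1e-10:
        break
    J = jacobian(u)
    ilu = spilu(J)                                # incomplete LU preconditioner
    M = LinearOperator(J.shape, ilu.solve)
    du, info = gmres(J, -F, M=M)                  # inner Krylov solve
    u += du

print(it, np.linalg.norm(residual(u)))
```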
Development of an automated ultrasonic testing system
NASA Astrophysics Data System (ADS)
Shuxiang, Jiao; Wong, Brian Stephen
2005-04-01
Non-destructive testing is necessary in areas where defects in structures emerge over time due to wear and tear and where structural integrity must be maintained for continued usability. However, manual testing has several limitations: high training cost, a long training procedure and, worse, inconsistent test results. A prime objective of this project is to develop an automatic non-destructive testing system for the wheel-axle shaft of a railway carriage. Various methods, such as neural networks, pattern recognition methods and knowledge-based systems, are used for this kind of artificial intelligence problem. In this paper, a statistical pattern recognition approach, the classification tree, is applied. Before feature selection, a thorough study of the ultrasonic signals produced was carried out. Based on this analysis, three signal processing methods were developed to enhance the ultrasonic signals: cross-correlation, zero-phase filtering and averaging. The aim of this step is to reduce the noise and make the signal character more distinguishable. Four features are selected: (1) the autoregressive model coefficients, (2) the standard deviation, (3) the Pearson correlation and (4) the dispersion uniformity degree. A classification tree is then created and applied to recognize the peak positions and amplitudes. A local-maximum search is carried out before the features are computed, which greatly reduces computation time during real-time testing. Based on this algorithm, a software package called SOFRA was developed to recognize the peaks, calibrate automatically and test a simulated shaft automatically; both the automatic calibration procedure and the automatic shaft testing procedure were developed.
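A minimal sketch of the three signal-enhancement steps named above, averaging, zero-phase filtering, and cross-correlation, with an assumed sampling rate, band, and synthetic A-scans (not the system's actual parameters):

```python
# Minimal sketch of the pre-processing steps described: averaging of repeated
# A-scans, zero-phase filtering, and cross-correlation with a reference pulse.
# Sampling rate, band edges, and the synthetic waveforms are illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt, correlate

fs = 25e6                                   # sampling rate, Hz (assumed)
n = 4096
shots = np.random.randn(16, n)              # stand-in for 16 repeated A-scans

# 1. Averaging repeated shots suppresses uncorrelated noise.
avg = shots.mean(axis=0)

# 2. Zero-phase band-pass filter: filtfilt runs the filter forward and backward,
#    so peak positions are not shifted by filter delay.
b, a = butter(4, [1e6, 6e6], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, avg)

# 3. Cross-correlation with a reference pulse sharpens echoes and helps locate
#    peak positions for the later classification step.
t = np.arange(256) / fs
ref = np.sin(2 * np.pi * 3e6 * t) * np.hanning(256)
xcorr = correlate(filtered, ref, mode="same")

peak_index = int(np.argmax(np.abs(xcorr)))
print(peak_index, xcorr[peak_index])
```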
40 CFR 60.59b - Reporting and recordkeeping requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... monitor mercury or dioxin/furan instead of conducting performance testing using EPA manual test methods, all integrated 24-hour mercury concentrations or all integrated 2-week dioxin/furan concentrations as... actions taken. (4) For affected facilities that apply activated carbon for mercury or dioxin/furan control...
Leak Detection by Acoustic Emission Monitoring. Phase 1. Feasibility Study
1994-05-26
various signal processing and noise discrimination techniques during the Data Processing task. C. TEST DESCRIPTION 1. Laboratory Tests Following normal...success in applying these methods to discriminating between the AE bursts generated by two close AE sources in a section of an aircraft structure
Kim, Seungjin; Krajmalnik-Brown, Rosa; Kim, Jong-Oh; Chung, Jinwook
2014-11-01
The application of effective remediation technologies can benefit from adequate preliminary testing, such as lab-scale and pilot-scale systems. Bioremediation technologies have demonstrated tremendous potential with regard to cost, but they cannot be used for all contaminated sites due to limitations in biological activity. The purpose of this study was to develop a DNA diagnostic method that reduces the time needed to select contaminated sites that are good candidates for bioremediation. We applied an oligonucleotide microarray method to detect and monitor genes that lead to aliphatic and aromatic degradation. Further, the bioremediation of a contaminated site, selected based on the results of the genetic diagnostic method, was achieved successfully by applying bioslurping in field tests. This gene-based diagnostic technique is a powerful tool for evaluating the potential for bioremediation of petroleum hydrocarbon contaminated soil. Copyright © 2014 Elsevier B.V. All rights reserved.
One-way ANOVA based on interval information
NASA Astrophysics Data System (ADS)
Hesamian, Gholamreza
2016-08-01
This paper deals with extending the one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the assumption of homogeneity of interval variances. Moreover, the least significant difference (LSD) method for multiple comparisons of interval means is developed for the case where the null hypothesis of equal means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the method of decision-making leads to degrees to which the interval hypotheses are accepted or rejected. An applied example is used to show the performance of this method.
Islam, M T; Trevorah, R M; Appadoo, D R T; Best, S P; Chantler, C T
2017-04-15
We present methodology for the first FTIR measurements of ferrocene using dilute wax solutions for dispersion and to preserve non-crystallinity; a new method for removal of channel spectra interference for high quality data; and a consistent approach for the robust estimation of a defined uncertainty for advanced structural χr2 analysis and mathematical hypothesis testing. While some of these issues have been investigated previously, the combination of novel approaches gives markedly improved results. Methods for addressing these in the presence of a modest signal and how to quantify the quality of the data irrespective of preprocessing for subsequent hypothesis testing are applied to the FTIR spectra of Ferrocene (Fc) and deuterated ferrocene (dFc, Fc-d10) collected at the THz/Far-IR beam-line of the Australian Synchrotron at operating temperatures of 7 K through 353 K. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Islam, M. T.; Trevorah, R. M.; Appadoo, D. R. T.; Best, S. P.; Chantler, C. T.
2017-04-01
We present methodology for the first FTIR measurements of ferrocene using dilute wax solutions for dispersion and to preserve non-crystallinity; a new method for removal of channel spectra interference for high quality data; and a consistent approach for the robust estimation of a defined uncertainty for advanced structural χr2 analysis and mathematical hypothesis testing. While some of these issues have been investigated previously, the combination of novel approaches gives markedly improved results. Methods for addressing these in the presence of a modest signal and how to quantify the quality of the data irrespective of preprocessing for subsequent hypothesis testing are applied to the FTIR spectra of Ferrocene (Fc) and deuterated ferrocene (dFc, Fc-d10) collected at the THz/Far-IR beam-line of the Australian Synchrotron at operating temperatures of 7 K through 353 K.
Deng, Jun; Liu, Du-ren
2002-12-01
Objective. To improve the quality of ultrasonic therapeutic treatment by improving the accuracy of temperature control. Method. Adaptive canceling methods were used to reduce the noise in the acquired temperature signal and enhance the signal-to-noise ratio. Result. The test results correspond essentially to the theoretical curve. Conclusion. Adaptive canceling methods can be applied to clinical treatment.
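The abstract does not name the specific algorithm; as an illustration, a minimal sketch of one common realization, an LMS adaptive noise canceller with a reference noise channel, using invented signals and parameters:

```python
# Minimal sketch of an LMS adaptive noise canceller. The abstract does not name
# the exact algorithm, so the filter length, step size, and signals are assumed.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
t = np.arange(n)

clean = 37.0 + 0.5 * np.sin(2 * np.pi * t / 800)   # slowly varying "temperature"
noise_ref = rng.standard_normal(n)                 # reference noise input
noise = np.convolve(noise_ref, [0.6, 0.3, 0.1], mode="same")  # correlated noise
primary = clean + noise                            # measured (noisy) signal

L, mu = 8, 0.01                                    # filter length and step size
w = np.zeros(L)
cleaned = np.zeros(n)

for k in range(L, n):
    x = noise_ref[k - L:k][::-1]                   # most recent reference samples
    noise_est = w @ x                              # filter output = noise estimate
    e = primary[k] - noise_est                     # error = cleaned signal sample
    w += 2 * mu * e * x                            # LMS weight update
    cleaned[k] = e

print(np.std(primary - clean), np.std(cleaned[L:] - clean[L:]))
```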
Patterson, Fiona; Lievens, Filip; Kerrin, Máire; Munro, Neil; Irish, Bill
2013-11-01
The selection methodology for UK general practice is designed to accommodate several thousand applicants per year and targets six core attributes identified in a multi-method job-analysis study. The aim was to evaluate the predictive validity of selection methods for entry into postgraduate training, comprising a clinical problem-solving test, a situational judgement test, and a selection centre. A three-part longitudinal predictive validity study of selection into training for UK general practice was conducted. In sample 1, participants were junior doctors applying for training in general practice (n = 6824). In sample 2, participants were GP registrars 1 year into training (n = 196). In sample 3, participants were GP registrars sitting the licensing examination after 3 years, at the end of training (n = 2292). The outcome measures include: assessor ratings of performance in a selection centre comprising job simulation exercises (sample 1); supervisor ratings of trainee job performance 1 year into training (sample 2); and licensing examination results, including an applied knowledge examination and a 12-station clinical skills objective structured clinical examination (OSCE; sample 3). Performance ratings at selection predicted subsequent supervisor ratings of job performance 1 year later. Selection results also significantly predicted performance on both the clinical skills OSCE and the applied knowledge examination for licensing at the end of training. In combination, these longitudinal findings provide good evidence of the predictive validity of the selection methods, and are the first reported for entry into postgraduate training. Results show that the best predictor of work performance and training outcomes is a combination of a clinical problem-solving test, a situational judgement test, and a selection centre. Implications for selection methods for all postgraduate specialties are considered.
NASA Astrophysics Data System (ADS)
Hirano, Taichi; Sakai, Keiji
2017-07-01
Viscoelasticity is a unique characteristic of soft materials and describes its dynamic response to mechanical stimulations. A creep test is an experimental method for measuring the strain ratio/rate against an applied stress, thereby assessing the viscoelasticity of the materials. We propose two advanced experimental systems suitable for the creep test, adopting our original electromagnetically spinning (EMS) technique. This technique can apply a constant torque by a noncontact mechanism, thereby allowing more sensitive and rapid measurements. The viscosity and elasticity of a semidilute wormlike micellar solution were determined using two setups, and the consistency between the results was assessed.
Nondestructive testing of a weld repair on the I-65 Bridge over the Ohio River at Louisville.
DOT National Transportation Integrated Search
2009-06-01
Nondestructive evaluation methods were applied to verify the structural integrity of a fracture critical structural member on the I-65 John F. Kennedy Memorial Bridge over the Ohio River at Louisville. Several nondestructive evaluation methods includ...
Cluster detection methods applied to the Upper Cape Cod cancer data.
Ozonoff, Al; Webster, Thomas; Vieira, Veronica; Weinberg, Janice; Ozonoff, David; Aschengrau, Ann
2005-09-15
A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. The three different latency assumptions produced three different spatial patterns of cases and controls. For the 20-year latency assumption, all three methods generally concur. However, for the 15-year latency and no-latency assumptions, the methods produce different results when testing for global clustering. The comparative analysis of real data sets by different statistical methods provides insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.
Promoted Combustion Test Data Re-Examined
NASA Technical Reports Server (NTRS)
Lewis, Michelle; Jeffers, Nathan; Stoltzfus, Joel
2010-01-01
Promoted combustion testing of metallic materials has been performed by NASA since the mid-1980s to determine the burn resistance of materials in oxygen-enriched environments. As the technology has advanced, the method of interpreting, presenting, and applying the promoted combustion data has advanced as well. Recently NASA changed the burn criterion from 15 cm (6 in.) to 3 cm (1.2 in.). This new burn criterion was adopted for ASTM G 124, Standard Test Method for Determining the Combustion Behavior of Metallic Materials in Oxygen-Enriched Atmospheres. Its effect on the test data and the latest method of displaying the test data will be discussed. Two specific examples that illustrate how this new criterion affects the burn/no-burn thresholds of metal alloys will also be presented.
Application of interactive computer graphics in wind-tunnel dynamic model testing
NASA Technical Reports Server (NTRS)
Doggett, R. V., Jr.; Hammond, C. E.
1975-01-01
The computer-controlled data-acquisition system recently installed for use with a transonic dynamics tunnel is described, including a discussion of the hardware/software features of the system. A subcritical-response damping technique for use in wind-tunnel-model flutter testing, called the combined randomdec/moving-block method, which has been implemented on the data-acquisition system, is described in some detail. Some results using the method are presented, and the importance of using interactive graphics when applying the technique in near real time during wind-tunnel test operations is discussed.
NASA Technical Reports Server (NTRS)
Kolyer, J. M.; Mann, N. R.
1977-01-01
Methods of accelerated and abbreviated testing were developed and applied to solar cell encapsulants. These encapsulants must provide protection for as long as 20 years outdoors at different locations within the United States. Consequently, encapsulants were exposed for increasing periods of time to the inherent climatic variables of temperature, humidity, and solar flux. Property changes in the encapsulants were observed. The goal was to predict long term behavior of encapsulants based upon experimental data obtained over relatively short test periods.
Test method for telescopes using a point source at a finite distance
NASA Technical Reports Server (NTRS)
Griner, D. B.; Zissa, D. E.; Korsch, D.
1985-01-01
A test method for telescopes that makes use of a focused ring formed by an annular aperture when using a point source at a finite distance is evaluated theoretically and experimentally. The results show that the concept can be applied to near-normal, as well as grazing incidence. It is particularly suited for X-ray telescopes because of their intrinsically narrow annular apertures, and because of the largely reduced diffraction effects.
Code of Federal Regulations, 2012 CFR
2012-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...
Code of Federal Regulations, 2011 CFR
2011-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...
NASA Astrophysics Data System (ADS)
Haaf, Ezra; Barthel, Roland
2018-04-01
Classification and similarity based methods, which have recently received major attention in the field of surface water hydrology, namely through the PUB (prediction in ungauged basins) initiative, have not yet been applied to groundwater systems. However, it can be hypothesised that the principle of "similar systems responding similarly to similar forcing" applies in subsurface hydrology as well. One fundamental prerequisite for testing this hypothesis, and eventually for applying the principle to make "predictions for ungauged groundwater systems", is the availability of efficient methods for quantifying the similarity of groundwater system responses, i.e. groundwater hydrographs. In this study, a large, spatially extensive, geologically and geomorphologically diverse dataset from Southern Germany and Western Austria was used to test and compare a set of 32 grouping methods, which had previously only been used individually in local-scale studies. The resulting groupings are compared to a heuristic visual classification, which serves as a baseline. A performance ranking of these classification methods is carried out, differences in the homogeneity of the grouping results are shown, and selected groups are related to hydrogeological indices and geological descriptors. This exploratory empirical study shows that the choice of grouping method has a large impact on the distribution of objects within groups, as well as on the homogeneity of the patterns captured in groups. The study provides a comprehensive overview of a large number of grouping methods, which can guide researchers attempting similarity-based groundwater hydrograph classification.
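As an illustration of one family of grouping methods of the kind compared, a minimal sketch of agglomerative clustering of hydrographs with a correlation-based distance (the study's 32 specific methods are not reproduced; the hydrographs below are synthetic):

```python
# Minimal sketch of one family of grouping methods referred to in the study:
# agglomerative clustering of groundwater hydrographs with a correlation-based
# distance. The 32 specific methods compared are not reproduced; data are synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
n_wells, n_months = 30, 120
season = np.sin(2 * np.pi * np.arange(n_months) / 12)

hydrographs = np.empty((n_wells, n_months))
for i in range(n_wells):
    # Half the wells respond strongly to seasonal recharge, half only weakly.
    amp = 2.0 if i < n_wells // 2 else 0.3
    hydrographs[i] = amp * season + 0.5 * rng.standard_normal(n_months)

# Correlation distance between hydrograph pairs, average linkage, two groups.
dist = pdist(hydrographs, metric="correlation")
groups = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(groups)
```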
NASA Astrophysics Data System (ADS)
Zima, W.; Kolenberg, K.; Briquet, M.; Breger, M.
2004-06-01
We have carried out a Hare-and-Hound test to determine the reliability of the Moment Method (Briquet & Aerts 2003) and the Pixel-by-Pixel Method (Mantegazza 2000) for the identification of pulsation modes in Delta Scuti stars. For this purpose we calculated synthetic line profiles, exhibiting six pulsation modes of low degree and with input parameters initially unknown to us. The aim was to test and increase the quality of the mode identification by applying both methods independently and by using a combined technique. Our results show that, whereas the azimuthal order m and its sign can be fixed by both methods, the degree l is not determined unambiguously. Both identification methods show a better reliability if multiple modes are fitted simultaneously. In particular, the inclination angle is better determined. We have to emphasize that the outcome of this test is only meaningful for stars having pulsational velocities below 0.2 vsini. This is the first part of a series of articles, in which we will test these spectroscopic identification methods.
Method and apparatus for nondestructive testing. [using high frequency arc discharges
NASA Technical Reports Server (NTRS)
Hoop, J. M. (Inventor)
1974-01-01
High voltage is applied to an arc gap adjacent to a test specimen to develop a succession of high frequency arc discharges. Those high frequency arc discharges generate pulses of ultrasonic energy within the test specimen without requiring the arc discharges to contact that test specimen and without requiring a coupling medium. Those pulses can be used for detection of flaws and measurements of certain properties and stresses within the test specimen.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tecza, J.
1998-06-01
Safe and efficient clean up of hazardous and radioactive waste sites throughout the DOE complex will require extensive use of robots. This research effort focuses on developing Monitoring and Diagnostic (M and D) methods for robots that will provide early detection, isolation, and tracking of impending faults before they result in serious failure. The utility and effectiveness of applying M and D methods to hydraulic robots has never been proven. The present research program is utilizing seeded faults in a laboratory test rig that is representative of an existing hydraulically-powered remediation robot. This report summarizes activity conducted in the first 9 months of the project. The research team has analyzed the Rosie Mobile Worksystem as a representative hydraulic robot, developed a test rig for implanted fault testing, developed a test plan and agenda, and established methods for acquiring and analyzing the test data.
Johnson, Ian; Hutchings, Matt; Benstead, Rachel; Thain, John; Whitehouse, Paul
2004-07-01
In the UK Direct Toxicity Assessment Programme, carried out in 1998-2000, a series of internationally recognised short-term toxicity test methods for algae, invertebrates and fishes, and rapid methods (ECLOX and Microtox) were used extensively. Abbreviated versions of conventional tests (algal growth inhibition tests, Daphnia magna immobilisation test and the oyster embryo-larval development test) were valuable for toxicity screening of effluent discharges and the identification of causes and sources of toxicity. Rapid methods based on chemiluminescence and bioluminescence were not generally useful in this programme, but may have a role where the rapid test has been shown to be an acceptable surrogate for a standardised test method. A range of quality assurance and control measures were identified. Requirements for quality control/assurance are most stringent when deriving data for characterising the toxic hazards of effluents and monitoring compliance against a toxicity reduction target. Lower quality control/assurance requirements can be applied to discharge screening and the identification of causes and sources of toxicity.
In Situ Elevated Temperature Testing of Fly Ash Based Geopolymer Composites.
Vickers, Les; Pan, Zhu; Tao, Zhong; van Riessen, Arie
2016-06-03
In situ elevated-temperature investigations of fly ash based geopolymers filled with alumina aggregate were undertaken. Compressive strength and short-term creep tests were carried out to determine the onset temperature of viscous flow. Fire testing using the standard cellulose curve was performed. Applying a load to the specimen as the temperature increased reduced the temperature at which viscous flow occurred (compared with test methods with no applied stress). Compressive strength increased at elevated temperature, which is attributed to viscous flow and sintering forming a more compact microstructure. The addition of alumina aggregate and the reduction of water content reduced the thermal conductivity. This led to earlier onset and shorter duration of the dehydration plateau. However, crack formation was reduced, which is attributed to smaller thermal gradients across the fire test specimen.
Yeast identification: reassessment of assimilation tests as sole universal identifiers.
Spencer, J; Rawling, S; Stratford, M; Steels, H; Novodvorska, M; Archer, D B; Chandra, S
2011-11-01
To assess whether assimilation tests in isolation remain a valid method of identification of yeasts, when applied to a wide range of environmental and spoilage isolates. Seventy-one yeast strains were isolated from a soft drinks factory. These were identified using assimilation tests and by D1/D2 rDNA sequencing. When compared to sequencing, assimilation test identifications (MicroLog™) were 18·3% correct, a further 14·1% correct within the genus and 67·6% were incorrectly identified. The majority of the latter could be attributed to the rise in newly reported yeast species. Assimilation tests alone are unreliable as a universal means of yeast identification, because of numerous new species, variability of strains and increasing coincidence of assimilation profiles. Assimilation tests still have a useful role in the identification of common species, such as the majority of clinical isolates. It is probable, based on these results, that many yeast identifications reported in older literature are incorrect. This emphasizes the crucial need for accurate identification in present and future publications. © 2011 The Authors. Letters in Applied Microbiology © 2011 The Society for Applied Microbiology.
Yuan, Naiming; Fu, Zuntao; Zhang, Huan; Piao, Lin; Xoplaki, Elena; Luterbacher, Juerg
2015-01-01
In this paper, a new method, detrended partial-cross-correlation analysis (DPCCA), is proposed. Based on detrended cross-correlation analysis (DCCA), the method is improved by including a partial-correlation technique, so that it can quantify the relations between two non-stationary signals (with the influences of other signals removed) on different time scales. We illustrate the advantages of this method with two numerical tests. Test I shows the advantages of DPCCA in handling non-stationary signals, while Test II reveals the "intrinsic" relations between two considered time series with the potential influences of other, unconsidered signals removed. To further show the utility of DPCCA in natural complex systems, we provide new evidence on the winter-time Pacific Decadal Oscillation (PDO) and the winter-time Nino3 Sea Surface Temperature Anomaly (Nino3-SSTA) affecting the Summer Rainfall over the middle-lower reaches of the Yangtze River (SRYR). By applying DPCCA, significant correlations between SRYR and Nino3-SSTA on time scales of 6 ~ 8 years are found over the period 1951 ~ 2012, while significant correlations between SRYR and PDO arise on time scales of 35 years. With these physically explainable results, we are confident that DPCCA is a useful method for addressing complex systems. PMID:25634341
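A minimal sketch of the plain DCCA cross-correlation coefficient, the building block that DPCCA extends with a partial-correlation step (not reproduced here); non-overlapping boxes are used for simplicity and the series are synthetic:

```python
# Minimal sketch of the DCCA cross-correlation coefficient rho_DCCA(s). The
# partial-correlation extension of DPCCA is not shown. Non-overlapping boxes
# are used for simplicity; box sizes and the synthetic series are illustrative.
import numpy as np

def detrended_covariance(xp, yp, s):
    # xp, yp: integrated profiles; s: box length
    n_boxes = xp.size // s
    t = np.arange(s)
    cov = 0.0
    for b in range(n_boxes):
        xs = xp[b * s:(b + 1) * s]
        ys = yp[b * s:(b + 1) * s]
        # Local linear detrending of each profile within the box.
        xr = xs - np.polyval(np.polyfit(t, xs, 1), t)
        yr = ys - np.polyval(np.polyfit(t, ys, 1), t)
        cov += np.mean(xr * yr)
    return cov / n_boxes

def rho_dcca(x, y, s):
    xp = np.cumsum(x - x.mean())
    yp = np.cumsum(y - y.mean())
    f2_xy = detrended_covariance(xp, yp, s)
    f2_xx = detrended_covariance(xp, xp, s)
    f2_yy = detrended_covariance(yp, yp, s)
    return f2_xy / np.sqrt(f2_xx * f2_yy)

rng = np.random.default_rng(2)
common = np.cumsum(rng.standard_normal(4096))      # shared non-stationary driver
x = common + 5 * rng.standard_normal(4096)
y = common + 5 * rng.standard_normal(4096)

for s in (16, 64, 256):
    print(s, round(rho_dcca(x, y, s), 3))          # correlation grows with scale
```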
Establishment and validation of a method for multi-dose irradiation of cells in 96-well microplates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abatzoglou, Ioannis; Zois, Christos E.; Pouliliou, Stamatia
2013-02-15
Highlights: ► We established a method for multi-dose irradiation of cell cultures within a 96-well plate. ► Equations to adjust to preferable dose levels are produced and provided. ► Up to eight different dose levels can be tested in one microplate. ► This method results in fast and reliable estimation of radiation dose–response curves. -- Abstract: Microplates are useful tools in chemistry, biotechnology and molecular biology. In radiobiology research, they can also be applied to assess the effect of a certain radiation dose delivered to the whole microplate, to test radio-sensitivity, radio-sensitization or radio-protection. Whether different radiation doses can be accurately applied to a single 96-well plate, to further facilitate and accelerate research on one hand and spare funds on the other, is the question dealt with in the current paper. Following repeated ion-chamber, TLD and radiotherapy planning dosimetry, we established a method for multi-dose irradiation of cell cultures within a 96-well plate, which allows an accurate delivery of desired doses in sequential columns of the microplate. Up to eight different dose levels can be tested in one microplate. This method results in fast and reliable estimation of radiation dose–response curves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seong W. Lee
The project entitled ''Innovative Instrumentation and Analysis of the Temperature Measurement for High Temperature Gasification'' was successfully completed by the Principal Investigator, Dr. S. Lee, and his research team in the Center for Advanced Energy Systems and Environmental Control Technologies at Morgan State University. The major results and outcomes were presented in semi-annual progress reports and annual project review meetings/presentations. Specifically, the literature survey (covering gasifier temperature measurement, ultrasonic cleaning applications, and the spray coating process) and the gasifier simulator (cold model) testing were successfully completed during the first year. The results show that four factors (blower voltage, ultrasonic application, injection time intervals, particle weight) were considered significant factors affecting the temperature measurement. The gasifier simulator (hot model) design and fabrication, as well as systematic tests of the significant factors on temperature measurement in the hot model, were then completed in the second year. Advanced industrial analytic methods such as statistics-based experimental design, analysis of variance (ANOVA) and regression methods were applied in the hot model tests. The results show that operational parameters (i.e., air flow rate, water flow rate, fine dust particle amount, ammonia addition) had a significant impact on the temperature measurement inside the gasifier simulator. Experimental design and ANOVA are a very efficient way to design and analyze the experiments. The results show that the air flow rate and fine dust particle amount are statistically significant for the temperature measurement. The regression model provided the functional relation between the temperature and these factors with substantial accuracy. In the last year of the project period, the ultrasonic and subsonic cleaning methods and coating materials were tested and applied for thermocouple cleaning according to the proposed approach. Different frequencies, application times and powers of the ultrasonic/subsonic output were tested. The results show that the ultrasonic approach is one of the best methods for cleaning the thermocouple tips during routine operation of the gasifier. In addition, a real-time data acquisition system was designed and applied in the experiments. This advanced instrumentation provided efficient and accurate data acquisition for the project. In summary, the project provides useful information on the ultrasonic cleaning method applied to thermocouple tip cleaning. The temperature measurement could be much improved both in accuracy and duration provided that the proposed approach is widely used in gasification facilities.
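As an illustration of the statistics-based experimental design and ANOVA referred to above, a minimal two-factor sketch (pandas and statsmodels assumed; the factor levels and temperature readings are invented for illustration and are not the project's data):

```python
# Minimal sketch of a two-factor ANOVA of the kind described, using statsmodels.
# The factor levels and temperature readings below are invented, not project data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
air_flow = np.repeat(["low", "high"], 12)
dust = np.tile(np.repeat(["low", "high"], 6), 2)
# Temperature responds to air flow, dust load, and their interaction (toy model).
temp = (900
        + np.where(air_flow == "high", 25.0, 0.0)
        + np.where(dust == "high", -15.0, 0.0)
        + np.where((air_flow == "high") & (dust == "high"), -8.0, 0.0)
        + rng.normal(0, 5, air_flow.size))

df = pd.DataFrame({"temp": temp, "air_flow": air_flow, "dust": dust})
model = smf.ols("temp ~ C(air_flow) * C(dust)", data=df).fit()
print(anova_lm(model, typ=2))        # F-tests for main effects and interaction
```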
NASA Technical Reports Server (NTRS)
Lundquist, Eugene E; Rossman, Carl A; Houbolt, John C
1943-01-01
The results are presented of a theoretical study for the determination of the column curve from tests of column specimens having ends equally restrained against rotation. The theory of this problem is studied and a curve is shown relating the fixity coefficient c to the critical load, the length of the column, and the magnitude of the elastic restraint. A method of using this curve for the determination of the column curve for columns with pin ends from tests of columns with elastically restrained ends is presented. The results of the method as applied to a series of tests on thin-strip columns of stainless steel are also given.
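For orientation, the textbook Euler-column relations behind such a curve can be written as follows (the report's specific restraint curve is not reproduced; c is the fixity coefficient, E the elastic modulus, I the section moment of inertia, and L the column length):

```latex
% Textbook Euler-column relations; the report's specific restraint curve is not reproduced here.
P_{cr} = \frac{c\,\pi^{2} E I}{L^{2}},
\qquad
L_{e} = \frac{L}{\sqrt{c}} .
```

Under these assumptions, a test of an elastically restrained specimen of length L that buckles at P_cr is equivalent to a pin-ended (c = 1) column of effective length L_e, which is how results from restrained-end tests can be mapped onto the pin-ended column curve.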
Semi-quantitative MALDI-TOF for antimicrobial susceptibility testing in Staphylococcus aureus.
Maxson, Tucker; Taylor-Howell, Cheryl L; Minogue, Timothy D
2017-01-01
Antibiotic resistant bacterial infections are a significant problem in the healthcare setting, in many cases requiring the rapid administration of appropriate and effective antibiotic therapy. Diagnostic assays capable of quickly and accurately determining the pathogen resistance profile are therefore crucial to initiate or modify care. Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry (MS) is a standard method for species identification in many clinical microbiology laboratories and is well positioned to be applied towards antimicrobial susceptibility testing. One recently reported approach utilizes semi-quantitative MALDI-TOF MS for growth rate analysis to provide a resistance profile independent of resistance mechanism. This method was previously successfully applied to Gram-negative pathogens and mycobacteria; here, we evaluated this method with the Gram-positive pathogen Staphylococcus aureus. Specifically, we used 35 strains of S. aureus and four antibiotics to optimize and test the assay, resulting in an overall accuracy rate of 95%. Application of the optimized assay also successfully determined susceptibility from mock blood cultures, allowing both species identification and resistance determination for all four antibiotics within 3 hours of blood culture positivity.
Bimczok, R; Gers-Barlag, H; Mundt, C; Klette, E; Bielfeldt, S; Rudolph, T; Pflucker, F; Heinrich, U; Tronnier, H; Johncock, W; Klebon, B; Westenfelder, H; Flosser-Muller, H; Jenni, K; Kockott, D; Lademann, J; Herzog, B; Rohr, M
2007-01-01
It is often debated whether the protection against solar-induced erythema under real conditions depends on the amount of sunscreen applied; it is believed that when too little is applied, a lower sun protection than indicated on the label results. The aim of this study was to quantify this effect. In this multicenter study, the influence of three different amounts (0.5, 1.0, 2.0 mg/cm²) of three commercial sunscreen products was investigated in three reliable test centers according to the test protocol of The International Sun Protection Factor Test Method. The main result was a linear dependence of the SPF on the quantity applied. Taking into consideration the volunteer-specific variations, an exponential dependence of the confidence interval of the in vivo SPF on the amount applied was found. The highest amount applied (2.0 mg/cm²) was linked to the lowest confidence intervals. Thus, from the point of view of producing reliable and reproducible in vivo results under laboratory conditions, the recommendation of this multicenter study is an application quantity of 2.0 mg/cm².
A novel 3D deformation measurement method under optical microscope for micro-scale bulge-test
NASA Astrophysics Data System (ADS)
Wu, Dan; Xie, Huimin
2017-11-01
A micro-scale 3D deformation measurement method combined with an optical microscope is proposed in this paper. The method is based on gratings and a phase-shifting algorithm. By recording the grating images before and after deformation from two symmetrical viewing angles and calculating the phases of the grating patterns, the 3D deformation field of the specimen can be extracted. The proposed method was applied to the micro-scale bulge test. A micro-scale thermal/mechanical coupling bulge-test apparatus matched to a super-depth microscope was developed. With gratings fabricated onto the film, the deformed morphology of the bulged film was measured reliably. The experimental results show that the proposed method and the bulge-test apparatus can be used to characterize the thermal/mechanical properties of films at the micro-scale.
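As an illustration of the grating-phase step, a minimal sketch of the standard four-step phase-shifting calculation on synthetic fringe images (the paper's specific optical geometry, calibration, and 3D reconstruction are not modeled):

```python
# Minimal sketch of a standard four-step phase-shifting calculation on synthetic
# fringe images; the paper's optical setup and calibration are not modeled.
import numpy as np

ny, nx = 256, 256
y, x = np.mgrid[0:ny, 0:nx]
true_phase = 0.02 * x + 2e-6 * (x - nx / 2) ** 2          # synthetic phase field
bias, mod = 0.5, 0.4

# Four fringe images with phase shifts of 0, pi/2, pi, 3*pi/2.
I = [bias + mod * np.cos(true_phase + k * np.pi / 2) for k in range(4)]

# Wrapped phase from the four-step formula, then unwrapping along rows.
wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])
unwrapped = np.unwrap(wrapped, axis=1)

err = np.max(np.abs(unwrapped - true_phase))
print(err)     # near machine precision for this noise-free example
```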
16 CFR 1508.4 - Spacing of crib components.
Code of Federal Regulations, 2010 CFR
2010-01-01
... by 4-inch high by 4-inch long) rectangular block which shall not pass through the space. (b) The...) direct force is applied in accordance with the test method in § 1508.5. For contoured or irregular slats... below the loading wedge when a 9-kilogram (20-pound) direct force is applied in accordance with said...
New technology in postfire rehab
Joe Sabel
2007-01-01
PAM-12™ is a recycled office paper byproduct made into a spreadable mulch with added Water Soluble Polyacrylamide (WSPAM), a polymer that was previously difficult to apply. PAM-12 is extremely versatile and can be applied through several methods. In a field test, PAM-12 outperformed straw in every targeted performance area: erosion control, improving soil hydrophobicity, and...
Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.
ERIC Educational Resources Information Center
Holland, Paul W.; Thayer, Dorothy T.
2000-01-01
Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…
The Long-Term Sustainability of Different Item Response Theory Scaling Methods
ERIC Educational Resources Information Center
Keller, Lisa A.; Keller, Robert R.
2011-01-01
This article investigates the accuracy of examinee classification into performance categories and the estimation of the theta parameter for several item response theory (IRT) scaling techniques when applied to six administrations of a test. Previous research has investigated only two administrations; however, many testing programs equate tests…
SIMCA T 1.0: A SAS Computer Program for Simulating Computer Adaptive Testing
ERIC Educational Resources Information Center
Raiche, Gilles; Blais, Jean-Guy
2006-01-01
Monte Carlo methodologies are frequently applied to study the sampling distribution of the estimated proficiency level in adaptive testing. These methods eliminate real situational constraints. However, these Monte Carlo methodologies are not currently supported by the available software programs, and when these programs are available, their…
Measuring the orthogonality error of coil systems
Heilig, B.; Csontos, A.; Pajunpää, K.; White, Tim; St. Louis, B.; Calp, D.
2012-01-01
Recently, a simple method was proposed for the determination of pitch angle between two coil axes by means of a total field magnetometer. The method is applicable when the homogeneous volume in the centre of the coil system is large enough to accommodate the total field sensor. Orthogonality of calibration coil systems used for calibrating vector magnetometers can be attained by this procedure. In addition, the method can be easily automated and applied to the calibration of delta inclination–delta declination (dIdD) magnetometers. The method was tested by several independent research groups, having a variety of test equipment, and located at differing geomagnetic observatories, including: Nurmijärvi, Finland; Hermanus, South Africa; Ottawa, Canada; Tihany, Hungary. This paper summarizes the test results, and discusses the advantages and limitations of the method.
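The abstract does not reproduce the working equations; as an illustration of the underlying principle (under the simplifying assumption that the ambient field is nulled or otherwise accounted for), scalar total-field readings taken with both coils energized, once with one coil's polarity reversed, isolate the cross term:

```latex
% Illustrative relations only; the published procedure may differ in detail.
% B_1, B_2: coil field magnitudes; 90^\circ + \delta: angle between coil axes.
|\mathbf{B}_{\pm}|^{2} = B_1^{2} + B_2^{2} \pm 2\,B_1 B_2 \cos(90^{\circ} + \delta)
\quad\Longrightarrow\quad
\sin\delta = \frac{|\mathbf{B}_{-}|^{2} - |\mathbf{B}_{+}|^{2}}{4\,B_1 B_2}.
```

Under these assumptions, the orthogonality (pitch) error δ follows from two total-field readings once B1 and B2 are known from single-coil measurements.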
Determining the best method of Nellcor pulse oximeter sensor application in neonates.
Saraswat, A; Simionato, L; Dawson, J A; Thio, M; Kamlin, C O F; Owen, L; Schmölzer, G; Davis, P G
2012-05-01
To identify the optimal sensor application method that gave the quickest display of accurate heart rate (HR) data using the Nellcor OxiMax N-600x pulse oximeter (PO). Stable infants who were monitored with an electrocardiograph were included. Three sensor application techniques were studied: (i) sensor connected to cable, then applied to infant; (ii) sensor connected to cable, applied to investigator's finger, and then to infant; (iii) sensor applied to infant, then connected to cable. The order of techniques tested was randomized for each infant. Time taken to apply the PO sensor, to display data and to display accurate data (HR(PO) = HR(ECG) ± 3 bpm) were recorded using a stopwatch. Forty infants were studied [mean (SD) birthweight, 1455 (872) g; gestational age, 31 (4) weeks; post-menstrual age, 34 (4) weeks]. Method 3 acquired any data significantly faster than methods 1 (p = 0.013; CI, -9.6 to -3.0 sec) and 2 (p = 0.004; CI, -5.9 to -1.2 sec). Method 3 acquired accurate data significantly faster than method 1 (p = 0.016; CI, -9.4 to -1.0 sec), but not method 2 (p = 0.28). Applying the sensor to the infant before connecting it to the cable yields the fastest acquisition of accurate HR data from the Nellcor PO. © 2011 The Author(s)/Acta Paediatrica © 2011 Foundation Acta Paediatrica.
Higher-Order Spectral Analysis of F-18 Flight Flutter Data
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Dunn, Shane
2005-01-01
Royal Australian Air Force (RAAF) F/A-18 flight flutter test data is presented and analyzed using various techniques. The data includes high-quality measurements of forced responses and limit cycle oscillation (LCO) phenomena. Standard correlation and power spectral density (PSD) techniques are applied to the data and presented. Novel applications of experimentally-identified impulse responses and higher-order spectral techniques are also applied to the data and presented. The goal of this research is to develop methods that can identify the onset of nonlinear aeroelastic phenomena, such as LCO, during flutter testing.
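As an illustration of the standard PSD step, a minimal sketch using Welch's method (scipy assumed; the sampling rate and synthetic response stand in for the flight data):

```python
# Minimal sketch: power spectral density of a measured response via Welch's method.
# The sampling rate and the synthetic signal are placeholders for flight data.
import numpy as np
from scipy.signal import welch

fs = 200.0                                      # sampling rate, Hz (assumed)
t = np.arange(0, 60, 1 / fs)
# Lightly damped 5.6 Hz oscillation buried in noise, as a stand-in response.
response = (np.exp(-0.02 * t) * np.sin(2 * np.pi * 5.6 * t)
            + 0.3 * np.random.randn(t.size))

f, pxx = welch(response, fs=fs, nperseg=2048)
print(f[np.argmax(pxx)])                        # dominant frequency, Hz
```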
A Comparison of Methods of Vertical Equating.
ERIC Educational Resources Information Center
Loyd, Brenda H.; Hoover, H. D.
Rasch model vertical equating procedures were applied to three mathematics computation tests for grades six, seven, and eight. Each level of the test was composed of 45 items in three sets of 15 items, arranged in such a way that tests for adjacent grades had two sets (30 items) in common, and the sixth and eighth grades had 15 items in common. In…
A refined method for multivariate meta-analysis and meta-regression.
Jackson, Daniel; Riley, Richard D
2014-02-20
Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. Copyright © 2013 John Wiley & Sons, Ltd.
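A minimal univariate sketch of a scaling-factor adjustment of this general kind (resembling the Hartung-Knapp approach; the paper's exact refinement and its multivariate extension are not reproduced), with made-up effect estimates:

```python
# Minimal sketch of a univariate random-effects meta-analysis with a scaling-factor
# adjustment to the standard error (Hartung-Knapp-type), in the spirit of the
# refined method described. The effect estimates and variances below are made up.
import numpy as np
from scipy import stats

y = np.array([0.30, 0.10, 0.45, 0.22, 0.05])     # study effect estimates (assumed)
v = np.array([0.02, 0.03, 0.05, 0.01, 0.04])     # within-study variances (assumed)
k = y.size

# DerSimonian-Laird estimate of the between-study variance tau^2.
w_fixed = 1 / v
mu_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
Q = np.sum(w_fixed * (y - mu_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects pooled estimate and its conventional standard error.
w = 1 / (v + tau2)
mu = np.sum(w * y) / np.sum(w)
se_conv = np.sqrt(1 / np.sum(w))

# Scaling factor applied to the standard error, with a t_{k-1} reference distribution.
q = np.sum(w * (y - mu) ** 2) / (k - 1)
se_adj = np.sqrt(q) * se_conv
ci = mu + np.array([-1, 1]) * stats.t.ppf(0.975, k - 1) * se_adj

print(round(mu, 3), np.round(ci, 3))
```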
A real-time interferometer technique for compressible flow research
NASA Technical Reports Server (NTRS)
Bachalo, W. D.; Houser, M. J.
1984-01-01
Strengths and shortcomings in the application of interferometric techniques to transonic flow fields are examined and an improved method is elaborated. Such applications have demonstrated the value of interferometry in obtaining data for compressible flow research. With holographic techniques, interferometry may be applied in large scale facilities without the use of expensive optics or elaborate vibration isolation equipment. Results obtained using holographic interferometry and other methods demonstrate that reliable qualitative and quantitative data can be acquired. Nevertheless, the conventional method can be difficult to set up and apply, and it cannot produce real-time data. A new interferometry technique is investigated that promises to be easier to apply and can provide real-time information. This single-beam technique has the necessary insensitivity to vibration for large scale wind tunnel operations. Capabilities of the method and preliminary tests on some laboratory-scale flow fields are described.
A narrative review of manual muscle testing and implications for muscle testing research
Conable, Katharine M.; Rosner, Anthony L.
2011-01-01
Objective Manual muscle testing (MMT) is used for a variety of purposes in health care by medical, osteopathic, chiropractic, physical therapy, rehabilitation, and athletic training professionals. The purpose of this study is to provide a narrative review of variations in techniques, durations, and forces used in MMT putting applied kinesiology (AK) muscle testing in context and highlighting aspects of muscle testing important to report in MMT research. Method PubMed, the Collected Papers of the International College of Applied Kinesiology–USA, and related texts were searched on the subjects of MMT, maximum voluntary isometric contraction testing, and make/break testing. Force parameters (magnitude, duration, timing of application), testing variations of MMT, and normative data were collected and evaluated. Results “Break” tests aim to evaluate the muscle's ability to resist a gradually increasing pressure and may test different aspects of neuromuscular control than tests against fixed resistances. Applied kinesiologists use submaximal manual break tests and a binary grading scale to test short-term changes in muscle function in response to challenges. Many of the studies reviewed were not consistent in reporting parameters for testing. Conclusions To increase the chances for replication, studies using MMT should specify parameters of the tests used, such as exact procedures and instrumentation, duration of test, peak force, and timing of application of force. PMID:22014904
Good Enough for the X-38, but Made for Commercial Aircraft
NASA Technical Reports Server (NTRS)
2001-01-01
Aircraft Belts, Inc. (ABI), of Kemah, Texas, was looking for a way to ensure the safety of its customers by developing a thorough test system for aviation restraint systems. Previous safety restraint test methods did not properly measure the load distribution placed on the restraints, leaving an unknown factor in meeting safety standards. ABI needed to improve its testing methods and update its test equipment. Through a partnership with NASA's Johnson Space Center Technical Outreach Program, the need was met. With the assistance of NASA engineers, ABI developed a hydraulic test system that provides the consumer with in-depth data about the load placed on the restraint system throughout the duration of the test. The old systems were only able to detect whether the belts could sustain the applied force and could not provide load data. In comparison, the new system, modeled after the one used by NASA, can collect data that tell exactly what went wrong with belts that break and why. Depending on the test requirements of various restraint components, the system can exert a force ranging from merely a few pounds to thousands. The test force can be applied to an entire safety restraint system or to its individual parts, including stitching, webbing, and hardware.
Epistasis analysis for quantitative traits by functional regression model.
Zhang, Futao; Boerwinkle, Eric; Xiong, Momiao
2014-06-01
The critical barrier in interaction analysis for rare variants is that most traditional statistical methods for testing interactions were originally designed for testing the interaction between common variants and are difficult to apply to rare variants because of their prohibitive computational time and poor ability. The great challenges for successful detection of interactions with next-generation sequencing (NGS) data are (1) lack of methods for interaction analysis with rare variants, (2) severe multiple testing, and (3) time-consuming computations. To meet these challenges, we shift the paradigm of interaction analysis between two loci to interaction analysis between two sets of loci or genomic regions and collectively test interactions between all possible pairs of SNPs within two genomic regions. In other words, we take a genome region as a basic unit of interaction analysis and use high-dimensional data reduction and functional data analysis techniques to develop a novel functional regression model to collectively test interactions between all possible pairs of single nucleotide polymorphisms (SNPs) within two genome regions. By intensive simulations, we demonstrate that the functional regression models for interaction analysis of the quantitative trait have the correct type 1 error rates and a much better ability to detect interactions than the current pairwise interaction analysis. The proposed method was applied to exome sequence data from the NHLBI's Exome Sequencing Project (ESP) and CHARGE-S study. We discovered 27 pairs of genes showing significant interactions after applying the Bonferroni correction (P-values < 4.58 × 10^-10) in the ESP, and 11 were replicated in the CHARGE-S study. © 2014 Zhang et al.; Published by Cold Spring Harbor Laboratory Press.
Role of Microstructure in High Temperature Oxidation.
1980-05-01
[Front-matter and figure fragments only: "Surface Preparation Upon Oxidation"; "Experimental Methods"; "Specimen Preparation"; the angle sectioning method; Figure 3, application of the test line to the image of the NiO scale to determine the number of NiO grain boundary ... A surviving fragment notes that the lack of knowledge in this field was readily accounted for by the extreme experimental difficulty of applying standard methods of microscopy to the thin ...]
Transport of ions using RF Carpets in Helium Gas
NASA Astrophysics Data System (ADS)
Lambert, Keenan; Kelly, James; Brodeur, Maxime
2017-09-01
Radio-frequency (RF) carpets are critical components of large-volume gas cells used to thermalize radioactive ion beams produced at in-flight facilities. RF carpets are formed by a series of concentric conductive rings on which an alternating potential (in the radio-frequency range) is applied with opposite polarity on adjacent rings. This results in a strong repelling force that keeps the ions a certain distance from the carpet. The transport of ions along an RF carpet is accomplished using either a potential gradient applied to the individual strips or a traveling wave (the so-called `ion surfing' method). A test setup has been constructed at the University of Notre Dame to perform studies on the repelling of ions using RF carpets. This test setup has recently been improved by the addition of circuitry elements allowing the transport of ions using the ion surfing method. The developed circuitry will be presented, together with transport results for various ion beam currents, electric forces applied to the ions, and traveling-wave amplitudes and speeds.
Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.
Kim, Soohwan; Kim, Jonghyuk
2013-10-01
Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most of the applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. In particular, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from a high computational complexity of O(n³) + O(n²m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with the huge training sets that are common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we treat the Gaussian processes as implicit functions, and thus extract iso-surfaces from the scalar fields (continuous occupancy maps) using marching cubes. By doing that, we are able to build two types of map representations within a single framework of Gaussian processes. Experimental results with 2-D simulated data show that the accuracy of our approximated method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments.
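A rough sketch of the 'local Gaussian processes' idea — partition the training data, fit one GP classifier per partition, and query each test point against its nearest local model — is given below using scikit-learn. Plain k-means stands in for the coarse-to-fine clustering described in the paper, and the kernel, cluster count, and fallback rule for single-class clusters are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF

    def local_gp_occupancy(X_train, y_train, X_query, n_clusters=10, seed=0):
        """Continuous occupancy probabilities from locally trained GP classifiers.

        X_train : (n, d) observed points, y_train : (n,) 0/1 occupancy labels,
        X_query : (m, d) query points on the map.
        """
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X_train)
        models = {}
        for c in range(n_clusters):
            mask = km.labels_ == c
            if np.unique(y_train[mask]).size < 2:
                # Degenerate cluster: fall back to its constant label frequency.
                models[c] = float(y_train[mask].mean())
            else:
                gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
                models[c] = gpc.fit(X_train[mask], y_train[mask])

        labels = km.predict(X_query)
        p = np.empty(len(X_query))
        for c in range(n_clusters):
            m = labels == c
            if not m.any():
                continue
            mod = models[c]
            p[m] = mod if isinstance(mod, float) else mod.predict_proba(X_query[m])[:, 1]
        return p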
Applications of Taylor-Galerkin finite element method to compressible internal flow problems
NASA Technical Reports Server (NTRS)
Sohn, Jeong L.; Kim, Yongmo; Chung, T. J.
1989-01-01
A two-step Taylor-Galerkin finite element method with Lapidus' artificial viscosity scheme is applied to several test cases for internal compressible inviscid flow problems. Investigations for the effect of supersonic/subsonic inlet and outlet boundary conditions on computational results are particularly emphasized.
Lessons learned applying CASE methods/tools to Ada software development projects
NASA Technical Reports Server (NTRS)
Blumberg, Maurice H.; Randall, Richard L.
1993-01-01
This paper describes the lessons learned from introducing CASE methods/tools into organizations and applying them to actual Ada software development projects. This paper will be useful to any organization planning to introduce a software engineering environment (SEE) or evolving an existing one. It contains management level lessons learned, as well as lessons learned in using specific SEE tools/methods. The experiences presented are from Alpha Test projects established under the STARS (Software Technology for Adaptable and Reliable Systems) project. They reflect the front end efforts by those projects to understand the tools/methods, initial experiences in their introduction and use, and later experiences in the use of specific tools/methods and the introduction of new ones.
An IMU-to-Body Alignment Method Applied to Human Gait Analysis
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-01-01
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis. PMID:27973406
Unidirectional Fabric Drape Testing Method
Mei, Zaihuan; Yang, Jingzhi; Zhou, Ting; Zhou, Hua
2015-01-01
In most real uses, fabrics such as curtains, skirts, and suit pants drape under gravity acting parallel to the fabric plane, whereas in the traditional drape testing method gravity is perpendicular to the fabric plane. As a result, the traditional test does not reflect the actual situation and its data are not fully convincing. To overcome this problem, this paper presents a novel method which simulates the real mechanical conditions and ensures that gravity is parallel to the fabric plane. The method applies a low-cost Kinect sensor to capture the 3-dimensional (3D) drape profile, from which we obtained the drape degree parameters and aesthetic parameters by 3D reconstruction and image processing and analysis techniques. The experiment was conducted on our self-devised drape-testing instrument using fabrics of different weave structures as testing samples, and the results were compared with those of the traditional method and subjective evaluation. Through regression and correlation analysis we found that this novel testing method is significantly correlated with the traditional and subjective evaluation methods. We achieved a new, non-contact 3D measurement method for drape testing, namely the unidirectional fabric drape testing method. This method is more suitable for evaluating drape behavior because it is more in line with the actual mechanical conditions of draped fabrics and shows good consistency with the visual and aesthetic requirements of fabrics. PMID:26600387
A deep learning-based multi-model ensemble method for cancer prediction.
Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong
2018-01-01
Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, increasing progress in cancer prediction has been made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods that can successfully distinguish cancer patients from healthy persons is of great current interest. However, among the classification methods applied to cancer prediction so far, no single method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
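A hedged sketch of the general strategy (heterogeneous base classifiers whose out-of-fold class probabilities are fed to a neural-network meta-learner) is shown below. The particular base models, meta-network size, and cross-validation settings are assumptions for illustration and are not the configuration used in the paper; binary 0/1 labels are assumed.

    import numpy as np
    from sklearn.model_selection import cross_val_predict
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier

    base_models = [LogisticRegression(max_iter=1000), SVC(probability=True),
                   RandomForestClassifier(), GradientBoostingClassifier(),
                   KNeighborsClassifier()]

    def fit_ensemble(X_tr, y_tr):
        # Out-of-fold class-1 probabilities of each base model become meta-features,
        # so the meta-learner never sees predictions made on its own training folds.
        meta_X = np.column_stack([
            cross_val_predict(m, X_tr, y_tr, cv=5, method="predict_proba")[:, 1]
            for m in base_models])
        for m in base_models:
            m.fit(X_tr, y_tr)                    # refit each base model on all training data
        meta = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000)
        meta.fit(meta_X, y_tr)
        return meta

    def predict_ensemble(meta, X_te):
        meta_X = np.column_stack([m.predict_proba(X_te)[:, 1] for m in base_models])
        return meta.predict(meta_X)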
Development of EPA's new methods to quantify vector attraction of wastewater sludges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell, J.B.; Bhide, V.; Smith, J.E. Jr.
1996-05-01
EPA's 1979 and 1993 sludge regulations require that sewage sludge be reduced in vector attraction before it can be applied to the land. In the 1979 regulation, satisfactory vector attraction reduction (VAR) could be demonstrated if treatment processes reduced the volatile solids content of sludge by 38%. The 1993 regulation adds two alternative test methods for aerobic sludges for determining whether VAR has been adequate. In the first method, specific oxygen uptake rate (SOUR) of the sludge must be <1.5 mg O2/hr/g total solids, and in the second method, the additional volatile solids reduction (AVSR) that occurs when the sludge is further digested for 30 days must be <15%. Experimentation with the new tests is described. Comparisons among the three methods showed that the 38% VSR requirement and the SOUR test were equivalent only near 20°C. The AVSR test was more conservative than either of the other tests. 18 refs., 7 figs., 3 tabs.
Diagnosis of Brucellosis in Livestock and Wildlife
Godfroid, Jacques; Nielsen, Klaus; Saegerman, Claude
2010-01-01
Aim: To describe and discuss the merits of various direct and indirect methods applied in vitro (mainly on blood or milk) or in vivo (allergic test) for the diagnosis of brucellosis in animals. Methods: The recent literature on brucellosis diagnostic tests was reviewed. These diagnostic tests are applied with different goals, such as national screening, confirmatory diagnosis, certification, and international trade. The validation of such diagnostic tests is still an issue, particularly in wildlife. The choice of the testing strategy depends on the prevailing brucellosis epidemiological situation and the goal of testing. Results: Measuring the kinetics of antibody production after Brucella spp. infection is essential for analyzing serological results correctly and may help to predict abortion. Indirect ELISAs help to discriminate 1) between false positive serological reactions and true brucellosis and 2) between vaccination and infection. Biotyping of Brucella spp. provides valuable epidemiological information that allows tracing an infection back to the sources in instances where several biotypes of a given Brucella species are circulating. Polymerase chain reaction and new molecular methods are likely to be used as routine typing and fingerprinting methods in the coming years. Conclusion: The diagnosis of brucellosis in livestock and wildlife is complex and serological results need to be carefully analyzed. The B. abortus S19 and B. melitensis Rev. 1 vaccines are the cornerstones of control programs in cattle and small ruminants, respectively. There is no vaccine available for pigs or for wildlife. In the absence of a human brucellosis vaccine, prevention of human brucellosis depends on the control of the disease in animals. PMID:20718082
Settivari, Raja S; Ball, Nicholas; Murphy, Lynea; Rasoulpour, Reza; Boverhof, Darrell R; Carney, Edward W
2015-01-01
Interest in applying 21st-century toxicity testing tools for safety assessment of industrial chemicals is growing. Whereas conventional toxicology uses mainly animal-based, descriptive methods, a paradigm shift is emerging in which computational approaches, systems biology, high-throughput in vitro toxicity assays, and high-throughput exposure assessments are beginning to be applied to mechanism-based risk assessments in a time- and resource-efficient fashion. Here we describe recent advances in predictive safety assessment, with a focus on their strategic application to meet the changing demands of the chemical industry and its stakeholders. The opportunities to apply these new approaches are extensive and include screening of new chemicals, informing the design of safer and more sustainable chemical alternatives, filling information gaps on data-poor chemicals already in commerce, strengthening read-across methodology for categories of chemicals sharing similar modes of action, and optimizing the design of reduced-risk product formulations. Finally, we discuss how these predictive approaches dovetail with in vivo integrated testing strategies within repeated-dose regulatory toxicity studies, which are in line with 3Rs principles to refine, reduce, and replace animal testing. Strategic application of these tools is the foundation for informed and efficient safety assessment testing strategies that can be applied at all stages of the product-development process. PMID:25836969
Accelerated Testing Methodology for the Determination of Slow Crack Growth of Advanced Ceramics
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Salem, Jonathan A.; Gyekenyesi, John P.
1997-01-01
Constant stress-rate (dynamic fatigue) testing has been used for several decades to characterize slow crack growth behavior of glass and ceramics at both ambient and elevated temperatures. The advantage of constant stress-rate testing over other methods lies in its simplicity: strengths are measured in a routine manner at four or more stress rates by applying a constant crosshead speed or constant loading rate. The slow crack growth parameters (n and A) required for design can be estimated from a relationship between strength and stress rate. With the proper use of preloading in constant stress-rate testing, an appreciable saving of test time can be achieved. If a preload corresponding to 50% of the strength is applied to the specimen prior to testing, 50% of the test time can be saved as long as the strength remains unchanged regardless of the applied preload. In fact, it has been a common, empirical practice in strength testing of ceramics or optical fibers to apply some preloading (less than 40%). The purpose of this work is to study the effect of preloading on strength and thereby lay a theoretical foundation for this empirical practice. For this purpose, analytical and numerical solutions of strength as a function of preloading were developed. To verify the solution, constant stress-rate testing using glass and alumina at room temperature, and alumina, silicon nitride, and silicon carbide at elevated temperatures, was conducted over a range of preloads from 0 to 90%.
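The preload argument reduces to simple bookkeeping: at a constant stress rate the time to failure is strength divided by rate, so starting the ramp at a preload equal to a fraction of the strength saves the same fraction of test time, provided the measured strength is unaffected. A small numerical illustration with assumed strength and stress-rate values:

    # Test time at a constant stress rate: t_f = sigma_f / sigma_dot.
    # With a preload equal to a fraction alpha of the strength, the ramp
    # starts at alpha * sigma_f, so the remaining test time is (1 - alpha) * t_f.
    sigma_f = 400.0      # MPa, assumed strength
    rate = 40.0          # MPa/s, assumed stress rate
    for alpha in (0.0, 0.5, 0.9):
        t = (1.0 - alpha) * sigma_f / rate
        print(f"preload {alpha:.0%}: test time {t:.1f} s "
              f"({alpha:.0%} of the no-preload time saved)")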
Design, fabrication, and bench testing of a solar chemical receiver
NASA Technical Reports Server (NTRS)
Summers, W. A.; Pierre, J. F.
1981-01-01
Solar thermal energy can be effectively collected, transported, stored, and utilized by means of a chemical storage and transport system employing the reversible SO2 oxidation reaction. A solar chemical receiver for SO3 thermal decomposition to SO2 and oxygen was analyzed. Bench tests of a ten foot section of a receiver module were conducted with dissociated sulfuric acid (SO3 and H2O) in an electrical furnace. Measured percent conversion of SO3 was 85% of the equilibrium value. Methods were developed to fabricate and assemble a complete receiver module. These methods included applying an aluminide coating to certain exposed surfaces, assembling concentric tubes with a wire spacer, applying a platinum catalyst to the tubing wall, and coiling the entire assembly into the desired configuration.
Parent, Guillaume; Penin, Rémi; Lecointe, Jean-Philippe; Brudny, Jean-François; Belgrand, Thierry
2016-01-01
An experimental method to characterize the magnetic properties of Grain Oriented Electrical Steel in the rolling direction is proposed in this paper. It relies on the use of three 25 cm Epstein frames combined to generate three test-frames of different lengths. This enables the identification of the effective specific losses of the electrical steel when magnetization is applied along the rolling direction. As a consequence, it evidences the deviation of the loss figures obtained using the standardised Epstein test. The difference in losses is explained by the fact that the described method gives “only” the losses attached to the straight parts. The concept of the magnetic path length as defined by the standard is discussed. PMID:27271637
Meta-learning framework applied in bioinformatics inference system design.
Arredondo, Tomás; Ormazábal, Wladimir
2015-01-01
This paper describes a meta-learner inference system development framework which is applied and tested in the implementation of bioinformatic inference systems. These inference systems are used for the systematic classification of the best candidates for inclusion in bacterial metabolic pathway maps. This meta-learner-based approach utilises a workflow where the user provides feedback with final classification decisions which are stored in conjunction with analysed genetic sequences for periodic inference system training. The inference systems were trained and tested with three different data sets related to the bacterial degradation of aromatic compounds. The analysis of the meta-learner-based framework involved contrasting several different optimisation methods with various different parameters. The obtained inference systems were also compared with other standard classification methods, and accurate prediction capabilities were observed.
A new ex vivo method for the study of nasal drops on ciliary function.
Levrier, J; Molon-Noblot, S; Duval, D; Lloyd, K G
1989-01-01
Any pharmaceutical nasal preparation should respect the physiological function of the mucociliary transport system and should undergo testing to this effect. An experimental protocol has been developed using the guinea pig in order to assess the effects of commercial nasal drop preparations on mucociliary function. The method presented here consists of applying in vivo the test solution on the nasal respiratory epithelium. After a specified contact time and following rapid sacrifice of the animal, the mucosa is removed; the beating frequency of the cilia is then recorded ex vivo by micro-photo-oscillography. The method is sensitive to compounds known to diminish mucociliary function as sodium mercurothiolate inhibits ciliary movement of the nasal epithelium ex vivo. This inhibition of ciliary movement is long-lasting, although reversible. This method can be used to test the action of intranasally administered pharmaceutical preparations on mucociliary function. Commercially available solutions of the nasal vasoconstrictors tymazoline, fenoxazoline or oxymetazoline do not alter ciliary movement ex vivo at dose levels equal to or greater than those clinically utilized. ATP significantly enhances nasal ciliary frequency in instances where a low basal rate occurred. Thus, this method can be used for the testing of the maintenance of nasal ciliary function in the presence of compounds and preparations which will be applied into the nostrils. The advantages over previous techniques include a closer approach to the therapeutic utilization and the maintained physiological conditions of the mucosa during drug administration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kubouchi, Masatoshi; Hojo, Hidemitsu
The thermal shock resistance of epoxy resin specimens toughened with carboxy-terminated poly(butadiene-acrylonitrile) (CTBN) and poly-glycol was tested using a new notched disk-type specimen. The new thermal shock testing method consists of quenching a notched disk-type specimen and applying a theoretical analysis to the test results to determine crack propagation conditions. For both toughened epoxy resins, this test method evaluated improvements in thermal shock resistance. The thermal shock resistance of epoxy resin toughened with CTBN exhibited a maximum at a 35 parts per hundred resin content of CTBN. The epoxy resin toughened with polyglycol exhibited improved thermal shock resistance with increasing glycol content. 7 refs., 14 figs., 1 tab.
Bayesian inference for disease prevalence using negative binomial group testing
Pritchard, Nicholas A.; Tebbs, Joshua M.
2011-01-01
Group testing, also known as pooled testing, and inverse sampling are both widely used methods of data collection when the goal is to estimate a small proportion. Taking a Bayesian approach, we consider the new problem of estimating disease prevalence from group testing when inverse (negative binomial) sampling is used. Using different distributions to incorporate prior knowledge of disease incidence and different loss functions, we derive closed form expressions for posterior distributions and resulting point and credible interval estimators. We then evaluate our new estimators, on Bayesian and classical grounds, and apply our methods to a West Nile Virus data set. PMID:21259308
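A minimal grid-approximation sketch of the calculation is shown below for the simplest setting: a Beta prior on prevalence, pools of size k, a perfectly accurate assay, and inverse sampling stopped at the r-th positive pool. The closed-form posteriors and loss-function-specific estimators derived in the paper are not reproduced, and the prior parameters and data values are assumed for illustration.

    import numpy as np

    def prevalence_posterior(r, y, k, a=1.0, b=9.0, n_grid=2000):
        """Grid posterior for prevalence p under negative binomial group testing.

        r : number of positive pools at which sampling stopped
        y : number of negative pools observed before the r-th positive
        k : pool (group) size
        a, b : Beta(a, b) prior parameters for p (assumed values)
        """
        p = np.linspace(1e-6, 1 - 1e-6, n_grid)
        pi = 1.0 - (1.0 - p) ** k                  # probability a pool tests positive
        log_post = ((a - 1) * np.log(p) + (b - 1) * np.log(1 - p)
                    + r * np.log(pi) + y * np.log(1 - pi))
        w = np.exp(log_post - log_post.max())
        w /= w.sum()
        mean = np.sum(p * w)                       # posterior mean (squared-error loss)
        cdf = np.cumsum(w)
        lo, hi = p[np.searchsorted(cdf, 0.025)], p[np.searchsorted(cdf, 0.975)]
        return mean, (lo, hi)                      # point estimate and 95% credible interval

    print(prevalence_posterior(r=5, y=45, k=10))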
NASA Astrophysics Data System (ADS)
Lamie, Nesrine T.
2015-10-01
Four accurate, precise, and sensitive spectrophotometric methods are developed for the simultaneous determination of a binary mixture of amlodipine besylate (AM) and atenolol (AT). AM is determined at its λmax of 360 nm in the zero-order spectrum (⁰D), while atenolol can be determined by four different methods. Method (A) is absorption factor (AF). Method (B) is the new ratio difference (RD) method, which measures the difference in amplitudes of the ratio spectrum between 210 and 226 nm. Method (C) is the novel constant center (CC) spectrophotometric method. Method (D) is mean centering of the ratio spectra (MCR) at 284 nm. The methods are tested by analyzing synthetic mixtures of the cited drugs and are applied to their commercial pharmaceutical preparation. The validity of the results is assessed by applying the standard addition technique. The results obtained are found to agree statistically with those obtained by official methods, showing no significant difference with respect to accuracy and precision.
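The ratio difference step can be sketched in a few lines: the mixture spectrum is divided by a divisor spectrum of AM, and the amplitude difference of the resulting ratio spectrum between 210 and 226 nm is taken, so the constant AM/AM contribution cancels and the difference scales with the AT concentration after calibration against AT standards. The wavelength grid and the calibration step are assumptions for illustration.

    import numpy as np

    def ratio_difference(mix_abs, divisor_abs, wavelengths, w1=210.0, w2=226.0):
        """Ratio-difference amplitude of a mixture spectrum.

        mix_abs, divisor_abs : absorbance arrays of the mixture and of the AM
                               divisor standard on the same wavelength grid
        """
        ratio = mix_abs / divisor_abs               # AM contribution becomes a constant
        i1 = int(np.argmin(np.abs(wavelengths - w1)))
        i2 = int(np.argmin(np.abs(wavelengths - w2)))
        return ratio[i1] - ratio[i2]                # constant cancels; proportional to AT

In practice the returned amplitude difference would be converted to a concentration through a regression line built from pure AT standards processed the same way.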
Diagnostics and Active Control of Aircraft Interior Noise
NASA Technical Reports Server (NTRS)
Fuller, C. R.
1998-01-01
This project deals with developing advanced methods for investigating and controlling interior noise in aircraft. The work concentrates on developing and applying the techniques of Near Field Acoustic Holography (NAH) and Principal Component Analysis (PCA) to the aircraft interior noise dynamic problem. This involves investigating the current state of the art, developing new techniques and then applying them to the particular problem being studied. The knowledge gained under the first part of the project was then used to develop and apply new, advanced noise control techniques for reducing interior noise. A new fully active control approach based on the PCA was developed and implemented on a test cylinder. Finally an active-passive approach based on tunable vibration absorbers was to be developed and analytically applied to a range of test structures from simple plates to aircraft fuselages.
Adaptive Discontinuous Galerkin Methods in Multiwavelets Bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archibald, Richard K; Fann, George I; Shelton Jr, William Allison
2011-01-01
We use a multiwavelet basis with the Discontinuous Galerkin (DG) method to produce a multi-scale DG method. We apply this Multiwavelet DG method to convection and convection-diffusion problems in multiple dimensions. Merging the DG method with multiwavelets allows the adaptivity in the DG method to be resolved through manipulation of multiwavelet coefficients rather than grid manipulation. Additionally, the Multiwavelet DG method is tested on non-linear equations in one dimension and on the cubed sphere.
Evaluation of clinical methods for peroneal muscle testing.
Sarig-Bahat, Hilla; Krasovsky, Andrei; Sprecher, Elliot
2013-03-01
Manual muscle testing of the peroneal muscles is well accepted as a testing method in musculoskeletal physiotherapy for the assessment of the foot and ankle. The peroneus longus and brevis are primary evertors and secondary plantar flexors of the ankle joint. However, some international textbooks describe them as dorsi flexors when instructing peroneal muscle testing. The identified variability raised the question of whether these educational texts are reflected in the clinical field. The purposes of this study were to investigate which methods are commonly used in the clinical field for peroneal muscle testing and to evaluate their compatibility with functional anatomy. A cross-sectional study was conducted, using an electronic questionnaire sent to 143 Israeli physiotherapists in the musculoskeletal field. The survey asked about the anatomical location of manual resistance and the combination of motions resisted. Ninety-seven responses were received. The majority (69%) of respondents related correctly to the peronei as evertors, but asserted that resistance should be located over the dorsal aspect of the fifth metatarsus, thereby disregarding the peroneus longus. Moreover, 38% of the respondents described the peronei as dorsi flexors, rather than plantar flexors. Only 2% selected the correct method of resisting plantarflexion and eversion at the base of the first metatarsus. We consider this technique to be the most compatible with the anatomy of the peroneus longus and brevis. The Fisher-Freeman-Halton test indicated that there was a significant relationship between responses on the questions (P = 0.0253, 95% CI 0.0249-0.0257), thus justifying further correspondence analysis. The correspondence analysis found no clustering of the answers that were compatible with anatomical evidence and were applied in the correct technique, but did demonstrate a common error, resisting dorsiflexion rather than plantarflexion, which was in agreement with the described frequencies. Inconsistencies were identified between the instruction commonly provided for peroneal muscle testing in textbooks and the functional anatomy of these muscles. The results reflect the lack of accuracy in applying functional anatomy to peroneal testing. This may be due to limited use of peroneal muscle testing or to inadequate investigation of the existing evaluation methods and their validity. Accordingly, teaching materials and clinical methods used for this test should be re-evaluated. Further research should investigate the value of peroneal muscle testing in clinical ankle evaluation. Copyright © 2012 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Styk, Adam
2014-07-01
Classical time-averaging and stroboscopic interferometry are widely used for MEMS/MOEMS dynamic behavior investigations. Unfortunately, both methods require a vibration amplitude of at least 0.19λ to be able to detect the resonant frequency of the object, and the precision of measurement is limited. This puts strong constraints on the type of element that can be tested. In this paper, a comparison of two methods of microobject vibration measurement that overcome the aforementioned problems is presented. Both methods maintain high measurement speed and extend the range of measurable amplitudes (below 0.19λ), and can be easily applied to MEMS/MOEMS dynamic parameter measurements.
Reconstruction of ensembles of coupled time-delay systems from time series.
Sysoev, I V; Prokhorov, M D; Ponomarenko, V I; Bezruchko, B P
2014-06-01
We propose a method to recover from time series the parameters of coupled time-delay systems and the architecture of couplings between them. The method is based on a reconstruction of model delay-differential equations and estimation of statistical significance of couplings. It can be applied to networks composed of nonidentical nodes with an arbitrary number of unidirectional and bidirectional couplings. We test our method on chaotic and periodic time series produced by model equations of ensembles of diffusively coupled time-delay systems in the presence of noise, and apply it to experimental time series obtained from electronic oscillators with delayed feedback coupled by resistors.
DNA-Based Methods in the Immunohematology Reference Laboratory
Denomme, Gregory A
2010-01-01
Although hemagglutination serves the immunohematology reference laboratory well, when used alone, it has limited capability to resolve complex problems. This overview discusses how molecular approaches can be used in the immunohematology reference laboratory. In order to apply molecular approaches to immunohematology, knowledge of genes, DNA-based methods, and the molecular bases of blood groups are required. When applied correctly, DNA-based methods can predict blood groups to resolve ABO/Rh discrepancies, identify variant alleles, and screen donors for antigen-negative units. DNA-based testing in immunohematology is a valuable tool used to resolve blood group incompatibilities and to support patients in their transfusion needs. PMID:21257350
Time and temperature dependent modulus of pyrrone and polyimide moldings
NASA Technical Reports Server (NTRS)
Lander, L. L.
1972-01-01
A method is presented by which the modulus obtained from a stress relaxation test can be used to estimate the modulus which would be obtained from a sonic vibration test. The method was applied to stress relaxation, sonic vibration, and high speed stress-strain data which was obtained on a flexible epoxy. The modulus as measured by the three test methods was identical for identical test times, and a change of test temperature was equivalent to a shift in the logarithmic time scale. An estimate was then made of the dynamic modulus of moldings of two Pyrrones and two polyimides, using stress relaxation data and the method of analysis which was developed for the epoxy. Over the common temperature range (350 to 500 K) in which data from both types of tests were available, the estimated dynamic modulus value differed by only a few percent from the measured value. As a result, it is concluded that, over the 500 to 700 K temperature range, the estimated dynamic modulus values are accurate.
An entropy-based nonparametric test for the validation of surrogate endpoints.
Miao, Xiaopeng; Wang, Yong-Cheng; Gangopadhyay, Ashis
2012-06-30
We present a nonparametric test to validate surrogate endpoints based on a measure of divergence and random permutation. The test directly verifies the Prentice statistical definition of surrogacy. It does not impose distributional assumptions on the endpoints, and it is robust to model misspecification. Our simulation study shows that the proposed nonparametric test outperforms the practical test of the Prentice criterion in terms of both robustness of size and power. We also evaluate the performance of three leading methods that attempt to quantify the effect of surrogate endpoints. The proposed method is applied to validate magnetic resonance imaging lesions as the surrogate endpoint for clinical relapses in a multiple sclerosis trial. Copyright © 2012 John Wiley & Sons, Ltd.
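A rough sketch in the spirit of the abstract is given below: stratify on the surrogate, measure a divergence between treatment arms of the true endpoint within strata, and build the null distribution by permuting treatment labels within strata. The Jensen-Shannon divergence, the quantile stratification, and the permutation scheme are illustrative assumptions and not the authors' exact statistic or procedure.

    import numpy as np

    def js_divergence(a, b, bins=20):
        # Histogram-based Jensen-Shannon divergence between two samples.
        lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
        p, _ = np.histogram(a, bins=bins, range=(lo, hi))
        q, _ = np.histogram(b, bins=bins, range=(lo, hi))
        p = p / p.sum() + 1e-12
        q = q / q.sum() + 1e-12
        m = 0.5 * (p + q)
        kl = lambda x, y: np.sum(x * np.log(x / y))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    def surrogate_permutation_test(true_ep, surrogate, treat, n_strata=4,
                                   n_perm=2000, seed=0):
        # If the surrogate captures the treatment effect (Prentice-type condition),
        # the true endpoint should look similar across arms within surrogate strata.
        rng = np.random.default_rng(seed)
        cuts = np.quantile(surrogate, np.linspace(0, 1, n_strata + 1)[1:-1])
        strata = np.digitize(surrogate, cuts)

        def stat(labels):
            total = 0.0
            for s in range(n_strata):
                a = true_ep[(strata == s) & (labels == 1)]
                b = true_ep[(strata == s) & (labels == 0)]
                if len(a) > 1 and len(b) > 1:
                    total += js_divergence(a, b)
            return total

        obs = stat(treat)
        null = []
        for _ in range(n_perm):
            lab = treat.copy()
            for s in range(n_strata):
                idx = np.where(strata == s)[0]
                lab[idx] = rng.permutation(lab[idx])   # permute within strata only
            null.append(stat(lab))
        p_value = np.mean(np.asarray(null) >= obs)
        return obs, p_value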
Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora
2009-01-01
This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test.
Sum of top-hat transform based algorithm for vessel enhancement in MRA images
NASA Astrophysics Data System (ADS)
Ouazaa, Hibet-Allah; Jlassi, Hajer; Hamrouni, Kamel
2018-04-01
Magnetic Resonance Angiography (MRA) images are rich in information, but they suffer from poor contrast, illumination, and noise. Thus, the images need to be enhanced; however, significant information can be lost if improper techniques are applied. Therefore, in this paper, we propose a new enhancement method. We first apply the CLAHE method to increase the contrast of the image. Then, we apply the sum of top-hat transforms to increase the brightness of vessels, performed with a structuring element oriented at different angles. The methodology is tested and evaluated on the publicly available BRAINIX database, using the MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio), and SNR (Signal to Noise Ratio) measures for evaluation. The results demonstrate that the proposed method can efficiently enhance image details and is comparable with state-of-the-art algorithms. Hence, the proposed method could be broadly used in various applications.
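A hedged OpenCV sketch of the two-stage pipeline (CLAHE for contrast, then a sum of white top-hat transforms with line-shaped structuring elements rotated over several angles) is shown below; the kernel length, angle step, and CLAHE settings are assumed values rather than the paper's parameters.

    import cv2
    import numpy as np

    def oriented_line_kernel(length, angle_deg):
        # Binary line-shaped structuring element rotated to the given angle.
        k = np.zeros((length, length), np.uint8)
        k[length // 2, :] = 1
        center = ((length - 1) / 2.0, (length - 1) / 2.0)
        M = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
        return (cv2.warpAffine(k, M, (length, length)) > 0).astype(np.uint8)

    def enhance_vessels(img_gray, length=15, angles=range(0, 180, 15)):
        # Step 1: contrast enhancement with CLAHE.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        enhanced = clahe.apply(img_gray)
        # Step 2: sum of white top-hat transforms with line structuring
        # elements at several orientations, which brightens tubular structures.
        acc = np.zeros_like(enhanced, dtype=np.float32)
        for a in angles:
            se = oriented_line_kernel(length, a)
            acc += cv2.morphologyEx(enhanced, cv2.MORPH_TOPHAT, se).astype(np.float32)
        return cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)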
Korimbocus, Jehanara; Dehay, Nicolas; Tordo, Noël; Cano, François; Morgeaux, Sylvie
2016-06-14
In the case of a bite by a rabies-infected animal, the World Health Organisation recommends a prophylactic treatment including the administration of Human Rabies Immunoglobulins (HRIGs) or highly purified F(ab')2 fragments produced from Equine Rabies Immunoglobulin (F(ab')2 - ERIGs). According to international regulation, quality control of F(ab')2 - ERIG lots requires potency testing by the in vivo Mouse Neutralisation Test (MNT) prior to marketing. However, the strategy of the 3Rs (Reduce, Refine, Replace) for animal testing required by the European Directive encourages the replacement of the in vivo potency test by an in vitro assay. In this context, a competitive ELISA method (c-ELISA) has been developed by the Agence Nationale de Sécurité du Médicament et des Produits de Santé in which F(ab')2 - ERIGs compete with a monoclonal antibody recognizing the trimeric native form of the rabies glycoprotein. After a full validation study, the c-ELISA has been applied to commercial batches of F(ab')2 - ERIGs. A correlation study with the MNT demonstrated a similarity between the two methods (r=0.751). Moreover, the c-ELISA method, which does not need any species-specific reagent, has been applied to HRIG potency testing as an alternative to the Rapid Fluorescent Focus Inhibition Test (RFFIT), thus avoiding the handling of live rabies virus in BSL3 containment. In conclusion, the c-ELISA has shown its potential to replace MNT and possibly RFFIT for the quantification of rabies immunoglobulin. After optimisation it may be used for the quantification of rabies immunoglobulin in any animal species, notably for rabies immunogenicity assays in mice. Copyright © 2016 Elsevier Ltd. All rights reserved.
Modified sugar adulteration test applied to New Zealand honey.
Frew, Russell; McComb, Kiri; Croudis, Linda; Clark, Dianne; Van Hale, Robert
2013-12-15
The carbon isotope method (AOAC 998.12) compares the bulk honey carbon isotope value with that of the extracted protein; a difference greater than 1‰ suggests that the protein and the bulk carbohydrate have different origins. New Zealand Manuka honey is a high-value product and often fails this test. It has been suggested that such failures are due to the pollen in Manuka honey, and an adaptation of the method that removes pollen prior to testing has been proposed. Here we test 64 authentic honey samples collected directly from the hives and find that a large proportion (37%) of Manuka honeys fail the test. Of these, 60% still fail the adapted method. These honey samples were collected and processed under stringent conditions and have not been adulterated post-harvest. More work is required to ascertain the cause of these test failures. Copyright © 2013 Elsevier Ltd. All rights reserved.
Electrocoagulation and decolorization of landfill leachate
NASA Astrophysics Data System (ADS)
Mussa, Zainab Haider; Othman, Mohamed Rozali; Abdullah, Md Pauzi
2013-11-01
In this study, several operating conditions, such as electrode material, treatment time, applied voltage, Cl− concentration, and pH of the solution, were tested for the treatability of landfill leachate using the electrocoagulation (EC) method. According to the results, the EC method can be used efficiently for the treatment of landfill leachate under proper operating conditions. The best removal rates were obtained with a C (rod) electrode as the anode, an operating time of 120 min, an applied voltage of 10 V, an NaCl concentration of 5.85 g/L, and the raw pH; under these conditions, 70% color removal was obtained.
Attitude algorithm and initial alignment method for SINS applied in short-range aircraft
NASA Astrophysics Data System (ADS)
Zhang, Rong-Hui; He, Zhao-Cheng; You, Feng; Chen, Bo
2017-07-01
This paper presents an attitude solution algorithm based on Micro-Electro-Mechanical System (MEMS) sensors and the quaternion method. We completed the numerical calculation and engineering implementation by adopting the fourth-order Runge-Kutta algorithm in the digital signal processor. The state-space mathematical model of initial alignment on a static base was established, and an initial alignment method based on the Kalman filter was proposed. Based on a hardware-in-the-loop simulation platform, a short-range flight simulation test and an actual flight test were carried out. The results show that the errors of the pitch, yaw, and roll angles converge quickly, and the fitting rate between the flight simulation and the flight test is more than 85%.
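A minimal sketch of the quaternion attitude propagation with a fourth-order Runge-Kutta step is given below; the scalar-first quaternion convention, the constant-rate usage example, and the Euler-angle extraction are illustrative assumptions, and the Kalman-filter initial alignment stage is not shown.

    import numpy as np

    def omega_matrix(w):
        # Body angular rates w = [p, q, r] (rad/s); scalar-first quaternion.
        p, q, r = w
        return np.array([[0.0, -p,  -q,  -r],
                         [p,   0.0,  r,  -q],
                         [q,  -r,   0.0,  p],
                         [r,   q,  -p,   0.0]])

    def rk4_attitude_step(quat, w, dt):
        # One fourth-order Runge-Kutta step of dq/dt = 0.5 * Omega(w) * q.
        f = lambda q_: 0.5 * omega_matrix(w) @ q_
        k1 = f(quat)
        k2 = f(quat + 0.5 * dt * k1)
        k3 = f(quat + 0.5 * dt * k2)
        k4 = f(quat + dt * k3)
        q_new = quat + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        return q_new / np.linalg.norm(q_new)       # re-normalize to unit length

    def quat_to_euler(q):
        # Roll, pitch, yaw (rad) from a scalar-first unit quaternion.
        q0, q1, q2, q3 = q
        roll = np.arctan2(2 * (q0 * q1 + q2 * q3), 1 - 2 * (q1**2 + q2**2))
        pitch = np.arcsin(np.clip(2 * (q0 * q2 - q3 * q1), -1.0, 1.0))
        yaw = np.arctan2(2 * (q0 * q3 + q1 * q2), 1 - 2 * (q2**2 + q3**2))
        return roll, pitch, yaw

    # Propagate a constant 10 deg/s roll rate for one second at 100 Hz.
    q = np.array([1.0, 0.0, 0.0, 0.0])
    w = np.radians([10.0, 0.0, 0.0])
    for _ in range(100):
        q = rk4_attitude_step(q, w, 0.01)
    print(np.degrees(quat_to_euler(q)))   # roll should be close to 10 degrees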
García-Hernández, J; Moreno, Y; Amorocho, C M; Hernández, M
2012-03-01
We have developed a direct viable count (DVC)-FISH procedure for quickly and easily discriminating between viable and nonviable cells of Lactobacillus delbrueckii subsp. bulgaricus and Streptococcus thermophilus strains, the traditional yogurt bacteria. The direct viable count method has been modified and adapted for Lact. delbrueckii subsp. bulgaricus and Strep. thermophilus analysis by testing different incubation times and concentrations of DNA-gyrase inhibitors. The DVC procedure has been combined with fluorescent in situ hybridization (FISH) for the specific detection of viable cells of both bacteria with specific rRNA oligonucleotide probes (DVC-FISH). Of the four antibiotics tested (novobiocin, nalidixic acid, pipemidic acid and ciprofloxacin), novobiocin was the most effective for the DVC method, and the optimum incubation time was 7 h for both bacteria. The number of viable cells was obtained by the enumeration of specifically hybridized cells that were elongated to at least twice their original length for Lactobacillus and twice their original size for Streptococcus. This technique was successfully applied to detect viable cells in inoculated faeces. The results showed that this DVC-FISH procedure is a quick, culture-independent, and useful method to specifically detect viable Lact. delbrueckii subsp. bulgaricus and Strep. thermophilus in different samples, being applied for the first time to lactic acid bacteria. © 2011 The Authors. Letters in Applied Microbiology © 2011 The Society for Applied Microbiology.
A Comparison of Fuzzy Models in Similarity Assessment of Misregistered Area Class Maps
NASA Astrophysics Data System (ADS)
Brown, Scott
Spatial uncertainty refers to unknown error and vagueness in geographic data. It is relevant to land change and urban growth modelers, soil and biome scientists, geological surveyors and others, who must assess thematic maps for similarity, or categorical agreement. In this paper I build upon prior map comparison research, testing the effectiveness of similarity measures on misregistered data. Though several methods compare uncertain thematic maps, few methods have been tested on misregistration. My objective is to test five map comparison methods for sensitivity to misregistration, including sub-pixel errors in both position and rotation. Methods included four fuzzy categorical models: the fuzzy kappa model, fuzzy inference, cell aggregation, and the epsilon band. The fifth method used conventional crisp classification. I applied these methods to a case study map and simulated data in two sets: a test set with misregistration error, and a control set with equivalent uniform random error. For all five methods, I used raw accuracy or the kappa statistic to measure similarity. Rough-set epsilon bands report the most similarity increase in test maps relative to control data. Conversely, the fuzzy inference model reports a decrease in test map similarity.
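The crisp baseline (the fifth method) reduces to raw accuracy or Cohen's kappa computed over co-registered rasters; a minimal sketch of that computation is shown below. The four fuzzy models additionally require membership or tolerance parameters that are not sketched here, and the shifted-raster example is only a crude stand-in for misregistration.

    import numpy as np

    def crisp_similarity(map_a, map_b, categories):
        """Raw accuracy and Cohen's kappa for two co-registered categorical rasters."""
        a, b = np.ravel(map_a), np.ravel(map_b)
        n = a.size
        k = len(categories)
        cm = np.zeros((k, k))
        for i, ca in enumerate(categories):
            for j, cb in enumerate(categories):
                cm[i, j] = np.sum((a == ca) & (b == cb))   # agreement (confusion) matrix
        accuracy = np.trace(cm) / n
        chance = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2
        kappa = (accuracy - chance) / (1 - chance)
        return accuracy, kappa

    # Example: compare a reference raster with a copy shifted by one cell.
    ref = np.random.default_rng(0).integers(0, 3, size=(50, 50))
    shifted = np.roll(ref, 1, axis=1)
    print(crisp_similarity(ref, shifted, categories=[0, 1, 2]))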
CR-Calculus and adaptive array theory applied to MIMO random vibration control tests
NASA Astrophysics Data System (ADS)
Musella, U.; Manzato, S.; Peeters, B.; Guillaume, P.
2016-09-01
Performing Multiple-Input Multiple-Output (MIMO) tests to reproduce the vibration environment in a user-defined number of control points of a unit under test is necessary in applications where a realistic environment replication has to be achieved. MIMO tests require vibration control strategies to calculate the required drive signal vector that gives an acceptable replication of the target. This target is a (complex) vector with magnitude and phase information at the control points for MIMO Sine Control tests while in MIMO Random Control tests, in the most general case, the target is a complete spectral density matrix. The idea behind this work is to tailor a MIMO random vibration control approach that can be generalized to other MIMO tests, e.g. MIMO Sine and MIMO Time Waveform Replication. In this work the approach is to use gradient-based procedures over the complex space, applying the so called CR-Calculus and the adaptive array theory. With this approach it is possible to better control the process performances allowing the step-by-step Jacobian Matrix update. The theoretical bases behind the work are followed by an application of the developed method to a two-exciter two-axis system and by performance comparisons with standard methods.
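For a single frequency line, the CR-calculus (Wirtinger) gradient idea can be sketched as follows: with frequency response matrix H, drive cross-spectral density matrix D, and reference response CSD R, steepest descent on f(D) = ||H D H^H - R||_F^2 uses the gradient H^H (H D H^H - R) H taken with respect to conj(D). The step-size rule, matrix sizes, and the absence of any step-by-step Jacobian update below are simplifying assumptions, not the controller described in the paper.

    import numpy as np

    def update_drive_csd(H, D, R, n_iter=2000):
        """Wirtinger-gradient descent on f(D) = ||H D H^H - R||_F^2.

        H : (n_ctrl, n_drive) frequency response matrix at one frequency line
        D : initial drive cross-spectral density matrix (Hermitian)
        R : reference (target) response cross-spectral density matrix
        """
        mu = 0.5 / np.linalg.norm(H, 2) ** 4     # conservative step for this quadratic
        for _ in range(n_iter):
            E = H @ D @ H.conj().T - R           # response CSD error
            D = D - mu * H.conj().T @ E @ H      # gradient w.r.t. conj(D)
            D = 0.5 * (D + D.conj().T)           # keep D Hermitian
        return D

    # Minimal usage with an assumed 2x2 FRF at one frequency line.
    rng = np.random.default_rng(1)
    H = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    R = np.eye(2, dtype=complex)                 # target: unit, uncorrelated responses
    D = update_drive_csd(H, 0.1 * np.eye(2, dtype=complex), R)
    print(np.linalg.norm(H @ D @ H.conj().T - R))  # small residual after convergence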
ERIC Educational Resources Information Center
Lonchamp, F.
This is a presentation of the results of a factor analysis of a battery of tests intended to measure listening and reading comprehension in English as a second language. The analysis sought to answer the following questions: (1) whether the factor analysis method yields results when applied to tests which are not specifically designed for this…
Ahn, Eunjong; Kim, Hyunjun; Sim, Sung-Han; Shin, Sung Woo; Shin, Myoungsu
2017-01-01
Recently, self-healing technologies have emerged as a promising approach to extend the service life of social infrastructure in the field of concrete construction. However, current evaluations of the self-healing technologies developed for cementitious materials are mostly limited to lab-scale experiments to inspect changes in surface crack width (by optical microscopy) and permeability. Furthermore, there is a universal lack of unified test methods to assess the effectiveness of self-healing technologies. Particularly, with respect to the self-healing of concrete applied in actual construction, nondestructive test methods are required to avoid interrupting the use of the structures under evaluation. This paper presents a review of all existing research on the principles of ultrasonic test methods and case studies pertaining to self-healing concrete. The main objective of the study is to examine the applicability and limitation of various ultrasonic test methods in assessing the self-healing performance. Finally, future directions on the development of reliable assessment methods for self-healing cementitious materials are suggested. PMID:28772640
A brief introduction to the word associate test
Verplanck, William S.
1992-01-01
An examination format assessing the intraverbal repertoire of individuals in psychology is described and results using it reported. The Associate Test is easy to prepare, to take, and to grade. Its reliability measures are satisfactory; its ability to predict later behavior is reported upon. The Associate Test is computer friendly, and its methods can be applied for examination in any field, and at any level. PMID:22477050
Fayers, Peter M
2007-01-01
We review the papers presented at the NCI/DIA conference to identify areas of controversy and uncertainty, and to highlight those aspects of item response theory (IRT) and computer adaptive testing (CAT) that require theoretical or empirical research in order to justify their application to patient reported outcomes (PROs). IRT and CAT offer exciting potential for the development of a new generation of PRO instruments. However, most of the research into these techniques has been in non-healthcare settings, notably in education. Educational tests are very different from PRO instruments, and consequently problematic issues arise when adapting IRT and CAT to healthcare research. Clinical scales differ appreciably from educational tests, and symptoms have characteristics distinctly different from examination questions. This affects the transferring of IRT technology. Particular areas of concern when applying IRT to PROs include inadequate software, difficulties in selecting models and communicating results, insufficient testing of local independence and other assumptions, and a need for guidelines for estimating sample size requirements. Similar concerns apply to differential item functioning (DIF), which is an important application of IRT. Multidimensional IRT is likely to be advantageous only for closely related PRO dimensions. Although IRT and CAT provide appreciable potential benefits, there is a need for circumspection. Not all PRO scales are necessarily appropriate targets for this methodology. Traditional psychometric methods, and especially qualitative methods, continue to have an important role alongside IRT. Research should be funded to address the specific concerns that have been identified.
Thermal barrier coating life-prediction model development. Annual report no. 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strangman, T. E.; Neumann, J.; Liu, A.
1986-10-01
The program focuses on predicting the lives of two types of strain-tolerant and oxidation-resistant thermal barrier coating (TBC) systems that are produced by commercial coating suppliers to the gas turbine industry. The plasma-sprayed TBC system, composed of a low-pressure plasma-spray (LPPS) or argon shrouded plasma-spray (ASPS) applied oxidation-resistant NiCrAlY (or CoNiCrAlY) bond coating and an air-plasma-sprayed yttria partially stabilized zirconia insulative layer, is applied by Chromalloy, Klock, and Union Carbide. The second type of TBC is applied by the electron beam-physical vapor deposition (EB-PVD) process by Temescal. The second year of the program focused on specimen procurement, TBC system characterization, nondestructive evaluation methods, life prediction model development, and TFE731 engine testing of thermal barrier coated blades. Materials testing is approaching completion. Thermomechanical characterization of the TBC systems, including toughness and spalling strain tests, was completed. Thermochemical testing is approximately two-thirds complete. Preliminary materials life models for the bond coating oxidation and zirconia sintering failure modes were developed. Integration of these life models with airfoil component analysis methods is in progress. Testing of high pressure turbine blades coated with the program TBC systems is in progress in a TFE731 turbofan engine. Eddy current technology feasibility was established with respect to nondestructively measuring the zirconia layer thickness of a TBC system.
MUSIC electromagnetic imaging with enhanced resolution for small inclusions
NASA Astrophysics Data System (ADS)
Chen, Xudong; Zhong, Yu
2009-01-01
This paper investigates the influence of the test dipole on the resolution of the multiple signal classification (MUSIC) imaging method applied to the electromagnetic inverse scattering problem of determining the locations of a collection of small objects embedded in a known background medium. Based on the analysis of the induced electric dipoles in eigenstates, an algorithm is proposed to determine the test dipole that generates a pseudo-spectrum with enhanced resolution. The amplitudes in three directions of the optimal test dipole are not necessarily in phase, i.e., the optimal test dipole may not correspond to a physical direction in the real three-dimensional space. In addition, the proposed test-dipole-searching algorithm is able to deal with some special scenarios, due to the shapes and materials of objects, to which the standard MUSIC does not apply.
In Situ Elevated Temperature Testing of Fly Ash Based Geopolymer Composites
Vickers, Les; Pan, Zhu; Tao, Zhong; van Riessen, Arie
2016-01-01
In situ elevated temperature investigations using fly ash based geopolymers filled with alumina aggregate were undertaken. Compressive strength and short term creep tests were carried out to determine the onset temperature of viscous flow. Fire testing using the standard cellulose curve was performed. Applying a load to the specimen as the temperature increased reduced the temperature at which viscous flow occurred (compared to test methods with no applied stress). Compressive strength increased at the elevated temperature and is attributed to viscous flow and sintering forming a more compact microstructure. The addition of alumina aggregate and reduction of water content reduced the thermal conductivity. This led to the earlier onset and shorter dehydration plateau duration times. However, crack formation was reduced and is attributed to smaller thermal gradients across the fire test specimen. PMID:28773568
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
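A crude empirical-Bayes stand-in for the approach — model the observed statistics directly as a two-group mixture, convert them to posterior null probabilities, and reject the set whose average posterior null probability stays below the target Bayesian FDR — is sketched below. The Gaussian-mixture density estimate, the null-proportion heuristic, and the N(0,1) theoretical null are assumptions and not the paper's model.

    import numpy as np
    from scipy import stats
    from sklearn.mixture import GaussianMixture

    def bayesian_fdr_select(z, alpha=0.10):
        """Reject hypotheses while controlling a Bayesian FDR at level alpha.

        z : array of test statistics, assumed ~ N(0, 1) under the null.
        """
        zc = z.reshape(-1, 1)
        gm = GaussianMixture(n_components=3, random_state=0).fit(zc)
        f = np.exp(gm.score_samples(zc))              # overall (mixture) density f(z)
        f0 = stats.norm.pdf(z)                        # theoretical null density f0(z)
        pi0 = min(1.0, np.median(f0 / f))             # rough null-proportion estimate
        post_null = np.clip(pi0 * f0 / f, 0.0, 1.0)   # posterior probability of the null

        order = np.argsort(post_null)                 # most promising hypotheses first
        running_fdr = np.cumsum(post_null[order]) / np.arange(1, z.size + 1)
        ok = np.where(running_fdr <= alpha)[0]
        n_reject = ok[-1] + 1 if ok.size else 0
        reject = np.zeros(z.size, dtype=bool)
        reject[order[:n_reject]] = True
        return reject, post_null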
A Monte Carlo Approach for Adaptive Testing with Content Constraints
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander
2008-01-01
This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…
Ultrasonic Testing, Aviation Quality Control (Advanced): 9227.03.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL.
This unit of instruction covers the theory of ultrasonic sound, methods of applying soundwaves to test specimens and interpreting results, calibrating the ultrasonic equipment, and the use of standards. Study periods, group discussions, and extensive use of textbooks and training manuals are to be used. These are listed along with references and…
Investigation of the Behavior of Thin-Walled Panels with Cutouts
NASA Technical Reports Server (NTRS)
Podorozhny, A. A.
1946-01-01
The present paper deals with the computation and methods of reinforcement of stiffened panels with cutouts under bending loads such as are applied to the sides of a fuselage. A comparison is made between the computed and test results. Results are presented of tests on panels with cutouts under tensile and compressive loads.
Aeroservoelastic Modeling of Body Freedom Flutter for Control System Design
NASA Technical Reports Server (NTRS)
Ouellette, Jeffrey
2017-01-01
This method is being communicated by NASA in ongoing collaborations with groups interested in the X-56A flight test program. The work covers model generation for body freedom flutter, addressing issues in state consistency, low-frequency dynamics, and unsteady aerodynamics. The approach was applied to the X-56A MUTT and compared to flight test data.
Code of Federal Regulations, 2014 CFR
2014-01-01
... hardness or less) using 27.0 grams + 4.0 grams per pound of cloth load of AHAM Standard detergent Formula 3... repellent finishes, such as fluoropolymer stain resistant finishes shall not be applied to the test cloth...
Airflow Resistance of Loose-Fill Mineral Fiber Insulations in Retrofit Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumacher, C. J.; Fox, M. J.; Lstiburek, J.
2015-02-01
This report expands on Building America Report 1109 by applying the experimental apparatus and test method to dense-pack retrofit applications using mineral fiber insulation materials. Three fiber glass insulation materials and one stone wool insulation material were tested, and the results compared to the cellulose results from the previous study.
This paper demonstrates the usefulness of representing a chemical by its structural features and the use of these features to profile a battery of tests rather than relying on a single toxicity test of a given chemical. This paper presents data mining/profiling methods applied in...
SS-HORSE method for studying resonances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blokhintsev, L. D.; Mazur, A. I.; Mazur, I. A., E-mail: 008043@pnu.edu.ru
A new method for analyzing resonance states based on the Harmonic-Oscillator Representation of Scattering Equations (HORSE) formalism and analytic properties of partial-wave scattering amplitudes is proposed. The method is tested by applying it to the model problem of neutral-particle scattering and can be used to study resonance states on the basis of microscopic calculations performed within various versions of the shell model.
Efficient iterative methods applied to the solution of transonic flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wissink, A.M.; Lyrintzis, A.S.; Chronopoulos, A.T.
1996-02-01
We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian, with preconditioned conjugate gradient-like iterative solvers for the solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems. 38 refs., 14 figs., 7 tabs.
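A small self-contained sketch of the Newton-iterative idea (analytic Jacobian, inner Krylov solve with GMRES, ILU preconditioning) is given below, using SciPy on a 1-D Bratu problem rather than the transonic small disturbance equation; the problem choice, grid size, and tolerances are illustrative assumptions.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def newton_gmres_bratu(n=200, lam=1.0, tol=1e-10, max_newton=20):
        """Inexact Newton for u'' + lam*exp(u) = 0 on (0, 1) with u(0) = u(1) = 0.

        Each Newton step solves J du = -F(u) with GMRES, preconditioned by an
        ILU factorization of the analytic Jacobian.
        """
        h = 1.0 / (n + 1)
        u = np.zeros(n)
        L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc") / h**2

        def F(u):
            return L @ u + lam * np.exp(u)

        for _ in range(max_newton):
            Fu = F(u)
            if np.linalg.norm(Fu) < tol:
                break
            J = (L + sp.diags(lam * np.exp(u))).tocsc()        # analytic Jacobian
            ilu = spla.spilu(J)                                 # ILU preconditioner
            M = spla.LinearOperator((n, n), matvec=ilu.solve)
            du, info = spla.gmres(J, -Fu, M=M)                  # inner Krylov solve
            u = u + du                                          # full Newton step
        return u

    print(newton_gmres_bratu().max())   # peak of the lower Bratu solution branch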
Derayea, Sayed M; Badr El-Din, Khalid M; Mohammed, Fatma F
2017-12-01
A novel, sensitive and cost-effective spectrofluorimetric method has been developed and validated for the determination of lisinopril (an angiotensin converting enzyme inhibitor) in its pure form and pharmaceutical preparations. The method is based on the reaction of the drug with ninhydrin and phenylacetaldehyde in buffered medium (pH 7.0) to form a highly fluorescent product measured at 460 nm after excitation at 390 nm. Different experimental parameters were optimized and a calibration curve was constructed. The fluorescence-concentration relationship was linear in the range of 0.15-4.0 μg mL-1. The calculated limit of detection (LOD) and limit of quantitation (LOQ) were 0.04 and 0.12 μg mL-1, respectively. The method was successfully applied for the analysis of pharmaceutical preparations containing the studied drug either alone or co-formulated with hydrochlorothiazide. The obtained results were in agreement with those of the reported method with respect to accuracy and precision. Moreover, the method was applied to content uniformity testing according to United States Pharmacopeia (USP) guidelines. Copyright © 2017 John Wiley & Sons, Ltd.
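A minimal sketch of the calibration arithmetic implied above, assuming the common ICH formulas LOD = 3.3·σ/slope and LOQ = 10·σ/slope with σ taken as the residual standard deviation of the regression; the concentration-signal pairs are invented placeholders, not the paper's data.

    # Hedged sketch of a linear calibration with ICH-style LOD/LOQ estimates.
    import numpy as np

    conc = np.array([0.15, 0.5, 1.0, 2.0, 3.0, 4.0])            # micrograms per mL (placeholder)
    signal = np.array([12.0, 39.5, 80.1, 161.0, 240.2, 318.9])  # arbitrary fluorescence units

    slope, intercept = np.polyfit(conc, signal, 1)
    resid = signal - (slope * conc + intercept)
    sigma = resid.std(ddof=2)                 # residual standard deviation of the fit

    lod = 3.3 * sigma / slope                 # ICH Q2(R1)-style formulas
    loq = 10.0 * sigma / slope
    print(f"LOD = {lod:.3f} ug/mL, LOQ = {loq:.3f} ug/mL")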
Testing Method for External Cladding Systems - Incerc Romania
NASA Astrophysics Data System (ADS)
Simion, A.; Dragne, H.
2017-06-01
This research presents a new natural-scale testing method for external cladding systems, tested on buildings with a minimum of 3 floors [1]. The testing method is unique in Romania and is similar to many current fire testing methods used in European Union states. It also presents the fire propagation and the effect of fire smoke on a building façade composed of thermal insulation. The laboratory for testing and research in building fire safety at the National Institute INCERC Bucharest provides a test method for determining the fire performance characteristics of non-loadbearing external cladding systems and external wall insulation systems when applied to the face of a building and exposed to an external fire under controlled conditions [2]. The fire exposure is representative of an external fire source or a fully-developed (post-flashover) fire in a room, venting through an opening such as a window aperture that exposes the cladding to the effects of external flames, or an external fire source. In the future, fire tests will be carried out in response to a number of high-profile fires in which the external facades of tall buildings provided a route for vertical fire spread.
Formal methods for test case generation
NASA Technical Reports Server (NTRS)
Rushby, John (Inventor); De Moura, Leonardo Mendonga (Inventor); Hamon, Gregoire (Inventor)
2011-01-01
The invention relates to the use of model checkers to generate efficient test sets for hardware and software systems. The method provides for extending existing tests to reach new coverage targets; searching *to* some or all of the uncovered targets in parallel; searching in parallel *from* some or all of the states reached in previous tests; and slicing the model relative to the current set of coverage targets. The invention provides efficient test case generation and test set formation. Deep regions of the state space can be reached within allotted time and memory. The approach has been applied to use of the model checkers of SRI's SAL system and to model-based designs developed in Stateflow. Stateflow models achieving complete state and transition coverage in a single test case are reported.
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1993-01-01
A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
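A hedged sketch of one sensitivity-based updating step of the kind described above: parameter changes are recovered from modal residuals through a truncated-SVD pseudoinverse of the sensitivity matrix. The matrix and residual vector are random placeholders rather than quantities from the airframe model.

    # Hedged sketch of one sensitivity-based updating step: changes in physical
    # parameters dp are related to changes in modal residuals dr through a
    # sensitivity matrix S (dr ~= S dp), and dp is recovered with a
    # truncated-SVD pseudoinverse.  S and dr below are random placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    S = rng.normal(size=(12, 5))       # 12 modal residuals, 5 physical parameters
    dr = rng.normal(size=12)           # measured-minus-model residual vector

    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    keep = s > 1e-3 * s[0]             # drop ill-conditioned singular directions
    dp = Vt[keep].T @ ((U[:, keep].T @ dr) / s[keep])
    print("parameter update:", dp)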
NASA Technical Reports Server (NTRS)
Chen, T.; Raju, I. S.
2002-01-01
A coupled finite element (FE) method and meshless local Petrov-Galerkin (MLPG) method for analyzing two-dimensional potential problems is presented in this paper. The analysis domain is subdivided into two regions, a finite element (FE) region and a meshless (MM) region. A single weighted residual form is written for the entire domain. Independent trial and test functions are assumed in the FE and MM regions. A transition region is created between the two regions. The transition region blends the trial and test functions of the FE and MM regions. The trial function blending is achieved using a technique similar to the 'Coons patch' method that is widely used in computer-aided geometric design. The test function blending is achieved by using either FE or MM test functions on the nodes in the transition element. The technique was evaluated by applying the coupled method to two potential problems governed by the Poisson equation. The coupled method passed all the patch test problems and gave accurate solutions for the problems studied.
Horbowy, Jan; Tomczak, Maciej T
2017-01-01
Biomass reconstructions to pre-assessment periods for commercially important and exploitable fish species are important tools for understanding long-term processes and fluctuations at the stock and ecosystem level. For some stocks, only fisheries statistics and fishery-dependent data are available for periods before surveys were conducted. Methods for the backward extension of the analytical assessment of biomass to years for which only total catch volumes are available were developed and tested in this paper. Two of the approaches developed apply the concept of the surplus production rate (SPR), which is shown to be stock density dependent if stock dynamics are governed by classical stock-production models. The other approach used a modified form of the Schaefer production model that allows for backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. Next, the methods were applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (analytical biomass estimates available) to the 1950s, for which only total catch volumes were available. For comparison, a method that employs a constant SPR estimated as an average of the observed values was also applied. The analyses showed that the performance of the methods is stock and data specific; methods that work well for one stock may fail for others. The constant SPR method is not recommended in those cases when the SPR is relatively high and the catch volumes in the reconstructed period are low.
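A hedged sketch of what a backward biomass recursion can look like under a discrete Schaefer model; this is one possible reading of the idea, not the authors' modified formulation, and all parameter values are invented.

    # Hedged sketch: backward biomass reconstruction under a discrete Schaefer
    # model B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t].  One possible reading
    # of the approach; r, K, the catches and the terminal biomass are invented.
    import numpy as np

    def schaefer_backstep(B_next, C, r, K):
        # Solve (r/K)*B^2 - (1+r)*B + (B_next + C) = 0 for B[t]; keep the
        # smaller positive root (the branch below the production peak).
        a, b, c = r / K, -(1.0 + r), B_next + C
        disc = b * b - 4.0 * a * c
        if disc < 0:
            raise ValueError("no real solution: check r, K and the catch series")
        return (-b - np.sqrt(disc)) / (2.0 * a)

    r, K = 0.5, 1000.0
    catches = [120.0, 150.0, 90.0, 110.0]   # oldest ... most recent pre-assessment year
    B = [600.0]                             # analytical estimate in the first assessed year
    for C in reversed(catches):             # step back through the pre-assessment years
        B.insert(0, schaefer_backstep(B[0], C, r, K))
    print(np.round(B, 1))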
Estimating the proportion of true null hypotheses when the statistics are discrete.
Dialsingh, Isaac; Austin, Stefanie R; Altman, Naomi S
2015-07-15
In high-dimensional testing problems, π0, the proportion of null hypotheses that are true, is an important parameter. For discrete test statistics, the P values come from a discrete distribution with finite support, and the null distribution may depend on an ancillary statistic such as a table margin that varies among the test statistics. Methods for estimating π0 developed for continuous test statistics, which depend on a uniform or identical null distribution of P values, may not perform well when applied to discrete testing problems. This article introduces a number of π0 estimators, the regression and 'T' methods, that perform well with discrete test statistics, and also assesses how well methods developed for or adapted from continuous tests perform with discrete tests. We demonstrate the usefulness of these estimators in the analysis of high-throughput biological RNA-seq and single-nucleotide polymorphism data. The methods are implemented in R. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
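For context, a minimal sketch of the classical continuous-case estimator (Storey's λ method) that discrete-test estimators are compared against; with discrete P values this estimator tends to overestimate π0, which is the issue the regression and 'T' methods address. The paper's own estimators are not reproduced here.

    # Hedged sketch of the classical (continuous-case) pi0 estimate:
    # pi0(lambda) = #{p > lambda} / ((1 - lambda) * m).
    import numpy as np

    def pi0_storey(pvals, lam=0.5):
        pvals = np.asarray(pvals)
        return min(1.0, np.sum(pvals > lam) / ((1.0 - lam) * pvals.size))

    rng = np.random.default_rng(1)
    # Toy mixture: 80% null (uniform) and 20% non-null (small) p-values.
    p = np.concatenate([rng.uniform(size=800), rng.beta(0.2, 5.0, size=200)])
    print("estimated pi0:", round(pi0_storey(p), 3))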
NASA Astrophysics Data System (ADS)
Kaftan, Ilknur; Sindirgi, Petek
2013-04-01
Self-potential (SP) is one of the oldest geophysical methods and provides important information about near-surface structures. Several methods have been developed to interpret SP data using simple geometries. This study investigated the inverse solution of a buried, polarized sphere-shaped self-potential (SP) anomaly via Multilayer Perceptron Neural Networks (MLPNN). The polarization angle (α) and depth to the centre of the sphere (h) were estimated. The MLPNN was applied to synthetic and field SP data. In order to assess the capability of the method in detecting the number of sources, MLPNN was applied to different spherical models at different depths and locations. Additionally, the performance of MLPNN was tested by adding random noise to the same synthetic test data. The sphere model parameters were successfully recovered under different S/N ratios. Then, the MLPNN method was applied to two field examples. The first is a cross section taken from the SP anomaly map of the Ergani-Süleymanköy (Turkey) copper mine. MLPNN was also applied to SP data from the Seferihisar Izmir (Western Turkey) geothermal field. The MLPNN results showed good agreement with the original synthetic data set. The technique gave satisfactory results following the addition of 5% and 10% Gaussian noise. The MLPNN results were compared to other SP interpretation techniques, such as Normalized Full Gradient (NFG), inverse solution and nomogram methods. All of the techniques showed strong similarity. Consequently, the synthetic and field applications of this study show that MLPNN provides reliable evaluation of self-potential data modelled by the sphere model.
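A hedged sketch of the workflow described above, assuming a commonly used closed form for the SP anomaly of a buried polarized sphere (conventions vary between authors) and an arbitrary scikit-learn MLP configuration; none of the settings are taken from the paper.

    # Hedged sketch: train an MLP to recover (alpha, h) from synthetic SP
    # profiles of a buried polarized sphere.  The forward formula below is a
    # commonly used closed form (conventions differ between authors); network
    # size, sampling and noise level are arbitrary choices, not the paper's.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    x = np.linspace(-100.0, 100.0, 81)            # profile positions (m)

    def sp_sphere(alpha, h, K=-1000.0, x0=0.0):
        return K * ((x - x0) * np.cos(alpha) + h * np.sin(alpha)) / \
               ((x - x0) ** 2 + h ** 2) ** 1.5

    rng = np.random.default_rng(0)
    alphas = rng.uniform(0.1, np.pi / 2, 2000)    # polarization angle (rad)
    depths = rng.uniform(5.0, 40.0, 2000)         # depth to sphere centre (m)
    X = np.array([sp_sphere(a, h) for a, h in zip(alphas, depths)])
    X += rng.normal(scale=0.05 * np.abs(X).max(), size=X.shape)   # ~5% noise
    y = np.column_stack([alphas, depths])

    net = MLPRegressor(hidden_layer_sizes=(40, 20), max_iter=2000, random_state=0)
    net.fit(X, y)
    print(net.predict(sp_sphere(np.deg2rad(45), 20.0).reshape(1, -1)))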
HYPOTHESIS SETTING AND ORDER STATISTIC FOR ROBUST GENOMIC META-ANALYSIS.
Song, Chi; Tseng, George C
2014-01-01
Meta-analysis techniques have been widely developed and applied in genomic applications, especially for combining multiple transcriptomic studies. In this paper, we propose an order statistic of p-values (the rth ordered p-value, rOP) across combined studies as the test statistic. We illustrate different hypothesis settings that detect gene markers differentially expressed (DE) "in all studies", "in the majority of studies", or "in one or more studies", and specify rOP as a suitable method for detecting DE genes "in the majority of studies". We develop methods to estimate the parameter r in rOP for real applications. Statistical properties such as its asymptotic behavior and a one-sided testing correction for detecting markers of concordant expression changes are explored. Power calculation and simulation show better performance of rOP compared to classical Fisher's method, Stouffer's method, the minimum p-value method and the maximum p-value method under the focused hypothesis setting. Theoretically, rOP is found to be connected to the naïve vote counting method and can be viewed as a generalized form of vote counting with better statistical properties. The method is applied to three microarray meta-analysis examples including major depressive disorder, brain cancer and diabetes. The results demonstrate rOP as a more generalizable, robust and sensitive statistical framework to detect disease-related markers.
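A minimal sketch of the rOP statistic for a single gene: under the null, the rth order statistic of k independent uniform p-values follows Beta(r, k-r+1), so its CDF value serves as a combined p-value; Fisher's method is shown for comparison. The choice r = ceil(k/2) is an illustrative "majority of studies" setting, and the p-values are invented.

    # Hedged sketch of the r-th ordered p-value (rOP) statistic for one gene
    # across k studies, with Fisher's method for comparison.
    import numpy as np
    from scipy import stats

    p = np.array([0.001, 0.03, 0.20, 0.004, 0.08])   # one gene, k = 5 studies (placeholder)
    k = p.size
    r = int(np.ceil(k / 2))                          # "majority of studies" setting

    p_sorted = np.sort(p)
    # Under the null, the r-th order statistic of k uniform p-values ~ Beta(r, k - r + 1).
    p_rop = stats.beta.cdf(p_sorted[r - 1], r, k - r + 1)
    _, p_fisher = stats.combine_pvalues(p, method='fisher')
    print(f"rOP p = {p_rop:.4f}, Fisher p = {p_fisher:.4f}")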
NASA Astrophysics Data System (ADS)
Bruton, Jared T.; Nelson, Todd G.; Zimmerman, Trent K.; Fernelius, Janette D.; Magleby, Spencer P.; Howell, Larry L.
2016-09-01
Packing soft-sheet materials of approximately zero bending stiffness using Soft Origami (origami patterns applied to soft-sheet materials) into cylindrical volumes and their deployment via mechanisms or internal pressure (inflation) is of interest in fields including automobile airbags, deployable heart stents, inflatable space habitats, and dirigible and parachute packing. This paper explores twofold patterns, the `flasher' and the `inverted-cone fold', for packing soft-sheet materials into cylindrical volumes. Two initial packing methods and mechanisms are examined for each of the flasher and inverted-cone fold patterns. An application to driver's side automobile airbags is performed, and deployment tests are completed to compare the influence of packing method and origami pattern on deployment performance. Following deployment tests, two additional packing methods for the inverted-cone fold pattern are explored and applied to automobile airbags. It is shown that modifying the packing method (using different methods to impose the same base pattern on the soft-sheet material) can lead to different deployment performance. In total, two origami patterns and six packing methods are examined, and the benefits of using Soft Origami patterns and packing methods are discussed. Soft Origami is presented as a viable method for efficiently packing soft-sheet materials into cylindrical volumes.
Determination of orthotropic material properties by modal analysis
NASA Astrophysics Data System (ADS)
Lai, Junpeng
The methodology for determination of orthotropic material properties in the plane stress condition is presented. It is applied to orthotropic laminated plates such as printed wiring boards. The first part of the thesis focuses on theories and methodologies. The static beam model and vibratory plate model are presented. The methods are validated by running a series of tests on aluminum. In the static tests, deflection and two directions of strain are measured, from which four of the properties are identified: Ex, Ey, nuxy, nuyx. Moving on to the dynamic test, the first ten modes' resonance frequencies are obtained. The technique of modal analysis is adopted. The measured data are processed by FFT and analyzed by curve fitting to extract natural frequencies and mode shapes. To determine the last material property, a finite element method using ANSYS is applied. Along with the material properties identified in the static tests and a proper initial guess of the unknown shear modulus, an iterative process creates a finite element model and conducts modal analysis with the updated model. When the modal analysis result produced by ANSYS matches the natural frequencies acquired in the dynamic test, the process halts, and the last material property in the plane stress condition is obtained.
A New Test Method of Circuit Breaker Spring Telescopic Characteristics Based Image Processing
NASA Astrophysics Data System (ADS)
Huang, Huimin; Wang, Feifeng; Lu, Yufeng; Xia, Xiaofei; Su, Yi
2018-06-01
This paper applies computer vision technology to the fatigue condition monitoring of springs and proposes a new telescopic characteristics test method for circuit breaker operating mechanism springs based on image processing technology. A high-speed camera is used to capture image sequences of spring movement when the high-voltage circuit breaker operates. An image-matching method is then used to obtain the deformation-time curve and speed-time curve, and the spring expansion and deformation parameters are extracted from them, laying a foundation for subsequent spring force analysis and matching state evaluation. Simulation tests performed at the experimental site show that this image analysis method can avoid the complex installation problems of traditional mechanical sensors and support online monitoring and status assessment of the circuit breaker spring.
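A hedged sketch of the image-matching step using OpenCV normalized cross-correlation template matching; the frames below are synthetic arrays standing in for the high-speed video, and the tracked patch is an assumed marker on the spring.

    # Hedged sketch of the image-matching step: track a template (a patch cut
    # from the first frame around a marked point on the spring) through later
    # frames with normalized cross-correlation, giving displacement vs. frame.
    import numpy as np
    import cv2

    rng = np.random.default_rng(0)
    frame0 = rng.integers(0, 255, (480, 640), dtype=np.uint8)   # synthetic frame
    template = frame0[200:240, 300:340].copy()                  # 40x40 patch on the "spring"

    def locate(frame, template):
        res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)                   # (x, y) of best match
        return max_loc

    # Simulate a later frame in which the patch has moved down by 12 pixels.
    frame1 = np.roll(frame0, 12, axis=0)
    (x0, y0), (x1, y1) = locate(frame0, template), locate(frame1, template)
    print("vertical displacement (px):", y1 - y0)               # expected: 12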
Modeling and Simulation of Nanoindentation
NASA Astrophysics Data System (ADS)
Huang, Sixie; Zhou, Caizhi
2017-11-01
Nanoindentation is a hardness test method applied to small volumes of material which can provide some unique effects and spark many related research activities. To fully understand the phenomena observed during nanoindentation tests, modeling and simulation methods have been developed to predict the mechanical response of materials during nanoindentation. However, challenges remain with those computational approaches, because of their length scale, predictive capability, and accuracy. This article reviews recent progress and challenges for modeling and simulation of nanoindentation, including an overview of molecular dynamics, the quasicontinuum method, discrete dislocation dynamics, and the crystal plasticity finite element method, and discusses how to integrate multiscale modeling approaches seamlessly with experimental studies to understand the length-scale effects and microstructure evolution during nanoindentation tests, creating a unique opportunity to establish new calibration procedures for the nanoindentation technique.
HARADA, Kazuki; USUI, Masaru; ASAI, Tetsuo
2014-01-01
In this study, susceptibilities of Pasteurella multocida, Mannheimia haemolytica and Actinobacillus pleuropneumoniae to enrofloxacin and orbifloxacin were tested using an agar diffusion method with the commercial disks and a broth microdilution method. Good correlation between the 2 methods for enrofloxacin and orbifloxacin was observed for P. multocida (r = −0.743 and −0.818, respectively), M. haemolytica (r = −0.739 and −0.800, respectively) and A. pleuropneumoniae (r = −0.785 and −0.809, respectively). Based on the Clinical and Laboratory Standards Institute interpretive criteria for enrofloxacin, high-level categorical agreement between the 2 methods was found for P. multocida (97.9%), M. haemolytica (93.8%) and A. pleuropneumoniae (92.0%). Our findings indicate that the tested commercial disks can be applied for susceptibility testing of veterinary respiratory pathogens. PMID:25008965
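A minimal sketch of the two comparisons reported above, Pearson correlation between zone diameter and log2(MIC) and percent categorical agreement; the data and the S/I/R breakpoints are placeholders, not CLSI values.

    # Hedged sketch: correlation between disk zone diameters and log2(MIC),
    # plus a simple categorical-agreement calculation with placeholder breakpoints.
    import numpy as np
    from scipy import stats

    zone_mm = np.array([30, 27, 25, 22, 18, 15, 12, 10])             # disk diffusion
    mic = np.array([0.03, 0.06, 0.12, 0.25, 1.0, 2.0, 8.0, 16.0])    # ug/mL

    r, _ = stats.pearsonr(zone_mm, np.log2(mic))
    print("Pearson r:", round(r, 3))                                 # expected to be negative

    def category(zone, mic_val, zone_bp=(21, 17), mic_bp=(0.5, 2.0)):
        s_zone = "S" if zone >= zone_bp[0] else ("R" if zone <= zone_bp[1] else "I")
        s_mic = "S" if mic_val <= mic_bp[0] else ("R" if mic_val >= mic_bp[1] else "I")
        return s_zone, s_mic

    agree = [a == b for a, b in (category(z, m) for z, m in zip(zone_mm, mic))]
    print("categorical agreement: %.1f%%" % (100 * np.mean(agree)))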
The Development of High Order Methods for Real World Applications
2015-12-03
...the current method has been applied to aerodynamic problems. Numerical tests show that significant savings in the number of DOFs can be achieved through... For the current element Vi, the normal flux Fn(Qi) at the interface is Fn(Qi) = F(Qi) · n, with F the flux vector and n the unit normal. In order to eliminate the test function, the boundary integral...
A Study of Applying Pulsed Remote Field Eddy Current in Ferromagnetic Pipes Testing
Luo, Qingwang; Shi, Yibing; Wang, Zhigang; Zhang, Wei; Li, Yanjun
2017-01-01
Pulsed Remote Field Eddy Current Testing (PRFECT) has attracted attention in the testing of ferromagnetic pipes because of its continuous spectrum. This paper simulated practical PRFECT of pipes using ANSYS software and employed Least Squares Support Vector Regression (LSSVR) to extract the zero-crossing time used to analyze the pipe thickness. A secondary peak is found in the zero-crossing time when the transmitter passes a defect. The secondary peak leads to incorrect quantification and localization of defects, especially when defects are found only at the transmitter location. To eliminate the secondary peaks, double sensing coils are set in the transition zone and a Wiener deconvolution filter is applied. In the proposed method, the position-dependent response of the differential signals from the double sensing coils is calibrated by employing zero-mean normalization. The methods proposed in this paper are validated by analyzing the simulation signals and can improve the practicality of PRFECT of ferromagnetic pipes. PMID:28475141
Katunin, Andrzej
2017-12-28
Traditional techniques of active thermography require an external source of energy for excitation, usually in the form of high-power lamps or ultrasonic devices. In this paper, the author presents an alternative approach based on the self-heating effect observable in polymer-based structures during cyclic loading. The presented approach is based on, firstly, determination of the bending resonance frequencies of a tested structure, and then on excitation of the structure with a multi-harmonic signal constructed from harmonics with the frequencies of the determined resonances. Following this, heating-up of the tested structure occurs in the locations of stress concentration and mechanical energy dissipation due to the viscoelastic response of the structure. By applying a multi-harmonic signal, one ensures coverage of the structure by such heated regions. The concept is verified experimentally on artificially damaged composite specimens. The results demonstrate the presented approach and indicate its potential, especially when traditional methods of excitation with an external source for thermographic inspection cannot be applied.
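A minimal sketch of the excitation construction described above: a signal formed by summing sine components at previously identified bending resonance frequencies. The frequencies, duration and sampling rate are placeholders.

    # Hedged sketch of the excitation construction: sum equal-amplitude sine
    # components at previously identified bending resonance frequencies.
    import numpy as np

    fs = 20_000.0                              # sampling rate (Hz), placeholder
    t = np.arange(0.0, 2.0, 1.0 / fs)          # 2 s excitation
    resonances = [38.0, 112.0, 221.0, 365.0]   # Hz, assumed values from a prior modal sweep

    excitation = sum(np.sin(2 * np.pi * f * t) for f in resonances)
    excitation /= np.abs(excitation).max()     # normalize for the shaker/amplifier
    np.savetxt("multiharmonic_excitation.csv", np.column_stack([t, excitation]),
               delimiter=",", header="t_s,amplitude", comments="")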
A study and evaluation of image analysis techniques applied to remotely sensed data
NASA Technical Reports Server (NTRS)
Atkinson, R. J.; Dasarathy, B. V.; Lybanon, M.; Ramapriyan, H. K.
1976-01-01
An analysis of phenomena causing nonlinearities in the transformation from Landsat multispectral scanner coordinates to ground coordinates is presented. Experimental results comparing rms errors at ground control points indicated a slight improvement when a nonlinear (8-parameter) transformation was used instead of an affine (6-parameter) transformation. Using a preliminary ground truth map of a test site in Alabama covering the Mobile Bay area and six Landsat images of the same scene, several classification methods were assessed. A methodology was developed for automatic change detection using classification/cluster maps. A coding scheme was employed for generation of change depiction maps indicating specific types of changes. Inter- and intraseasonal data of the Mobile Bay test area were compared to illustrate the method. A beginning was made in the study of data compression by applying a Karhunen-Loeve transform technique to a small section of the test data set. The second part of the report provides a formal documentation of the several programs developed for the analysis and assessments presented.
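A hedged sketch of the transformation comparison mentioned above: a 6-parameter affine and an 8-parameter projective mapping are fitted to ground control points by linear least squares and their rms residuals compared. The control points are synthetic, not Landsat data.

    # Hedged sketch: fit 6-parameter affine and 8-parameter projective
    # transforms to control points and compare rms residuals.
    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.uniform(0, 3000, (20, 2))                  # scanner (line, sample) coordinates
    true_H = np.array([[0.9, 0.05, 10.0], [-0.04, 1.1, -25.0], [1e-6, -2e-6, 1.0]])
    hom = np.hstack([img, np.ones((20, 1))]) @ true_H.T
    gnd = hom[:, :2] / hom[:, 2:]                        # synthetic ground coordinates

    def rms(pred):
        return np.sqrt(np.mean(np.sum((pred - gnd) ** 2, axis=1)))

    # Affine (6 parameters): [X, Y] = [x, y, 1] @ A
    A6, *_ = np.linalg.lstsq(np.hstack([img, np.ones((20, 1))]), gnd, rcond=None)
    print("affine rms:", rms(np.hstack([img, np.ones((20, 1))]) @ A6))

    # Projective (8 parameters): X = (a x + b y + c) / (g x + h y + 1), similarly for Y.
    x, y = img[:, 0], img[:, 1]
    X, Y = gnd[:, 0], gnd[:, 1]
    M = np.zeros((40, 8)); b = np.concatenate([X, Y])
    M[:20, 0:3] = np.column_stack([x, y, np.ones(20)])
    M[:20, 6:8] = np.column_stack([-x * X, -y * X])
    M[20:, 3:6] = np.column_stack([x, y, np.ones(20)])
    M[20:, 6:8] = np.column_stack([-x * Y, -y * Y])
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    den = p[6] * x + p[7] * y + 1.0
    pred = np.column_stack([(p[0]*x + p[1]*y + p[2]) / den, (p[3]*x + p[4]*y + p[5]) / den])
    print("projective rms:", rms(pred))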
NASA Astrophysics Data System (ADS)
Mueanploy, Wannapa
2015-06-01
The objective of this research was to offer a way to improve engineering students' learning of the physics topic of the vector product. The sample consisted of engineering students at Pathumwan Institute of Technology during the first semester of the 2013 academic year. 1) 120 students selected by random sampling were asked to fill in a satisfaction questionnaire in order to choose the size of the three-dimensional vector card to be applied in the classroom. 2) 60 students selected by random sampling took the achievement test to be used in the classroom. The achievement test was analyzed with the Kuder-Richardson method (KR-20). The results show that 12 items of the achievement test are appropriate for classroom use, with difficulty (P) = 0.40-0.67, discrimination = 0.33-0.73 and reliability (r) = 0.70. The experiment was then conducted in the classroom. 3) 60 students selected by random sampling were divided into two groups: group one (the control group), with 30 students, studied the vector product lesson with the regular teaching method, and group two (the experimental group), with 30 students, learned the vector product lesson with the three-dimensional vector card. 4) Comparison of the control and experimental groups showed that the experimental group scored significantly higher on the achievement test than the control group at the .01 level.
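A hedged sketch of the item statistics named above (difficulty, discrimination and KR-20 reliability) computed from a 0/1 response matrix; discrimination is taken here as the corrected item-total correlation, which may differ from the index used in the study, and the responses are random placeholders, so the printed values are not meaningful.

    # Hedged sketch of classical item analysis from a 0/1 response matrix
    # (rows = students, columns = items).  Responses are random placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    R = (rng.random((60, 12)) < 0.55).astype(float)   # 60 students, 12 items

    p = R.mean(axis=0)                                # item difficulty P
    q = 1.0 - p
    total = R.sum(axis=1)

    k = R.shape[1]
    kr20 = (k / (k - 1)) * (1.0 - (p * q).sum() / total.var(ddof=1))   # KR-20 reliability

    # Discrimination as corrected item-total correlation (one common choice).
    disc = np.array([np.corrcoef(R[:, j], total - R[:, j])[0, 1] for j in range(k)])
    print("difficulty:", np.round(p, 2))
    print("discrimination:", np.round(disc, 2))
    print("KR-20 reliability:", round(kr20, 2))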
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glass, Samuel W.; Jones, Anthony M.; Fifield, Leonard S.
This Pacific Northwest National Laboratory milestone report describes progress to date on the investigation of nondestructive test methods, focusing particularly on bulk electrical test methods that provide key indicators of cable aging and damage. The work includes a review of relevant literature as well as hands-on experimental verification of inspection capabilities. As nuclear power plants consider applying for second, or subsequent, license renewal to extend their operating period from 60 years to 80 years, it is important to understand how the materials installed in plant systems and components will age during that time and to develop aging management programs to assure continued safe operation under normal and design basis events (DBE). Normal component and system tests typically confirm that the cables can perform their normal operational function. The focus of the cable test program, however, is directed toward the more demanding challenge of assuring the cable function under accident or DBE conditions. The industry has adopted 50% elongation at break (EAB), relative to the un-aged cable condition, as the acceptability standard. All tests are benchmarked against the cable EAB test. EAB, however, is a destructive test, so test programs must apply an array of other nondestructive examination (NDE) tests to assure or infer the integrity of the overall cable system. Assessment of cable integrity is further complicated in many cases by vendors' use of dissimilar materials for jacket and insulation. Frequently the jacket will degrade more rapidly than the underlying insulation. Although this can serve as an early alert to cable damage, direct testing of the cable insulation without violating the protective jacket becomes problematic. This report addresses the range of bulk electrical NDE cable tests that are or could be practically implemented in a field-test situation, with a particular focus on frequency domain reflectometry (FDR). The FDR test method offers numerous advantages over many other bulk electrical tests. Two commercial FDR systems plus a laboratory vector network analyzer are used to test an array of aged and un-aged cables under identical conditions. Several conclusions are set forth, and a number of knowledge gaps are identified.
García-Sesnich, Jocelyn N; Flores, Mauricio Garrido; Ríos, Marcela Hernández; Aravena, Jorge Gamonal
2017-01-01
Context: Stress is defined as an alteration of an organism's balance in response to a demand perceived from the environment. Diverse methods exist to evaluate the physiological response. A noninvasive method is the salivary measurement of cortisol and alpha-amylase. A growing body of evidence suggests that the regular practice of Yoga would be an effective treatment for stress. Aims: To determine the effect of Kundalini Yoga (KY), immediately and after 3 months of regular practice, on the perception of psychological stress and on the salivary levels of cortisol and alpha-amylase activity. Settings and Design: To determine the psychologically perceived stress and the levels of cortisol and alpha-amylase activity in saliva, and to compare participants attending KY classes for 3 months with a group that does not practice any type of yoga. Subjects and Methods: The total sample consisted of 26 people between 18 and 45 years old; 13 taking part in KY classes given at the Faculty of Dentistry, University of Chile, and 13 controls. Salivary samples were collected, an enzyme-linked immunosorbent assay was performed to quantify cortisol, and a kinetic reaction test was performed to determine alpha-amylase activity. The Perceived Stress Scale was applied at the beginning and at the end of the intervention. Statistical Analysis Used: Statistical analysis was performed using Stata v11.1 software. The Shapiro–Wilk test was used to determine the data distribution. The paired analysis was carried out by t-test or Wilcoxon signed-rank test. T-test or Mann–Whitney's test was applied to compare longitudinal data. Statistical significance was considered when P < 0.05. Results: KY practice had an immediate effect on salivary cortisol. The activity of alpha-amylase did not show significant changes. A significant decrease of perceived stress in the study group was found. Conclusions: KY practice shows an immediate effect on salivary cortisol levels and on perceived stress after 3 months of practice. PMID:28546677
NASA Astrophysics Data System (ADS)
Bellver, Fernando Gimeno; Garratón, Manuel Caravaca; Soto Meca, Antonio; López, Juan Antonio Vera; Guirao, Juan L. G.; Fernández-Martínez, Manuel
In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. Such a numerical approach establishes a formal equivalence between physical transport processes and electrical networks, and hence it can be applied to efficiently deal with a wide range of differential systems. The generality underlying that electrical equivalence allows the circuit theory to be applied to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes and the calculations have been carried out in PSpice, an electrical circuit software package. Overall, this numerical approach makes it possible to solve Josephson differential models quickly. An empirical application regarding the study of the Josephson model completes the paper.
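As a hedged illustration, the sketch below integrates a dimensionless RCSJ junction model with a general-purpose ODE solver, rather than the Network Simulation Method/PSpice route used in the paper, and inspects the spectrum of the voltage proxy with an FFT; all parameter values are arbitrary.

    # Hedged sketch: integrate a dimensionless RCSJ junction model
    #   beta_c * phi'' + phi' + sin(phi) = i_dc + i_ac * sin(Omega * tau)
    # with a general-purpose ODE solver and look at the spectrum of the
    # voltage proxy phi'.  Parameter values are illustrative only.
    import numpy as np
    from scipy.integrate import solve_ivp

    beta_c, i_dc, i_ac, Omega = 0.7, 0.3, 0.8, 0.6

    def rhs(tau, y):
        phi, v = y                     # v = dphi/dtau (normalized voltage)
        return [v, (i_dc + i_ac * np.sin(Omega * tau) - v - np.sin(phi)) / beta_c]

    tau = np.linspace(0.0, 500.0, 50_001)
    sol = solve_ivp(rhs, (tau[0], tau[-1]), [0.0, 0.0], t_eval=tau, rtol=1e-8, atol=1e-8)

    v = sol.y[1][25_000:]              # discard the transient half
    spec = np.abs(np.fft.rfft(v - v.mean()))
    freq = 2 * np.pi * np.fft.rfftfreq(v.size, d=tau[1] - tau[0])   # rad per unit tau
    print("dominant response frequency (rad/tau):", round(freq[spec.argmax()], 3))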
Verweij, Jaco J; Stensvold, C Rune
2014-04-01
Over the past few decades, nucleic acid-based methods have been developed for the diagnosis of intestinal parasitic infections. Advantages of nucleic acid-based methods are numerous; typically, these include increased sensitivity and specificity and simpler standardization of diagnostic procedures. DNA samples can also be stored and used for genetic characterization and molecular typing, providing a valuable tool for surveys and surveillance studies. A variety of technologies have been applied, and some specific and general pitfalls and limitations have been identified. This review provides an overview of the multitude of methods that have been reported for the detection of intestinal parasites and offers some guidance in applying these methods in the clinical laboratory and in epidemiological studies.
Ultraviolet resonance Raman spectroscopy for the detection of cocaine in oral fluid
NASA Astrophysics Data System (ADS)
D'Elia, Valentina; Montalvo, Gemma; Ruiz, Carmen García; Ermolenkov, Vladimir V.; Ahmed, Yasmine; Lednev, Igor K.
2018-01-01
Detecting and quantifying cocaine in oral fluid is of significant importance for practical forensics. To date, mainly destructive methods or biochemical tests have been used, while spectroscopic methods were only applied to pretreated samples. In this work, the possibility of using resonance Raman spectroscopy to detect cocaine in oral fluid without pretreating samples was tested. It was found that ultraviolet resonance Raman spectroscopy with 239-nm excitation allows for the detection of cocaine in oral fluid at the 10 μg/mL level. Further method development will be needed to reach practically useful levels of cocaine detection.
Viscoelastic material inversion using Sierra-SD and ROL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, Timothy; Aquino, Wilkins; Ridzal, Denis
2014-11-01
In this report we derive frequency-domain methods for inverse characterization of the constitutive parameters of viscoelastic materials. The inverse problem is cast in a PDE-constrained optimization framework with efficient computation of gradients and Hessian vector products through matrix free operations. The abstract optimization operators for first and second derivatives are derived from first principles. Various methods from the Rapid Optimization Library (ROL) are tested on the viscoelastic inversion problem. The methods described herein are applied to compute the viscoelastic bulk and shear moduli of a foam block model, which was recently used in experimental testing for viscoelastic property characterization.
Non-Invasive Seismic Methods for Earthquake Site Classification Applied to Ontario Bridge Sites
NASA Astrophysics Data System (ADS)
Bilson Darko, A.; Molnar, S.; Sadrekarimi, A.
2017-12-01
How a site responds to earthquake shaking, and the corresponding damage, is largely influenced by the underlying ground conditions through which the seismic waves propagate. The effects of site conditions on propagating seismic waves can be predicted from measurements of the shear wave velocity (Vs) of the soil layer(s) and the impedance ratio between bedrock and soil. Currently, the seismic design of new buildings and bridges (2015 Canadian building and bridge codes) requires determination of the time-averaged shear-wave velocity of the upper 30 metres (Vs30) of a given site. In this study, two in situ Vs profiling methods, the Multichannel Analysis of Surface Waves (MASW) and Ambient Vibration Array (AVA) methods, are used to determine Vs30 at chosen bridge sites in Ontario, Canada. Both active-source (MASW) and passive-source (AVA) surface wave methods are used at each bridge site to obtain Rayleigh-wave phase velocities over a wide frequency bandwidth. The dispersion curve is jointly inverted with each site's amplification function (microtremor horizontal-to-vertical spectral ratio) to obtain shear-wave velocity profile(s). We apply our non-invasive testing at three major infrastructure projects, e.g., five bridge sites along the Rt. Hon. Herb Gray Parkway in Windsor, Ontario. Our non-invasive testing is co-located with previous invasive testing, including Standard Penetration Test (SPT), Cone Penetration Test and downhole Vs data. Correlations between SPT blowcount and Vs are developed for the different soil types sampled at our Ontario bridge sites. A robust earthquake site classification procedure (reliable Vs30 estimates) for bridge sites across Ontario is evaluated from available combinations of invasive and non-invasive site characterization methods.
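A minimal sketch of the code-required quantity itself: the travel-time-averaged Vs30 = 30 / sum(h_i/Vs_i), evaluated for a made-up layered profile such as one inverted from MASW/AVA dispersion data.

    # Minimal sketch: travel-time-averaged shear-wave velocity of the upper 30 m.
    def vs30(thicknesses_m, velocities_mps):
        depth, travel_time = 0.0, 0.0
        for h, vs in zip(thicknesses_m, velocities_mps):
            h_used = min(h, 30.0 - depth)          # only count the upper 30 m
            travel_time += h_used / vs
            depth += h_used
            if depth >= 30.0:
                break
        return 30.0 / travel_time

    profile_h = [3.0, 7.0, 12.0, 20.0]             # layer thicknesses (m), placeholder profile
    profile_vs = [150.0, 240.0, 380.0, 600.0]      # shear-wave velocities (m/s)
    print("Vs30 = %.0f m/s" % vs30(profile_h, profile_vs))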
Anaerobic Threshold by Mathematical Model in Healthy and Post-Myocardial Infarction Men.
Novais, L D; Silva, E; Simões, R P; Sakabe, D I; Martins, L E B; Oliveira, L; Diniz, C A R; Gallo, L; Catai, A M
2016-02-01
The aim of this study was to determine the anaerobic threshold (AT) in a population of healthy and post-myocardial infarction men by applying Hinkley's mathematical method and comparing its performance to the ventilatory visual method. This mathematical model, in lieu of observer-dependent visual determination, can produce more reliable results due to the uniformity of the procedure. 17 middle-aged men (55±3 years) were studied in 2 groups: 9 healthy men (54±2 years); and 8 men with previous myocardial infarction (57±3 years). All subjects underwent an incremental ramp exercise test until physical exhaustion. Breath-by-breath ventilatory variables, heart rate (HR), and vastus lateralis surface electromyography (sEMG) signal were collected throughout the test. Carbon dioxide output (V˙CO2), HR, and sEMG were studied, and the AT determination methods were compared using correlation coefficients and Bland-Altman plots. Parametric statistical tests were applied with significance level set at 5%. No significant differences were found in the HR, sEMG, and ventilatory variables at AT between the different methods, such as the intensity of effort relative to AT. Moreover, important concordance and significant correlations were observed between the methods. We concluded that the mathematical model was suitable for detecting the AT in both healthy and myocardial infarction subjects. © Georg Thieme Verlag KG Stuttgart · New York.
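A hedged sketch of change-point detection in this spirit: two straight lines are fitted on either side of each candidate breakpoint of a VCO2-like response and the breakpoint with minimum total squared error is kept. This is a simple stand-in rather than Hinkley's exact formulation, and the data are synthetic with a true break at 300 s.

    # Hedged sketch: two-segment piecewise-linear change-point detection on a
    # synthetic VCO2-like response (true break at t = 300 s).
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(0, 600, 2.0)                               # seconds
    vco2 = np.where(t < 300, 0.8 + 0.001 * t, 1.1 + 0.003 * (t - 300))
    vco2 += rng.normal(scale=0.03, size=t.size)

    def sse(x, y):
        if x.size < 3:
            return np.inf                                    # avoid degenerate fits
        coef = np.polyfit(x, y, 1)
        return np.sum((y - np.polyval(coef, x)) ** 2)

    scores = [sse(t[:i], vco2[:i]) + sse(t[i:], vco2[i:]) for i in range(len(t))]
    at_index = int(np.argmin(scores))
    print("estimated anaerobic threshold at t =", t[at_index], "s")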
Improvement for enhancing effectiveness of universal power system (UPS) continuous testing process
NASA Astrophysics Data System (ADS)
Sriratana, Lerdlekha
2018-01-01
This experiment aims to enhance the effectiveness of the Universal Power System (UPS) continuous testing process of the Electrical and Electronic Institute by applying work scheduling and time study methods. Initially, the standard time of the testing process had not been considered, which resulted in inaccurate testing targets, and time wasting was observed. To monitor and reduce wasted time and improve the efficiency of the testing process, a Yamazumi chart and job scheduling theory (the North West Corner Rule) were applied to develop a new work process. After the improvements, the overall efficiency of the process could increase from 52.8% to 65.6%, or 12.7%. Moreover, the wasted time could be reduced from 828.3 minutes to 653.6 minutes, or 21%, while the number of testing units per batch could increase from 3 to 4 units. Therefore, the number of testing units would increase from 12 units up to 20 units per month, which would also contribute to a 72% increase in the net income of the UPS testing process.
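A hedged sketch of the North West Corner Rule used to build an initial feasible allocation in a transportation-type scheduling problem; the supplies and demands are placeholders, not the institute's actual capacities or job volumes.

    # Hedged sketch of the North West Corner Rule for an initial feasible
    # allocation.  Supplies (e.g., tester capacity) and demands (testing jobs)
    # are placeholders.
    def north_west_corner(supply, demand):
        supply, demand = supply[:], demand[:]
        alloc = [[0] * len(demand) for _ in supply]
        i = j = 0
        while i < len(supply) and j < len(demand):
            qty = min(supply[i], demand[j])       # allocate as much as possible
            alloc[i][j] = qty
            supply[i] -= qty
            demand[j] -= qty
            if supply[i] == 0:
                i += 1                            # move down when a row is exhausted
            else:
                j += 1                            # move right when a column is exhausted
        return alloc

    for row in north_west_corner([40, 30, 30], [20, 25, 35, 20]):
        print(row)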
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; El-Abasawi, Nasr M.; El-Olemy, Ahmed; Serag, Ahmed
2018-02-01
Five simple spectrophotometric methods were developed for the determination of simeprevir in the presence of its oxidative degradation product, namely ratio difference, mean centering, derivative ratio using the Savitzky-Golay filters, second derivative and continuous wavelet transform. These methods are linear in the range of 2.5-40 μg/mL and were validated according to the ICH guidelines. The obtained results of accuracy, repeatability and precision were found to be within the acceptable limits. The specificity of the proposed methods was tested using laboratory-prepared mixtures and assessed by applying the standard addition technique. Furthermore, these methods were statistically comparable to an RP-HPLC method and good results were obtained. Therefore, they can be used for the routine analysis of simeprevir in quality-control laboratories.
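A minimal sketch of one of the signal manipulations named above, a Savitzky-Golay second derivative applied to a synthetic two-band spectrum standing in for simeprevir plus its degradation product; the band positions and widths are invented.

    # Hedged sketch: Savitzky-Golay second-derivative spectrum of a synthetic
    # two-band mixture.  The other methods listed above follow the same
    # "transform, then read amplitude at a selected wavelength" pattern.
    import numpy as np
    from scipy.signal import savgol_filter

    wl = np.arange(220.0, 400.0, 1.0)                          # wavelength (nm)
    band = lambda centre, width: np.exp(-0.5 * ((wl - centre) / width) ** 2)
    mixture = 0.8 * band(290, 18) + 0.5 * band(330, 22)        # "drug" + "degradant"

    d2 = savgol_filter(mixture, window_length=15, polyorder=3, deriv=2)
    print("2nd-derivative minimum at %.0f nm, amplitude %.4f"
          % (wl[np.argmin(d2)], d2.min()))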
Effectiveness of Cool Roof Coatings with Ceramic Particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brehob, Ellen G; Desjarlais, Andre Omer; Atchley, Jerald Allen
2011-01-01
Liquid applied coatings promoted as cool roof coatings, including several with ceramic particles, were tested at Oak Ridge National Laboratory (ORNL), Oak Ridge, Tenn., for the purpose of quantifying their thermal performances. Solar reflectance measurements were made for new samples and aged samples using a portable reflectometer (ASTM C1549, Standard Test Method for Determination of Solar Reflectance Near Ambient Temperature Using a Portable Solar Reflectometer) and for new samples using the integrating spheres method (ASTM E903, Standard Test Method for Solar Absorptance, Reflectance, and Transmittance of Materials Using Integrating Spheres). Thermal emittance was measured for the new samples using a portable emissometer (ASTM C1371, Standard Test Method for Determination of Emittance of Materials Near Room Temperature Using Portable Emissometers). Thermal conductivity of the coatings was measured using a FOX 304 heat flow meter (ASTM C518, Standard Test Method for Steady-State Thermal Transmission Properties by Means of the Heat Flow Meter Apparatus). The surface properties of the cool roof coatings had higher solar reflectance than the reference black and white material, but there were no significant differences among coatings with and without ceramics. The coatings were applied to EPDM (ethylene propylene diene monomer) membranes and installed on the Roof Thermal Research Apparatus (RTRA), an instrumented facility at ORNL for testing roofs. Roof temperatures and heat flux through the roof were obtained for a year of exposure in east Tennessee. The field tests showed significant reduction in cooling required compared with the black reference roof (~80 percent) and a modest reduction in cooling compared with the white reference roof (~33 percent). The coating material with the highest solar reflectivity (no ceramic particles) demonstrated the best overall thermal performance (combination of reducing the cooling load cost and not incurring a large heating penalty cost) and suggests solar reflectivity is the significant characteristic for selecting cool roof coatings.
[Inappropriate test methods in allergy].
Kleine-Tebbe, J; Herold, D A
2010-11-01
Inappropriate test methods are increasingly utilized to diagnose allergy. They fall into two categories: I. Tests with obscure theoretical basis, missing validity and lacking reproducibility, such as bioresonance, electroacupuncture, applied kinesiology and the ALCAT-test. These methods lack both the technical and clinical validation needed to justify their use. II. Tests with real data, but misleading interpretation: Detection of IgG or IgG4-antibodies or lymphocyte proliferation tests to foods do not allow to separate healthy from diseased subjects, neither in case of food intolerance, allergy or other diagnoses. The absence of diagnostic specificity induces many false positive findings in healthy subjects. As a result unjustified diets might limit quality of life and lead to malnutrition. Proliferation of lymphocytes in response to foods can show elevated rates in patients with allergies. These values do not allow individual diagnosis of hypersensitivity due to their broad variation. Successful internet marketing, infiltration of academic programs and superficial reporting by the media promote the popularity of unqualified diagnostic tests; also in allergy. Therefore, critical observation and quick analysis of and clear comments to unqualified methods by the scientific medical societies are more important than ever.
Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O
2004-07-30
The two-test two-population model, originally formulated by Hui and Walter, for estimation of test accuracy and prevalence assumes conditionally independent tests, constant accuracy across populations and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g., a child-care centre, a village in Africa, or a cattle herd) are sampled, or if the sample size is large relative to the population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and prevalence estimation based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real and one simulated data sets, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.
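A minimal sketch of how sensitivity and specificity combine for the joint strategies discussed above, under the conditional-independence assumption; the individual test accuracies are illustrative.

    # Hedged sketch: joint-testing accuracy under conditional independence.
    # "Parallel" calls a subject positive if either test is positive;
    # "serial" requires both tests to be positive.
    def parallel(se1, sp1, se2, sp2):
        se = 1 - (1 - se1) * (1 - se2)     # misses only if both tests miss
        sp = sp1 * sp2                     # negative only if both tests are negative
        return se, sp

    def serial(se1, sp1, se2, sp2):
        se = se1 * se2                     # positive only if both tests detect
        sp = 1 - (1 - sp1) * (1 - sp2)     # false positive needs both tests to err
        return se, sp

    se1, sp1, se2, sp2 = 0.90, 0.95, 0.85, 0.98   # illustrative accuracies
    print("parallel (Se, Sp):", parallel(se1, sp1, se2, sp2))
    print("serial   (Se, Sp):", serial(se1, sp1, se2, sp2))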
Does Choice of Multicriteria Method Matter? An Experiment in Water Resources Planning
NASA Astrophysics Data System (ADS)
Hobbs, Benjamin F.; Chankong, Vira; Hamadeh, Wael; Stakhiv, Eugene Z.
1992-07-01
Many multiple criteria decision making methods have been proposed and applied to water planning. Their purpose is to provide information on tradeoffs among objectives and to help users articulate value judgments in a systematic, coherent, and documentable manner. The wide variety of available techniques confuses potential users, causing inappropriate matching of methods with problems. Experiments in which water planners apply more than one multicriteria procedure to realistic problems can help dispel this confusion by testing method appropriateness, ease of use, and validity. We summarize one such experiment where U.S. Army Corps of Engineers personnel used several methods to screen urban water supply plans. The methods evaluated include goal programming, ELECTRE I, additive value functions, multiplicative utility functions, and three techniques for choosing weights (direct rating, indifference tradeoff, and the analytical hierarchy process). Among the conclusions we reach are the following. First, experienced planners generally prefer simpler, more transparent methods. Additive value functions are favored. Yet none of the methods are endorsed by a majority of the participants; many preferred to use no formal method at all. Second, there is strong evidence that rating, the most commonly applied weight selection method, is likely to lead to weights that fail to represent the trade-offs that users are willing to make among criteria. Finally, we show that decisions can be as or more sensitive to the method used as to which person applies it. Therefore, if who chooses is important, then so too is how a choice is made.
Problem Solving & Comprehension. Fourth Edition.
ERIC Educational Resources Information Center
Whimbey, Arthur; Lochhead, Jack
This book shows how to increase one's power to analyze and comprehend problems. First, it outlines and illustrates the methods that good problem solvers use in attacking complex ideas. Then it gives some practice in applying these methods to a variety of questions in comprehension and reasoning. Chapters include: (1) "Test Your Mind--See How…
21 CFR 118.8 - Testing methodology for Salmonella Enteritidis (SE).
Code of Federal Regulations, 2010 CFR
2010-04-01
... Salmonella Web site is located at http://www.fda.gov/Food/ScienceResearch/LaboratoryMethods/ucm114716.htm... you may examine a copy at the Center for Food Safety and Applied Nutrition's Library, 5100 Paint... Edition, is located at http://www.fda.gov/Food/ScienceResearch/LaboratoryMethods/BacteriologicalAnalytical...
40 CFR 1065.275 - N2O measurement devices.
Code of Federal Regulations, 2010 CFR
2010-07-01
... for interpretation of infrared spectra. For example, EPA Test Method 320 is considered a valid method... and length to achieve adequate resolution of the N2O peak for analysis. Examples of acceptable columns....550(b) that would otherwise apply. For example, you may perform a span gas measurement before and...
40 CFR 1065.275 - N2O measurement devices.
Code of Federal Regulations, 2011 CFR
2011-07-01
... for interpretation of infrared spectra. For example, EPA Test Method 320 is considered a valid method... and length to achieve adequate resolution of the N2O peak for analysis. Examples of acceptable columns....550(b) that would otherwise apply. For example, you may perform a span gas measurement before and...
Discussion and Outline of a Course on Methods of Teaching a Foreign Language.
ERIC Educational Resources Information Center
Heiser, Mary
This course, designed for instructing potential teaching assistants to teach college students a foreign language, concentrates on six major areas of preparation. A detailed outline covers: (1) course introduction and definitions, (2) applied linguistics, (3) approaches and methods, (4) testing, (5) classroom techniques, and (6) demonstrations.…
Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating
ERIC Educational Resources Information Center
Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen
2012-01-01
This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…
Vectorized multigrid Poisson solver for the CDC CYBER 205
NASA Technical Reports Server (NTRS)
Barkai, D.; Brandt, M. A.
1984-01-01
The full multigrid (FMG) method is applied to the two-dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.
Signal processing methods for in-situ creep specimen monitoring
NASA Astrophysics Data System (ADS)
Guers, Manton J.; Tittmann, Bernhard R.
2018-04-01
Previous work investigated using guided waves for monitoring creep deformation during accelerated life testing. The basic objective was to relate observed changes in the time-of-flight to changes in the environmental temperature and specimen gage length. The work presented in this paper investigated several signal processing strategies for possible application in the in-situ monitoring system. Signal processing methods for both group velocity (wave-packet envelope) and phase velocity (peak tracking) time-of-flight were considered. Although the Analytic Envelope found via the Hilbert transform is commonly applied for group velocity measurements, erratic behavior in the indicated time-of-flight was observed when this technique was applied to the in-situ data. The peak tracking strategies tested had generally linear trends, and tracking local minima in the raw waveform ultimately showed the most consistent results.
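The two time-of-flight strategies mentioned here can be illustrated in a few lines: the group-velocity approach takes the peak of the analytic envelope obtained via the Hilbert transform, while the peak-tracking approach follows a chosen local extremum of the raw waveform from one acquisition to the next. The sketch below is a generic illustration assuming an equally sampled waveform; it is not the authors' processing chain.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_tof(waveform, fs):
    """Group-velocity style time-of-flight: peak of the analytic envelope."""
    env = np.abs(hilbert(waveform))
    return np.argmax(env) / fs

def peak_tracking_tof(waveform, fs, window):
    """Phase-velocity style time-of-flight: location of the local minimum of
    the raw waveform inside a user-chosen (start, stop) window in samples."""
    start, stop = window
    idx = start + np.argmin(waveform[start:stop])
    return idx / fs
```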
NASA Astrophysics Data System (ADS)
Shirzaei, M.; Walter, T. R.
2009-10-01
Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.
Threshold Assessment of Gear Diagnostic Tools on Flight and Test Rig Data
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Mosher, Marianne; Huff, Edward M.
2003-01-01
A method for defining thresholds for vibration-based algorithms that provides the minimum number of false alarms while maintaining sensitivity to gear damage was developed. This analysis focused on two vibration based gear damage detection algorithms, FM4 and MSA. This method was developed using vibration data collected during surface fatigue tests performed in a spur gearbox rig. The thresholds were defined based on damage progression during tests with damage. The thresholds false alarm rates were then evaluated on spur gear tests without damage. Next, the same thresholds were applied to flight data from an OH-58 helicopter transmission. Results showed that thresholds defined in test rigs can be used to define thresholds in flight to correctly classify the transmission operation as normal.
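For context, FM4 is commonly defined as the normalized kurtosis of the difference signal (the vibration signal with the regular gear-mesh components removed); a value near 3 indicates a roughly Gaussian, healthy difference signal. The sketch below computes that standard definition and is not taken from the report itself.

```python
import numpy as np

def fm4(difference_signal):
    """FM4 = N * sum((d - mean)^4) / (sum((d - mean)^2))^2 (kurtosis-like)."""
    d = np.asarray(difference_signal, dtype=float)
    d0 = d - d.mean()
    return d.size * np.sum(d0**4) / np.sum(d0**2)**2
```

A damage threshold, as studied in the report, is then a cut-off on this scalar chosen to balance false alarms against sensitivity.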
NASA Astrophysics Data System (ADS)
Guo, Jun; Lu, Siliang; Zhai, Chao; He, Qingbo
2018-02-01
An automatic bearing fault diagnosis method is proposed for permanent magnet synchronous generators (PMSGs), which are widely installed in wind turbines subjected to low rotating speeds, speed fluctuations, and electrical device noise interferences. The mechanical rotating angle curve is first extracted from the phase current of a PMSG by sequentially applying a series of algorithms. The synchronous sampled vibration signal of the fault bearing is then resampled in the angular domain according to the obtained rotating phase information. Considering that the resampled vibration signal is still overwhelmed by heavy background noise, an adaptive stochastic resonance filter is applied to the resampled signal to enhance the fault indicator and facilitate bearing fault identification. Two types of fault bearings with different fault sizes in a PMSG test rig are subjected to experiments to test the effectiveness of the proposed method. The proposed method is fully automated and thus shows potential for convenient, highly efficient and in situ bearing fault diagnosis for wind turbines subjected to harsh environments.
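The angular-resampling step described here can be sketched simply: given the rotating phase (shaft angle) recovered from the stator current at each time sample, the time-sampled vibration is interpolated onto a uniform grid of angle so that the analysis becomes independent of speed fluctuations. This is a generic sketch under those assumptions, not the authors' full pipeline (which also includes the adaptive stochastic resonance filtering).

```python
import numpy as np

def angular_resample(vibration, angle, samples_per_rev=256):
    """Resample a time-domain vibration signal onto a uniform angle grid.
    vibration : vibration samples (same length as angle)
    angle     : monotonically increasing shaft angle in radians at each sample
    """
    n_rev = (angle[-1] - angle[0]) / (2 * np.pi)
    uniform_angle = np.linspace(angle[0], angle[-1],
                                int(n_rev * samples_per_rev))
    return uniform_angle, np.interp(uniform_angle, angle, vibration)
```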
The fundamental parameter method applied to X-ray fluorescence analysis with synchrotron radiation
NASA Astrophysics Data System (ADS)
Pantenburg, F. J.; Beier, T.; Hennrich, F.; Mommsen, H.
1992-05-01
Quantitative X-ray fluorescence analysis applying the fundamental parameter method is usually restricted to monochromatic excitation sources. It is shown here, that such analyses can be performed as well with a white synchrotron radiation spectrum. To determine absolute elemental concentration values it is necessary to know the spectral distribution of this spectrum. A newly designed and tested experimental setup, which uses the synchrotron radiation emitted from electrons in a bending magnet of ELSA (electron stretcher accelerator of the university of Bonn) is presented. The determination of the exciting spectrum, described by the given electron beam parameters, is limited due to uncertainties in the vertical electron beam size and divergence. We describe a method which allows us to determine the relative and absolute spectral distributions needed for accurate analysis. First test measurements of different alloys and standards of known composition demonstrate that it is possible to determine exact concentration values in bulk and trace element analysis.
Leontaridou, Maria; Gabbert, Silke; Van Ierland, Ekko C; Worth, Andrew P; Landsiedel, Robert
2016-07-01
This paper offers a Bayesian Value-of-Information (VOI) analysis for guiding the development of non-animal testing strategies, balancing information gains from testing with the expected social gains and costs from the adoption of regulatory decisions. Testing is assumed to have value, if, and only if, the information revealed from testing triggers a welfare-improving decision on the use (or non-use) of a substance. As an illustration, our VOI model is applied to a set of five individual non-animal prediction methods used for skin sensitisation hazard assessment, seven battery combinations of these methods, and 236 sequential 2-test and 3-test strategies. Their expected values are quantified and compared to the expected value of the local lymph node assay (LLNA) as the animal method. We find that battery and sequential combinations of non-animal prediction methods reveal a significantly higher expected value than the LLNA. This holds for the entire range of prior beliefs. Furthermore, our results illustrate that the testing strategy with the highest expected value does not necessarily have to follow the order of key events in the sensitisation adverse outcome pathway (AOP). 2016 FRAME.
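The decision logic behind a value-of-information calculation of this kind can be sketched for a single binary decision: testing pays off only if the expected loss after observing the test result, plus the test cost, is lower than the expected loss of deciding on the prior alone. The numbers, loss structure, and test characteristics below are hypothetical placeholders and do not reproduce the paper's model.

```python
def expected_value_of_testing(prior, sensitivity, specificity,
                              loss_false_negative, loss_false_positive,
                              test_cost):
    """Expected gain from running one imperfect binary test before deciding
    whether to restrict a substance (a generic VOI sketch)."""
    # Best decision without testing: restrict or not, whichever loses less.
    loss_no_test = min(prior * loss_false_negative,
                       (1 - prior) * loss_false_positive)

    p_pos = prior * sensitivity + (1 - prior) * (1 - specificity)
    p_neg = 1 - p_pos
    post_pos = prior * sensitivity / p_pos
    post_neg = prior * (1 - sensitivity) / p_neg

    # Expected loss when the decision is taken after seeing the test result.
    loss_with_test = (
        p_pos * min(post_pos * loss_false_negative,
                    (1 - post_pos) * loss_false_positive)
        + p_neg * min(post_neg * loss_false_negative,
                      (1 - post_neg) * loss_false_positive))
    return loss_no_test - (loss_with_test + test_cost)

# Hypothetical example: a positive value means testing is worthwhile.
print(expected_value_of_testing(prior=0.3, sensitivity=0.85, specificity=0.8,
                                loss_false_negative=100.0,
                                loss_false_positive=20.0, test_cost=2.0))
```

Battery and sequential strategies extend the same bookkeeping over the joint outcomes of several tests, which is where the expected-value differences reported in the paper arise.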
Ingersoll, C.G.; Ankley, G.T.; Benoit, D.A.; Brunson, E.L.; Burton, G.A.; Dwyer, F.J.; Hoke, R.A.; Landrum, P.F.; Norberg-King, T. J.; Winger, P.V.
1995-01-01
This paper reviews recent developments in methods for evaluating the toxicity and bioaccumulation of contaminants associated with freshwater sediments and summarizes example case studies demonstrating the application of these methods. Over the past decade, research has emphasized development of more specific testing procedures for conducting 10-d toxicity tests with the amphipod Hyalella azteca and the midge Chironomus tentans. Toxicity endpoints measured in these tests are survival for H. azteca and survival and growth for C. tentans. Guidance has also been developed for conducting 28-d bioaccumulation tests with the oligochaete Lumbriculus variegatus, including determination of bioaccumulation kinetics for different compound classes. These methods have been applied to a variety of sediments to address issues ranging from site assessments to bioavailability of organic and inorganic contaminants using field-collected and laboratory-spiked samples. Survival and growth of controls routinely meet or exceed test acceptability criteria. Results of laboratory bioaccumulation studies with L. variegatus have been confirmed with comparisons to residues (PCBs, PAHs, DDT) present from synoptically collected field populations of oligochaetes. Additional method development is currently underway to develop chronic toxicity tests and to provide additional data-confirming responses observed in laboratory sediment tests with natural benthic populations.
Sauer, G
1976-04-01
Mandibular movements of test persons with eugnathic and dysgnathic dentitions were compared with each other by kinematographic tests and evaluation of single photographs. No essential differences in coarse movements are discernible. Neither the form nor the extent of the masticatory movements is significantly influenced by any dysgnathia or by the type of food chewed. The testing method applied does not furnish any information on movements in the occlusion phase.
The use of fractional order derivatives for eddy current non-destructive testing
NASA Astrophysics Data System (ADS)
Sikora, Ryszard; Grzywacz, Bogdan; Chady, Tomasz
2018-04-01
The paper presents the possibility of using the fractional derivatives for non-destructive testing when a multi-frequency method based on eddy current is applied. It is shown that frequency characteristics obtained during tests can be approximated by characteristics of a proposed model in the form of fractional order transfer function, and values of parameters of this model can be utilized for detection and identification of defects.
Multisignal detecting system of pile integrity testing
NASA Astrophysics Data System (ADS)
Liu, Zuting; Luo, Ying; Yu, Shihai
2002-05-01
The low strain reflection wave method plays a principal role in the integrity testing of base piles. However, the method has some deficiencies. For example, there is a blind detection zone at the top of the tested pile; it is difficult to recognize defects in deep-seated parts of the pile; and planar or three-dimensional effects remain, etc. These problems are very difficult to solve with a single-transducer pile integrity testing system. A new multi-signal pile integrity testing system is proposed in this paper, which is able to apply impulses and collect signals at multiple points on top of the pile. By using a multiple-superposition data processing method, the detecting system can effectively suppress interference and improve the precision and SNR of pile integrity testing. The system can also be applied to the evaluation of engineering structural health.
Methodology to improve design of accelerated life tests in civil engineering projects.
Lin, Jing; Yuan, Yongbo; Zhou, Jilai; Gao, Jie
2014-01-01
For reliability testing, an Energy Expansion Tree (EET) and a companion Energy Function Model (EFM) are proposed and described in this paper. Different from conventional approaches, the EET provides a more comprehensive and objective way to systematically identify external energy factors affecting reliability. The EFM introduces energy loss into a traditional Function Model to identify internal energy sources affecting reliability. The combination creates a sound way to enumerate the energies to which a system may be exposed during its lifetime. These energies are then used to plan an accelerated life test, a Multi Environment Over Stress Test. The test objective is to discover weak links and interactions among the system and the energies to which it is exposed, and to design them out. As an example, the methods are applied to a pipe in a subsea pipeline. However, they can be widely used in other civil engineering industries as well. The proposed method is compared with current methods.
Hyper-X Mach 7 Scramjet Design, Ground Test and Flight Results
NASA Technical Reports Server (NTRS)
Ferlemann, Shelly M.; McClinton, Charles R.; Rock, Ken E.; Voland, Randy T.
2005-01-01
The successful Mach 7 flight test of the Hyper-X (X-43) research vehicle has provided the major, essential demonstration of the capability of the airframe integrated scramjet engine. This flight was a crucial first step toward realizing the potential for airbreathing hypersonic propulsion for application to space launch vehicles. However, it is not sufficient to have just achieved a successful flight. The more useful knowledge gained from the flight is how well the prediction methods matched the actual test results in order to have confidence that these methods can be applied to the design of other scramjet engines and powered vehicles. The propulsion predictions for the Mach 7 flight test were calculated using the computer code, SRGULL, with input from computational fluid dynamics (CFD) and wind tunnel tests. This paper will discuss the evolution of the Mach 7 Hyper-X engine, ground wind tunnel experiments, propulsion prediction methodology, flight results and validation of design methods.
Shugars, D A; Trent, P J; Heymann, H O
1979-08-01
Two instructional strategies, the traditional lecture method and a standardized self-instructional (ACORDE) format, were compared for efficiency and perceived usefulness in a preclinical restorative dentistry technique course through the use of a posttest-only control group research design. Control and experimental groups were compared on (a) technique grades, (b) didactic grades, (c) amount of time spent, (d) student and faculty perceptions, and (e) observation of social dynamics. The results of this study demonstrated the effectiveness of Project ACORDE materials in teaching dental students, provided an example of applied research designed to test contemplated instructional innovations prior to use and used a method which highlighted qualitative, as well as quantitative, techniques for data gathering in applied research.
Paibon, W; Yimnoi, C-A; Tembab, N; Boonlue, W; Jampachaisri, K; Nuengchamnong, N; Waranuch, N; Ingkaninan, K
2011-04-01
Several tropical flowers have distinctive fragrances which are very appealing to use in perfumery, cosmetics and spa. However, to obtain a 'natural fragrance' from the flower is a challenge as the scent could change during the extraction process. The aim of the study is to find the suitable procedure for extraction of volatile oils from some Thai fragrant flowers. Three different methods: hydrodistillation, solvent extraction and enfleurage methods have been applied for the extraction of volatile oil from Jasminum sambac L. Aiton; Oleaceae (jasmine). The quantities and quality of jasmine volatile oils obtained from the different tested methods were compared. The solvent extraction method using 95% ethanol provided the greatest level of oil yield. However, sensory evaluation using preference test showed that the scents of the volatile oils from solvent extraction using diethyl ether and from enfleurage method were the closest to the fresh flowers compared with the volatile oils obtained from other methods. Their chemical constituents were analysed using gas chromatography coupled with mass spectrometer. Both volatile oils were then evaluated using a triangle discrimination test. From the triangle test, we found that 14 panellists from the total of 36 could not distinguish between the scents of jasmine oil from enfleurage and fresh jasmine flowers whereas only one panellist could not distinguish between the scent of jasmine oil from the solvent extraction and fresh jasmine flowers. These results suggest that the scent of the volatile oil obtained from the enfleurage method was the closest to fresh flowers compared with that obtained from other methods. This method was then successfully applied for extraction of volatile oils from three other Thai fragrant flowers, Michelia alba DC.; Magnoliaceae, Millingtonia hortensis L.; Bignoniaceae and Hedychium coronarium J. Konig; Zingiberaceae. © 2010 The Authors. Journal compilation © 2010 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Rock-Magnetic Method for Post Nuclear Detonation Diagnostics
NASA Astrophysics Data System (ADS)
Englert, J.; Petrosky, J.; Bailey, W.; Watts, D. R.; Tauxe, L.; Heger, A. S.
2011-12-01
A magnetic signature characteristic of a Nuclear Electromagnetic Pulse (NEMP) may still be detectable near the sites of atmospheric nuclear tests conducted at what is now the Nevada National Security Site. This signature is due to a secondary magnetization component of the natural remanent magnetization of material containing traces of ferromagnetic particles that have been exposed to a strong pulse of magnetic field. We apply a rock-magnetic method introduced by Verrier et al. (2002), and tested on samples exposed to artificial lightning, to samples of rock and building materials (e.g. bricks, concrete) retrieved from several above ground nuclear test sites. The results of magnetization measurements are compared to NEMP simulations and historic test measurements.
Dimech, Wayne; Karakaltsas, Marina; Vincini, Giuseppe A
2018-05-25
A general trend towards conducting infectious disease serology testing in centralized laboratories means that quality control (QC) principles used for clinical chemistry testing are applied to infectious disease testing. However, no systematic assessment of the methods used to establish QC limits has been applied to infectious disease serology testing. A total of 103 QC data sets, obtained from six different infectious disease serology analytes, were parsed through standard methods for establishing statistical control limits, including guidelines from Public Health England, the USA Clinical and Laboratory Standards Institute (CLSI), the German Richtlinien der Bundesärztekammer (RiliBÄK) and Australian QConnect. The percentage of QC results failing each method was compared. The number of data sets having more than 20% of QC results failing Westgard rules when the first 20 results were used to calculate the mean±2 standard deviation (SD) ranged from 3 (2.9%) for R4S to 66 (64.1%) for the 10X rule, whereas it ranged from 0 (0%) for R4S to 32 (40.5%) for 10X when the first 100 results were used to calculate the mean±2 SD. By contrast, the number of data sets with >20% failing the RiliBÄK control limits was 25 (24.3%). Only two data sets (1.9%) had more than 20% of results outside the QConnect Limits. QConnect Limits were therefore more suitable for monitoring infectious disease serology testing than the UK Public Health, CLSI and RiliBÄK approaches, as these alternatives to QConnect Limits reported an unacceptably high percentage of failures across the 103 data sets.
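As a concrete illustration of the kind of control-limit calculation being compared, the sketch below establishes mean ± 2 SD limits from the first 20 QC results and then flags later results using two of the Westgard-style rules mentioned in the abstract (a single result outside ± 2 SD, and ten consecutive results on the same side of the mean). It is a generic sketch of those rules, not the exact procedure of any of the cited guidelines.

```python
import numpy as np

def qc_flags(results, n_baseline=20):
    """Flag QC results against mean +/- 2 SD limits from a baseline period."""
    baseline = np.asarray(results[:n_baseline], dtype=float)
    mean, sd = baseline.mean(), baseline.std(ddof=1)
    flags = []
    for i, x in enumerate(results):
        rule_1_2s = abs(x - mean) > 2 * sd
        # 10x rule: this result and the nine before it all on one side of the mean.
        window = results[max(0, i - 9):i + 1]
        rule_10x = (len(window) == 10 and
                    (all(v > mean for v in window) or all(v < mean for v in window)))
        if rule_1_2s or rule_10x:
            flags.append((i, x, "1_2s" if rule_1_2s else "10x"))
    return mean, sd, flags
```

The comparison in the abstract is essentially about how often such rules flag routinely acceptable serology QC data when the limits are derived in different ways.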
Code of Federal Regulations, 2014 CFR
2014-07-01
...) National Emission Standards for Hazardous Air Pollutants for Coke Ovens: Pushing, Quenching, and Battery... applied to the coke (e.g., from the header that feeds water to the quench tower reservoirs). Conduct... sample of the quench water as applied to the coke (e.g., from the header that feeds water to the quench...
Code of Federal Regulations, 2012 CFR
2012-07-01
...) National Emission Standards for Hazardous Air Pollutants for Coke Ovens: Pushing, Quenching, and Battery... applied to the coke (e.g., from the header that feeds water to the quench tower reservoirs). Conduct... sample of the quench water as applied to the coke (e.g., from the header that feeds water to the quench...
Code of Federal Regulations, 2013 CFR
2013-07-01
...) National Emission Standards for Hazardous Air Pollutants for Coke Ovens: Pushing, Quenching, and Battery... applied to the coke (e.g., from the header that feeds water to the quench tower reservoirs). Conduct... sample of the quench water as applied to the coke (e.g., from the header that feeds water to the quench...
Code of Federal Regulations, 2010 CFR
2010-07-01
...) National Emission Standards for Hazardous Air Pollutants for Coke Ovens: Pushing, Quenching, and Battery... applied to the coke (e.g., from the header that feeds water to the quench tower reservoirs). Conduct... sample of the quench water as applied to the coke (e.g., from the header that feeds water to the quench...
Julee A Herdt; John Hunt; Kellen Schauermann
2016-01-01
This project demonstrates newly invented, biobased construction materials developed by applying low-carbon, biomass waste sources through the Authors' engineered fiber processes and technology. If manufactured and applied at large scale, the project inventions can divert large volumes of cellulose waste into high-performance, low embodied energy, environmental construction...
Mirmohseni, A; Abdollahi, H; Rostamizadeh, K
2007-02-28
A net analyte signal (NAS)-based method called HLA/GO was applied for the selective determination of a binary mixture of ethanol and water using a quartz crystal nanobalance (QCN) sensor. A full factorial design was applied for the formation of calibration and prediction sets in the concentration ranges 5.5-22.2 microg mL(-1) for ethanol and 7.01-28.07 microg mL(-1) for water. An optimal time range was selected by a procedure based on the calculation of the net analyte signal regression plot in each considered time window for each test sample. A moving window strategy was used to search for the region with maximum linearity of the NAS regression plot (minimum error indicator) and minimum PRESS value. On the basis of the obtained results, the differences in the adsorption profiles in the time range between 1 and 600 s were used to determine mixtures of both compounds by the HLA/GO method. The calculation of the net analyte signal using the HLA/GO method allows determination of several figures of merit, such as selectivity, sensitivity, analytical sensitivity and limit of detection, for each component. To check the ability of the proposed method to select linear regions of the adsorption profile, a test for detecting non-linear regions of adsorption profile data in the presence of methanol was also described. The results showed that the method was successfully applied for the determination of ethanol and water.
A nonparametric smoothing method for assessing GEE models with longitudinal binary data.
Lin, Kuo-Chin; Chen, Yi-Ju; Shyr, Yu
2008-09-30
Studies involving longitudinal binary responses are widely used in health and biomedical sciences research and are frequently analyzed by the generalized estimating equations (GEE) method. This article proposes an alternative goodness-of-fit test based on a nonparametric smoothing approach for assessing the adequacy of GEE fitted models, which can be regarded as an extension of the goodness-of-fit test of le Cessie and van Houwelingen (Biometrics 1991; 47:1267-1282). The expectation and approximate variance of the proposed test statistic are derived. The asymptotic distribution of the proposed test statistic in terms of a scaled chi-squared distribution and the power performance of the proposed test are discussed by simulation studies. The testing procedure is demonstrated on two real data sets. Copyright (c) 2008 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Sharma, Dinkar; Singh, Prince; Chauhan, Shubha
2017-06-01
In this paper, a combined form of the Laplace transform method with the homotopy perturbation method is applied to solve nonlinear fifth-order Korteweg-de Vries (KdV) equations. The method is known as the homotopy perturbation transform method (HPTM). The nonlinear terms can be easily handled by the use of He's polynomials. Two test examples are considered to illustrate the present scheme. Further, the results are compared with the homotopy perturbation method (HPM).
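For orientation, a generic nonlinear fifth-order KdV equation and the Laplace-transform step on which HPTM builds can be written as follows; the coefficients a, b, c and the initial condition g(x) are placeholders, and the paper's specific test examples may use particular values.

```latex
% Generic fifth-order KdV (a, b, c and g(x) are placeholders):
u_t + a\,u\,u_x + b\,u_{xxx} + c\,u_{xxxxx} = 0, \qquad u(x,0) = g(x).
% Laplace transform in t, using the initial condition:
\mathcal{L}\{u\}(x,s) = \frac{g(x)}{s}
   - \frac{1}{s}\,\mathcal{L}\!\left\{ a\,u\,u_x + b\,u_{xxx} + c\,u_{xxxxx} \right\}.
```

The solution is then sought as a series in an embedding parameter p, with the nonlinear term u u_x expanded in He's polynomials and terms matched power by power in p before inverting the transform.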
East Europe Report, Economic and Industrial Affairs.
1984-06-07
undeniable that even objectivized norms often lack the required quality. Many of them have not been determined by dependable analytical methods ...incentive methods, such as, for example, contract wages and their various modifications, are applied only to a limited extent. Many economic managers...discipline. Method of the Future—Work Team Khozrashchet. We have been testing and gradually expanding work team forms of labor organization and
Aeroacoustics Computation for Nearly Fully Expanded Supersonic Jets Using the CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Hultgren, Lennart S.; Wang, Xiao Y.; Chang, Sin-Chung; Jorgenson, Philip C. E.
2000-01-01
In this paper, the space-time conservation element solution element (CE/SE) method is tested in the classical axisymmetric jet instability problem, rendering good agreement with the linear theory. The CE/SE method is then applied to numerical simulations of several nearly fully expanded axisymmetric jet flows and their noise fields and qualitative agreement with available experimental and theoretical results is demonstrated.
Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis method is a technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method applied to a simple seven degree-of-freedom spring-mass system was verified. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
Comparison of analytical methods for the determination of histamine in reference canned fish samples
NASA Astrophysics Data System (ADS)
Jakšić, S.; Baloš, M. Ž.; Mihaljev, Ž.; Prodanov Radulović, J.; Nešić, K.
2017-09-01
Two screening methods for histamine in canned fish, an enzymatic test and a competitive direct enzyme-linked immunosorbent assay (CD-ELISA), were compared with the reversed-phase liquid chromatography (RP-HPLC) standard method. For the enzymatic and CD-ELISA methods, determination was conducted according to the producers' manuals. For RP-HPLC, histamine was derivatized with dansyl chloride, followed by RP-HPLC with diode array detection. Results of analysis of canned fish, supplied as reference samples for proficiency testing, showed good agreement when histamine was present at higher concentrations (above 100 mg kg-1). At a lower level (16.95 mg kg-1), the enzymatic test produced somewhat higher results. Generally, analysis of four reference samples by CD-ELISA and RP-HPLC showed good agreement for histamine determination (r=0.977 in the concentration range 16.95-216 mg kg-1). The results show that the applied enzymatic test and CD-ELISA appear to be suitable screening methods for the determination of histamine in canned fish.
Training needs for toxicity testing in the 21st century: a survey-informed analysis.
Lapenna, Silvia; Gabbert, Silke; Worth, Andrew
2012-12-01
Current training needs on the use of alternative methods in predictive toxicology, including new approaches based on mode-of-action (MoA) and adverse outcome pathway (AOP) concepts, are expected to evolve rapidly. In order to gain insight into stakeholder preferences for training, the European Commission's Joint Research Centre (JRC) conducted a single-question survey with twelve experts in regulatory agencies, industry, national research organisations, NGOs and consultancies. Stakeholder responses were evaluated by means of theory-based qualitative data analysis. Overall, a set of training topics were identified that relate both to general background information and to guidance for applying alternative testing methods. In particular, for the use of in silico methods, stakeholders emphasised the need for training on data integration and evaluation, in order to increase confidence in applying these methods for regulatory purposes. Although the survey does not claim to offer an exhaustive overview of the training requirements, its findings support the conclusion that the development of well-targeted and tailor-made training opportunities that inform about the usefulness of alternative methods, in particular those that offer practical experience in the application of in silico methods, deserves more attention. This should be complemented by transparent information and guidance on the interpretation of the results generated by these methods and software tools. 2012 FRAME.
A DATA-DRIVEN MODEL FOR SPECTRA: FINDING DOUBLE REDSHIFTS IN THE SLOAN DIGITAL SKY SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsalmantza, P.; Hogg, David W., E-mail: vivitsal@mpia.de
2012-07-10
We present a data-driven method (heteroscedastic matrix factorization, a kind of probabilistic factor analysis) for modeling or performing dimensionality reduction on observed spectra or other high-dimensional data with known but non-uniform observational uncertainties. The method uses an iterative inverse-variance-weighted least-squares minimization procedure to generate a best set of basis functions. The method is similar to principal components analysis (PCA), but with the substantial advantage that it uses measurement uncertainties in a responsible way and accounts naturally for poorly measured and missing data; it models the variance in the noise-deconvolved data space. A regularization can be applied, in the form of a smoothness prior (inspired by Gaussian processes) or a non-negative constraint, without making the method prohibitively slow. Because the method optimizes a justified scalar (related to the likelihood), the basis provides a better fit to the data in a probabilistic sense than any PCA basis. We test the method on Sloan Digital Sky Survey (SDSS) spectra, concentrating on spectra known to contain two redshift components: these are spectra of gravitational lens candidates and massive black hole binaries. We apply a hypothesis test to compare one-redshift and two-redshift models for these spectra, utilizing the data-driven model trained on a random subset of all SDSS spectra. This test confirms 129 of the 131 lens candidates in our sample and all of the known binary candidates, and turns up very few false positives.
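The core of the factorization described here is an alternating, inverse-variance-weighted least-squares update of the coefficients and basis spectra. The sketch below shows that basic alternation for data Y with per-element inverse variances; it omits the smoothness prior and non-negativity options discussed in the abstract and is not the authors' code.

```python
import numpy as np

def hmf(Y, ivar, K, n_iter=50, rng=np.random.default_rng(0)):
    """Weighted (heteroscedastic) matrix factorization: minimize
    sum_{n,d} ivar[n,d] * (Y[n,d] - A[n] @ G[:, d])**2
    by alternating inverse-variance-weighted least squares.
    Y    : (N, D) data (e.g. N spectra of D pixels)
    ivar : (N, D) inverse variances (0 for missing pixels)
    K    : number of basis spectra
    """
    N, D = Y.shape
    A = rng.normal(size=(N, K))   # coefficients
    G = rng.normal(size=(K, D))   # basis spectra
    for _ in range(n_iter):
        # Update each row of A holding G fixed (weighted normal equations).
        for n in range(N):
            W = ivar[n]
            M = (G * W) @ G.T
            b = (G * W) @ Y[n]
            A[n] = np.linalg.solve(M, b)
        # Update each column (pixel) of G holding A fixed.
        for d in range(D):
            W = ivar[:, d]
            M = (A.T * W) @ A
            b = (A.T * W) @ Y[:, d]
            G[:, d] = np.linalg.solve(M, b)
    return A, G
```

Because each update weights residuals by the measurement inverse variance, missing or noisy pixels simply contribute little, which is the property the abstract contrasts with ordinary PCA.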
Ramirez, Tzutzuy; Beken, Sonja; Chlebus, Magda; Ellis, Graham; Griesinger, Claudius; De Jonghe, Sandra; Manou, Irene; Mehling, Annette; Reisinger, Kerstin; Rossi, Laura H; van Benthem, Jan; van der Laan, Jan Willem; Weissenhorn, Renate; Sauer, Ursula G
2015-10-01
The European Partnership for Alternative Approaches to Animal Testing (EPAA) convened a workshop Knowledge sharing to facilitate regulatory decision-making. Fifty invited participants from the European Commission, national and European agencies and bodies, different industry sectors (chemicals, cosmetics, fragrances, pharmaceuticals, vaccines), and animal protection organizations attended the workshop. Four case studies exemplarily revealed which procedures are in place to obtain regulatory acceptance of new test methods in different sectors. Breakout groups discussed the status quo identifying the following facilitators for regulatory acceptance of alternatives to animal testing: Networking and communication (including cross-sector collaboration, international cooperation and harmonization); involvement of regulatory agencies from the initial stages of test method development on; certainty on prerequisites for test method acceptance including the establishment of specific criteria for regulatory acceptance. Data sharing and intellectual property issues affect many aspects of test method development, validation and regulatory acceptance. In principle, all activities should address replacement, reduction and refinement methods (albeit animal testing is generally prohibited in the cosmetics sector). Provision of financial resources and education support all activities aiming at facilitating the acceptance and use of alternatives to animal testing. Overall, workshop participants recommended building confidence in new methodologies by applying and gaining experience with them. Copyright © 2015 Elsevier Inc. All rights reserved.
Weld leaks rapidly and safely detected
NASA Technical Reports Server (NTRS)
1965-01-01
Test method detects leaks that occur during hydrostatic pressure testing of welded joints in metal tanks. A strip of aluminum foil and a strip of water-soluble paper are placed over the weld. A voltage applied between the tank wall and the foil strip is monitored to detect a decrease in ohmic resistance caused by water leakage into the paper layer.
Code of Federal Regulations, 2010 CFR
2010-07-01
... must I use to demonstrate initial compliance with the emission limits for particulate matter? 63.9621... the emission limits for particulate matter? (a) You must conduct each performance test that applies to... source, you must determine compliance with the applicable emission limit for particulate matter in Table...
Code of Federal Regulations, 2011 CFR
2011-07-01
... must I use to demonstrate initial compliance with the emission limits for particulate matter? 63.9621... the emission limits for particulate matter? (a) You must conduct each performance test that applies to... source, you must determine compliance with the applicable emission limit for particulate matter in Table...
Sally Smith's Art Methods Applied: Music Education for Adolescents with Learning Disabilities & ADHD
ERIC Educational Resources Information Center
Rozsics, M. Sean
2010-01-01
In recent years, Arts Education in America's secondary schools has been underfunded, undervalued, and underdeveloped. Music, in particular, has been under siege in the "No Child Left Behind" era as teachers increasingly teach students to pass specific written tests, and administrators focus on improving these test scores and struggle with related…
Airflow Resistance of Loose-Fill Mineral Fiber Insulations in Retrofit Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumacher, C. J.; Fox, M. J.; Lstiburek, J.
2015-02-01
This report expands on Building America Report 1109 by applying the experimental apparatus and test method to dense-pack retrofit applications using mineral fiber insulation materials. Three (3) fiber glass insulation materials and one (1) stone wool insulation material were tested, and the results compared to the cellulose results from the previous study.
An Empirical Investigation of Methods for Assessing Item Fit for Mixed Format Tests
ERIC Educational Resources Information Center
Chon, Kyong Hee; Lee, Won-Chan; Ansley, Timothy N.
2013-01-01
Empirical information regarding performance of model-fit procedures has been a persistent need in measurement practice. Statistical procedures for evaluating item fit were applied to real test examples that consist of both dichotomously and polytomously scored items. The item fit statistics used in this study included the PARSCALE's G[squared],…
Hu, Jingwen; Klinich, Kathleen D; Miller, Carl S; Nazmi, Giseli; Pearlman, Mark D; Schneider, Lawrence W; Rupp, Jonathan D
2009-11-13
Motor-vehicle crashes are the leading cause of fetal deaths resulting from maternal trauma in the United States, and placental abruption is the most common cause of these deaths. To minimize this injury, new assessment tools, such as crash-test dummies and computational models of pregnant women, are needed to evaluate vehicle restraint systems with respect to reducing the risk of placental abruption. Developing these models requires accurate material properties for tissues in the pregnant abdomen under dynamic loading conditions that can occur in crashes. A method has been developed for determining dynamic material properties of human soft tissues that combines results from uniaxial tensile tests, specimen-specific finite-element models based on laser scans that accurately capture non-uniform tissue-specimen geometry, and optimization techniques. The current study applies this method to characterizing material properties of placental tissue. For 21 placenta specimens tested at a strain rate of 12/s, the mean failure strain is 0.472+/-0.097 and the mean failure stress is 34.80+/-12.62 kPa. A first-order Ogden material model with ground-state shear modulus (mu) of 23.97+/-5.52 kPa and exponent (alpha(1)) of 3.66+/-1.90 best fits the test results. The new method provides a nearly 40% error reduction (p<0.001) compared to traditional curve-fitting methods by considering detailed specimen geometry, loading conditions, and dynamic effects from high-speed loading. The proposed method can be applied to determine mechanical properties of other soft biological tissues.
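For reference, a first-order incompressible Ogden strain-energy function with ground-state shear modulus mu and exponent alpha can be written, in one common convention, as shown below; this is stated as background and is not copied from the paper.

```latex
W(\lambda_1,\lambda_2,\lambda_3)
  = \frac{2\mu}{\alpha^{2}}\left(\lambda_1^{\alpha}+\lambda_2^{\alpha}+\lambda_3^{\alpha}-3\right),
\qquad \lambda_1\lambda_2\lambda_3 = 1 .
```

Other codes parameterize the same model as (μ₁/α₁)(λ₁^{α₁}+λ₂^{α₁}+λ₃^{α₁}−3) with μ₁α₁/2 equal to the ground-state shear modulus, so the convention should be checked before reusing the reported values of μ and α₁.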
OpenCL based machine learning labeling of biomedical datasets
NASA Astrophysics Data System (ADS)
Amoros, Oscar; Escalera, Sergio; Puig, Anna
2011-03-01
In this paper, we propose a two-stage labeling method for large biomedical datasets using a parallel approach on a single GPU. Diagnostic methods, structure volume measurements, and visualization systems are of major importance for surgery planning, intra-operative imaging and image-guided surgery. In all cases, providing an automatic and interactive method to label or tag the different structures contained in the input data becomes imperative. Several approaches to label or segment biomedical datasets have been proposed to discriminate different anatomical structures in an output tagged dataset. Among existing methods, supervised learning methods for segmentation have been devised to allow non-expert users to easily analyze biomedical datasets. However, they still have some problems concerning practical application, such as slow learning and testing speeds. In addition, recent technological developments have led to widespread availability of multi-core CPUs and GPUs, as well as new software languages, such as NVIDIA's CUDA and OpenCL, allowing parallel programming paradigms to be applied on conventional personal computers. The Adaboost classifier is one of the most widely applied methods for labeling in the Machine Learning community. In a first stage, Adaboost trains a binary classifier from a set of pre-labeled samples described by a set of features. This binary classifier is defined as a weighted combination of weak classifiers. Each weak classifier is a simple decision function estimated on a single feature value. Then, at the testing stage, each weak classifier is independently applied to the features of a set of unlabeled samples. In this work, we propose an alternative representation of the Adaboost binary classifier. We use this proposed representation to define a new GPU-based parallelized Adaboost testing stage using OpenCL. We provide numerical experiments based on large available data sets and compare our results to CPU-based strategies in terms of time and labeling speed.
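The testing stage described here reduces to an embarrassingly parallel weighted vote over decision stumps, which is what makes it attractive for a GPU implementation. The sketch below shows that stage in plain Python/NumPy as a reference point; the OpenCL kernel in the paper parallelizes the same computation, and the stump representation used here is only illustrative.

```python
import numpy as np

def stump_predict(x_feature, threshold, polarity):
    # Weak classifier: a thresholded decision on a single feature value.
    return np.where(polarity * x_feature < polarity * threshold, 1.0, -1.0)

def adaboost_test(X, stumps):
    """Adaboost testing stage: each weak classifier is applied independently
    to its feature column and the weighted votes are summed (this
    independence is what maps well to GPU threads).
    X      : (n_samples, n_features) array
    stumps : list of (feature_index, threshold, polarity, alpha) tuples
    """
    score = np.zeros(X.shape[0])
    for feat, thr, pol, alpha in stumps:
        score += alpha * stump_predict(X[:, feat], thr, pol)
    return np.sign(score)
```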
A study on setting of the fatigue limit of temporary dental implants.
Kim, M H; Cho, E J; Lee, J W; Kim, E K; Yoo, S H; Park, C W
2017-07-01
A temporary dental implant is a medical device that is used temporarily to support a prosthesis, such as an artificial tooth, for restoring the patient's masticatory function during implant treatment. It is implanted in the oral cavity to substitute for the role of a tooth. Due to the aging and westernization of current Korean society, the number of tooth extraction and implantation procedures is increasing, leading to an increase in the use and development of temporary dental implants. Because an implant performs a masticatory function in place of a tooth, a dynamic load is repeatedly applied to the implant. Thus, fatigue is reported to be the most common cause of implant fracture. According to an investigation and analysis of the current domestic and international standards, no standard separately specifies the fatigue of implant fixtures. Although a test method for measuring fatigue is suggested in an ISO standard, it is a standard for permanent dental implants. Most of the test standards of Korean manufacturers and importers apply 250 N or more, based on the guidance for the safety and performance evaluation of dental implants. Therefore, this study is intended to determine the fatigue standard that can be applied to temporary dental implants when fatigue is measured according to the test method suggested in the permanent dental implant standard. The results indicate that suitable fatigue standards for temporary dental implants should be provided by each manufacturer rather than applying 250 N. This study will be useful for the establishment of fatigue standards and fatigue test methods by the manufacturers and importers of temporary dental implants.
Taylor, Michael J; McNicholas, Chris; Nicolay, Chris; Darzi, Ara; Bell, Derek; Reed, Julie E
2014-01-01
Background Plan–do–study–act (PDSA) cycles provide a structure for iterative testing of changes to improve quality of systems. The method is widely accepted in healthcare improvement; however there is little overarching evaluation of how the method is applied. This paper proposes a theoretical framework for assessing the quality of application of PDSA cycles and explores the consistency with which the method has been applied in peer-reviewed literature against this framework. Methods NHS Evidence and Cochrane databases were searched by three independent reviewers. Empirical studies were included that reported application of the PDSA method in healthcare. Application of PDSA cycles was assessed against key features of the method, including documentation characteristics, use of iterative cycles, prediction-based testing of change, initial small-scale testing and use of data over time. Results 73 of 409 individual articles identified met the inclusion criteria. Of the 73 articles, 47 documented PDSA cycles in sufficient detail for full analysis against the whole framework. Many of these studies reported application of the PDSA method that failed to accord with primary features of the method. Less than 20% (14/73) fully documented the application of a sequence of iterative cycles. Furthermore, a lack of adherence to the notion of small-scale change is apparent and only 15% (7/47) reported the use of quantitative data at monthly or more frequent data intervals to inform progression of cycles. Discussion To progress the development of the science of improvement, a greater understanding of the use of improvement methods, including PDSA, is essential to draw reliable conclusions about their effectiveness. This would be supported by the development of systematic and rigorous standards for the application and reporting of PDSAs. PMID:24025320
Meade, Rhiana D; Murray, Anna L; Mittelman, Anjuliee M; Rayner, Justine; Lantagne, Daniele S
2017-02-01
Locally manufactured ceramic water filters are one effective household drinking water treatment technology. During manufacturing, silver nanoparticles or silver nitrate are applied to prevent microbiological growth within the filter and increase bacterial removal efficacy. Currently, there is no recommendation for manufacturers to test silver concentrations of application solutions or filtered water. We identified six commercially available silver test strips, kits, and meters, and evaluated them by: (1) measuring in quintuplicate six samples from 100 to 1,000 mg/L (application range) and six samples from 0.0 to 1.0 mg/L (effluent range) of silver nanoparticles and silver nitrate to determine accuracy and precision; (2) conducting volunteer testing to assess ease-of-use; and (3) comparing costs. We found no method accurately detected silver nanoparticles, and accuracy ranged from 4 to 91% measurement error for silver nitrate samples. Most methods were precise, but only one method could test both application and effluent concentration ranges of silver nitrate. Volunteers considered test strip methods easiest. The cost for 100 tests ranged from 36 to 1,600 USD. We found no currently available method accurately and precisely measured both silver types at reasonable cost and ease-of-use, thus these methods are not recommended to manufacturers. We recommend development of field-appropriate methods that accurately and precisely measure silver nanoparticle and silver nitrate concentrations.
Concentrator hot-spot testing, phase 1
NASA Technical Reports Server (NTRS)
Gonzalez, C. C.
1987-01-01
Results of a study to determine the hot-spot susceptibility of concentrator cells, to provide a hot-spot qualification test for concentrator modules, and to provide guidelines for reducing hot-spot susceptibility are presented. Hot-spot heating occurs in a photovoltaic module when the short-circuit current of a cell is lower than the string operating current forcing the cell into reverse bias with a concurrent power dissipation. Although the basis for the concentrator module hot-spot qualification test is the test developed for flat-plate modules, issues, such as providing cell illumination, introduce additional complexities into the testing procedure. The same general guidelines apply for protecting concentrator modules from hot-spot stressing as apply to flat-plate modules. Therefore, recommendations are made on the number of bypass diodes required per given number of series cells per module or source circuit. In addition, a new method for determining the cell temperature in the laboratory or in the field is discussed.
Fracture Tests of Etched Components Using a Focused Ion Beam Machine
NASA Technical Reports Server (NTRS)
Kuhn, Jonathan, L.; Fettig, Rainer K.; Moseley, S. Harvey; Kutyrev, Alexander S.; Orloff, Jon; Powers, Edward I. (Technical Monitor)
2000-01-01
Many optical MEMS device designs involve large arrays of thin (0.5 to 1 micron) components subjected to high stresses due to cyclic loading. These devices are fabricated from a variety of materials, and the properties strongly depend on size and processing. Our objective is to develop standard and convenient test methods that can be used to measure the properties of large numbers of witness samples for every device we build. In this work we explore a variety of fracture test configurations for 0.5 micron thick silicon nitride membranes machined using the Reactive Ion Etching (RIE) process. Testing was completed using an FEI 620 dual focused ion beam milling machine. Static loads were applied using a probe, and dynamic loads were applied through a piezo-electric stack mounted at the base of the probe. Results from the tests are presented and compared, and applications for predicting the fracture probability of large arrays of devices are considered.
Revalidation of game for teaching blood pressure auscultatory measurement: a pilot study.
Bellan, Margarete Consorti; Alves, Vanessa Cortez; Neves, Mayza Luzia Dos Santos; Lamas, José Luiz Tatagiba
2017-01-01
The objective was to adapt a pre-existing educational game, making it specific to the teaching of blood pressure auscultatory measurement, and to apply this game in a pilot study. The original game cards were altered by the authors and submitted to content validation by six experts in the field. After redesigns, the game was applied to 30 subjects, who answered a questionnaire (pre-test and post-test) on auscultatory measurement. Data were analyzed descriptively and by the paired Student's t-test and paired Wilcoxon test. Throughout the content validation process, 17 of the 28 original cards were modified. Of these 17 cards, 13 obtained 80% agreement, and the rest were modified according to the judges' suggestions. The obtained grades increased significantly between the pre-test and the post-test. It was concluded that the reformulated game presented satisfactory evidence of content validity. Its use as a teaching-learning method was effective for this sample.
Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques
NASA Astrophysics Data System (ADS)
Mai, Juliane; Tolson, Bryan
2017-04-01
The increasing complexity and runtime of environmental models mean that calibrating all model parameters or estimating all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. At most, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, may itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method independence of the convergence testing method, we applied it to three widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al. 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) for which the true indexes of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on model independence by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations and therefore enables checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable published sensitivity results.
Validation of two innovative methods to measure contaminant mass flux in groundwater
NASA Astrophysics Data System (ADS)
Goltz, Mark N.; Close, Murray E.; Yoon, Hyouk; Huang, Junqi; Flintoft, Mark J.; Kim, Sehjong; Enfield, Carl
2009-04-01
The ability to quantify the mass flux of a groundwater contaminant that is leaching from a source area is critical to enable us to: (1) evaluate the risk posed by the contamination source and prioritize cleanup, (2) evaluate the effectiveness of source remediation technologies or natural attenuation processes, and (3) quantify a source term for use in models that may be applied to predict maximum contaminant concentrations in downstream wells. Recently, a number of new methods have been developed and subsequently applied to measure contaminant mass flux in groundwater in the field. However, none of these methods has been validated at larger than the laboratory-scale through a comparison of measured mass flux and a known flux that has been introduced into flowing groundwater. A couple of innovative flux measurement methods, the tandem circulation well (TCW) and modified integral pumping test (MIPT) methods, have recently been proposed. The TCW method can measure mass flux integrated over a large subsurface volume without extracting water. The TCW method may be implemented using two different techniques. One technique, the multi-dipole technique, is relatively simple and inexpensive, only requiring measurement of heads, while the second technique requires conducting a tracer test. The MIPT method is an easily implemented method of obtaining volume-integrated flux measurements. In the current study, flux measurements obtained using these two methods are compared with known mass fluxes in a three-dimensional, artificial aquifer. Experiments in the artificial aquifer show that the TCW multi-dipole and tracer test techniques accurately estimated flux, within 2% and 16%, respectively; although the good results obtained using the multi-dipole technique may be fortuitous. The MIPT method was not as accurate as the TCW method, underestimating flux by as much as 70%. MIPT method inaccuracies may be due to the fact that the method assumptions (two-dimensional steady groundwater flow to fully-screened wells) were not well-approximated. While fluxes measured using the MIPT method were consistently underestimated, the method's simplicity and applicability to the field may compensate for the inaccuracies that were observed in this artificial aquifer test.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.
2010-02-28
Small signal stability problems are one of the major threats to grid stability and reliability. Prony analysis has been successfully applied to ringdown data to monitor electromechanical modes of a power system using phasor measurement unit (PMU) data. To facilitate on-line application of mode estimation, this paper develops a recursive algorithm for implementing Prony analysis and proposes an oscillation detection method to detect ringdown data in real time. By automatically detecting ringdown data, the proposed method helps guarantee that Prony analysis is applied properly and promptly to the ringdown data. Thus, the mode estimation can be performed reliably and in a timely manner. The proposed method is tested using Monte Carlo simulations based on a 17-machine model and is shown to be able to properly identify the oscillation data for on-line application of Prony analysis. In addition, the proposed method is applied to field measurement data from WECC to show the performance of the proposed algorithm.
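For readers unfamiliar with the underlying step, a basic batch (non-recursive) Prony fit of ringdown data can be sketched as a linear-prediction solve, a polynomial root finding, and an amplitude least-squares fit; the recursive formulation and oscillation detector developed in the paper build on this idea but are not reproduced here.

```python
import numpy as np

def prony_modes(y, dt, order):
    """Estimate damped-sinusoid modes from equally sampled ringdown data y.
    Returns (frequency_hz, damping_per_s, complex_amplitude) for each mode."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    # 1) Linear prediction: y[n] = sum_k a[k] * y[n-k-1]
    A = np.column_stack([y[order - k - 1:N - k - 1] for k in range(order)])
    b = y[order:N]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    # 2) Roots of the characteristic polynomial are the discrete-time modes.
    z = np.roots(np.r_[1.0, -a])
    freq = np.angle(z) / (2 * np.pi * dt)      # Hz
    damp = np.log(np.abs(z)) / dt              # 1/s (negative = decaying)
    # 3) Complex amplitudes from the Vandermonde system y[n] = sum_i amp_i z_i^n.
    V = np.vander(z, N, increasing=True).T
    amp, *_ = np.linalg.lstsq(V, y.astype(complex), rcond=None)
    return freq, damp, amp
```

From `freq` and `damp`, a per-mode damping ratio can be formed as -damp / sqrt(damp**2 + (2*pi*freq)**2), which is the quantity usually tracked for small-signal stability monitoring.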
The Fiber Grating Sensors Applied in the Deformation Measurement of Shipborne Antenna Basement
NASA Astrophysics Data System (ADS)
Liu, Yong; Chen, Jiahong; Zhao, Wenhua
2016-02-01
The optical fiber grating sensor is a novel passive fibre-optic device whose reflected optical spectrum is linearly related to strain. It is broadly applied in the structural monitoring industry. The shipborne antenna basement is the basic supporting structure for the radar tracking movement. The bending deformation of the basement caused by changes in ship attitude influences the antenna tracking precision. According to the structure of the shipborne antenna basement, a distributed strain testing method based on fibre grating sensors is proposed to measure the bending deformation under bending force. A strain-angle model is built, and the regularity of the strain distribution is obtained. The finite element method is used to analyze the deformation of the antenna basement. A measurement experiment on a contractible basement mould is carried out to verify the validity of the method. The result of the experiment shows that the model is effective for deformation measurement. It provides an optimized method for distributing the fiber grating sensors in the actual measuring process.
A refined method for multivariate meta-analysis and meta-regression
Jackson, Daniel; Riley, Richard D
2014-01-01
Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects’ standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:23996351
Monkman, Helen; Kushniruk, Andre
2013-01-01
The prevalence of consumer health information systems is increasing. However, usability and health literacy impact both the value and adoption of these systems. Health literacy and usability are closely related in that systems may not be used accurately if users cannot understand the information therein. Thus, it is imperative to focus on mitigating the demands on health literacy in consumer health information systems. This study modified two usability evaluation methods (heuristic evaluation and usability testing) to incorporate the identification of potential health literacy issues in a Personal Health Record (PHR). Heuristic evaluation is an analysis of a system performed by a usability specialist who evaluates how well the system abides by usability principles. In contrast, a usability test involves a post hoc analysis of a representative user interacting with the system. These two methods revealed several health literacy issues and suggestions to ameliorate them were made. Thus, it was demonstrated that usability methods could be successfully augmented for the purpose of investigating health literacy issues. To improve users' health knowledge, the adoption of consumer health information systems, and the accuracy of the information contained therein, it is encouraged that usability methods be applied with an added focus on health literacy.
Dynamic test/analysis correlation using reduced analytical models
NASA Technical Reports Server (NTRS)
Mcgowan, Paul E.; Angelucci, A. Filippo; Javeed, Mehzad
1992-01-01
Test/analysis correlation is an important aspect of the verification of analysis models which are used to predict on-orbit response characteristics of large space structures. This paper presents results of a study using reduced analysis models for performing dynamic test/analysis correlation. The reduced test-analysis model (TAM) has the same number and orientation of DOF as the test measurements. Two reduction methods, static (Guyan) reduction and the Improved Reduced System (IRS) reduction, are applied to the test/analysis correlation of a laboratory truss structure. Simulated test results and modal test data are used to examine the performance of each method. It is shown that selection of the DOF to be retained in the TAM is critical when large structural masses are involved. In addition, the use of modal test results may introduce difficulties in TAM accuracy even if a large number of DOF are retained in the TAM.
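A minimal sketch of the static (Guyan) reduction mentioned above, assuming a partition into retained master DOF and condensed slave DOF; the IRS inertia correction is not shown.

```python
import numpy as np

def guyan_reduce(K, M, master):
    """Static (Guyan) reduction of stiffness and mass matrices onto the
    retained 'master' DOF (e.g., the test-measurement DOF of a TAM)."""
    n = K.shape[0]
    master = np.asarray(master)
    slave = np.setdiff1d(np.arange(n), master)
    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]
    # Transformation x = T x_m with the slave DOF condensed statically
    T = np.zeros((n, len(master)))
    T[master, np.arange(len(master))] = 1.0
    T[np.ix_(slave, np.arange(len(master)))] = -np.linalg.solve(Kss, Ksm)
    return T.T @ K @ T, T.T @ M @ T, T

# Example: a 3-DOF spring-mass chain reduced to DOF 0 and 2
K = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
M = np.diag([1.0, 2.0, 1.0])
Kr, Mr, T = guyan_reduce(K, M, master=[0, 2])
print(Kr, Mr, sep="\n")
```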
A test of inflated zeros for Poisson regression models.
He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan
2017-01-01
Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
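The authors' test statistic is not reproduced here; as a generic illustration of checking for inflated zeros without fitting a zero-inflated model, the following sketch compares the observed zero count with its parametric-bootstrap distribution under an intercept-only Poisson fit.

```python
import numpy as np

def excess_zero_pvalue(y, n_boot=2000, seed=0):
    """Parametric-bootstrap check for zero inflation under an intercept-only
    Poisson model: compare the observed number of zeros with its bootstrap
    distribution under Poisson(ybar).  Generic illustration only, not the
    test statistic proposed in the paper."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    lam = y.mean()
    n0_obs = np.sum(y == 0)
    n0_boot = np.array([np.sum(rng.poisson(lam, size=len(y)) == 0)
                        for _ in range(n_boot)])
    # One-sided p-value: probability of at least as many zeros under Poisson
    return (1 + np.sum(n0_boot >= n0_obs)) / (n_boot + 1)

rng = np.random.default_rng(1)
y = np.where(rng.random(500) < 0.2, 0, rng.poisson(2.0, 500))  # 20% extra zeros
print(excess_zero_pvalue(y))
```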
NASA Astrophysics Data System (ADS)
Lotfy, Hayam Mahmoud; Omran, Yasmin Rostom
2018-07-01
A novel, simple, rapid, accurate, and economical spectrophotometric method, namely absorptivity centering (a-Centering), has been developed and validated for the simultaneous determination of mixtures with partially and completely overlapping spectra in different matrices, using either a normalized or a factorized spectrum and built-in spectrophotometer software, without the need for a specially purchased program. Mixture I (Mix I), composed of Simvastatin (SM) and Ezetimibe (EZ), is the one with partially overlapping spectra, formulated as tablets, while mixture II (Mix II), formed by Chloramphenicol (CPL) and Prednisolone acetate (PA), is the one with completely overlapping spectra, formulated as eye drops. These procedures do not require any separation steps. Resolution of the spectrally overlapping binary mixtures was achieved by recovering the zero-order (D0) spectrum of each drug; absorbance was then recorded at the maxima of 238, 233.5, 273 and 242.5 nm for SM, EZ, CPL and PA, respectively. Calibration graphs were established with good correlation coefficients. The method offers significant advantages such as simplicity and minimal data manipulation, besides maximum reproducibility and robustness. Moreover, it was validated according to ICH guidelines. Selectivity was tested using laboratory-prepared mixtures. Accuracy, precision and repeatability were found to be within the acceptable limits. The proposed method is suitable for the assay of the drugs in their combined formulations without any interference from excipients. The obtained results were statistically compared with those of the reported and official methods by applying the t-test and F-test at the 95% confidence level, concluding that there is no significant difference with regard to accuracy and precision. Generally, this method could be used successfully for routine quality control testing.
Atzori, Manfredo; Cognolato, Matteo; Müller, Henning
2016-01-01
Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years, deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to run several tests to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average results of the classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate whether larger networks can increase sEMG classification accuracy too. PMID:27656140
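Purely as an illustration of the kind of model involved (the channel count, window length and layer sizes below are assumptions, not the architecture evaluated in the paper), a minimal 1-D convolutional sEMG classifier might look like this:

```python
import torch
import torch.nn as nn

class SEMGNet(nn.Module):
    """A deliberately small 1-D convolutional classifier for windows of
    multi-channel sEMG (channels x samples).  Illustrative sketch only."""
    def __init__(self, n_channels=12, n_classes=50):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, channels, samples)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

model = SEMGNet()
window = torch.randn(8, 12, 200)     # 8 windows, 12 electrodes, 200 samples each
print(model(window).shape)           # -> torch.Size([8, 50])
```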
Ju, Hyunjin; Lee, Deuck Hang; Cho, Hae-Chang; Kim, Kang Su; Yoon, Seyoon; Seo, Soo-Yeon
2014-01-01
In this study, hydrophilic chemical grout using silanol (HCGS) was adopted to overcome the performance limitations of epoxy materials used for strengthening existing buildings and civil engineering structures. The enhanced material performances of HCGS were introduced, and applied to the section enlargement method, which is one of the typical structural strengthening methods used in practice. To evaluate the excellent structural strengthening performance of the HCGS, structural tests were conducted on reinforced concrete beams, and analyses on the flexural behaviors of test specimens were performed by modified partial interaction theory (PIT). In particular, to improve the constructability of the section enlargement method, an advanced strengthening method was proposed, in which the precast panel was directly attached to the bottom of the damaged structural member by HCGS, and the degree of connection of the test specimens, strengthened by the section enlargement method, were quantitatively evaluated by PIT-based analysis. PMID:28788708
NASA Astrophysics Data System (ADS)
Trandafir, Laura; Alexandru, Mioara; Constantin, Mihai; Ioniţă, Anca; Zorilă, Florina; Moise, Valentin
2012-09-01
EN ISO 11137 establishes regulations for setting or substantiating the dose required to achieve the desired sterility assurance level. The validation studies can be designed specifically for different types of products, and each product needs distinct protocols for bioburden determination and sterility testing. The Microbiological Laboratory of the Irradiation Processing Center (IRASM) deals with different types of products, mainly using the VDmax25 method. In terms of microbiological evaluation, the most challenging product was cotton gauze. A special situation in establishing the sterilization validation method arises for cotton packed in large quantities: the VDmax25 method cannot be applied to items with an average bioburden of more than 1000 CFU/pack, irrespective of the weight of the package. This is a limitation of the method and implies increased costs for the manufacturer when choosing other methods. For the microbiological tests, culture conditions should be selected for both bioburden determination and sterility testing; the criteria for choosing them are given.
A bootstrap method for estimating uncertainty of water quality trends
Hirsch, Robert M.; Archfield, Stacey A.; DeCicco, Laura
2015-01-01
Estimation of the direction and magnitude of trends in surface water quality remains a problem of great scientific and practical interest. The Weighted Regressions on Time, Discharge, and Season (WRTDS) method was recently introduced as an exploratory data analysis tool to provide flexible and robust estimates of water quality trends. This paper enhances the WRTDS method through the introduction of the WRTDS Bootstrap Test (WBT), an extension of WRTDS that quantifies the uncertainty in WRTDS-estimates of water quality trends and offers various ways to visualize and communicate these uncertainties. Monte Carlo experiments are applied to estimate the Type I error probabilities for this method. WBT is compared to other water-quality trend-testing methods appropriate for data sets of one to three decades in length with sampling frequencies of 6–24 observations per year. The software to conduct the test is in the EGRETci R-package.
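WBT itself is distributed in the EGRETci R package; the sketch below is only a generic moving-block bootstrap illustration of attaching uncertainty to an estimated trend slope, not the WRTDS/WBT procedure.

```python
import numpy as np

def trend_bootstrap(t, conc, n_boot=1000, block=12, seed=0):
    """Generic moving-block bootstrap of a linear trend in a water-quality
    series: refit the trend on series rebuilt from resampled residual blocks
    and summarize the spread of the resulting slopes."""
    rng = np.random.default_rng(seed)
    slope, intercept = np.polyfit(t, conc, 1)
    resid = conc - (slope * t + intercept)
    n, slopes = len(conc), []
    for _ in range(n_boot):
        # Rebuild a residual series from random contiguous blocks
        starts = rng.integers(0, n - block, size=n // block + 1)
        r = np.concatenate([resid[s:s + block] for s in starts])[:n]
        b, _ = np.polyfit(t, slope * t + intercept + r, 1)
        slopes.append(b)
    slopes = np.array(slopes)
    return slope, np.quantile(slopes, [0.05, 0.95]), float(np.mean(slopes > 0))

t = np.arange(240) / 12.0                       # 20 years of monthly samples
conc = 1.0 - 0.01 * t + np.random.default_rng(2).normal(0, 0.15, 240)
print(trend_bootstrap(t, conc))
```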
Pearson's chi-square test and rank correlation inferences for clustered data.
Shih, Joanna H; Fay, Michael P
2017-09-01
Pearson's chi-square test has been widely used in testing for association between two categorical responses. Spearman rank correlation and Kendall's tau are often used for measuring and testing association between two continuous or ordered categorical responses. However, the established statistical properties of these tests are only valid when each pair of responses are independent, where each sampling unit has only one pair of responses. When each sampling unit consists of a cluster of paired responses, the assumption of independent pairs is violated. In this article, we apply the within-cluster resampling technique to U-statistics to form new tests and rank-based correlation estimators for possibly tied clustered data. We develop large sample properties of the new proposed tests and estimators and evaluate their performance by simulations. The proposed methods are applied to a data set collected from a PET/CT imaging study for illustration. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
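A minimal sketch of the within-cluster resampling idea for a rank correlation, giving the point estimate only; the paper's U-statistic formulation, variance estimators and tests are not reproduced.

```python
import numpy as np
from scipy.stats import spearmanr

def wcr_spearman(clusters, n_resample=1000, seed=0):
    """Within-cluster resampling estimate of Spearman's rank correlation for
    clustered paired data: repeatedly draw one (x, y) pair per cluster,
    compute the ordinary estimate, and average over resamples."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_resample):
        pairs = np.array([c[rng.integers(len(c))] for c in clusters])
        rho, _ = spearmanr(pairs[:, 0], pairs[:, 1])
        estimates.append(rho)
    return float(np.mean(estimates))

# Toy data: 30 clusters, each an array of correlated (x, y) pairs
rng = np.random.default_rng(1)
clusters = [rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]],
                                    size=rng.integers(2, 6))
            for _ in range(30)]
print(wcr_spearman(clusters))
```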
Surgical model pig ex vivo for venous dissection teaching in medical schools.
Tube, Milton Ignacio Carvalho; Spencer-Netto, Fernando Antonio Campelo; Oliveira, Anderson Igor Pereira de; Holanda, Arthur Cesário de; Barros, Bruno Leão Dos Santos; Rezende, Caio Cezar Gomes; Cavalcanti, João Pedro Guerra; Batista, Marília Apolinário; Campos, Josemberg Marins
2017-02-01
To investigate a method for developing surgical skills in medical students by simulating venous dissection on a surgical ex vivo pig model. This was a prospective, analytical, experimental, controlled study with four stages: selection, theoretical teaching, training and assessment. A sample of 312 students was divided into two groups: Group A, 2nd-semester students, and Group B, 8th-semester students. The groups were divided into five subgroups of 12 students, trained two hours per week during the semester. Four models were set up, with three students per skill station assisted by a monitor. A teaching protocol for emergency-procedure training was applied to venous dissection, together with a goal-discursive test and the OSATS scale. The pre-test confirmed that the methodology had not been previously applied to the students. The averages obtained in the theoretical evaluation reached satisfactory levels in both groups. The results of applying the OSATS scale showed better performance in group A than in group B; however, both groups had satisfactory means. The method was sufficient to raise both groups to a satisfactory level of skill in venous dissection performed on surgical ex vivo swine models.
Space Suit Joint Torque Measurement Method Validation
NASA Technical Reports Server (NTRS)
Valish, Dana; Eversley, Karina
2012-01-01
In 2009 and early 2010, a test method was developed and performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits. This was done in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design met the requirements. However, because the original test was set up and conducted by a single test operator there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future development programs. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data was compared using graphical and statistical analysis; the results indicated a significant variance in values reported for a subset of the re-tested joints. Potential variables that could have affected the data were identified and a third round of testing was conducted in an attempt to eliminate and/or quantify the effects of these variables. The results of the third test effort will be used to determine whether or not the proposed joint torque methodology can be applied to future space suit development contracts.
49 CFR 178.604 - Leakproofness test.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) Before they are first used in transportation; and (ii) Prior to reuse, when authorized for reuse by § 173... packaging must be restrained under water while an internal air pressure is applied; the method of restraint...
Method modification of the Legipid® Legionella fast detection test kit.
Albalat, Guillermo Rodríguez; Broch, Begoña Bedrina; Bono, Marisa Jiménez
2014-01-01
Legipid(®) Legionella Fast Detection is a test based on combined magnetic immunocapture and enzyme-immunoassay (CEIA) for the detection of Legionella in water. The test is based on the use of anti-Legionella antibodies immobilized on magnetic microspheres. Target microorganism is preconcentrated by filtration. Immunomagnetic analysis is applied on these preconcentrated water samples in a final test portion of 9 mL. The test kit was certified by the AOAC Research Institute as Performance Tested Method(SM) (PTM) No. 111101 in a PTM validation which certifies the performance claims of the test method in comparison to the ISO reference method 11731-1998 and the revision 11731-2004 "Water Quality: Detection and Enumeration of Legionella pneumophila" in potable water, industrial water, and waste water. The modification of this test kit has been approved. The modification includes increasing the target analyte from L. pneumophila to Legionella species and adding an optical reader to the test method. In this study, 71 strains of Legionella spp. other than L. pneumophila were tested to determine its reactivity with the kit based on CEIA. All the strains of Legionella spp. tested by the CEIA test were confirmed positive by reference standard method ISO 11731. This test (PTM 111101) has been modified to include a final optical reading. A methods comparison study was conducted to demonstrate the equivalence of this modification to the reference culture method. Two water matrixes were analyzed. Results show no statistically detectable difference between the test method and the reference culture method for the enumeration of Legionella spp. The relative level of detection was 93 CFU/volume examined (LOD50). For optical reading, the LOD was 40 CFU/volume examined and the LOQ was 60 CFU/volume examined. Results showed that the test Legipid Legionella Fast Detection is equivalent to the reference culture method for the enumeration of Legionella spp.
Improved nonlinear prediction method
NASA Astrophysics Data System (ADS)
Adenan, Nur Hamiza; Md Noorani, Mohd Salmi
2014-06-01
The analysis and prediction of time series data have been addressed by many researchers. Many techniques have been developed for application in various areas, such as weather forecasting, financial markets and hydrological phenomena, involving data that are contaminated by noise. Therefore, various techniques to improve the method have been introduced to analyze and predict time series data. Given the importance of the analysis and the accuracy of the prediction results, a study was undertaken to test the effectiveness of the improved nonlinear prediction method for data that contain noise. The improved nonlinear prediction method involves the formation of composite serial data based on the successive differences of the time series. Phase space reconstruction is then performed on the one-dimensional composite data to reconstruct a number of phase-space dimensions. Finally, the local linear approximation method is employed to make a prediction based on the phase space. This improved method was tested with the logistic data series containing 0%, 5%, 10%, 20% and 30% noise. The results show that, using the improved method, the predictions were in close agreement with the observed values; the correlation coefficient was close to one when the improved method was applied to data with up to 10% noise. Thus, an improvement for analyzing noisy data without involving any noise reduction method was introduced to predict the time series data.
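A simplified sketch of the prediction step under stated assumptions: successive differences, delay embedding, and nearest-neighbour averaging standing in for the full local linear approximation used in the paper.

```python
import numpy as np

def predict_next(series, dim=3, tau=1, n_neighbors=5):
    """One-step prediction by delay embedding of the successive-difference
    (composite) series and nearest-neighbour averaging.  Simplified
    illustration; embedding parameters are assumptions."""
    x = np.diff(series)                       # composite data (successive differences)
    m = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + m] for i in range(dim)])
    query = emb[-1]
    # Nearest neighbours among past embedded points (exclude the query itself)
    dists = np.linalg.norm(emb[:-1] - query, axis=1)
    idx = np.argsort(dists)[:n_neighbors]
    # Average the neighbours' successors to predict the next difference
    next_diff = np.mean(x[idx + (dim - 1) * tau + 1])
    return series[-1] + next_diff

# Noisy logistic-map series as a test signal
r, x = 4.0, [0.3]
for _ in range(499):
    x.append(r * x[-1] * (1 - x[-1]))
x = np.array(x) + np.random.default_rng(0).normal(0, 0.01, 500)
print(predict_next(x))
```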
Constructing networks from a dynamical system perspective for multivariate nonlinear time series.
Nakamura, Tomomichi; Tanizawa, Toshihiro; Small, Michael
2016-03-01
We describe a method for constructing networks for multivariate nonlinear time series. We approach the interaction between the various scalar time series from a deterministic dynamical system perspective and provide a generic and algorithmic test for whether the interaction between two measured time series is statistically significant. The method can be applied even when the data exhibit no obvious qualitative similarity: a situation in which the naive method utilizing the cross correlation function directly cannot correctly identify connectivity. To establish the connectivity between nodes we apply the previously proposed small-shuffle surrogate (SSS) method, which can investigate whether there are correlation structures in short-term variabilities (irregular fluctuations) between two data sets from the viewpoint of deterministic dynamical systems. The procedure to construct networks based on this idea is composed of three steps: (i) each time series is considered as a basic node of a network, (ii) the SSS method is applied to verify the connectivity between each pair of time series taken from the whole multivariate time series, and (iii) the pair of nodes is connected with an undirected edge when the null hypothesis cannot be rejected. The network constructed by the proposed method indicates the intrinsic (essential) connectivity of the elements included in the system or the underlying (assumed) system. The method is demonstrated for numerical data sets generated by known systems and applied to several experimental time series.
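A compact sketch of steps (ii) and (iii) under simplifying assumptions: small-shuffle surrogates are generated for one series, and a zero-lag cross-correlation is used as the discriminating statistic (the paper examines correlations of the irregular fluctuations over several lags).

```python
import numpy as np

def small_shuffle_surrogate(x, amplitude=1.0, rng=None):
    """Small-shuffle surrogate (SSS): perturb the time indices with Gaussian
    noise of the given amplitude and reorder the data accordingly, destroying
    short-term structure while preserving global trends."""
    rng = rng or np.random.default_rng()
    idx = np.arange(len(x)) + amplitude * rng.standard_normal(len(x))
    return np.asarray(x)[np.argsort(idx)]

def connectivity_test(x, y, n_surr=99, rng=None):
    """Flag a connection if the observed zero-lag cross-correlation exceeds
    every surrogate value.  Simplified discriminating statistic, chosen here
    for brevity."""
    rng = rng or np.random.default_rng(0)
    stat = abs(np.corrcoef(x, y)[0, 1])
    surr = [abs(np.corrcoef(small_shuffle_surrogate(x, rng=rng), y)[0, 1])
            for _ in range(n_surr)]
    return stat > np.max(surr)   # True -> connect the two nodes with an edge

t = np.linspace(0, 20, 500)
x = np.sin(t) + 0.3 * np.random.default_rng(1).standard_normal(500)
y = np.sin(t) + 0.3 * np.random.default_rng(2).standard_normal(500) + 0.5 * x
print(connectivity_test(x, y))
```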
NASA Astrophysics Data System (ADS)
Magdy, Nancy; Ayad, Miriam F.
2015-02-01
Two simple, accurate, precise, sensitive and economic spectrophotometric methods were developed for the simultaneous determination of Simvastatin and Ezetimibe in fixed-dose combination products without prior separation. The first is a chemometrics-assisted ratio spectra derivative method using moving-window polynomial least-squares fitting (Savitzky-Golay filters). The second is based on a simple modification of the ratio subtraction method. The suggested methods were validated according to USP guidelines and can be applied for routine quality control testing.
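As an illustration of the ratio-spectra derivative step with Savitzky-Golay filtering (the synthetic spectra and filter settings below are assumptions, not the paper's data or parameters):

```python
import numpy as np
from scipy.signal import savgol_filter

def ratio_spectra_derivative(mix, divisor, window=15, polyorder=3):
    """Ratio-spectra first derivative: divide the mixture spectrum by a
    normalized spectrum of the interfering component, then smooth and
    differentiate the ratio with a Savitzky-Golay filter."""
    ratio = np.asarray(mix) / np.asarray(divisor)
    return savgol_filter(ratio, window_length=window, polyorder=polyorder, deriv=1)

# Synthetic Gaussian bands standing in for the two drugs' spectra
wl = np.linspace(215, 255, 400)
band = lambda centre, width: np.exp(-((wl - centre) / width) ** 2)
sim, eze = band(238, 12), band(233, 10)
mixture = 0.7 * sim + 0.4 * eze
d1 = ratio_spectra_derivative(mixture, eze / eze.max())
print(wl[np.argmax(np.abs(d1))])   # wavelength of the strongest derivative feature
```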
Analytical study to define a helicopter stability derivative extraction method, volume 1
NASA Technical Reports Server (NTRS)
Molusis, J. A.
1973-01-01
A method is developed for extracting six-degree-of-freedom stability and control derivatives from helicopter flight data. Different combinations of filtering and derivative estimation are investigated and used with a Bayesian approach for derivative identification. The combination of filtering and estimation found to yield the most accurate time response match to flight test data is determined and applied to CH-53A and CH-54B flight data. The method found to be most accurate consists of (1) filtering flight test data with a digital filter followed by an extended Kalman filter, (2) identifying a derivative estimate with a least-squares estimator, and (3) obtaining derivatives with the Bayesian derivative extraction method.
The SNPforID Assay as a Supplementary Method in Kinship and Trace Analysis
Schwark, Thorsten; Meyer, Patrick; Harder, Melanie; Modrow, Jan-Hendrick; von Wurmb-Schwark, Nicole
2012-01-01
Objective Short tandem repeat (STR) analysis using commercial multiplex PCR kits is the method of choice for kinship testing and trace analysis. However, under certain circumstances (deficiency testing, mutations, minute DNA amounts), STRs alone may not suffice. Methods We present a 50-plex single nucleotide polymorphism (SNP) assay based on the SNPs chosen by the SNPforID consortium as an additional method for paternity and for trace analysis. The new assay was applied to selected routine paternity and trace cases from our laboratory. Results and Conclusions Our investigation shows that the new SNP multiplex assay is a valuable method to supplement STR analysis, and is a powerful means to solve complicated genetic analyses. PMID:22851934
Detection of osmotic damages in GRP boat hulls
NASA Astrophysics Data System (ADS)
Krstulović-Opara, L.; Domazet, Ž.; Garafulić, E.
2013-09-01
Infrared thermography, as a non-destructive testing tool, is a method enabling the visualization and estimation of structural anomalies and differences in a structure's topography. In the presented paper, the problem of osmotic damage in submerged glass-reinforced polymer structures is addressed. The osmotic damage can be detected by simple humidity gauging, but for proper evaluation and estimation the available testing methods are restricted and hardly applicable. In this paper it is demonstrated that infrared thermography, based on the estimation of heat wave propagation, can be used. Three methods are addressed: pulsed thermography, the Fast Fourier Transform, and the continuous Morlet wavelet. Additional image processing based on a gradient approach is applied to all addressed methods. It is shown that the continuous Morlet wavelet is the most appropriate method for detection of osmotic damage.
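A minimal sketch of the FFT (pulsed-phase) branch, assuming a recorded sequence of thermal frames; the wavelet and gradient post-processing steps are not shown, and the synthetic data are illustrative.

```python
import numpy as np

def phase_image(frames, harmonic=1):
    """Pulsed-phase thermography: FFT each pixel's temperature history and
    return the phase of a chosen harmonic.  Subsurface anomalies (e.g.
    osmotic blisters) appear as local phase contrast."""
    spectrum = np.fft.fft(frames, axis=0)        # frames: (time, rows, cols)
    return np.angle(spectrum[harmonic])

# Synthetic 100-frame cooling sequence with a slower-cooling "defect" patch
t = np.arange(100)[:, None, None]
frames = np.exp(-t / 20.0) * np.ones((100, 64, 64))
frames[:, 20:30, 20:30] = np.exp(-t / 35.0)
phase = phase_image(frames)
print(phase[25, 25] - phase[5, 5])               # defect vs. sound-area phase contrast
```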
Correction for Metastability in the Quantification of PID in Thin-film Module Testing: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hacke, Peter L; Johnston, Steven; Spataru, Sergiu
A fundamental change in the analysis for the accelerated stress testing of thin-film modules is proposed, whereby power changes due to metastability and other effects that may occur due to the thermal history are removed from the power measurement that we obtain as a function of the applied stress factor. The power of reference modules normalized to an initial state - undergoing the same thermal and light-exposure history but without the applied stress factor such as humidity or voltage bias - is subtracted from that of the stressed modules. For better understanding and appropriate application in standardized tests, the method is demonstrated and discussed for potential-induced degradation testing in view of the parallel-occurring but unrelated physical mechanisms that can lead to confounding power changes in the module.
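One plausible reading of the described subtraction, expressed numerically with illustrative values: the normalized power change of the stressed module minus that of the unstressed reference module.

```python
def corrected_power_change(p_stress, p_ref):
    """Subtract the reference module's (metastability-only) normalized power
    change from the stressed module's normalized power change.  The input
    power series are illustrative values, not measured data."""
    stress_change = [p / p_stress[0] - 1.0 for p in p_stress]
    ref_change = [p / p_ref[0] - 1.0 for p in p_ref]
    return [s - r for s, r in zip(stress_change, ref_change)]

# Stressed module loses 8% while the unstressed reference drifts down 3%
p_stress = [100.0, 96.0, 92.0]
p_ref = [100.0, 98.5, 97.0]
print(corrected_power_change(p_stress, p_ref))   # -> [0.0, -0.025, -0.05]
```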
A new IRT-based standard setting method: application to eCat-listening.
García, Pablo Eduardo; Abad, Francisco José; Olea, Julio; Aguado, David
2013-01-01
Criterion-referenced interpretations of tests are highly necessary, and they usually involve the difficult task of establishing cut scores. Contrasting with other Item Response Theory (IRT)-based standard setting methods, a non-judgmental approach is proposed in this study, in which Item Characteristic Curve (ICC) transformations lead to the final cut scores. eCat-Listening, a computerized adaptive test for the evaluation of English Listening, was administered to 1,576 participants, and the proposed standard setting method was applied to classify them into the performance standards of the Common European Framework of Reference for Languages (CEFR). The results showed a classification closely related to relevant external measures of the English language domain, according to the CEFR. It is concluded that the proposed method is a practical and valid standard setting alternative for IRT-based test interpretations.
A likelihood ratio test for evolutionary rate shifts and functional divergence among proteins
Knudsen, Bjarne; Miyamoto, Michael M.
2001-01-01
Changes in protein function can lead to changes in the selection acting on specific residues. This can often be detected as evolutionary rate changes at the sites in question. A maximum-likelihood method for detecting evolutionary rate shifts at specific protein positions is presented. The method determines significance values of the rate differences to give a sound statistical foundation for the conclusions drawn from the analyses. A statistical test for detecting slowly evolving sites is also described. The methods are applied to a set of Myc proteins for the identification of both conserved sites and those with changing evolutionary rates. Those positions with conserved and changing rates are related to the structures and functions of their proteins. The results are compared with an earlier Bayesian method, thereby highlighting the advantages of the new likelihood ratio tests. PMID:11734650
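The generic skeleton of such a likelihood-ratio test is the familiar chi-square comparison sketched below; the paper's site-specific rate-shift likelihoods are not reproduced, and the log-likelihood values are illustrative.

```python
from scipy.stats import chi2

def lrt_pvalue(loglik_null, loglik_alt, df):
    """Generic likelihood-ratio test: twice the log-likelihood gain of the
    richer model compared against a chi-square reference distribution."""
    stat = 2.0 * (loglik_alt - loglik_null)
    return stat, chi2.sf(stat, df)

# E.g. a one-parameter rate-shift model improving the log-likelihood by 4.2
print(lrt_pvalue(-1250.0, -1245.8, df=1))
```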
A new tool for post-AGB SED classification
NASA Astrophysics Data System (ADS)
Bendjoya, P.; Suarez, O.; Galluccio, L.; Michel, O.
We present the results of an unsupervised classification method applied to a set of 344 spectral energy distributions (SEDs) of post-AGB stars extracted from the Torun catalogue of Galactic post-AGB stars. This work aims to find a new, unbiased method for post-AGB star classification based on the information contained in the IR region of the SED (fluxes, IR excess, colours). We used data from the IRAS and MSX satellites and from the 2MASS survey. We applied a classification method based on the construction of a minimal spanning tree (MST) of the dataset with Prim's algorithm. In order to build this tree, different metrics were tested on both fluxes and colour indices. Our method is able to classify the set of 344 post-AGB stars into 9 distinct groups according to their SEDs.
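A generic sketch of building an MST with Prim's algorithm on a pairwise-distance matrix and reading off the longest edge as a group separator; the points, metric and cutting rule below are assumptions, not the paper's.

```python
import numpy as np

def prim_mst(dist):
    """Prim's algorithm on a dense pairwise-distance matrix; returns the MST
    edges as (i, j, weight) tuples."""
    n = dist.shape[0]
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()                # cheapest known connection to the tree
    parent = np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):
        j = int(np.argmin(np.where(in_tree, np.inf, best)))
        edges.append((int(parent[j]), j, float(best[j])))
        in_tree[j] = True
        closer = dist[j] < best
        best[closer] = dist[j][closer]
        parent[closer] = j
    return edges

# Toy "SEDs": points in a 2-D colour-colour space; groups appear as long MST edges
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tree = prim_mst(d)
print(max(tree, key=lambda e: e[2]))     # the longest edge separates the two groups
```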
Polarization-based and specular-reflection-based noncontact latent fingerprint imaging and lifting
NASA Astrophysics Data System (ADS)
Lin, Shih-Schön; Yemelyanov, Konstantin M.; Pugh, Edward N., Jr.; Engheta, Nader
2006-09-01
In forensic science the finger marks left unintentionally by people at a crime scene are referred to as latent fingerprints. Most existing techniques to detect and lift latent fingerprints require application of a certain material directly onto the exhibit. The chemical and physical processing applied to the fingerprint potentially degrades or prevents further forensic testing on the same evidence sample. Many existing methods also have deleterious side effects. We introduce a method to detect and extract latent fingerprint images without applying any powder or chemicals on the object. Our method is based on the optical phenomena of polarization and specular reflection together with the physiology of fingerprint formation. The recovered image quality is comparable to existing methods. In some cases, such as the sticky side of tape, our method shows unique advantages.
Rispin, Amy; Farrar, David; Margosches, Elizabeth; Gupta, Kailash; Stitzel, Katherine; Carr, Gregory; Greene, Michael; Meyer, William; McCall, Deborah
2002-01-01
The authors have developed an improved version of the up-and-down procedure (UDP) as one of the replacements for the traditional acute oral toxicity test formerly used by the Organisation for Economic Co-operation and Development member nations to characterize industrial chemicals, pesticides, and their mixtures. This method improves the performance of acute testing for applications that use the median lethal dose (classic LD50) test while achieving significant reductions in animal use. It uses sequential dosing, together with sophisticated computer-assisted computational methods during the execution and calculation phases of the test. Staircase design, a form of sequential test design, can be applied to acute toxicity testing with its binary experimental endpoints (yes/no outcomes). The improved UDP provides a point estimate of the LD50 and approximate confidence intervals in addition to observed toxic signs for the substance tested. It does not provide information about the dose-response curve. Computer simulation was used to test performance of the UDP without the need for additional laboratory validation.
Accelerated testing of composites
NASA Technical Reports Server (NTRS)
Papazian, H. A.
1983-01-01
It is shown that the Zhurkov method for testing the strength of solids can be applied to dynamic tension and to cyclic loading and provides a viable approach to accelerated testing of composites. Data from the literature are used to demonstrate a straightforward application of the method to dynamic tension of glass fiber and cyclic loading for glass/polymer, metal matrix, and graphite/epoxy composites. Zhurkov's equation can be used at relatively high loads to obtain failure times at any temperature of interest. By taking a few data points at one or two other temperatures the spectrum of failure times can be expanded to temperatures not easily accessible.
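For reference, Zhurkov's kinetic-strength relation is usually written as τ = τ₀·exp((U₀ − γσ)/(RT)). The sketch below evaluates it with placeholder parameters to show the kind of temperature extrapolation described; the constants are not fitted to any composite data from the paper.

```python
import numpy as np

def zhurkov_lifetime(stress_mpa, temp_k, tau0=1e-13, u0_kj=150.0, gamma=0.4):
    """Zhurkov kinetic-strength relation tau = tau0*exp((U0 - gamma*sigma)/(R*T)).
    tau0, the activation energy U0 and the structure factor gamma are
    placeholder values for illustration only."""
    R = 8.314e-3   # gas constant, kJ/(mol*K)
    return tau0 * np.exp((u0_kj - gamma * stress_mpa) / (R * temp_k))

# Compare predicted failure times at 200 MPa for two temperatures
print(zhurkov_lifetime(200.0, 350.0), zhurkov_lifetime(200.0, 300.0))
```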
Freckmann, Guido; Baumstark, Annette; Schmid, Christina; Pleus, Stefan; Link, Manuela; Haug, Cornelia
2014-02-01
Systems for self-monitoring of blood glucose (SMBG) have to provide accurate and reproducible blood glucose (BG) values in order to ensure adequate therapeutic decisions by people with diabetes. Twelve SMBG systems were compared in a standardized manner under controlled laboratory conditions: nine systems were available on the German market and were purchased from a local pharmacy, and three systems were obtained from the manufacturer (two systems were available on the U.S. market, and one system was not yet introduced to the German market). System accuracy was evaluated following DIN EN ISO (International Organization for Standardization) 15197:2003. In addition, measurement reproducibility was assessed following a modified TNO (Netherlands Organization for Applied Scientific Research) procedure. Comparison measurements were performed with either the glucose oxidase method (YSI 2300 STAT Plus™ glucose analyzer; YSI Life Sciences, Yellow Springs, OH) or the hexokinase method (cobas(®) c111; Roche Diagnostics GmbH, Mannheim, Germany) according to the manufacturer's measurement procedure. The 12 evaluated systems showed between 71.5% and 100% of the measurement results within the required system accuracy limits. With the evaluated test strip lot, ten systems fulfilled the minimum accuracy requirements specified by DIN EN ISO 15197:2003. In addition, the accuracy limits of the recently published revision ISO 15197:2013 were applied and showed between 54.5% and 100% of the systems' measurement results within the required accuracy limits. Regarding measurement reproducibility, each of the 12 tested systems met the applied performance criteria. In summary, 83% of the systems fulfilled, with the evaluated test strip lot, the minimum system accuracy requirements of DIN EN ISO 15197:2003. Each of the tested systems showed acceptable measurement reproducibility. In order to ensure sufficient measurement quality of each distributed test strip lot, regular evaluations are required.
Vexler, Albert; Tanajian, Hovig; Hutson, Alan D
In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.