Research on Generating Method of Embedded Software Test Document Based on Dynamic Model
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper proposes a dynamic model-based test document generation method for embedded software that automatically produces two documents: the test requirements specification and the configuration item test document. The method embeds test requirements in dynamic models, so that dynamic test requirement traceability is generated easily; it automatically produces standardized test requirements and test documents, mitigates inconsistency and incompleteness in document content, and improves efficiency.
An Efficient Functional Test Generation Method For Processors Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Hudec, Ján; Gramatová, Elena
2015-07-01
The paper presents a new functional test generation method for processor testing based on genetic algorithms and evolutionary strategies. The tests are generated over an instruction set architecture and a processor description. Such functional tests belong to software-oriented testing. Test quality is evaluated by code coverage of the processor description using simulation. The presented test generation method uses VHDL models of processors and the professional simulator ModelSim. Rules, parameters, and fitness functions were defined for the various genetic algorithms used in automatic test generation. Functionality and effectiveness were evaluated using the RISC-type processor DP32.
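A minimal sketch of the genetic loop such a generator might use, assuming a toy instruction set and a stubbed coverage function in place of the paper's ModelSim-reported VHDL code coverage (all names here are invented):

```python
# Hedged sketch of GA-based functional test generation: evolve
# instruction sequences toward higher simulated code coverage.
import random

OPCODES = ["ADD", "SUB", "LOAD", "STORE", "JMP"]  # assumed toy ISA

def random_program(length=20):
    return [(random.choice(OPCODES), random.randrange(32)) for _ in range(length)]

def coverage(program):
    # Stand-in for simulator-reported coverage of the processor model.
    return len({op for op, _ in program}) / len(OPCODES)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(p, rate=0.1):
    return [(random.choice(OPCODES), random.randrange(32))
            if random.random() < rate else ins for ins in p]

def evolve(pop_size=40, generations=50):
    pop = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=coverage, reverse=True)
        elite = pop[: pop_size // 4]                 # keep the best quarter
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=coverage)

print(coverage(evolve()))
```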
40 CFR 53.22 - Generation of test atmospheres.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 6 2012-07-01 2012-07-01 false Generation of test atmospheres. 53.22... Characteristics of Automated Methods for SO2, CO, O3, and NO2 § 53.22 Generation of test atmospheres. (a) Table B-2 to subpart B of part 53 specifies preferred methods for generating test atmospheres and suggested...
40 CFR 53.22 - Generation of test atmospheres.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 6 2014-07-01 2014-07-01 false Generation of test atmospheres. 53.22... Characteristics of Automated Methods for SO2, CO, O3, and NO2 § 53.22 Generation of test atmospheres. (a) Table B-2 to subpart B of part 53 specifies preferred methods for generating test atmospheres and suggested...
40 CFR 53.22 - Generation of test atmospheres.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 6 2013-07-01 2013-07-01 false Generation of test atmospheres. 53.22... Characteristics of Automated Methods for SO2, CO, O3, and NO2 § 53.22 Generation of test atmospheres. (a) Table B-2 to subpart B of part 53 specifies preferred methods for generating test atmospheres and suggested...
A Model-Based Method for Content Validation of Automatically Generated Test Items
ERIC Educational Resources Information Center
Zhang, Xinxin; Gierl, Mark
2016-01-01
The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…
Integrated circuit test-port architecture and method and apparatus of test-port generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teifel, John
A method and apparatus are provided for generating RTL code for a test-port interface of an integrated circuit. In an embodiment, a test-port table is provided as input data. A computer automatically parses the test-port table into data structures and analyzes it to determine input, output, local, and output-enable port names. The computer generates address-detect and test-enable logic constructed from combinational functions. The computer generates one-hot multiplexer logic for at least some of the output ports. The one-hot multiplexer logic for each port is generated so as to enable the port to toggle between data signals and test signals. The computer then completes the generation of the RTL code.
Test pattern generation for ILA sequential circuits
NASA Technical Reports Server (NTRS)
Feng, YU; Frenzel, James F.; Maki, Gary K.
1993-01-01
An efficient method of generating test patterns for sequential machines implemented using one-dimensional, unilateral, iterative logic arrays (ILA's) of BTS pass transistor networks is presented. Based on a transistor level fault model, the method affords a unique opportunity for real-time fault detection with improved fault coverage. The resulting test sets are shown to be equivalent to those obtained using conventional gate level models, thus eliminating the need for additional test patterns. The proposed method advances the simplicity and ease of the test pattern generation for a special class of sequential circuitry.
Optimal Test Design with Rule-Based Item Generation
ERIC Educational Resources Information Center
Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.
2013-01-01
Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…
A rule-based software test data generator
NASA Technical Reports Server (NTRS)
Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II
1991-01-01
Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests are performed, showing that even the primitive rule-based test data generation prototype is significantly better than random data generation. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.
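To make the contrast concrete, here is an illustrative-only sketch in which simple boundary-value rules beat uniform random data on branch coverage of a toy procedure; the paper's rule base for Ada programs is far richer, and the function and bounds below are invented:

```python
# Toy contrast: rule-based (boundary-value) vs. random test data.
import random

def classify(x):            # toy procedure under test
    if x < 0:   return "neg"
    if x == 0:  return "zero"
    if x > 100: return "big"
    return "small"

def random_cases(n, lo=-10**6, hi=10**6):
    return [random.randint(lo, hi) for _ in range(n)]

def rule_based_cases(bounds=(0, 100)):
    # Rule: for every boundary b, emit b-1, b, and b+1.
    return sorted({v for b in bounds for v in (b - 1, b, b + 1)})

def branches_hit(cases):
    return {classify(x) for x in cases}

print(branches_hit(rule_based_cases()))  # all four branches from 6 cases
print(branches_hit(random_cases(6)))     # typically misses "zero" and "small"
```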
Design and Test of Pseudorandom Number Generator Using a Star Network of Lorenz Oscillators
NASA Astrophysics Data System (ADS)
Cho, Kenichiro; Miyano, Takaya
We have recently developed a chaos-based stream cipher based on augmented Lorenz equations as a star network of Lorenz subsystems. In our method, the augmented Lorenz equations are used as a pseudorandom number generator. In this study, we propose a new method based on the augmented Lorenz equations for generating binary pseudorandom numbers and evaluate its security using the statistical tests of SP800-22 published by the National Institute of Standards and Technology, in comparison with the performances of other chaotic dynamical models used as binary pseudorandom number generators. We further propose a faster version of the proposed method and evaluate its security using the statistical tests of TestU01 published by L’Ecuyer and Simard.
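As a rough illustration of chaos-based bit generation, the sketch below integrates the classic (not the paper's augmented) Lorenz system and thresholds one coordinate; the parameters and bit-extraction rule are assumptions. Raw bits like these are strongly correlated, which is exactly why post-processing and batteries such as SP800-22 and TestU01 matter:

```python
# Hedged sketch: binary sequence from the classic Lorenz system.
import numpy as np

def lorenz_bits(n_bits, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = 1.0, 1.0, 1.0
    bits = []
    while len(bits) < n_bits:
        dx = sigma * (y - x)          # Euler step of the Lorenz flow
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        bits.append(1 if x > 0 else 0)  # sign of x as the raw bit
    return np.array(bits, dtype=np.uint8)

print(lorenz_bits(32))
```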
40 CFR 53.42 - Generation of test atmospheres for wind tunnel tests.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Generation of test atmospheres for wind... Testing Performance Characteristics of Methods for PM10 § 53.42 Generation of test atmospheres for wind... particle delivery system shall consist of a blower system and a wind tunnel having a test section of...
40 CFR 53.42 - Generation of test atmospheres for wind tunnel tests.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Generation of test atmospheres for wind... Testing Performance Characteristics of Methods for PM10 § 53.42 Generation of test atmospheres for wind... particle delivery system shall consist of a blower system and a wind tunnel having a test section of...
40 CFR 53.22 - Generation of test atmospheres.
Code of Federal Regulations, 2010 CFR
2010-07-01
... test concentration shall be verified. (b) The test atmosphere delivery system shall be designed and... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Generation of test atmospheres. 53.22... Characteristics of Automated Methods SO2, CO, O3, and NO2 § 53.22 Generation of test atmospheres. (a) Table B-2...
40 CFR 53.22 - Generation of test atmospheres.
Code of Federal Regulations, 2011 CFR
2011-07-01
... test concentration shall be verified. (b) The test atmosphere delivery system shall be designed and... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Generation of test atmospheres. 53.22... Characteristics of Automated Methods SO2, CO, O3, and NO2 § 53.22 Generation of test atmospheres. (a) Table B-2...
Delay test generation for synchronous sequential circuits
NASA Astrophysics Data System (ADS)
Devadas, Srinivas
1989-05-01
We address the problem of generating tests for delay faults in non-scan synchronous sequential circuits. Delay test generation for sequential circuits is a considerably more difficult problem than delay testing of combinational circuits and has received much less attention. In this paper, we present a method for generating test sequences to detect delay faults in sequential circuits using the stuck-at fault sequential test generator STALLION. The method is complete in that it will generate a delay test sequence for a targeted fault given sufficient CPU time, if such a sequence exists. We term faults for which no delay test sequence exists, under our test methodology, sequentially delay redundant. We describe means of eliminating sequential delay redundancies in logic circuits. We present a partial-scan methodology for enhancing the testability of difficult-to-test or untestable sequential circuits, wherein a small number of flip-flops are selected and made controllable/observable. The selection process guarantees the elimination of all sequential delay redundancies. We show that an intimate relationship exists between state assignment and delay testability of a sequential machine. We describe a state assignment algorithm for the synthesis of sequential machines with maximal delay fault testability. Preliminary experimental results using the test generation, partial-scan, and synthesis algorithms are presented.
49 CFR 383.133 - Test methods.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 5 2014-10-01 2014-10-01 false Test methods. 383.133 Section 383.133... STANDARDS; REQUIREMENTS AND PENALTIES Tests § 383.133 Test methods. (a) All tests must be constructed in... and provides to all State Driver Licensing Agencies. (2) The State method of generating knowledge...
49 CFR 383.133 - Test methods.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 5 2012-10-01 2012-10-01 false Test methods. 383.133 Section 383.133... STANDARDS; REQUIREMENTS AND PENALTIES Tests § 383.133 Test methods. (a) All tests must be constructed in... and provides to all State Driver Licensing Agencies. (2) The State method of generating knowledge...
49 CFR 383.133 - Test methods.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 5 2011-10-01 2011-10-01 false Test methods. 383.133 Section 383.133... STANDARDS; REQUIREMENTS AND PENALTIES Tests § 383.133 Test methods. (a) All tests must be constructed in... and provides to all State Driver Licensing Agencies. (2) The State method of generating knowledge...
49 CFR 383.133 - Test methods.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 5 2013-10-01 2013-10-01 false Test methods. 383.133 Section 383.133... STANDARDS; REQUIREMENTS AND PENALTIES Tests § 383.133 Test methods. (a) All tests must be constructed in... and provides to all State Driver Licensing Agencies. (2) The State method of generating knowledge...
Karpušenkaitė, Aistė; Ruzgas, Tomas; Denafas, Gintaras
2018-05-01
The aim of the study was to create a hybrid forecasting method that could produce higher-accuracy forecasts than the previously used 'pure' time-series methods. Those methods had already been tested on total automotive waste, hazardous automotive waste, and total medical waste generation, but demonstrated at least a 6% error rate in different cases, and efforts were made to decrease it further. The newly developed hybrid models use a random start generation method to combine the advantages of different time-series methods, which increased forecast accuracy by 3%-4% in the hazardous automotive waste and total medical waste generation cases; the new model did not increase the accuracy of total automotive waste generation forecasts. The developed models' short- and mid-term forecasting abilities were tested across prediction horizons.
Concept Development for Future Domains: A New Method of Knowledge Elicitation
2005-06-01
Procedure: U.S. Army Research Institute for the Behavioral and Social Sciences (ARI) examined methods to generate, refine, test, and validate new...generate, elaborate, refine, describe, test, and validate new Future Force concepts relating to doctrine, tactics, techniques, procedures, unit and team...System (Harvey, 1993), and the Job Element Method (Primoff & Eyde, 1988). Figure 1 provides a more comprehensive list of task analytic methods. Please see
Foam property tests to evaluate the potential for longwall shield dust control.
Reed, W R; Beck, T W; Zheng, Y; Klima, S; Driscoll, J
2018-01-01
Tests were conducted to determine properties of four foam agents for their potential use in longwall mining dust control. Foam has been tried in underground mining in the past for dust control and is currently being reconsidered for use in underground coal longwall operations in order to help those operations comply with the Mine Safety and Health Administration's lower coal mine respirable dust standard of 1.5 mg/m3. Foams were generated using two different methods. One method used compressed air and water pressure to generate foam, while the other method used low-pressure air generated by a blower and water pressure using a foam generator developed by the U.S. National Institute for Occupational Safety and Health. Foam property tests, consisting of a foam expansion ratio test and a water drainage test, were conducted to classify foams. Compressed-air-generated foams tended to have low expansion ratios, from 10 to 19, with high water drainage. Blower-air-generated foams had higher foam expansion ratios, from 30 to 60, with lower water drainage. Foams produced within these ranges of expansion ratios are stable and potentially suitable for dust control. The test results eliminated two foam agents for future testing because they had poor expansion ratios. The remaining two foam agents seem to have properties adequate for dust control. These material property tests can be used to classify foams for their potential use in longwall mining dust control.
An assessment of unstructured grid technology for timely CFD analysis
NASA Technical Reports Server (NTRS)
Kinard, Tom A.; Schabowski, Deanne M.
1995-01-01
An assessment of two unstructured grid methods is presented in this paper. A tetrahedral unstructured method, USM3D, developed at NASA Langley Research Center, is compared to a Cartesian unstructured method, SPLITFLOW, developed at Lockheed Fort Worth Company. USM3D is an upwind finite volume solver that accepts grids generated primarily from the Vgrid grid generator. SPLITFLOW combines an unstructured grid generator with an implicit flow solver in one package. Both methods are exercised on three test cases: a wing, a wing body, and a fully expanded nozzle. The results for the first two runs are included here and compared to the structured grid method TEAM and to available test data. For each test case, the setup procedure is described, including any difficulties that were encountered. Detailed descriptions of the solvers are not included in this paper.
Formal methods for test case generation
NASA Technical Reports Server (NTRS)
Rushby, John (Inventor); De Moura, Leonardo Mendonga (Inventor); Hamon, Gregoire (Inventor)
2011-01-01
The invention relates to the use of model checkers to generate efficient test sets for hardware and software systems. The method provides for extending existing tests to reach new coverage targets; searching *to* some or all of the uncovered targets in parallel; searching in parallel *from* some or all of the states reached in previous tests; and slicing the model relative to the current set of coverage targets. The invention provides efficient test case generation and test set formation. Deep regions of the state space can be reached within allotted time and memory. The approach has been applied to use of the model checkers of SRI's SAL system and to model-based designs developed in Stateflow. Stateflow models achieving complete state and transition coverage in a single test case are reported.
The Application of Surface Potential Test on Hand-making Insulation for Generator Stator End-winding
NASA Astrophysics Data System (ADS)
Lu, Zhu-mao; Liu, Qing; Wang, Tian-zheng; Bai, Lu; Li, Yan-peng
2017-05-01
This paper presents the advantages of the surface potential test on hand-making insulation for generator stator end-winding insulation detection, compared with the DC or AC withstand voltage test, and details the test principle, connection method, and test notes. Through a case study, the surface potential test on hand-making insulation proved effective for insulation quality detection after generator stator end-winding maintenance, and the experimental data are useful and reliable for electrical equipment operation and maintenance in the power plant.
2012-01-01
Background Gene Set Analysis (GSA) has proven to be a useful approach to microarray analysis. However, most of the method development for GSA has focused on the statistical tests to be used rather than on the generation of sets that will be tested. Existing methods of set generation are often overly simplistic. The creation of sets from individual pathways (in isolation) is a poor reflection of the complexity of the underlying metabolic network. We have developed a novel approach to set generation via the use of Principal Component Analysis of the Laplacian matrix of a metabolic network. We have analysed a relatively simple data set to show the difference in results between our method and the current state-of-the-art pathway-based sets. Results The sets generated with this method are semi-exhaustive and capture much of the topological complexity of the metabolic network. The semi-exhaustive nature of this method has also allowed us to design a hypergeometric enrichment test to determine which genes are likely responsible for set significance. We show that our method finds significant aspects of biology that would be missed (i.e. false negatives) and addresses the false positive rates found with the use of simple pathway-based sets. Conclusions The set generation step for GSA is often neglected but is a crucial part of the analysis as it defines the full context for the analysis. As such, set generation methods should be robust and yield as complete a representation of the extant biological knowledge as possible. The method reported here achieves this goal and is demonstrably superior to previous set analysis methods. PMID:22876834
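A compressed sketch of the pipeline described above, on assumed toy data: build the Laplacian of a small network, take its eigenvectors as the PCA-like basis, threshold loadings into sets, and score each set with a hypergeometric test. The graph, the 0.3 loading threshold, and the "significant" gene list are all invented:

```python
# Rough sketch: Laplacian-spectrum set generation + hypergeometric test.
import numpy as np
from scipy.stats import hypergeom

A = np.array([[0, 1, 1, 0, 0],      # toy 5-node metabolic network
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A           # graph Laplacian

vals, vecs = np.linalg.eigh(L)      # eigenvectors play the role of PCs
# One set per component: nodes with large loadings on that eigenvector.
sets = [set(np.where(np.abs(vecs[:, k]) > 0.3)[0]) for k in range(1, 4)]

significant = {0, 1, 2}             # assumed differentially expressed genes
N, K = 5, len(significant)          # population size, successes in population
for s in sets:
    k, n = len(s & significant), len(s)
    p = hypergeom.sf(k - 1, N, K, n)    # P(X >= k) for draws of size n
    print(sorted(s), round(p, 3))
```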
Automatic item generation implemented for measuring artistic judgment aptitude.
Bezruczko, Nikolaus
2014-01-01
Automatic item generation (AIG) is a broad class of methods that are being developed to address psychometric issues arising from internet and computer-based testing. In general, issues emphasize efficiency, validity, and diagnostic usefulness of large scale mental testing. Rapid prominence of AIG methods and their implicit perspective on mental testing is bringing painful scrutiny to many sacred psychometric assumptions. This report reviews basic AIG ideas, then presents conceptual foundations, image model development, and operational application to artistic judgment aptitude testing.
Test Generation Algorithm for Fault Detection of Analog Circuits Based on Extreme Learning Machine
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin; Ren, Xuelong
2014-01-01
This paper proposes a novel test generation algorithm based on extreme learning machine (ELM), which is cost-effective and low-risk for the analog device under test (DUT). The method uses test patterns derived from the test generation algorithm to stimulate the DUT, and then samples output responses of the DUT for fault classification and detection. The proposed ELM-based test generation algorithm contains three main innovations. Firstly, the algorithm saves time by classifying the response space with ELM. Secondly, it avoids loss of test precision when the number of impulse-response samples is reduced. Thirdly, a new test signal generation process and test structure are presented, both of which are very simple. Finally, these improvements are confirmed in experiments. PMID:25610458
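The core of an ELM classifier of the kind used here to classify the response space fits in a few lines: random, never-trained hidden weights plus a one-shot pseudo-inverse solve for the output weights. The data below is synthetic, not the paper's analog DUT responses:

```python
# Minimal extreme learning machine (ELM) sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # sampled output responses
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # fault / no-fault labels

H_DIM = 50
W = rng.normal(size=(8, H_DIM))               # random hidden weights, untrained
b = rng.normal(size=H_DIM)
H = np.tanh(X @ W + b)                        # random hidden-layer features
beta = np.linalg.pinv(H) @ y                  # output weights in one shot

pred = np.tanh(X @ W + b) @ beta > 0.5
print("training accuracy:", (pred == y).mean())
```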
Comparison of three commercially available fit-test methods.
Janssen, Larry L; Luinenburg, D Michael; Mullins, Haskell E; Nelson, Thomas J
2002-01-01
American National Standards Institute (ANSI) standard Z88.10, Respirator Fit Testing Methods, includes criteria to evaluate new fit-tests. The standard allows generated aerosol, particle counting, or controlled negative pressure quantitative fit-tests to be used as the reference method to determine acceptability of a new test. This study examined (1) comparability of three Occupational Safety and Health Administration-accepted fit-test methods, all of which were validated using generated aerosol as the reference method; and (2) the effect of the reference method on the apparent performance of a fit-test method under evaluation. Sequential fit-tests were performed using the controlled negative pressure and particle counting quantitative fit-tests and the bitter aerosol qualitative fit-test. Of 75 fit-tests conducted with each method, the controlled negative pressure method identified 24 failures; bitter aerosol identified 22 failures; and the particle counting method identified 15 failures. The sensitivity of each method, that is, agreement with the reference method in identifying unacceptable fits, was calculated using each of the other two methods as the reference. None of the test methods met the ANSI sensitivity criterion of 0.95 or greater when compared with either of the other two methods. These results demonstrate that (1) the apparent performance of any fit-test depends on the reference method used, and (2) the fit-tests evaluated use different criteria to identify inadequately fitting respirators. Although "acceptable fit" cannot be defined in absolute terms at this time, the ability of existing fit-test methods to reject poor fits can be inferred from workplace protection factor studies.
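A back-of-envelope sketch of the sensitivity computation implied above, with invented subject counts: sensitivity is the fraction of reference-method failures that the evaluated method also flags:

```python
# Illustrative fit-test sensitivity calculation (counts are invented,
# not the study's paired raw data).
def sensitivity(test_fail: set, reference_fail: set) -> float:
    """Fraction of reference-identified failures also flagged by the test."""
    return len(test_fail & reference_fail) / len(reference_fail)

cnp = {1, 4, 7, 9}      # subjects failed by controlled negative pressure
taste = {1, 4, 9}       # subjects failed by bitter aerosol
print(sensitivity(taste, cnp))  # 0.75 -- below the ANSI 0.95 criterion
```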
Rapid testing of pulse transformers
NASA Technical Reports Server (NTRS)
Grillo, J.
1980-01-01
Quality-control testing of pulse transformers is speeded up by a method for determining rise time and droop. Instead of using an oscilloscope and square-wave generator to measure these characteristics directly, the method uses a voltmeter and sine-wave generator to measure them indirectly in about one-tenth the time. Droop and rise time are determined by measuring the input/output voltage ratio at just four frequencies.
ERIC Educational Resources Information Center
ALTMANN, BERTHOLD; BROWN, WILLIAM G.
The first-generation Approach by Concept (ABC) storage and retrieval method, a method which utilizes as a subject approach appropriate standardized English-language statements processed and printed in a permuted index format, underwent a performance test, the primary objective of which was to spot deficiencies and to develop a second-generation…
NASA Astrophysics Data System (ADS)
Součková, Natálie; Kuklová, Jana; Popelka, Lukáš; Matějka, Milan
2012-04-01
This paper focuses on the suppression of the flow separation that occurs on a deflected flap by means of vortex generators (VGs). A NACA 63A421 airfoil with a simple flap and vane-type vortex generators was used. The investigation was carried out using experimental and numerical methods. The data from the numerical simulation of the flapped airfoil without VG control were used for the vortex generator design. Two sizes, two different shapes, and various spacings of the vortex generators were tested. The flow past the airfoil was visualized through three methods, namely the tuft filament technique, oil visualization, and thermal-camera visualization. The experiments were performed in closed-circuit wind tunnels with closed and open test sections. The lift curves for the cases without and with vortex generators were acquired to determine the lift coefficient improvement. The improvement was achieved for several cases, as confirmed by all of the applied methods.
Renewable Energy Generation and Storage Models | Grid Modernization | NREL
Projects: Generator, Plant, and Storage Modeling, Simulation, and Validation; Power Hardware-in-the-Loop Testing. NREL researchers are developing combined software-and-hardware simulation testing methods known as power hardware-in-the-loop testing.
A simplified analytic form for generation of axisymmetric plasma boundaries
Luce, Timothy C.
2017-02-23
An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.
Advanced Method of Boundary-Layer Control Based on Localized Plasma Generation
2009-05-01
measurements, validation of experiments, wind-tunnel testing of the microwave/plasma generation system, preliminary assessment of energy required...and design of a microwave generator, electrodynamic and multivibrator systems for experiments in the IHM-NAU wind tunnel: MW generator and its high...equipped with the microwave-generation and protection systems to study advanced methods of flow control (Kiev).
Sanaka, Masaki; Yamamoto, Takatsugu; Ishii, Tarou; Kuyama, Yasushi
2004-01-01
In pharmacokinetics, the Wagner-Nelson (W-N) method can accurately estimate the rate of drug absorption from its urinary elimination rate. A stable isotope (13C) breath test attempts to estimate the rate of absorption of 13C, as an index of gastric emptying rate, from the rate of pulmonary elimination of 13CO2. The time-gastric emptying curve determined by the breath test is quite different from that determined by scintigraphy or ultrasonography. In this report, we have shown that the W-N method can adjust the difference. The W-N equation to estimate gastric emptying from breath data is as follows: the fractional cumulative amount of gastric contents emptied by time t = A_breath(t)/A_breath(∞) + (1/0.65)·d[A_breath(t)/A_breath(∞)]/dt, where A_breath(t) = the cumulative recovery of 13CO2 in breath by time t and A_breath(∞) = the ultimate cumulative 13CO2 recovery. The emptying flow curve generated by ultrasonography was compared with that generated by the W-N method-adjusted breath test in 6 volunteers. The emptying curves by the W-N method were almost identical to those by ultrasound. The W-N method can generate an accurate emptying flow curve from 13CO2 data, and it can adjust the difference between ultrasonography and the breath test. Copyright 2004 S. Karger AG, Basel
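For readability, the same Wagner-Nelson adjustment in display form, with F(t) denoting the fractional cumulative amount of gastric contents emptied by time t:

```latex
% Wagner-Nelson adjustment of breath-test data, as stated above.
\[
F(t) \;=\; \frac{A_{\mathrm{breath}}(t)}{A_{\mathrm{breath}}(\infty)}
\;+\; \frac{1}{0.65}\,
\frac{d}{dt}\!\left[\frac{A_{\mathrm{breath}}(t)}{A_{\mathrm{breath}}(\infty)}\right]
\]
```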
Test Input Generation for Red-Black Trees using Abstraction
NASA Technical Reports Server (NTRS)
Visser, Willem; Pasareanu, Corina S.; Pelanek, Radek
2005-01-01
We consider the problem of test input generation for code that manipulates complex data structures. Test inputs are sequences of method calls from the data structure interface. We describe test input generation techniques that rely on state matching to avoid generation of redundant tests. Exhaustive techniques use explicit state model checking to explore all the possible test sequences up to predefined input sizes. Lossy techniques rely on abstraction mappings to compute and store abstract versions of the concrete states; they explore under-approximations of all the possible test sequences. We have implemented the techniques on top of the Java PathFinder model checker and we evaluate them using a Java implementation of red-black trees.
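A skeletal version of exhaustive exploration with state matching, substituting a trivial counter object for red-black trees and Java PathFinder; the zero/nonzero abstraction gives the lossy, under-approximating variant described above:

```python
# Breadth-first test-sequence generation with state matching.
from collections import deque

class Counter:                        # stand-in for the data structure
    def __init__(self): self.v = 0
    def inc(self): self.v = min(self.v + 1, 3)
    def dec(self): self.v = max(self.v - 1, 0)

METHODS = ["inc", "dec"]

def abstraction(obj):                 # lossy mapping: zero / nonzero
    return obj.v == 0

def generate_tests(max_len=4, abstract=True):
    seen, tests = set(), []
    queue = deque([[]])
    while queue:
        seq = queue.popleft()
        obj = Counter()
        for m in seq:
            getattr(obj, m)()         # replay the method-call sequence
        state = abstraction(obj) if abstract else obj.v
        if state in seen:             # state matched: prune redundant tests
            continue
        seen.add(state)
        tests.append(seq)
        if len(seq) < max_len:
            queue.extend(seq + [m] for m in METHODS)
    return tests

print(generate_tests(abstract=False))  # exhaustive, concrete-state matching
print(generate_tests(abstract=True))   # lossy: fewer sequences explored
```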
Automated unit-level testing with heuristic rules
NASA Technical Reports Server (NTRS)
Carlisle, W. Homer; Chang, Kai-Hsiung; Cross, James H.; Keleher, William; Shackelford, Keith
1990-01-01
Software testing plays a significant role in the development of complex software systems. Current testing methods generally require significant effort to generate meaningful test cases. The QUEST/Ada system is a prototype system designed using CLIPS to experiment with expert system based test case generation. The prototype is designed to test for condition coverage, and attempts to generate test cases to cover all feasible branches contained in an Ada program. This paper reports on heuristics used by the system. These heuristics vary according to the amount of knowledge obtained by preprocessing and execution of the boolean conditions in the program.
Functional test generation for digital circuits described with a declarative language: LUSTRE
NASA Astrophysics Data System (ADS)
Almahrous, Mazen
1990-08-01
A functional approach to the test generation problem starting from a high-level description is proposed. The circuit under test is modeled using the LUSTRE high-level data-flow description language. The different LUSTRE primitives are translated to a SATAN format graph in order to evaluate the testability of the circuit and to generate test sequences. A further method for testing complex circuits comprising an operative part and a control part is defined: it consists of checking experiments for the control part observed through the operative part. It was applied to the automata generated from a LUSTRE description of the circuit.
ERIC Educational Resources Information Center
Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.
2016-01-01
The aim of this study is to determine variance difference between maximum likelihood and expected A posteriori estimation methods viewed from number of test items of aptitude test. The variance presents an accuracy generated by both maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…
Method for Smoke Spread Testing of Large Premises
NASA Astrophysics Data System (ADS)
Walmerdahl, P.; Werling, P.
2001-11-01
A method for performing non-destructive smoke spread tests has been developed, tested, and applied to several existing buildings. The heat source is generated by burning methanol in water-cooled steel trays of different sizes; several tray sizes are available to cover fire sources up to nearly 1 MW. The smoke is supplied by a suitable number of smoke generators that produce a smoke best described as a non-toxic aerosol. The advantage of the method is that it provides a means for performing non-destructive tests in existing buildings and other installations for the purpose of evaluating the functionality and design of active fire protection measures such as smoke extraction systems. In the report, the method is described in detail, and experimental data from the try-out of the method are presented, together with a discussion of its applicability and flexibility.
Deblauwe, Vincent; Kennel, Pol; Couteron, Pierre
2012-01-01
Background Independence between observations is a standard prerequisite of traditional statistical tests of association. This condition is, however, violated when autocorrelation is present within the data. In the case of variables that are regularly sampled in space (i.e. lattice data or images), such as those provided by remote-sensing or geographical databases, this problem is particularly acute. Because analytic derivation of the null probability distribution of the test statistic (e.g. Pearson's r) is not always possible when autocorrelation is present, we propose instead the use of a Monte Carlo simulation with surrogate data. Methodology/Principal Findings The null hypothesis that two observed mapped variables are the result of independent pattern generating processes is tested here by generating sets of random image data while preserving the autocorrelation function of the original images. Surrogates are generated by matching the dual-tree complex wavelet spectra (and hence the autocorrelation functions) of white noise images with the spectra of the original images. The generated images can then be used to build the probability distribution function of any statistic of association under the null hypothesis. We demonstrate the validity of a statistical test of association based on these surrogates with both actual and synthetic data and compare it with a corrected parametric test and three existing methods that generate surrogates (randomization, random rotations and shifts, and iterative amplitude adjusted Fourier transform). Type I error control was excellent, even with strong and long-range autocorrelation, which is not the case for alternative methods. Conclusions/Significance The wavelet-based surrogates are particularly appropriate in cases where autocorrelation appears at all scales or is direction-dependent (anisotropy). We explore the potential of the method for association tests involving a lattice of binary data and discuss its potential for validation of species distribution models. An implementation of the method in Java for the generation of wavelet-based surrogates is available online as supporting material. PMID:23144961
Shuttle filter study. Volume 2: Contaminant generation and sensitivity studies
NASA Technical Reports Server (NTRS)
1974-01-01
Contaminant generation studies were conducted at the component level using two different methods, radioactive tracer technique and gravimetric analysis test procedure. Both of these were reduced to practice during this program. In the first of these methods, radioactively tagged components typical of those used in spacecraft were studied to determine their contaminant generation characteristics under simulated operating conditions. Because the purpose of the work was: (1) to determine the types and quantities of contaminants generated; and (2) to evaluate improved monitoring and detection schemes, no attempt was made to evaluate or qualify specific components. The components used in this test program were therefore not flight hardware items. Some of them had been used in previous tests; some were obsolete; one was an experimental device. In addition to the component tests, various materials of interest to contaminant and filtration studies were irradiated and evaluated for use as autotracer materials. These included test dusts, plastics, valve seat materials, and bearing cage materials.
A new comparison method for dew-point generators
NASA Astrophysics Data System (ADS)
Heinonen, Martti
1999-12-01
A new method for comparing dew-point generators was developed at the Centre for Metrology and Accreditation. In this method, the generators participating in a comparison are compared with a transportable saturator unit using a dew-point comparator. The method was tested by constructing a test apparatus and by comparing it with the MIKES primary dew-point generator several times in the dew-point temperature range from -40 to +75 °C. The expanded uncertainty (k = 2) of the apparatus was estimated to be between 0.05 and 0.07 °C and the difference between the comparator system and the generator is well within these limits. In particular, all of the results obtained in the range below 0 °C are within ±0.03 °C. It is concluded that a new type of a transfer standard with characteristics most suitable for dew-point comparisons can be developed on the basis of the principles presented in this paper.
SU-F-T-423: Automating Treatment Planning for Cervical Cancer in Low- and Middle- Income Countries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kisling, K; Zhang, L; Yang, J
Purpose: To develop and test two independent algorithms that automatically create the photon treatment fields for a four-field box beam arrangement, a common treatment technique for cervical cancer in low- and middle-income countries. Methods: Two algorithms were developed and integrated into Eclipse using its Advanced Programming Interface. 3D Method: We automatically segment bony anatomy on CT using an in-house multi-atlas contouring tool and project the structures into the beam’s-eye-view. We identify anatomical landmarks on the projections to define the field apertures. 2D Method: We generate DRRs for all four beams. An atlas of DRRs for six standard patients with corresponding field apertures is deformably registered to the test patient DRRs. The set of deformed atlas apertures is fitted to an expected shape to define the final apertures. Both algorithms were tested on 39 patient CTs, and the resulting treatment fields were scored by a radiation oncologist. We also investigated the feasibility of using one algorithm as an independent check of the other algorithm. Results: 96% of the 3D-Method-generated fields and 79% of the 2D-Method-generated fields were scored acceptable for treatment (“Per Protocol” or “Acceptable Variation”). The 3D Method generated more fields scored “Per Protocol” than the 2D Method (62% versus 17%). The 4% of the 3D-Method-generated fields that were scored “Unacceptable Deviation” were all due to an improper L5 vertebra contour resulting in an unacceptable superior jaw position. When these same patients were planned with the 2D Method, the superior jaw was acceptable, suggesting that the 2D Method can be used to independently check the 3D Method. Conclusion: Our results show that our 3D Method is feasible for automatically generating cervical treatment fields. Furthermore, the 2D Method can serve as an automatic, independent check of the automatically-generated treatment fields. These algorithms will be implemented for fully automated cervical treatment planning.
Correlation to FVIII:C in Two Thrombin Generation Tests: TGA-CAT and INNOVANCE ETP.
Ljungkvist, Marcus; Berndtsson, Maria; Holmström, Margareta; Mikovic, Danijela; Elezovic, Ivo; Antovic, Jovan P; Zetterberg, Eva; Berntorp, Erik
2017-01-01
Several thrombin-generation tests are available, but few have been directly compared. Our primary aim was to investigate the correlation of two thrombin generation tests, thrombin generation assay-calibrated automated thrombogram (TGA-CAT) and INNOVANCE ETP, to factor VIII levels (FVIII:C) in a group of patients with hemophilia A. The secondary aim was to investigate inter-laboratory variation for the TGA-CAT method. Blood samples were taken from 45 patients with mild, moderate and severe hemophilia A. The TGA-CAT method was performed at both centers while the INNOVANCE ETP was only performed at the Stockholm center. Correlation between parameters was evaluated using Spearman's rank correlation test. For determination of the TGA-CAT inter-laboratory variability, Bland-Altman plots were used. The correlation of the INNOVANCE ETP and TGA-CAT methods with FVIII:C in persons with hemophilia (PWH) was r=0.701 and r=0.734, respectively. The correlation between the two methods was r=0.546. When dividing the study material into disease severity groups (mild, moderate and severe) based on FVIII levels, both methods fail to discriminate between them. The variability of the TGA-CAT results performed at the two centers was reduced after normalization; before normalization, 29% of values showed less than ±10% difference while after normalization the number increased to 41%. Both methods correlate in an equal manner to FVIII:C in PWH but show a poor correlation with each other. The level of agreement for the TGA-CAT method was poor though slightly improved after normalization of data. Further improvement of standardization of these methods is warranted.
Statistically generated weighted curve fit of residual functions for modal analysis of structures
NASA Technical Reports Server (NTRS)
Bookout, P. S.
1995-01-01
A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
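A conjectural numpy rendering of the fitting step, assuming the weights are taken as the inverse local variance between neighboring points (the data is synthetic; note that numpy's polyfit expects 1/sigma-style weights):

```python
# Sketch: variance-based weights + weighted 2nd-order polynomial fit,
# whose constant term estimates the residual flexibility.
import numpy as np

freq = np.linspace(5, 50, 40)                  # Hz, synthetic test band
resid = 1e-6 + 2e-10 * freq**2                 # flat line + slight upturn
resid += np.random.default_rng(1).normal(0, 5e-8, freq.size)  # ragged data

# Weight each point by the inverse variance of its neighborhood.
local_var = np.array([np.var(resid[max(0, i - 2): i + 3])
                      for i in range(resid.size)])
inv_var = 1.0 / (local_var + 1e-30)

# polyfit wants 1/sigma weights, i.e. the square root of inverse variance.
coeffs = np.polyfit(freq, resid, deg=2, w=np.sqrt(inv_var))
print("residual flexibility estimate:", coeffs[-1])
```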
The Effect of Schooling and Ability on Achievement Test Scores. NBER Working Paper Series.
ERIC Educational Resources Information Center
Hansen, Karsten; Heckman, James J.; Mullen, Kathleen J.
This study developed two methods for estimating the effect of schooling on achievement test scores that control for the endogeneity of schooling by postulating that both schooling and test scores are generated by a common unobserved latent ability. The methods were applied to data on schooling and test scores. Estimates from the two methods are in…
Optical testing of aspheres based on photochromic computer-generated holograms
NASA Astrophysics Data System (ADS)
Pariani, Giorgio; Bianco, Andrea; Bertarelli, Chiara; Spanó, Paolo; Molinari, Emilio
2010-07-01
Aspherical optics are widely used in modern optical telescopes and instrumentation because of their ability to reduce aberrations with a simple optical system. Testing their optical quality through null interferometry is not trivial, as reference optics are not available. Computer-Generated Holograms (CGHs) are efficient devices that make it possible to generate a well-defined optical wavefront. We developed rewritable Computer-Generated Holograms, based on photochromic layers, for the interferometric testing of aspheres. These photochromic holograms are cost-effective, and the production method does not require any post-exposure processing.
ERIC Educational Resources Information Center
Roid, Gale; And Others
Several measurement theorists have convincingly argued that methods of writing test questions, particularly for criterion-referenced tests, should be based on operationally defined rules. This study was designed to examine and further refine a method for objectively generating multiple-choice questions for prose instructional materials. Important…
Development of the mathematical model for design and verification of acoustic modal analysis methods
NASA Astrophysics Data System (ADS)
Siner, Alexander; Startseva, Maria
2016-10-01
To reduce turbofan noise it is necessary to develop methods, collectively called modal analysis, for analyzing the sound field generated by blade machinery. Because modal analysis methods are difficult, and testing them against full-scale measurements is expensive and tedious, it is necessary to construct mathematical models that allow modal analysis algorithms to be tested quickly and cheaply. In this work, a model is presented that allows single modes to be set in the channel and the generated sound field to be analyzed. Modal analysis of the sound generated by a ring array of point sound sources is performed, and a comparison of experimental and numerical modal analysis results is presented.
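As a sketch of what such a model enables, the snippet below drives a ring array of point sources with the phase pattern of a single azimuthal mode m and recovers that mode by an FFT over the ring samples; the far-field model is deliberately trivial and all parameters are assumptions:

```python
# Toy modal analysis: set one azimuthal mode on a source ring, then
# recover it from ring samples with an FFT.
import numpy as np

M_SOURCES = 16
m_set = 3                                    # azimuthal mode order to generate
theta = 2 * np.pi * np.arange(M_SOURCES) / M_SOURCES

drive = np.exp(1j * m_set * theta)           # source phases for mode m
pressure = drive                             # ring samples (trivial propagation)

modes = np.fft.fft(pressure) / M_SOURCES     # azimuthal mode amplitudes
print(np.argmax(np.abs(modes)))              # -> 3: the set mode is recovered
```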
A micro-vibration generated method for testing the imaging quality on ground of space remote sensing
NASA Astrophysics Data System (ADS)
Gu, Yingying; Wang, Li; Wu, Qingwen
2018-03-01
In this paper, a novel method is proposed that can simulate satellite platform micro-vibration and test the impact of satellite micro-vibration on the imaging quality of a space optical remote sensor on the ground. The method can reproduce the in-orbit micro-vibration of a satellite platform in terms of vibrational degrees of freedom, spectrum, magnitude, and coupling path. Experimental results show that the relative error of acceleration control is within 7% at frequencies from 7 Hz to 40 Hz. Using this method, a system-level test of the micro-vibration impact on the imaging quality of a space optical remote sensor can be realized. The method will have important applications in testing the micro-vibration tolerance margin of optical remote sensors, verifying their vibration isolation and suppression performance, and exploring the principles of micro-vibration impact on imaging quality.
Generation of physical random numbers by using homodyne detection
NASA Astrophysics Data System (ADS)
Hirakawa, Kodai; Oya, Shota; Oguri, Yusuke; Ichikawa, Tsubasa; Eto, Yujiro; Hirano, Takuya; Tsurumaru, Toyohiro
2016-10-01
Physical random numbers generated by quantum measurements are, in principle, impossible to predict. We have demonstrated the generation of physical random numbers by using a high-speed balanced photodetector to measure the quadrature amplitudes of vacuum states. Using this method, random numbers were generated at 500 Mbps, which is more than one order of magnitude faster than previously [Gabriel et al., Nature Photonics 4, 711-715 (2010)]. The Crush test battery of the TestU01 suite consists of 31 tests in 144 variations, and we used them to statistically analyze these numbers. The generated random numbers passed 14 of the 31 tests. To improve the randomness, we performed a hash operation, in which each random number was multiplied by a random Toeplitz matrix; the resulting numbers passed all of the tests in the TestU01 Crush battery.
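A sketch of the Toeplitz-hashing step under assumed sizes (in practice the compression ratio comes from an entropy estimate of the homodyne data, and the seed must be uniformly random):

```python
# Randomness extraction by binary Toeplitz-matrix multiplication.
import numpy as np

rng = np.random.default_rng(42)
n_in, n_out = 128, 64                  # compress 128 raw bits to 64

raw = rng.integers(0, 2, n_in, dtype=np.uint8)              # stand-in raw bits
seed = rng.integers(0, 2, n_in + n_out - 1, dtype=np.uint8)  # Toeplitz seed

# Toeplitz matrix: constant along diagonals, T[i, j] = seed[i - j + n_in - 1].
T = np.array([[seed[i - j + n_in - 1] for j in range(n_in)]
              for i in range(n_out)], dtype=np.uint8)

hashed = (T @ raw) % 2                 # matrix product over GF(2)
print(hashed)
```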
Improving Arterial Spin Labeling by Using Deep Learning.
Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong
2018-05-01
Purpose To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods For ASL image generation from pair-wise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation, 19.7 mL per 100 g/min ± 9.7) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pair-wise subtraction images. © RSNA, 2017.
Multifunction waveform generator for EM receiver testing
NASA Astrophysics Data System (ADS)
Chen, Kai; Jin, Sheng; Deng, Ming
2018-01-01
In many electromagnetic (EM) methods - such as magnetotelluric, spectral-induced polarization (SIP), time-domain-induced polarization (TDIP), and controlled-source audio magnetotelluric (CSAMT) methods - it is important to evaluate and test the EM receivers during their development stage. To assess the performance of the developed EM receivers, controlled synthetic data that simulate the observed signals in different modes are required. In CSAMT and SIP mode testing, the waveform generator should use the GPS time as the reference for the repeating schedule. Based on our testing, the frequency range, frequency precision, and time synchronization of the currently available function waveform generators on the market are deficient. This paper presents a multifunction waveform generator with three waveforms: (1) a wideband, low-noise electromagnetic field signal to be used for magnetotelluric, audio-magnetotelluric, and long-period magnetotelluric studies; (2) a repeating frequency-sweep square waveform for CSAMT and SIP studies; and (3) a positive-zero-negative-zero signal that contains primary and secondary fields for TDIP studies. In this paper, we provide the principles of the above three waveforms along with a hardware design for the generator. Furthermore, testing of the EM receiver was conducted with the waveform generator, and the results of the experiment were compared with those calculated from the simulation and theory in the frequency band of interest.
Software Quality Assurance and Verification for the MPACT Library Generation Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yuxuan; Williams, Mark L.; Wiarda, Dorothea
This report fulfills the requirements for the Consortium for the Advanced Simulation of Light-Water Reactors (CASL) milestone L2:RTM.P14.02, “SQA and Verification for MPACT Library Generation,” by documenting the current status of the software quality, verification, and acceptance testing of nuclear data libraries for MPACT. It provides a brief overview of the library generation process, from general-purpose evaluated nuclear data files (ENDF/B) to a problem-dependent cross section library for modeling of light-water reactors (LWRs). The software quality assurance (SQA) programs associated with each of the software packages used to generate the nuclear data libraries are discussed; specific tests within the SCALE/AMPX and VERA/XSTools repositories are described. The methods and associated tests to verify the quality of the library during the generation process are described in detail. The library generation process has been automated to a degree to (1) ensure that it can be run without user intervention and (2) ensure that the library can be reproduced. Finally, the acceptance testing process that will be performed by representatives from the Radiation Transport Methods (RTM) Focus Area prior to the production library’s release is described in detail.
Optimization of OT-MACH Filter Generation for Target Recognition
NASA Technical Reports Server (NTRS)
Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak to side lobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter quicker and more reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance where the true positive rate increased for the same average false positives per image.
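An illustrative version of the adaptive-step search over (alpha, beta, gamma), with a stand-in quadratic surface in place of the composite of correlation peak height and peak-to-sidelobe ratio; the step-adaptation rule and bounds are assumptions:

```python
# Toy adaptive-step gradient ascent over three filter parameters.
import numpy as np

def score(p):
    # Hypothetical smooth composite metric peaking at (0.3, 0.5, 0.2).
    return -np.sum((p - np.array([0.3, 0.5, 0.2]))**2)

def adaptive_ascent(p, step=0.1, eps=1e-3, iters=200):
    for _ in range(iters):
        grad = np.array([(score(p + eps * e) - score(p - eps * e)) / (2 * eps)
                         for e in np.eye(3)])       # finite-difference gradient
        p_new = np.clip(p + step * grad, 0.0, 1.0)  # keep parameters in [0, 1]
        if score(p_new) > score(p):
            p, step = p_new, step * 1.2             # accept and grow the step
        else:
            step *= 0.5                             # reject and shrink the step
    return p

print(adaptive_ascent(np.array([0.9, 0.1, 0.8])).round(3))
```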
Generating Test Templates via Automated Theorem Proving
NASA Technical Reports Server (NTRS)
Kancherla, Mani Prasad
1997-01-01
Testing can be used during the software development process to maintain fidelity between evolving specifications, program designs, and code implementations. We use a form of specification-based testing that employs an automated theorem prover to generate test templates. A similar approach was developed using a model checker on state-intensive systems. This method applies to systems with functional rather than state-based behaviors. This approach allows for the use of incomplete specifications to aid in the generation of tests for potential failure cases. We illustrate the technique on the canonical triangle testing problem and discuss its use in the analysis of a spacecraft scheduling system.
Comparison of two Galerkin quadrature methods
Morel, Jim E.; Warsa, James; Franke, Brian C.; ...
2017-02-21
Here, we compare two methods for generating Galerkin quadratures. In method 1, the standard S_N method is used to generate the moment-to-discrete matrix, and the discrete-to-moment matrix is generated by inverting the moment-to-discrete matrix. This is a particular form of the original Galerkin quadrature method. In method 2, which we introduce here, the standard S_N method is used to generate the discrete-to-moment matrix, and the moment-to-discrete matrix is generated by inverting the discrete-to-moment matrix. With an N-point quadrature, method 1 has the advantage that it preserves N eigenvalues and N eigenvectors of the scattering operator in a pointwise sense. With an N-point quadrature, method 2 has the advantage that it generates consistent angular moment equations from the corresponding S_N equations while preserving N eigenvalues of the scattering operator. Our computational results indicate that these two methods are quite comparable for the test problem considered.
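A minimal numerical sketch (not from the paper; the S_N construction of the starting matrix is omitted) showing that the two methods differ only in which operator is formed directly and which is obtained by matrix inversion:

```python
import numpy as np

def galerkin_operators(A, method=1):
    """Illustrative only. A stands for the square matrix built from the
    standard S_N quadrature. In method 1 it is the moment-to-discrete
    matrix M, and the discrete-to-moment matrix is D = M^{-1}; in
    method 2 the roles are reversed and M = D^{-1}."""
    A_inv = np.linalg.inv(A)
    if method == 1:
        return A, A_inv  # (moment-to-discrete, discrete-to-moment)
    return A_inv, A
```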
Generation and characterization of biological aerosols for laser measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Yung-Sung; Barr, E.B.
1995-12-01
Concerns about the proliferation of biological weapons, including bacteria, fungi, and viruses, have prompted research and development on methods for the rapid detection of biological aerosols in the field. Real-time instruments that can distinguish biological aerosols from background dust would be especially useful. Sandia National Laboratories (SNL) is developing a laser-based, real-time instrument for rapid detection of biological aerosols, and ITRI is working with SNL scientists and engineers to evaluate this technology for a wide range of biological aerosols. This paper describes the methods being used to generate and characterize the biological aerosols for these tests. In summary, a biosafe system has been developed for generating and characterizing biological aerosols and using those aerosols to test the SNL laser-based real-time instrument. Such tests are essential in studying methods for rapid detection of airborne biological materials.
Comparative Evaluation of Quantitative Test Methods for Gases on a Hard Surface
2017-02-01
Report ECBC-TR-1426 (Vipin Rastogi). Recoverable fragment from the experimental design section: each quantitative method was performed three times on three consecutive days; for the CD runs, three ...
Liu, Xiaofeng; Bai, Fang; Ouyang, Sisheng; Wang, Xicheng; Li, Honglin; Jiang, Hualiang
2009-03-31
Conformation generation is a ubiquitous problem in molecular modelling. Many applications require sampling the broad molecular conformational space or perceiving the bioactive conformers to ensure success. Numerous in silico methods have been proposed in an attempt to resolve the problem, ranging from deterministic to non-deterministic and from systematic to stochastic ones. In this work, we describe an efficient conformation sampling method named Cyndi, which is based on a multi-objective evolutionary algorithm (MOEA). The conformational perturbation is subjected to evolutionary operations on the genome encoded with dihedral torsions. Various objectives are designated to render the generated Pareto-optimal conformers both energy-favoured and evenly scattered across the conformational space. An optional objective concerning the degree of molecular extension is added to achieve geometrically extended or compact conformations, which have been observed to impact molecular bioactivity (J Comput-Aided Mol Des 2002, 16: 105-112). Testing the performance of Cyndi against a test set consisting of 329 small molecules reveals an average minimum RMSD of 0.864 Å to the corresponding bioactive conformations, indicating that Cyndi is highly competitive against other conformation generation methods. Meanwhile, the high-speed performance (0.49 +/- 0.18 seconds per molecule) makes Cyndi a practical toolkit for conformational database preparation and facilitates subsequent pharmacophore mapping or rigid docking. A precompiled executable of Cyndi and the test set molecules in mol2 format are accessible in Additional file 1. On the basis of the MOEA, we present a new, highly efficient conformation generation method, Cyndi, and report the results of validation and performance studies comparing it with four other methods. The results reveal that Cyndi is capable of generating geometrically diverse conformers and outperforms the four other multiple-conformer generators in reproducing the bioactive conformations of the 329 structures. The speed advantage indicates that Cyndi is a powerful alternative method for extensive conformational sampling and large-scale conformer database preparation.
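As a rough illustration of the multi-objective selection step described above (a minimal sketch; the actual objectives and evolutionary operators used in Cyndi are defined in the paper), the following keeps the Pareto-optimal conformers among candidates scored on two objectives to be minimized, e.g. strain energy and a negative diversity term:

```python
import numpy as np

def pareto_front(scores):
    # scores: one row per conformer, one column per objective (minimized)
    keep = []
    for i, s in enumerate(scores):
        dominated = any(np.all(t <= s) and np.any(t < s)
                        for j, t in enumerate(scores) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# toy usage: four conformers scored on (energy, -diversity)
print(pareto_front(np.array([[1.0, 0.2], [0.8, 0.5], [1.2, 0.1], [0.9, 0.6]])))
```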
Advances in Time Estimation Methods for Molecular Data.
Kumar, Sudhir; Hedges, S Blair
2016-04-01
Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first-generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second-generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock was applied to estimate divergence times. The third-generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth-generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth-generation methods are able to produce reliable timetrees of thousands of species using genome-scale data. We found that early time estimates from second-generation studies are similar to those of third- and fourth-generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. Nonetheless, we feel an urgent need for testing the accuracy and precision of third- and fourth-generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data sets.
Shum, Bennett O V; Henner, Ilya; Belluoccio, Daniele; Hinchcliffe, Marcus J
2017-07-01
The sensitivity and specificity of next-generation sequencing laboratory-developed tests (LDTs) are typically determined by an analyte-specific approach. Analyte-specific validations use disease-specific controls to assess an LDT's ability to detect known pathogenic variants. Alternatively, a methods-based approach can be used for LDT technical validations. Methods-focused validations do not use disease-specific controls but use benchmark reference DNA that contains known variants (benign, variants of unknown significance, and pathogenic) to assess the variant-calling accuracy of a next-generation sequencing workflow. Recently, four whole-genome reference materials (RMs) from the National Institute of Standards and Technology (NIST) were released to standardize methods-based validations of next-generation sequencing panels across laboratories. We provide a practical method for using NIST RMs to validate multigene panels. We analyzed the utility of RMs in validating a novel newborn screening test, called NEO1, that targets 70 genes. Despite the NIST RM variant truth set originating from multiple sequencing platforms, replicates, and library types, we discovered a 5.2% false-negative variant detection rate in the RM truth-set genes assessed in our validation. We developed a strategy using complementary non-RM controls to demonstrate 99.6% sensitivity of the NEO1 test in detecting variants. Our findings have implications for laboratories and proficiency testing organizations using whole-genome NIST RMs for testing.
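A minimal sketch of the sensitivity computation implied above (a hypothetical variant representation; not the authors' pipeline): variants are compared as (chromosome, position, ref, alt) tuples between the benchmark truth set and the panel's calls.

```python
def variant_calling_sensitivity(truth_variants, called_variants):
    """Fraction of truth-set variants recovered by the workflow. With
    the 5.2% false-negative rate reported for the RM truth set, this
    would come out near 0.948."""
    truth, called = set(truth_variants), set(called_variants)
    tp = len(truth & called)   # true positives: truth variants found
    fn = len(truth - called)   # false negatives: truth variants missed
    return tp / (tp + fn)
```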
Real-time high speed generator system emulation with hardware-in-the-loop application
NASA Astrophysics Data System (ADS)
Stroupe, Nicholas
The emerging emphasis on, and benefits of, distributed generation on smaller-scale networks has prompted much attention and research in this field. Much of that research has also stimulated the development of simulation software and techniques. Testing and verification of these distributed power networks is a complex task, and real hardware testing is often desired. This is where simulation methods such as hardware-in-the-loop become important: an actual hardware unit can be interfaced with a software-simulated environment to verify proper functionality. In this thesis, the technique is taken one step further by using hardware-in-the-loop to emulate the output voltage of a generator system interfaced to a scaled hardware distributed power system for testing. The purpose of this thesis is to demonstrate a new method of testing a virtually simulated generation system supplying a scaled distributed power system in hardware. This task is performed using the Non-Linear Loads Test Bed developed by the Energy Conversion and Integration Thrust at the Center for Advanced Power Systems. This test bed consists of a series of hardware converters consistent with the Navy's proposed All-Electric-Ship power system and is used to perform various tests of controls and stability under the non-linear load environment expected from Navy weaponry. The test bed can also support other distributed power system research topics and serves as a flexible hardware unit for a variety of tests; here it is used to perform and validate the newly developed method of generator system emulation. The dynamics of a high-speed permanent magnet generator directly coupled with a micro turbine are virtually simulated on an FPGA in real time. The calculated output stator voltage then serves as a reference for a controllable three-phase inverter at the input of the test bed, which emulates and reproduces these voltages on real hardware. The output of the inverter is connected with the rest of the test bed, which can take on a variety of distributed system topologies for many testing scenarios. The idea is that the distributed power system under test in hardware can integrate real generator system dynamics without physically involving an actual generator system. The benefits of successful generator system emulation are substantial and enable much more detailed system studies without the drawbacks of needing physical generator units; among these advantages are safety, reduced costs, and the ability to scale while still preserving the appropriate system dynamics. This thesis introduces the ideas behind generator emulation, explains the process and the steps necessary to achieve it, and demonstrates real results with verification of numerical values in real time. The final goal is to show that this new idea is in fact attainable and can prove to be a highly useful tool in the simulation and verification of distributed power systems.
The generation of monoclonal antibodies and their use in rapid diagnostic tests
USDA-ARS?s Scientific Manuscript database
Antibodies are the most important component of an immunoassay. In these proceedings we outline novel methods used to generate and select monoclonal antibodies that meet performance criteria for use in rapid lateral flow and microfluidic immunoassay tests for the detection of agricultural pathogens ...
ERIC Educational Resources Information Center
Krach, Soren; Hartje, Wolfgang
2006-01-01
The Wada test is at present the method of choice for preoperative assessment of patients who require surgery close to cortical language areas. It is, however, an invasive test with an attached morbidity risk. An alternative to the Wada test is now to combine a lexical word generation paradigm with non-invasive imaging techniques. However,…
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 6 2011-07-01 2011-07-01 false Compliance and performance test methods and procedures for particulate matter and nitrogen oxides. 60.46b Section 60.46b Protection of... Generating Units § 60.46b Compliance and performance test methods and procedures for particulate matter and...
Marks, Michał; Glinicki, Michał A.; Gibas, Karolina
2015-01-01
The aim of the study was to generate rules for the prediction of the chloride resistance of concrete modified with high calcium fly ash using machine learning methods. The rapid chloride permeability test, according to the Nordtest Method Build 492, was used for determining chloride ion penetration in concrete containing high calcium fly ash (HCFA) as a partial replacement for Portland cement. The results of the performed tests were used as the training set to generate rules describing the relation between material composition and chloride resistance. Multiple methods for rule generation were applied and compared. The rules generated by algorithm J48 from the Weka workbench provided the means for adequate classification of plain concretes and concretes modified with high calcium fly ash as materials of good, acceptable or unacceptable resistance to chloride penetration. PMID:28793740
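The Weka J48 classifier is a C4.5-style decision tree; a rough open-source analogue of the rule-generation step (a sketch with made-up mix proportions, using scikit-learn in place of Weka) looks like this:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical rows: [cement kg/m3, HCFA kg/m3, w/b ratio] per mix;
# labels are the three chloride-resistance classes used in the paper.
X = [[350, 0, 0.45], [300, 50, 0.45], [250, 100, 0.40], [300, 50, 0.55]]
y = ["good", "good", "acceptable", "unacceptable"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["cement", "hcfa", "w_b"]))  # readable rules
```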
A Test Generation Framework for Distributed Fault-Tolerant Algorithms
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn; Bushnell, David; Miner, Paul; Pasareanu, Corina S.
2009-01-01
Heavyweight formal methods such as theorem proving have been successfully applied to the analysis of safety critical fault-tolerant systems. Typically, the models and proofs performed during such analysis do not inform the testing process of actual implementations. We propose a framework for generating test vectors from specifications written in the Prototype Verification System (PVS). The methodology uses a translator to produce a Java prototype from a PVS specification. Symbolic (Java) PathFinder is then employed to generate a collection of test cases. A small example is employed to illustrate how the framework can be used in practice.
Energy saving concepts relating to induction generators
NASA Technical Reports Server (NTRS)
Nola, F. J.
1980-01-01
Energy saving concepts relating to induction generators are presented. The first is a regenerative scheme that uses an induction generator as a variable load for prime movers under test. A method for reducing losses in induction machines used specifically as wind-driven generators is also described.
NASA Technical Reports Server (NTRS)
Generazio, Edward R. (Inventor)
2012-01-01
A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
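For context, hit/miss POD data of this kind are commonly summarized by fitting a log-logistic POD curve to flaw size; a minimal sketch (a generic curve fit with toy data, not the patented directed-DOE procedure) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def pod(a, mu, sigma):
    # log-logistic POD curve commonly fit to hit/miss data
    return 1.0 / (1.0 + np.exp(-(np.log(a) - mu) / sigma))

size = np.array([0.5, 0.8, 1.0, 1.5, 2.0, 3.0])  # flaw sizes (toy data)
hit = np.array([0, 1, 0, 1, 1, 1])               # observed hit/miss
(mu, sigma), _ = curve_fit(pod, size, hit, p0=(0.0, 0.3))
a90 = np.exp(mu + sigma * np.log(9.0))           # flaw size with POD = 0.90
```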
Mwogi, Thomas S.; Biondich, Paul G.; Grannis, Shaun J.
2014-01-01
Motivated by the need for readily available data for testing an open-source health information exchange platform, we developed and evaluated two methods for generating synthetic messages. The methods used HL7 version 2 messages obtained from the Indiana Network for Patient Care. Data from both methods were analyzed to assess how effectively the output reflected the original ‘real-world’ data. The Markov Chain method (MCM) used an algorithm based on a transition probability matrix, while the Music Box model (MBM) randomly selected messages of a particular trigger type from the original data to generate new messages. The MBM was faster, generated shorter messages, and exhibited less variation in message length. The MCM required more computational power and generated longer messages with more variability in message length. Both methods exhibited adequate coverage, producing a high proportion of messages consistent with the original messages. Both methods yielded similar rates of valid messages. PMID:25954458
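A minimal sketch of the MCM idea (hypothetical segment names and probabilities; real transition tables would be estimated from HL7 v2 traffic): walk a transition-probability table to emit a synthetic segment sequence.

```python
import random

def markov_generate(transitions, start="MSH", stop="END"):
    # transitions: segment -> {next_segment: probability}
    seq, state = [start], start
    while state != stop:
        state = random.choices(list(transitions[state]),
                               weights=list(transitions[state].values()))[0]
        seq.append(state)
    return seq

table = {"MSH": {"PID": 1.0},
         "PID": {"OBX": 0.7, "END": 0.3},
         "OBX": {"OBX": 0.5, "END": 0.5}}
print(markov_generate(table))  # e.g. ['MSH', 'PID', 'OBX', 'END']
```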
Stanislawski, Jerzy; Kotulska, Malgorzata; Unold, Olgierd
2013-01-17
Amyloids are proteins capable of forming fibrils. Many of them underlie serious diseases, such as Alzheimer's disease, and the number of amyloid-associated diseases is constantly increasing. Recent studies indicate that amyloidogenic properties can be associated with short segments of amino acids, which transform the structure when exposed. A few hundred such peptides have been found experimentally. Experimental testing of all possible amino acid combinations is currently not feasible; instead, they can be predicted by computational methods. The 3D profile is a physicochemical method that has generated the most numerous dataset - ZipperDB. However, it is computationally very demanding. Here, we show that dataset generation can be accelerated. Two methods to increase the classification efficiency of amyloidogenic candidates are presented and tested: simplified 3D profile generation and machine learning methods. We generated a new dataset of hexapeptides using a more economical 3D profile algorithm, which showed very good classification overlap with ZipperDB (93.5%). The new part of our dataset contains 1779 segments, with 204 classified as amyloidogenic. The dataset of 6-residue sequences with their binary classification, based on the energy of the segment, was used for training machine learning methods. A separate set of sequences from ZipperDB was used as a test set. The most effective methods were Alternating Decision Tree and Multilayer Perceptron. Both methods obtained an area under the ROC curve of 0.96, accuracy 91%, true positive rate ca. 78%, and true negative rate 95%. A few other machine learning methods also achieved good performance. The computational time was reduced from 18-20 CPU-hours (full 3D profile) to 0.5 CPU-hours (simplified 3D profile) to seconds (machine learning). We showed that the simplified profile generation method does not introduce an error with regard to the original method, while increasing the computational efficiency. Our new dataset proved representative enough to use simple statistical methods for testing amyloidogenicity based only on six-letter sequences. Statistical machine learning methods such as Alternating Decision Tree and Multilayer Perceptron can replace the energy-based classifier, with the advantage of significantly reduced computational time and simplicity of analysis. Additionally, a decision tree provides a set of very easily interpretable rules.
Analysis of large system black box verification test data
NASA Technical Reports Server (NTRS)
Clapp, Kenneth C.; Iyer, Ravishankar Krishnan
1993-01-01
Issues regarding black box verification of large systems are explored. The study begins by collecting data from several testing teams. An integrated database containing test, fault, repair, and source file information is generated. Intuitive effectiveness measures are generated using conventional black box testing results analysis methods. Conventional analysis methods indicate that the testing was effective in the sense that as more tests were run, more faults were found. Average behavior and individual data points are analyzed. The data are categorized, and average behavior shows a very wide variation in the number of tests run and in pass rates (from 71 percent to 98 percent). The 'white box' data contained in the integrated database is studied in detail. Conservative measures of effectiveness are discussed. Testing efficiency (the ratio of repairs to number of tests) is measured at 3 percent, fault record effectiveness (the ratio of repairs to fault records) is measured at 55 percent, and test script redundancy (the ratio of the number of failed tests to the minimum number of tests needed to find the faults) ranges from 4.2 to 15.8. Error-prone source files and subsystems are identified. A correlational mapping of test functional area to product subsystem is completed. A new adaptive testing process based on real-time generation of the integrated database is proposed.
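The three 'white box' ratios quoted above follow directly from the database counts; a tiny sketch (formulas inferred from the definitions in the abstract):

```python
def whitebox_metrics(tests_run, tests_failed, fault_records, repairs,
                     min_tests_needed):
    return {
        "testing_efficiency": repairs / tests_run,                   # ~3%
        "fault_record_effectiveness": repairs / fault_records,       # ~55%
        "test_script_redundancy": tests_failed / min_tests_needed,   # 4.2-15.8
    }
```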
A Method for Generating Educational Test Items That Are Aligned to the Common Core State Standards
ERIC Educational Resources Information Center
Gierl, Mark J.; Lai, Hollis; Hogan, James B.; Matovinovic, Donna
2015-01-01
The demand for test items far outstrips the current supply. This increased demand can be attributed, in part, to the transition to computerized testing, but it is also linked to dramatic changes in how 21st century educational assessments are designed and administered. One way to address this growing demand is with automatic item generation.…
Rotating rake design for unique measurement of fan-generated spinning acoustic modes
NASA Technical Reports Server (NTRS)
Konno, Kevin E.; Hausmann, Clifford R.
1993-01-01
In light of the current emphasis on noise reduction in subsonic aircraft design, NASA has been actively studying the sources and propagation of noise generated by subsonic fan engines. NASA/LeRC has developed and tested a unique method of accurately measuring the spinning acoustic modes generated by an experimental fan. This mode-measuring method is based on the use of a rotating microphone rake. Testing was conducted in the 9 x 15 Low-speed Wind Tunnel. The rotating rake was tested with the Advanced Ducted Propeller (ADP) model. This memorandum discusses the design and performance of the motor/drive system for the fan-synchronized rotating acoustic rake. This novel motor/drive design approach is now being adapted for additional acoustic mode studies in new test rigs as baseline data for the future design of active noise control for subsonic fan engines. Included in this memorandum are the research requirements, motor/drive specifications, test performance results, and a description of the controls and software involved.
Modal Survey of ETM-3, A 5-Segment Derivative of the Space Shuttle Solid Rocket Booster
NASA Technical Reports Server (NTRS)
Nielsen, D.; Townsend, J.; Kappus, K.; Driskill, T.; Torres, I.; Parks, R.
2005-01-01
The complex interactions between internal motor-generated pressure oscillations and the motor structural vibration modes associated with the static test configuration of a Reusable Solid Rocket Motor have the potential to generate significant dynamic thrust loads in the 5-segment configuration (Engineering Test Motor 3). Finite element model load predictions for worst-case conditions were generated based on extrapolation of a previously correlated 4-segment motor model. A modal survey was performed on the largest rocket motor to date, Engineering Test Motor 3 (ETM-3), to provide data for finite element model correlation and validation of model-generated design loads. The modal survey preparation included pre-test analyses to determine an efficient analysis set selection using the Effective Independence Method, and test simulations to assure that critical test stand component loads did not exceed design limits. Historical Reusable Solid Rocket Motor modal testing, ETM-3 test analysis model development and pre-test loads analyses, as well as test execution and a comparison of results to pre-test predictions, are discussed.
Measuring Thermal Conductivity at LH2 Temperatures
NASA Technical Reports Server (NTRS)
Selvidge, Shawn; Watwood, Michael C.
2004-01-01
For many years, the National Institute of Standards and Technology (NIST) produced reference materials for materials testing. One such reference material was intended for use with a guarded hot plate apparatus designed to meet the requirements of ASTM C177-97, "Standard Test Method for Steady-State Heat Flux Measurements and Thermal Transmission Properties by Means of the Guarded-Hot-Plate Apparatus." This apparatus can be used to test materials in various gaseous environments from atmospheric pressure down to a vacuum. It allows the thermal transmission properties of insulating materials to be measured from just above ambient temperature down to temperatures below that of liquid hydrogen. However, NIST did not generate data below 77 K for the reference material in question. This paper describes a test method used at NASA's Marshall Space Flight Center (MSFC) to optimize thermal conductivity measurements during the development of thermal protection systems. The test method extends the usability range of this reference material by generating data at temperatures lower than 77 K. Information provided by this test is discussed, as are the capabilities of the MSFC Hydrogen Test Facility, where advanced methods for materials testing are routinely developed and optimized in support of aerospace applications.
USDA-ARS?s Scientific Manuscript database
This research tested whether children could categorize foods more accurately and speedily when presented with child-generated rather than professionally-generated food categories; and whether a graphically appealing browse procedure similar to the Apple, Inc, "cover flow" graphical user interface ac...
40 CFR 799.6786 - TSCA water solubility: Generator column method.
Code of Federal Regulations, 2014 CFR
2014-07-01
... quantitative) analysis of solvent extract in paragraph (c)(3)(iv) of this section. The design of the generator.... Finally, the design of most chemical tests and many ecological and health tests requires precise knowledge..., molality, and mole fraction. For example, to convert from weight/volume to molarity molecular mass is...
40 CFR 799.6786 - TSCA water solubility: Generator column method.
Code of Federal Regulations, 2013 CFR
2013-07-01
... quantitative) analysis of solvent extract in paragraph (c)(3)(iv) of this section. The design of the generator.... Finally, the design of most chemical tests and many ecological and health tests requires precise knowledge..., molality, and mole fraction. For example, to convert from weight/volume to molarity molecular mass is...
40 CFR 799.6786 - TSCA water solubility: Generator column method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... quantitative) analysis of solvent extract in paragraph (c)(3)(iv) of this section. The design of the generator.... Finally, the design of most chemical tests and many ecological and health tests requires precise knowledge..., molality, and mole fraction. For example, to convert from weight/volume to molarity molecular mass is...
Automatic Generation of Tests from Domain and Multimedia Ontologies
ERIC Educational Resources Information Center
Papasalouros, Andreas; Kotis, Konstantinos; Kanaris, Konstantinos
2011-01-01
The aim of this article is to present an approach for generating tests in an automatic way. Although other methods have already been reported in the literature, the proposed approach is based on ontologies, representing both domain and multimedia knowledge. The article also reports on a prototype implementation of this approach, which…
Predicting Slag Generation in Sub-Scale Test Motors Using a Neural Network
NASA Technical Reports Server (NTRS)
Wiesenberg, Brent
1999-01-01
Generation of slag (aluminum oxide) is an important issue for the Reusable Solid Rocket Motor (RSRM). Thiokol performed testing to quantify the relationship between raw material variations and slag generation in solid propellants by testing sub-scale motors cast with propellant containing various combinations of aluminum fuel and ammonium perchlorate (AP) oxidizer particle sizes. The test data were analyzed using statistical methods and an artificial neural network. This paper primarily addresses the neural network results with some comparisons to the statistical results. The neural network showed that the particle sizes of both the aluminum and unground AP have a measurable effect on slag generation. The neural network analysis showed that aluminum particle size is the dominant driver in slag generation, about 40% more influential than AP. The network predictions of the amount of slag produced during firing of sub-scale motors were 16% better than the predictions of a statistically derived empirical equation. Another neural network successfully characterized the slag generated during full-scale motor tests. The success is attributable to the ability of neural networks to characterize multiple complex factors including interactions that affect slag generation.
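A minimal sketch of the kind of regression network described (made-up feature values and slag masses; the study's actual architecture and training data are not reproduced here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical rows: [aluminum particle size, unground AP particle size]
# -> slag mass collected from a sub-scale motor firing.
X = np.array([[20.0, 200.0], [30.0, 200.0], [20.0, 400.0], [30.0, 400.0]])
y = np.array([1.1, 1.8, 1.3, 2.2])

net = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000,
                   random_state=0).fit(X, y)
print(net.predict([[25.0, 300.0]]))  # predicted slag for a new lot combination
```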
Analysis of subsonic wind tunnel with variation shape rectangular and octagonal on test section
NASA Astrophysics Data System (ADS)
Rhakasywi, D.; Ismail; Suwandi, A.; Fadhli, A.
2018-02-01
Good aerodynamic design requires a well-designed wind tunnel, in particular one capable of generating laminar flow. This research investigated wind tunnel models with rectangular and octagonal test-section variations, with the objective of generating laminar flow in the test section. The research method used a numerical CFD (Computational Fluid Dynamics) approach together with manual analysis of the internal flow in the test section. The CFD simulation results and the manual analysis indicate that the optimal design for generating laminar flow in the test section is one with an octagonal cross section.
Hageman, Philip L.; Seal, Robert R.; Diehl, Sharon F.; Piatak, Nadine M.; Lowers, Heather
2015-01-01
A comparison study of selected static leaching and acid-base accounting (ABA) methods, using a mineralogically diverse set of 12 modern-style metal mine waste samples, was undertaken to understand the relative performance of the various tests. To complement this study, in-depth mineralogical studies were conducted to elucidate the relationships between sample mineralogy, weathering features, and leachate and ABA characteristics. In part one of the study, splits of the samples were leached using six commonly used leaching tests: paste pH, the U.S. Geological Survey (USGS) Field Leach Test (FLT) (both 5-min and 18-h agitation), U.S. Environmental Protection Agency (USEPA) Method 1312 SPLP (both leachate pH 4.2 and leachate pH 5.0), and USEPA Method 1311 TCLP (leachate pH 4.9). Leachate geochemical trends were compared in order to assess differences, if any, produced by the various leaching procedures. Results showed that the FLT (5-min agitation) was just as effective as the 18-h leaching tests in revealing the leachate geochemical characteristics of the samples. Leaching results also showed that the TCLP leaching test produces inconsistent results when compared with the other leaching tests. In part two of the study, the ABA was determined on splits of the samples using both well-established traditional static testing methods and a relatively quick, simplified net acid-base accounting (NABA) procedure. Results showed that the traditional methods, while time consuming, provide the most in-depth data on both the acid-generating and acid-neutralizing tendencies of the samples. However, the simplified NABA method provided a relatively fast, effective estimation of the net acid-base account of the samples. Overall, this study showed that while most of the well-established methods are useful and effective, the simplified leaching test and the NABA acid-base accounting method give investigators quantitative tools that provide rapid, reliable information about the leachability of metals and other constituents of concern and about the acid-generating potential of metal mining waste.
A new simple technique for improving the random properties of chaos-based cryptosystems
NASA Astrophysics Data System (ADS)
Garcia-Bosque, M.; Pérez-Resa, A.; Sánchez-Azqueta, C.; Celma, S.
2018-03-01
A new technique for improving the security of chaos-based stream ciphers has been proposed and tested experimentally. This technique improves the randomness properties of the generated keystream by preventing the system from falling into the short period cycles caused by digitization. In order to test this technique, a stream cipher based on a Skew Tent Map algorithm has been implemented on a Virtex 7 FPGA. The randomness of the keystream generated by this system has been compared to the randomness of the keystream generated by the same system with the proposed randomness-enhancement technique. By subjecting both keystreams to the National Institute of Standards and Technology (NIST) tests, we have proved that our method can considerably improve the randomness of the generated keystreams. Only 41 extra slices were needed to incorporate the randomness-enhancement technique, proving that, besides being effective, this method is also efficient in terms of area and hardware resources.
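For reference, the underlying skew tent map keystream (floating-point here for readability; the cited design uses a fixed-point digitization, and the randomness-enhancement step itself is not shown) can be sketched as:

```python
def skew_tent_keystream(x, a, n_bits):
    # skew tent map: x -> x/a if x < a else (1 - x)/(1 - a), with 0 < a < 1
    bits = []
    for _ in range(n_bits):
        x = x / a if x < a else (1.0 - x) / (1.0 - a)
        bits.append(1 if x >= 0.5 else 0)  # threshold to one keystream bit
    return bits

print(skew_tent_keystream(x=0.1234, a=0.4999, n_bits=16))
```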
Dynamic test input generation for multiple-fault isolation
NASA Technical Reports Server (NTRS)
Schaefer, Phil
1990-01-01
Recent work in Causal Reasoning has provided practical techniques for multiple-fault diagnosis. These techniques provide a hypothesis/measurement diagnosis cycle: using probabilistic methods, they choose the best measurements to make, then update fault hypotheses in response. For many applications such as computers and spacecraft, few measurement points may be accessible, or values may change quickly as the system under diagnosis operates. In these cases, a hypothesis/measurement cycle is insufficient. A technique is presented for a hypothesis/test-input/measurement diagnosis cycle. In contrast to generating tests a priori for determining device functionality, it dynamically generates tests in response to current knowledge about fault probabilities. It is shown how the mathematics previously used for measurement specification can be applied to the test input generation process. An example from an efficient implementation called Multi-Purpose Causal (MPC) is presented.
Determination of HIV Status in African Adults With Discordant HIV Rapid Tests.
Fogel, Jessica M; Piwowar-Manning, Estelle; Donohue, Kelsey; Cummings, Vanessa; Marzinke, Mark A; Clarke, William; Breaud, Autumn; Fiamma, Agnès; Donnell, Deborah; Kulich, Michal; Mbwambo, Jessie K K; Richter, Linda; Gray, Glenda; Sweat, Michael; Coates, Thomas J; Eshleman, Susan H
2015-08-01
In resource-limited settings, HIV infection is often diagnosed using 2 rapid tests. If the results are discordant, a third tie-breaker test is often used to determine HIV status. This study characterized samples with discordant rapid tests and compared different testing strategies for determining HIV status in these cases. Samples were previously collected from 173 African adults in a population-based survey who had discordant rapid test results. Samples were classified as HIV positive or HIV negative using a rigorous testing algorithm that included two fourth-generation tests, a discriminatory test, and 2 HIV RNA tests. Tie-breaker tests were evaluated, including rapid tests (1 performed in-country), a third-generation enzyme immunoassay, and two fourth-generation tests. Selected samples were further characterized using additional assays. Twenty-nine samples (16.8%) were classified as HIV positive and 24 of those samples (82.8%) had undetectable HIV RNA. Antiretroviral drugs were detected in 1 sample. Sensitivity was 8.3%-43% for the rapid tests; 24.1% for the third-generation enzyme immunoassay; and 95.8% and 96.6% for the fourth-generation tests. Specificity was lower for the fourth-generation tests than for the other tests. Accuracy ranged from 79.5% to 91.3%. In this population-based survey, most HIV-infected adults with discordant rapid tests were virally suppressed without antiretroviral drugs. Use of individual assays as tie-breaker tests was not a reliable method for determining HIV status in these individuals. More extensive testing algorithms that use a fourth-generation screening test with a discriminatory test and HIV RNA test are preferable for determining HIV status in these cases.
ERIC Educational Resources Information Center
Palka, Sean
2015-01-01
This research details a methodology designed for creating content in support of various phishing prevention tasks including live exercises and detection algorithm research. Our system uses probabilistic context-free grammars (PCFG) and variable interpolation as part of a multi-pass method to create diverse and consistent phishing email content on…
NASA Astrophysics Data System (ADS)
Hajnayeb, Ali; Nikpour, Masood; Moradi, Shapour; Rossi, Gianluca
2018-02-01
The blade tip-timing (BTT) measurement technique is at present the most promising method for monitoring the blades of axial turbines and aircraft engines in operating conditions, and it is generally used as an alternative to strain gauges in turbine testing. Compared with standard methods such as those based on strain gauges, the technique is not intrusive and does not require a complicated installation process. Despite its superiority to other methods, the experimental performance analysis of a new BTT method needs a test stand that includes a reference measurement system (e.g. strain gauges equipped with telemetry, or other complex optical measurement systems, like rotating laser Doppler vibrometers). In this article, a new reliable, low-cost BTT test setup is proposed for simulating and analyzing blade vibrations based on kinematic inversion. In the proposed test bench, instead of the blades vibrating, it is the BTT sensor that vibrates. The vibration of the sensor is generated by a shaker and can therefore be easily controlled in terms of frequency, amplitude and waveform shape. The amplitude of the vibration excitation is measured by a simple accelerometer. After introducing the components of the simulator, the proposed test bench is used in practice to simulate both synchronous and asynchronous vibration scenarios. Two BTT methods are then used to evaluate the quality of the acquired data. The results demonstrate that the proposed setup is able to generate simulated pulse sequences that are almost the same as those generated by conventional BTT systems installed around a bladed disk. Moreover, the test setup enables its users to evaluate BTT methods using a limited number of sensors, which significantly reduces the total cost of the experiments.
Spontaneously Generating Life in Your Classroom? Pasteur, Spallanzani and Science Process.
ERIC Educational Resources Information Center
Byington, Scott
2001-01-01
Presents an experiment that tests for spontaneous generation, or abiogenesis. Observes microbial growth in nutrient broth under seven different flask environments. Includes instructions for the methods. (YDS)
Generation of openEHR Test Datasets for Benchmarking.
El Helou, Samar; Karvonen, Tuukka; Yamamoto, Goshiro; Kume, Naoto; Kobayashi, Shinji; Kondo, Eiji; Hiragi, Shusuke; Okamoto, Kazuya; Tamura, Hiroshi; Kuroda, Tomohiro
2017-01-01
openEHR is a widely used EHR specification. Given its technology-independent nature, different approaches for implementing openEHR data repositories exist. Public openEHR datasets are needed to conduct benchmark analyses over different implementations. To address their current unavailability, we propose a method for generating openEHR test datasets that can be publicly shared and used.
Coverage criteria for test case generation using UML state chart diagram
NASA Astrophysics Data System (ADS)
Salman, Yasir Dawood; Hashim, Nor Laily; Rejab, Mawarny Md; Romli, Rohaida; Mohd, Haslina
2017-10-01
To improve the effectiveness of test data generation during software testing, many studies have focused on the automation of test data generation from UML diagrams. One of these diagrams is the UML state chart diagram. Test cases are generally evaluated according to coverage criteria; however, combinations of multiple criteria are required to achieve better coverage. Different studies have used various numbers and types of coverage criteria in their methods and approaches. The objective of this paper is to propose suitable coverage criteria for test case generation using the UML state chart diagram, especially in handling loops. To achieve this objective, this work reviewed previous studies to present the most practical coverage criteria combinations, including all-states, all-transitions, all-transition-pairs, and all-loop-free-paths coverage. Calculations to determine the coverage percentage of the proposed coverage criteria are presented, together with an example of their application to a UML state chart diagram. This finding would be beneficial in the area of test case generation, especially in handling loops in UML state chart diagrams.
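As a small illustration of the coverage calculations mentioned above (a sketch; the transition identifiers are hypothetical), all-transitions coverage is simply the fraction of model transitions exercised by the generated test cases; the other criteria are computed analogously over states, transition pairs, and loop-free paths:

```python
def transition_coverage(model_transitions, exercised_transitions):
    # percentage of the model's transitions hit by the test suite
    covered = set(model_transitions) & set(exercised_transitions)
    return 100.0 * len(covered) / len(set(model_transitions))

print(transition_coverage({"t1", "t2", "t3", "t4"}, ["t1", "t3", "t3"]))  # 50.0
```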
Method and apparatus for detecting and quantifying bacterial spores on a surface
NASA Technical Reports Server (NTRS)
Ponce, Adrian (Inventor)
2009-01-01
A method and an apparatus for detecting and quantifying bacterial spores on a surface. In accordance with the method, bacterial spores are transferred from a place of origin to a test surface, where the test surface comprises lanthanide ions. Aromatic molecules are released from the bacterial spores; a complex of the lanthanide ions and aromatic molecules is formed on the test surface; the complex is excited to generate a characteristic luminescence on the test surface; and the luminescence on the test surface is detected and quantified.
Method and Apparatus for Detecting and Quantifying Bacterial Spores on a Surface
NASA Technical Reports Server (NTRS)
Ponce, Adrian (Inventor)
2016-01-01
A method and an apparatus for detecting and quantifying bacterial spores on a surface. In accordance with the method, bacterial spores are transferred from a place of origin to a test surface, where the test surface comprises lanthanide ions. Aromatic molecules are released from the bacterial spores; a complex of the lanthanide ions and aromatic molecules is formed on the test surface; the complex is excited to generate a characteristic luminescence on the test surface; and the luminescence on the test surface is detected and quantified.
A ‘reader’ unit of the chemical computer
Smelov, Pavel S.
2018-01-01
We suggest the main principles and functional units of the parallel chemical computer, namely, (i) a generator (a network of coupled oscillators) of oscillatory dynamic modes, (ii) a unit that is able to recognize these modes (a ‘reader’) and (iii) a decision-making unit, which analyses the current mode, compares it with the external signal and sends a command to the mode generator to switch it to another dynamical regime. Three main methods for the functioning of the reader unit are suggested and tested computationally: (a) the polychronization method, which explores the differences between the phases of the generator oscillators; (b) the amplitude method, which detects clusters of the generator; and (c) the resonance method, which is based on the resonances between the frequencies of the generator modes and the internal frequencies of the damped oscillations of the reader cells. The pros and cons of these methods are analysed. PMID:29410852
Improved Ant Algorithms for Software Testing Cases Generation
Yang, Shunkun; Xu, Jiaqi
2014-01-01
Ant colony optimization (ACO) for software test case generation is a very popular domain in software testing engineering. However, traditional ACO has flaws: pheromone is relatively scarce early in the search, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensively improved ant colony optimization (ACIACO), which is based on all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved methods can effectively increase search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391
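A minimal sketch of the kind of global pheromone update these variants build on (a textbook-style rule with evaporation plus best-path reinforcement; the paper's specific improved update strategies are not reproduced here):

```python
def update_pheromone(tau, best_path, rho=0.1, q=1.0):
    # tau: edge -> pheromone level; best_path: edges of the iteration-best path
    for edge in tau:
        tau[edge] *= (1.0 - rho)  # evaporation (volatilization)
    for edge in best_path:
        tau[edge] += q            # reinforcement along the best path
    return tau

tau = {("s", "a"): 1.0, ("a", "b"): 1.0, ("s", "b"): 1.0}
print(update_pheromone(tau, [("s", "a"), ("a", "b")]))
```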
A comparative study on different methods of automatic mesh generation of human femurs.
Viceconti, M; Bellingeri, L; Cristofolini, L; Toni, A
1998-01-01
The aim of this study was to comparatively evaluate five methods for automated mesh generation (AMG) when used to mesh a human femur. The AMG methods considered were: mapped mesh, which provides hexahedral elements through a direct mapping of the elements onto the geometry; tetra mesh, which generates tetrahedral elements from a solid model of the object geometry; voxel mesh, which builds cubic 8-node elements directly from CT images; and hexa mesh, which automatically generates hexahedral elements from a surface definition of the femur geometry. The various methods were tested against two reference models: a simplified geometric model and a proximal femur model. The first model was useful for assessing the inherent accuracy of the meshes created by the AMG methods, since an analytical solution was available for the elastic problem of the simplified geometric model. The femur model was used to test the AMG methods in a more realistic condition. The femoral geometry was derived from a reference model (the "standardized femur"), and the finite element analysis predictions were compared to experimental measurements. All methods were evaluated in terms of the human and computer effort needed to carry out the complete analysis, and in terms of accuracy. The comparison demonstrated that each tested method deserves attention and may be the best for specific situations. The mapped AMG method requires significant human effort but is very accurate and allows tight control of the mesh structure. The tetra AMG method requires a solid model of the object to be analysed but is widely available and accurate. The hexa AMG method requires significant computer effort but can also be used on polygonal models and is very accurate. The voxel AMG method requires a huge number of elements to reach an accuracy comparable to that of the other methods, but it does not require any pre-processing of the CT dataset to extract the geometry and in some cases may be the only viable solution.
Heintze, S D; Zellweger, G; Cavalleri, A; Ferracane, J
2006-02-01
The aim of the study was to evaluate two ceramic materials as possible substitutes for enamel using two wear simulation methods, and to compare both methods with regard to the wear results for different materials. Flat specimens (OHSU n=6, Ivoclar n=8) of one compomer and three composite materials (Dyract AP, Tetric Ceram, Z250, experimental composite) were fabricated and subjected to wear using two different wear testing methods and two pressable ceramic materials as stylus (Empress, experimental ceramic). For the OHSU method, enamel styli of the same dimensions as the ceramic stylus were fabricated additionally. Both wear testing methods differ with regard to loading force, lateral movement of stylus, stylus dimension, number of cycles, thermocycling and abrasive medium. In the OHSU method, the wear facets (mean vertical loss) were measured using a contact profilometer, while in the Ivoclar method (maximal vertical loss) a laser scanner was used for this purpose. Additionally, the vertical loss of the ceramic stylus was quantified for the Ivoclar method. The results obtained from each method were compared by ANOVA and Tukey's test (p<0.05). To compare both wear methods, the log-transformed data were used to establish relative ranks between material/stylus combinations and assessed by applying the Pearson correlation coefficient. The experimental ceramic material generated significantly less wear in Tetric Ceram and Z250 specimens compared to the Empress stylus in the Ivoclar method, whereas with the OHSU method, no difference between the two ceramic antagonists was found with regard to abrasion or attrition. The wear generated by the enamel stylus was not statistically different from that generated by the other two ceramic materials in the OHSU method. With the Ivoclar method, wear of the ceramic stylus was only statistically different when in contact with Tetric Ceram. There was a close correlation between the attrition wear of the OHSU and the wear of the Ivoclar method (Pearson coefficient 0.83, p=0.01). Pressable ceramic materials can be used as a substitute for enamel in wear testing machines. However, material ranking may be affected by the type of ceramic material chosen. The attrition wear of the OHSU method was comparable with the wear generated with the Ivoclar method.
Numeric Modified Adomian Decomposition Method for Power System Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth
This paper investigates the applicability of the numeric Wazwaz El Sayed modified Adomian Decomposition Method (WES-ADM) for time domain simulation of power systems. WES-ADM is a numerical approximation method for the solution of nonlinear ordinary differential equations, based on a modified Adomian decomposition (ADM) technique. The nonlinear terms in the differential equations are approximated using Adomian polynomials. In this paper WES-ADM is applied to time domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach, and several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.
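To fix ideas about the underlying decomposition (a classic ADM recursion on the linear test problem dy/dt = -y, y(0) = 1, using sympy; WES-ADM adds Adomian polynomials for nonlinear terms and a modified recursion, which are not shown here):

```python
import sympy as sp

t = sp.symbols("t")

def adomian_linear_decay(n_terms):
    # y = sum of terms; each term is minus the integral of the previous one,
    # so the partial sum reproduces the Taylor series of exp(-t)
    terms = [sp.Integer(1)]
    for _ in range(n_terms - 1):
        terms.append(-sp.integrate(terms[-1], (t, 0, t)))
    return sp.expand(sum(terms))

print(adomian_linear_decay(5))  # 1 - t + t**2/2 - t**3/6 + t**4/24
```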
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin, E-mail: binchen@lsu.edu
2014-08-21
A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of this distribution function is required. This numerical table can be generated a priori from the distribution function. The method has been tested on a united-atom model of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, which are good representatives of both linear and branched molecules. These test cases show that reasonable approximations can be made, especially for the highly branched molecules, to drastically reduce the dimensionality and correspondingly the amount of tabulated data that needs to be stored. Despite these approximations, the dependencies between the various geometrical variables can still be well considered, as evident from a nearly perfect acceptance rate. For all cases, the bending angles were shown to be sampled correctly by this method, with an acceptance rate of at least 96% for 2,2-dimethylpropane to more than 99% for propane. Since only one trial is required to be generated for each bending angle (instead of the thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. The profiling results of our Monte Carlo simulation code show that trial generation, which used to be the most time consuming process, is no longer the time dominating component of the simulation.
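A minimal sketch of sampling from such a tabulated distribution by inverting the cumulative table (a toy Gaussian-like bending density; grid size and the Boltzmann-weighted density construction are the tunable parts in practice):

```python
import numpy as np

def sample_from_table(theta_grid, pdf_values, rng):
    # invert the tabulated CDF: draw u ~ U[0,1) and map it back to theta
    cdf = np.cumsum(pdf_values)
    cdf = cdf / cdf[-1]
    return np.interp(rng.random(), cdf, theta_grid)

rng = np.random.default_rng(7)
theta = np.linspace(1.5, 2.5, 512)                 # bending angle grid (rad)
pdf = np.exp(-0.5 * ((theta - 1.97) / 0.05) ** 2)  # toy harmonic-bend density
print(sample_from_table(theta, pdf, rng))
```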
Soucek, David J; Dickinson, Amy
2015-09-01
Although insects occur in nearly all freshwater ecosystems, few sensitive insect models exist for use in determining the toxicity of contaminants. The objectives of the present study were to adapt previously developed culturing and toxicity testing methods for the mayfly Neocloeon triangulifer (Ephemeroptera: Baetidae), and to further develop a method for chronic toxicity tests spanning organism ages of less than 24 h post hatch to adult emergence, using a laboratory cultured diatom diet. The authors conducted 96-h fed acute tests and full-life chronic toxicity tests with sodium chloride, sodium nitrate, and sodium sulfate. The authors generated 96-h median lethal concentrations (LC50s) of 1062 mg Cl/L (mean of 3 tests), 179 mg N-NO3/L, and 1227 mg SO4/L. Acute to chronic ratios ranged from 2.1 to 6.4 for chloride, 2.5 to 5.1 for nitrate, and 2.3 to 8.5 for sulfate. The endpoints related to survival and development time were consistently the most sensitive in the tests. The chronic values generated for chloride were in the same range as those generated by others using natural foods. Furthermore, our weight-versus-fecundity plots were similar to those previously published using the food culturing method on which the present authors' method was based, indicating good potential for standardization. The authors believe that the continued use of this sensitive mayfly species in laboratory studies will help to close the gap in understanding between standard laboratory toxicity test results and field-based observations of community impairment.
Optical metrology at the Optical Sciences Center: an historical review
NASA Astrophysics Data System (ADS)
Creath, Katherine; Parks, Robert E.
2014-10-01
The Optical Sciences Center (OSC) began as a graduate-level applied optics teaching institution to support the US space effort. The making of optics representative of those used in other space programs was deemed essential. This led to the need for optical metrology: at first Hartmann tests, but almost immediately interferometric tests using the newly invented HeNe laser. Not only were new types of interferometers needed, but the whole infrastructure that went with testing, fringe location methods, aberration removal software and contour map generation to aid the opticians during polishing, needed to be developed. Over the last half century, more rapid and precise methods of interferogram data reduction, surface roughness measurement, and instrument calibration to separate instrument errors from those in the optic have been pioneered at OSC. Other areas of research included null lens design and the writing of lens design software that led into the design of computer-generated holograms for asphere testing. More recently, work has been done on the reduction of speckle noise in interferograms, methods to test large convex aspheres, and a return to slope-measuring tests to increase the dynamic range of the types of aspheric surfaces amenable to optical testing, including free-form surfaces. This paper documents the history of the development of optical testing at OSC and highlights the contributions of some of the individuals associated with new methods of testing and the infrastructure needed to support the testing. We conclude with comments about future trends in optical metrology.
Voyager electronic parts radiation program, volume 1
NASA Technical Reports Server (NTRS)
Stanley, A. G.; Martin, K. E.; Price, W. E.
1977-01-01
The Voyager spacecraft is subject to radiation from external natural space, from radioisotope thermoelectric generators and heater units, and from the internal environment where penetrating electrons generate surface ionization effects in semiconductor devices. Methods for radiation hardening and tests for radiation sensitivity are described. Results of characterization testing and sample screening of over 200 semiconductor devices in a radiation environment are summarized.
NASA Astrophysics Data System (ADS)
1984-09-01
This Test Operations Procedure (TOP) provides conventional test methods employing conventional test instrumentation for testing conventional radars. Single tests and subtests designed to test radar components, transmitters, receivers, antennas, etc., and system performance are conducted with single item instruments such as meters, generators, attenuators, counters, oscillators, plotters, etc., and with adequate land areas for conducting field tests.
Sustainability of wildlife populations exposed to endocrine disrupting chemicals in natural water bodies has sparked sufficient concern that the U.S.EPA is developing methods for multiple generation exposures of fishes. Established testing methods and the short life-cycle of the ...
Luo, Mingzhang; Li, Weijie; Wang, Junming; Wang, Ning; Chen, Xuemin; Song, Gangbing
2018-03-04
As a common approach to nondestructive testing and evaluation, guided wave-based methods have attracted much attention because of their wide detection range and high detection efficiency. It is highly desirable to develop a portable guided wave testing system with high actuating energy and variable frequency. In this paper, a novel giant magnetostrictive actuator with high actuation power is designed and implemented, based on the giant magnetostrictive (GMS) effect. The novel GMS actuator design involves a conical energy-focusing head that can focus the amplified mechanical energy generated by the GMS actuator. This design enables the generation of stress waves with high energy, and the focusing of the generated stress waves on the test object. The guided wave generation system enables two kinds of output modes: the coded pulse signal and the sweep signal. The functionality and the advantages of the developed system are validated through laboratory testing in the quality assessment of rock bolt-reinforced structures. In addition, the developed GMS actuator and the supporting system are successfully implemented and applied in field tests. The device can also be used in other nondestructive testing and evaluation applications that require high-power stress wave generation.
NASA Technical Reports Server (NTRS)
Chambers, Jeffrey A.
1994-01-01
Finite element analysis is regularly used during the engineering cycle of mechanical systems to predict the response to static, thermal, and dynamic loads. The finite element model (FEM) used to represent the system is often correlated with physical test results to determine the validity of the analytical results provided. Results from dynamic testing provide one means for performing this correlation. One of the most common methods of measuring accuracy is classical modal testing, whereby vibratory mode shapes are compared to mode shapes provided by finite element analysis. The degree of correlation between the test and analytical mode shapes can be shown mathematically using the cross orthogonality check. A great deal of time and effort can be expended in generating the set of test acquired mode shapes needed for the cross orthogonality check. In most situations response data from vibration tests are digitally processed to generate the mode shapes from a combination of modal parameters, forcing functions, and recorded response data. An alternate method is proposed in which the same correlation of analytical and test acquired mode shapes can be achieved without conducting the modal survey. Instead a procedure is detailed in which a minimum of test information, specifically the acceleration response data from a random vibration test, is used to generate a set of equivalent local accelerations to be applied to the reduced analytical model at discrete points corresponding to the test measurement locations. The static solution of the analytical model then produces a set of deformations that, once normalized, can be used to represent the test acquired mode shapes in the cross orthogonality relation. The method proposed has been shown to provide accurate results for both a simple analytical model and a complex space flight structure.
Yu, S; Gao, S; Gan, Y; Zhang, Y; Ruan, X; Wang, Y; Yang, L; Shi, J
2016-04-01
Quantitative structure-property relationship modelling can be a valuable alternative method to replace or reduce experimental testing. In particular, some endpoints such as octanol-water (KOW) and organic carbon-water (KOC) partition coefficients of polychlorinated biphenyls (PCBs) are easier to predict and various models have been already developed. In this paper, two different methods, which are multiple linear regression based on the descriptors generated using Dragon software and hologram quantitative structure-activity relationships, were employed to predict suspended particulate matter (SPM) derived log KOC and generator column, shake flask and slow stirring method derived log KOW values of 209 PCBs. The predictive ability of the derived models was validated using a test set. The performances of all these models were compared with EPI Suite™ software. The results indicated that the proposed models were robust and satisfactory, and could provide feasible and promising tools for the rapid assessment of the SPM derived log KOC and generator column, shake flask and slow stirring method derived log KOW values of PCBs.
A New Method for Incremental Testing of Finite State Machines
NASA Technical Reports Server (NTRS)
Pedrosa, Lehilton Lelis Chaves; Moura, Arnaldo Vieira
2010-01-01
The automatic generation of test cases is an important issue for conformance testing of several critical systems. We present a new method for the derivation of test suites when the specification is modeled as a combined Finite State Machine (FSM). A combined FSM is obtained by conjoining previously tested submachines with newly added states. This new concept is used to describe a fault model suitable for incremental testing of new systems, or for retesting modified implementations. For this fault model, only the newly added or modified states need to be tested, thereby considerably reducing the size of the test suites. The new method is a generalization of the well-known W-method and the G-method, but it is scalable, and so it can be used to test FSMs with an arbitrarily large number of states.
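For readers unfamiliar with the baseline the authors generalize, the following Python sketch shows the classical W-method construction on a toy FSM: the test suite is the transition cover P concatenated with a characterization set W. The machine, its input alphabet, and W are invented for illustration; the paper's incremental method additionally restricts testing to the newly added or modified states.

```python
from itertools import product

# Toy FSM: delta maps (state, input) -> next state. Machine and W are made up.
delta = {("s0", "a"): "s1", ("s0", "b"): "s0",
         ("s1", "a"): "s0", ("s1", "b"): "s1"}
inputs, initial = ["a", "b"], "s0"
W = [["a"]]  # hypothetical characterization set for this toy machine

def transition_cover():
    """BFS access sequences to every state, each extended by every input."""
    access, frontier = {initial: []}, [initial]
    while frontier:
        s = frontier.pop()
        for x in inputs:
            t = delta[(s, x)]
            if t not in access:
                access[t] = access[s] + [x]
                frontier.append(t)
    cover = [[]]  # the empty sequence is part of the cover
    for seq in access.values():
        cover += [seq + [x] for x in inputs]
    return cover

# W-method suite: every cover sequence followed by every distinguishing sequence.
suite = [p + w for p, w in product(transition_cover(), W)]
print(suite)
```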
Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch
NASA Astrophysics Data System (ADS)
Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.
2014-10-01
The economic dispatch (ED) is an essential optimization task in the power generation system. It is defined as the process of allocating the real power output of generation units to meet the required load demand so that their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization technique named Swarm-based Mean-variance mapping optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, with 3, 13 and 20 thermal generation units and quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently implemented for solving the economic dispatch problem.
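As context for the comparison, a common classical baseline for quadratic-cost economic dispatch is lambda iteration on the incremental cost; the Python sketch below implements it with bisection. The unit data and demand are illustrative, and this is the baseline technique, not the MVMOS algorithm itself.

```python
import numpy as np

# Lambda-iteration economic dispatch for costs C_i(P) = a_i + b_i*P + c_i*P^2.
# At the optimum all unclamped units share one incremental cost lambda.
b = np.array([2.0, 1.5, 1.8])        # linear cost coefficients (illustrative)
c = np.array([0.01, 0.02, 0.015])    # quadratic cost coefficients
pmin, pmax = np.zeros(3), np.array([200.0, 150.0, 180.0])
demand = 300.0                       # MW

lo, hi = 0.0, 50.0
for _ in range(100):                 # bisection on the incremental cost lambda
    lam = 0.5 * (lo + hi)
    P = np.clip((lam - b) / (2 * c), pmin, pmax)   # dC_i/dP = lambda
    if P.sum() > demand:
        hi = lam
    else:
        lo = lam
print("dispatch:", P, "total:", P.sum())
```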
Method to implement the CCD timing generator based on FPGA
NASA Astrophysics Data System (ADS)
Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin
2010-07-01
With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on an FPGA and VHDL. This paper presents the principles and implementation details of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module. The bottom one is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented by 5 VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor which is the controller of this generator. Some test results are presented in the end.
Highly Efficient Design-of-Experiments Methods for Combining CFD Analysis and Experimental Data
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Haller, Harold S.
2009-01-01
It is the purpose of this study to examine the impact of "highly efficient" Design-of-Experiments (DOE) methods for combining sets of CFD-generated analysis data with smaller sets of experimental test data in order to accurately predict performance results where experimental test data were not obtained. The study examines the impact of micro-ramp flow control on the shock wave boundary layer (SWBL) interaction, where a complete paired set of data exists from both CFD analysis and experimental measurements. By combining the complete set of CFD analysis data, composed of fifteen (15) cases, with a smaller subset of experimental test data containing four/five (4/5) cases, compound data sets (CFD/EXP) were generated which allow the prediction of the complete set of experimental results. No statistical difference was found to exist between the combined (CFD/EXP) data sets and the complete experimental data set composed of fifteen (15) cases. The same optimal micro-ramp configuration was obtained using the (CFD/EXP) data as with the complete set of experimental data, and the DOE response surfaces generated by the two data sets were also not statistically different.
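A hedged sketch of the general idea: pool the CFD and experimental cases, fit a quadratic response surface by least squares, and predict responses at untested settings. The factors, responses, and case counts below are synthetic placeholders, not the study's micro-ramp data.

```python
import numpy as np

# Fit one quadratic response surface to a pooled CFD + experimental data set.
rng = np.random.default_rng(1)
x_cfd = rng.uniform(0, 1, (15, 2))                 # 15 CFD cases (2 factors)
x_exp = rng.uniform(0, 1, (5, 2))                  # 5 experimental cases
truth = lambda x: 1 + 2*x[:, 0] - x[:, 1] + 0.5*x[:, 0]*x[:, 1]  # stand-in
y = np.concatenate([truth(x_cfd) + 0.01*rng.standard_normal(15),
                    truth(x_exp)])
X = np.vstack([x_cfd, x_exp])

def design(x):
    """Full quadratic model in two factors: 1, x1, x2, x1*x2, x1^2, x2^2."""
    return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                            x[:, 0]*x[:, 1], x[:, 0]**2, x[:, 1]**2])

coef, *_ = np.linalg.lstsq(design(X), y, rcond=None)
x_new = np.array([[0.3, 0.7]])                     # untested configuration
print("predicted response:", design(x_new) @ coef)
```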
Initialization Method for Grammar-Guided Genetic Programming
NASA Astrophysics Data System (ADS)
García-Arnau, M.; Manrique, D.; Ríos, J.; Rodríguez-Patón, A.
This paper proposes a new tree-generation algorithm for grammar-guided genetic programming that includes a parameter to control the maximum size of the trees to be generated. An important feature of this algorithm is that the initial populations generated are adequately distributed in terms of tree size and distribution within the search space. Consequently, genetic programming systems starting from the initial populations generated by the proposed method have a higher convergence speed. Two different problems have been chosen to carry out the experiments: a laboratory test involving searching for arithmetical equalities and the real-world task of breast cancer prognosis. In both problems, comparisons have been made to another five important initialization methods.
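The size-controlled generation idea can be illustrated compactly: before each random production choice, rule out expansions that cannot fit the remaining size budget. The grammar and the size accounting in this Python sketch are simplified assumptions, not the authors' exact algorithm.

```python
import random

# Grammar-guided random tree generation with a soft cap on tree size.
GRAMMAR = {"E": [["E", "+", "E"], ["E", "*", "E"], ["x"], ["1"]]}

def generate(symbol="E", budget=15):
    """Expand one symbol; productions that cannot fit the remaining
    size budget are excluded before the random choice is made."""
    if symbol not in GRAMMAR:                      # terminal symbol
        return symbol, 1
    options = [p for p in GRAMMAR[symbol] if len(p) <= budget]
    prod = random.choice(options or [["x"]])       # fall back to a terminal rule
    tree, used = [], 1
    for sym in prod:
        sub, n = generate(sym, budget - used)
        tree.append(sub)
        used += n
    return tree, used

random.seed(3)
print(generate())
```

Filtering productions by the remaining budget is what keeps initial trees spread across the allowed size range rather than piling up at the depth limit.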
Special Test Methods for Batteries
NASA Technical Reports Server (NTRS)
Gross, S.
1984-01-01
Various methods are described for measuring heat generation in primary and secondary batteries as well as the specific heat of batteries and cell thermal conductance. Problems associated with determining heat generation in large batteries are examined. Special attention is given to monitoring temperature gradients in nickel cadmium cells, the use of auxiliary electrodes for conducting tests on battery charge control, evaluating the linear sweep of current from charge to discharge, and determining zero current voltage. The fast transient behavior of batteries in the microsecond range, and the electrical conductance of nickel sinters in the thickness direction, are also considered. Mechanical problems experienced in the vibration of Ni-Cd batteries and tests to simulate cyclic fatigue of the steel tab connecting the plates to the comb are considered. Methods of defining the distribution of forces when cells are compressed during battery packaging are also explored.
Special test methods for batteries
NASA Astrophysics Data System (ADS)
Gross, S.
1984-09-01
Various methods are described for measuring heat generation in primary and secondary batteries as well as the specific heat of batteries and cell thermal conductance. Problems associated with determining heat generation in large batteries are examined. Special attention is given to monitoring temperature gradients in nickel cadmium cells, the use of auxiliary electrodes for conducting tests on battery charge control, evaluating the linear sweep of current from charge to discharge, and determining zero current voltage. The fast transient behavior of batteries in the microsecond range, and the electrical conductance of nickel sinters in the thickness direction, are also considered. Mechanical problems experienced in the vibration of Ni-Cd batteries and tests to simulate cyclic fatigue of the steel tab connecting the plates to the comb are considered. Methods of defining the distribution of forces when cells are compressed during battery packaging are also explored.
Design of multi-energy fields coupling testing system of vertical axis wind power system
NASA Astrophysics Data System (ADS)
Chen, Q.; Yang, Z. X.; Li, G. S.; Song, L.; Ma, C.
2016-08-01
The conversion efficiency of wind energy is a focus of research and concern, as wind is one of the renewable energy sources. Present methods of enhancing the conversion efficiency mostly improve the wind rotor structure, optimize the generator parameters and the energy storage controller, and so on. Because the conversion process involves energy conversion among multiple energy fields, such as wind energy, mechanical energy and electrical energy, the coupling effects between them influence the overall conversion efficiency. In this paper, using system integration analysis technology, a testing system based on multi-energy field coupling (MEFC) of a vertical axis wind power system is proposed. When the maximum efficiency of the wind rotor is satisfied, the generator parameters can be matched to the output performance of the wind rotor. The voltage controller can transfer the unstable electric power to the battery on the basis of optimizing parameters such as charging times and charging voltage. Through the communication connection and regulation of the upper computer system (UCS), the coupling parameters can be configured to an optimal state, improving the overall conversion efficiency. This method can test the whole wind turbine (WT) performance systematically and evaluate the design parameters effectively. It not only provides a testing method for system structure design and parameter optimization of the wind rotor, generator and voltage controller, but also provides a new testing method for the whole-performance optimization of a vertical axis wind energy conversion system (WECS).
Qualification of oil-based tracer particles for heated Ludwieg tubes
NASA Astrophysics Data System (ADS)
Casper, Marcus; Stephan, Sören; Scholz, Peter; Radespiel, Rolf
2014-06-01
The generation, insertion, pressurization and use of oil-based tracer particles is qualified for the application in heated flow facilities, typically hypersonic facilities such as Ludwieg tubes. The operative challenges are to ensure a sub-critical amount of seeding material in the heated part, to qualify the methods that are used to generate the seeding, pressurize it to storage tube pressure, as well as to test specific oil types. The mass of the seeding material is held below the lower explosion limit such that operation is safe. The basis for the tracers is qualified in off-situ particle size measurements. In the main part different methods and operational procedures are tested with respect to their ability to generate a suitable amount of seeding in the test section. For the best method the relaxation time of the tracers is qualified by the oblique shock wave test. The results show that the use of a special temperature resistant lubricant oil "Plantfluid" is feasible under the conditions of a Mach-6 Ludwieg tube with heated storage tube. The method gives high-quality tracers with high seeding densities. Although the experimental results of the oblique shock wave test differ from theoretical predictions of relaxation time, still the relaxation time of 3.2 μs under the more dense tunnel conditions with 18 bar storage tube pressure is low enough to allow the use of the seeding for meaningful particle image velocimetry studies.
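The quoted relaxation time can be cross-checked against the standard Stokes-drag estimate tau = rho_p * d_p^2 / (18 * mu); the snippet below does the arithmetic with assumed oil and gas properties, not the paper's measured values.

```python
# Back-of-the-envelope tracer response check via the standard Stokes-drag
# relaxation time. All three inputs are assumed values for illustration.
rho_p = 920.0      # oil droplet density, kg/m^3 (assumed)
d_p = 0.8e-6       # droplet diameter, m (assumed)
mu = 1.8e-5        # dynamic viscosity of the carrier gas, Pa*s (assumed)

tau = rho_p * d_p**2 / (18.0 * mu)
print(f"Stokes relaxation time: {tau*1e6:.2f} microseconds")
```

With these assumed numbers the estimate lands in the low-microsecond range, the same order as the 3.2 us value reported from the oblique shock wave test.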
Methodology for the development of normative data for Spanish-speaking pediatric populations.
Rivera, D; Arango-Lasprilla, J C
2017-01-01
To describe the methodology utilized to calculate reliability and generate norms for 10 neuropsychological tests for children in Spanish-speaking countries. The study sample consisted of 4,373 healthy children from nine countries in Latin America (Chile, Cuba, Ecuador, Guatemala, Honduras, Mexico, Paraguay, Peru, and Puerto Rico) and Spain. Inclusion criteria for all countries were an age between 6 and 17 years, an Intelligence Quotient of ≥80 on the Test of Non-Verbal Intelligence (TONI-2), and a score of <19 on the Children's Depression Inventory. Participants completed 10 neuropsychological tests. Reliability and norms were calculated for all tests. Test-retest analysis showed excellent or good reliability on all tests (r's>0.55; p's<0.001) except M-WCST perseverative errors, whose coefficient magnitude was fair. All scores were normed using multiple linear regressions and the standard deviations of residual values. Age, age², sex, and mean level of parental education (MLPE) were included as predictors in the models by country. Non-significant variables (p > 0.05) were removed and the analyses were run again. This is the largest normative study of Spanish-speaking children and adolescents in the world. For the generation of normative data, the method based on linear regression models and the standard deviation of residual values was used. This method allows determination of the specific variables that predict test scores, helps identify and control for collinearity of predictive variables, and generates continuous and more reliable norms than those of traditional methods.
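The norming procedure described, regression on demographic predictors plus the standard deviation of residuals, reduces to a few lines; the sketch below uses simulated scores and hypothetical coefficients to show how a raw score becomes a normed z-score.

```python
import numpy as np

# Regression-based norms: regress raw scores on demographic predictors, then
# standardize a new score by the residual standard deviation. Data simulated.
rng = np.random.default_rng(7)
n = 500
age = rng.uniform(6, 17, n)
sex = rng.integers(0, 2, n)
mlpe = rng.uniform(4, 16, n)                       # mean level parental education
score = 10 + 2.0*age - 0.03*age**2 + 0.5*mlpe + rng.normal(0, 3, n)

X = np.column_stack([np.ones(n), age, age**2, sex, mlpe])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
sd_resid = np.std(score - X @ beta, ddof=X.shape[1])

def z_score(raw, age, sex, mlpe):
    """Normed score = (raw - regression prediction) / SD of residuals."""
    pred = np.array([1, age, age**2, sex, mlpe]) @ beta
    return (raw - pred) / sd_resid

print(z_score(35.0, age=10, sex=1, mlpe=12))
```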
Inductive interference in rapid transit signaling systems. volume 2. suggested test procedures.
DOT National Transportation Integrated Search
1987-03-31
These suggested test procedures have been prepared in order to develop standard methods of analysis and testing to quantify and resolve issues of electromagnetic compatibility in rail transit operations. Electromagnetic interference, generated by rai...
On the prediction of far field computational aeroacoustics of advanced propellers
NASA Technical Reports Server (NTRS)
Jaeger, Stephen M.; Korkan, Kenneth D.
1990-01-01
A numerical method for determining the acoustic far field generated by a high-speed subsonic aircraft propeller was developed. The approach used in this method was to generate the entire three-dimensional pressure field about the propeller (using an Euler flowfield solver) and then to apply a solution of the wave equation on a cylindrical surface enveloping the propeller. The method is applied to generate the three-dimensional flowfield between two blades of an advanced propeller. The results are compared with experimental data obtained in a wind-tunnel test at a Mach number of 0.6.
Threshold matrix for digital halftoning by genetic algorithm optimization
NASA Astrophysics Data System (ADS)
Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero
1998-10-01
Digital halftoning is used both in low and high resolution high quality printing technologies. Our method is designed to be mainly used for low resolution ink jet marking machines to produce both gray tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this the random dot patterns used are optimized to contain more blue than pink noise. Several such dot pattern generator threshold matrices have been created automatically by using genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of genetic algorithm with a search method based on local backtracking was developed together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of genetic algorithms, backtracking and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work was focused on developing low resolution marking technology, the resulting family of dot generators can be applied also in other halftoning application areas including high resolution printing technology.
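A toy version of the optimization loop is sketched below: a genetic algorithm mutates permutation-valued threshold matrices, and the fitness penalizes low-frequency (pink) spectral energy of a mid-gray dot pattern. The matrix size, GA settings, and fitness are illustrative simplifications of the paper's hybrid GA with backtracking and its family of fitness functions.

```python
import numpy as np

# GA search for a dither threshold matrix whose dot patterns favor blue noise.
rng = np.random.default_rng(5)
N, POP, GEN = 16, 30, 200

def fitness(matrix):
    dots = (matrix < N*N//2).astype(float)         # 50% gray-level pattern
    spec = np.abs(np.fft.fftshift(np.fft.fft2(dots - dots.mean())))
    r = np.hypot(*np.meshgrid(*(np.arange(N) - N//2,)*2))
    return -spec[r < N/6].sum()                    # penalize low frequencies

pop = [rng.permutation(N*N).reshape(N, N) for _ in range(POP)]
for _ in range(GEN):
    pop.sort(key=fitness, reverse=True)            # best matrices first
    child = pop[rng.integers(POP//2)].copy()       # mutate a good parent:
    i, j = rng.integers(N*N, size=2)               # swap two threshold values,
    child.flat[[i, j]] = child.flat[[j, i]]        # preserving the permutation
    pop[-1] = child                                # replace the worst member
print("best fitness:", fitness(pop[0]))
```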
NASA Astrophysics Data System (ADS)
Matsumoto, Kouhei; Kasuya, Yuki; Yumoto, Mitsuki; Arai, Hideaki; Sato, Takashi; Sakamoto, Shuichi; Ohkawa, Masashi; Ohdaira, Yasuo
2018-02-01
Not so long ago, pseudo-random numbers generated by numerical formulae were considered adequate for encrypting important data files, because of the time needed to decode them. With today's ultra-high-speed processors, however, this is no longer true. So, in order to thwart ever-more advanced attempts to breach a system's protections, cryptologists have devised methods that are considered virtually impossible to decode, using what is a limitless supply of physical random numbers. This research describes a method whereby a laser diode's frequency noise generates large quantities of physical random numbers. Using two types of photodetectors (APD and PIN-PD), we tested the abilities of two types of lasers (FP-LD and VCSEL) to generate random numbers. In all instances, an etalon served as the frequency discriminator, the examination pass rates were determined using the NIST FIPS 140-2 test at each bit, and the random number generation (RNG) speed was noted.
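One of the FIPS 140-2 checks mentioned is easy to reproduce: the monobit test takes a 20,000-bit stream and passes if the count of ones lies strictly between 9,725 and 10,275. The sketch below applies it to a software-generated stream standing in for the laser-noise source.

```python
import secrets

# FIPS 140-2 monobit test on a 20,000-bit stream. Here the stream comes from
# a software RNG in place of the physical laser-frequency-noise source.
stream = ''.join(f"{secrets.randbits(8):08b}" for _ in range(2500))  # 20,000 bits
ones = stream.count("1")
print("ones:", ones, "monobit pass:", 9725 < ones < 10275)
```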
NASA Astrophysics Data System (ADS)
Guo, W. C.; Yang, J. D.; Chen, J. P.; Peng, Z. Y.; Zhang, Y.; Chen, C. C.
2016-11-01
The load rejection test is one of the essential tests carried out before a hydroelectric generating set is formally put into operation. The test aims at inspecting the rationality of the design of the water diversion and power generation system of the hydropower station, the reliability of the equipment of the generating set, and the dynamic characteristics of the hydro-turbine governing system. Proceeding from the different accident conditions of a hydroelectric generating set, this paper presents the transient processes of load rejection corresponding to different accident conditions and elaborates the characteristics of the different types of load rejection. A numerical simulation method for the different types of load rejection is then established, and an engineering project is calculated to verify the validity of the method. Finally, based on the numerical simulation results, the relationships among the different types of load rejection, and their functions in the design of the hydropower station and the operation of the load rejection test, are pointed out. The results indicate the following: the load rejection caused by an accident within the hydroelectric generating set is realized by the emergency distributing valve, and it is the basis of the optimization of the closing law of the guide vane and the calculation of regulation and guarantee. The load rejection caused by an accident outside the hydroelectric generating set is realized by the governor. It is the most efficient measure to inspect the dynamic characteristics of the hydro-turbine governing system, and the closure rate of the guide vane set in the governor depends on the optimization result of the former type of load rejection.
Comparison of Control Group Generating Methods.
Szekér, Szabolcs; Fogarassy, György; Vathy-Fogarassy, Ágnes
2017-01-01
Retrospective studies suffer from drawbacks such as selection bias. As the selection of the control group has a significant impact on the evaluation of the results, it is very important to find the proper method to generate the most appropriate control group. In this paper we suggest two nearest neighbors based control group selection methods that aim to achieve good matching between the individuals of case and control groups. The effectiveness of the proposed methods is evaluated by runtime and accuracy tests and the results are compared to the classical stratified sampling method.
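A minimal sketch of nearest-neighbor control selection, assuming standardized covariates and matching without replacement; the covariates and sample sizes are simulated, and the authors' two specific variants may differ in detail.

```python
import numpy as np

# Nearest-neighbor control-group selection: for each case, pick the closest
# unused individual from the control pool in standardized covariate space.
rng = np.random.default_rng(11)
cases = rng.normal([60, 2], [8, 1], (20, 2))       # e.g., age, comorbidity score
pool = rng.normal([55, 1.5], [10, 1.2], (200, 2))  # candidate controls

mu, sd = pool.mean(axis=0), pool.std(axis=0)
cz, pz = (cases - mu) / sd, (pool - mu) / sd       # standardize both groups

available = set(range(len(pool)))
controls = []
for c in cz:
    dists = {i: np.linalg.norm(pz[i] - c) for i in available}
    best = min(dists, key=dists.get)
    available.remove(best)                         # match without replacement
    controls.append(best)
print("matched control indices:", controls)
```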
A novel method for repeatedly generating speckle patterns used in digital image correlation
NASA Astrophysics Data System (ADS)
Zhang, Juan; Sweedy, Ahmed; Gitzhofer, François; Baroud, Gamal
2018-01-01
Speckle patterns play a key role in Digital Image Correlation (DIC) measurement, and generating an optimal speckle pattern has been the goal for decades now. The usual method of generating a speckle pattern is by manually spraying the paint on the specimen. However, this makes it difficult to reproduce the optimal pattern for maintaining identical testing conditions and achieving consistent DIC results. This study proposed and evaluated a novel method using an atomization system to repeatedly generate speckle patterns. To verify the repeatability of the speckle patterns generated by this system, simulation and experimental studies were systematically performed. The results from both studies showed that the speckle patterns and, accordingly, the DIC measurements become highly accurate and repeatable using the proposed atomization system.
Paulsamy, Sivachandran
2014-01-01
In wind energy systems employing permanent magnet generator, there is an imperative need to reduce the cogging torque for smooth and reliable cut in operation. In a permanent magnet generator, cogging torque is produced due to interaction of the rotor magnets with slots and teeth of the stator. This paper is a result of an ongoing research work that deals with various methods to reduce cogging torque in dual rotor radial flux permanent magnet generator (DRFPMG) for direct coupled stand alone wind energy systems (SAWES). Three methods were applied to reduce the cogging torque in DRFPMG. The methods were changing slot opening width, changing magnet pole arc width and shifting of slot openings. A combination of these three methods was applied to reduce the cogging torque to a level suitable for direct coupled SAWES. Both determination and reduction of cogging torque were carried out by finite element analysis (FEA) using MagNet Software. The cogging torque of DRFPMG has been reduced without major change in induced emf. A prototype of 1 kW, 120 rpm DRFPMG was fabricated and tested to validate the simulation results. The test results have good agreement with the simulation predictions.
Sonic boom generated by a slender body aerodynamically shaded by a disk spike
NASA Astrophysics Data System (ADS)
Potapkin, A. V.; Moskvichev, D. Yu.
2018-03-01
The sonic boom generated by a slender body of revolution aerodynamically shaded by another body is numerically investigated. The aerodynamic shadow is created by a disk placed upstream of the slender body across a supersonic free-stream flow. The disk size and its position upstream of the body are chosen in such a way that the aerodynamically shaded flow is quasi-stationary. A combined method of phantom bodies is used for sonic boom calculations. The method is tested by calculating the sonic boom generated by a blunted body and comparing the results with experimental investigations of the sonic boom generated by spheres of various diameters in ballistic ranges and wind tunnels. The test calculations show that the method of phantom bodies is applicable for calculating far-field parameters of shock waves generated by both slender and blunted bodies. A possibility of reducing the shock wave intensity in the far field by means of the formation of the aerodynamic shadow behind the disk placed upstream of the body is estimated. The calculations are performed for the incoming flow with the Mach number equal to 2. The effect of the disk size on the sonic boom level is calculated.
Encryption key distribution via chaos synchronization
NASA Astrophysics Data System (ADS)
Keuninckx, Lars; Soriano, Miguel C.; Fischer, Ingo; Mirasso, Claudio R.; Nguimdo, Romain M.; van der Sande, Guy
2017-02-01
We present a novel encryption scheme, wherein an encryption key is generated by two distant complex nonlinear units, forced into synchronization by a chaotic driver. The concept is sufficiently generic to be implemented on either photonic, optoelectronic or electronic platforms. The method for generating the key bitstream from the chaotic signals is reconfigurable. Although derived from a deterministic process, the obtained bit series fulfill the randomness conditions as defined by the National Institute of Standards test suite. We demonstrate the feasibility of our concept on an electronic delay oscillator circuit and test the robustness against attacks using a state-of-the-art system identification method.
Preparation of pyrolysis reference samples: evaluation of a standard method using a tube furnace.
Sandercock, P Mark L
2012-05-01
A new, simple method for the reproducible creation of pyrolysis products from different materials that may be found at a fire scene is described. A temperature programmable steady-state tube furnace was used to generate pyrolysis products from different substrates, including softwoods, paper, vinyl sheet flooring, and carpet. The temperature profile of the tube furnace was characterized, and the suitability of the method to reproducibly create pyrolysates similar to those found in real fire debris was assessed. The use of this method to create proficiency tests to realistically test an examiner's ability to interpret complex gas chromatograph-mass spectrometric fire debris data, and to create a library of pyrolsates generated from materials commonly found at a fire scene, is demonstrated. © 2011 American Academy of Forensic Sciences.
The report gives details of a small-chamber test method developed by the EPA for characterizing volatile organic compound (VOC) emissions from interior latex and alkyd paints. Current knowledge about VOC, including hazardous air pollutant, emissions from interior paints generated...
A Next-Generation Sequencing Primer—How Does It Work and What Can It Do?
Alekseyev, Yuriy O.; Fazeli, Roghayeh; Yang, Shi; Basran, Raveen; Miller, Nancy S.
2018-01-01
Next-generation sequencing refers to a high-throughput technology that determines the nucleic acid sequences and identifies variants in a sample. The technology has been introduced into clinical laboratory testing and produces test results for precision medicine. Since next-generation sequencing is relatively new, graduate students, medical students, pathology residents, and other physicians may benefit from a primer to provide a foundation about basic next-generation sequencing methods and applications, as well as specific examples where it has had diagnostic and prognostic utility. Next-generation sequencing technology grew out of advances in multiple fields to produce a sophisticated laboratory test with tremendous potential. Next-generation sequencing may be used in the clinical setting to look for specific genetic alterations in patients with cancer, diagnose inherited conditions such as cystic fibrosis, and detect and profile microbial organisms. This primer will review DNA sequencing technology, the commercialization of next-generation sequencing, and clinical uses of next-generation sequencing. Specific applications where next-generation sequencing has demonstrated utility in oncology are provided. PMID:29761157
Near-infrared fluorescence image quality test methods for standardized performance evaluation
NASA Astrophysics Data System (ADS)
Kanniyappan, Udayakumar; Wang, Bohan; Yang, Charles; Ghassemi, Pejhman; Wang, Quanzeng; Chen, Yu; Pfefer, Joshua
2017-03-01
Near-infrared fluorescence (NIRF) imaging has gained much attention as a clinical method for enhancing visualization of cancers, perfusion and biological structures in surgical applications where a fluorescent dye is monitored by an imaging system. In order to address the emerging need for standardization of this innovative technology, it is necessary to develop and validate test methods suitable for objective, quantitative assessment of device performance. Towards this goal, we develop target-based test methods and investigate best practices for key NIRF imaging system performance characteristics including spatial resolution, depth of field and sensitivity. Characterization of fluorescence properties was performed by generating excitation-emission matrix properties of indocyanine green and quantum dots in biological solutions and matrix materials. A turbid, fluorophore-doped target was used, along with a resolution target for assessing image sharpness. Multi-well plates filled with either liquid or solid targets were generated to explore best practices for evaluating detection sensitivity. Overall, our results demonstrate the utility of objective, quantitative, target-based testing approaches as well as the need to consider a wide range of factors in establishing standardized approaches for NIRF imaging system performance.
Comparison of measurement methods for capacitive tactile sensors and their implementation
NASA Astrophysics Data System (ADS)
Tarapata, Grzegorz; Sienkiewicz, Rafał
2015-09-01
This paper presents a review of the ideas behind, and implementations of, measurement methods used for capacitance measurement in tactile sensors. The paper describes the technical method, the charge amplification method, and the generation and integration methods. Three selected methods were implemented in a dedicated measurement system and used for capacitance measurements of tactile sensors made in-house. The tactile sensors tested in this work were fully fabricated with inkjet printing technology. The test results are presented and summarised. The charge amplification method (CDC) was selected as the best method for the measurement of the tactile sensors.
Uribe-Convers, Simon; Duke, Justin R.; Moore, Michael J.; Tank, David C.
2014-01-01
• Premise of the study: We present an alternative approach for molecular systematic studies that combines long PCR and next-generation sequencing. Our approach can be used to generate templates from any DNA source for next-generation sequencing. Here we test our approach by amplifying complete chloroplast genomes, and we present a set of 58 potentially universal primers for angiosperms to do so. Additionally, this approach is likely to be particularly useful for nuclear and mitochondrial regions. • Methods and Results: Chloroplast genomes of 30 species across angiosperms were amplified to test our approach. Amplification success varied depending on whether PCR conditions were optimized for a given taxon. To further test our approach, some amplicons were sequenced on an Illumina HiSeq 2000. • Conclusions: Although here we tested this approach by sequencing plastomes, long PCR amplicons could be generated using DNA from any genome, expanding the possibilities of this approach for molecular systematic studies. PMID:25202592
System and Method for Modeling the Flow Performance Features of an Object
NASA Technical Reports Server (NTRS)
Jorgensen, Charles (Inventor); Ross, James (Inventor)
1997-01-01
The method and apparatus includes a neural network for generating a model of an object in a wind tunnel from performance data on the object. The network is trained from test input signals (e.g., leading edge flap position, trailing edge flap position, angle of attack, and other geometric configurations, and power settings) and test output signals (e.g., lift, drag, pitching moment, or other performance features). In one embodiment, the neural network training method employs a modified Levenberg-Marquardt optimization technique. The model can be generated 'real time' as wind tunnel testing proceeds. Once trained, the model is used to estimate performance features associated with the aircraft given geometric configuration and/or power setting input. The invention can also be applied in other similar static flow modeling applications in aerodynamics, hydrodynamics, fluid dynamics, and other such disciplines. For example, the static testing of cars, sails, and foils, propellers, keels, rudders, turbines, fins, and the like, in a wind tunnel, water trough, or other flowing medium.
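A hedged sketch of the approach: fit a small feed-forward network to wind-tunnel-style input/output data with a Levenberg-Marquardt least-squares solver (here SciPy's standard 'lm', not the paper's modified variant). The data, network size, and variable names are toy assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Fit a tiny neural network surrogate (inputs: e.g. angle of attack, flap
# setting; output: "lift") with a Levenberg-Marquardt least-squares solver.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (60, 2))                    # test input signals (toy)
y = np.sin(2*X[:, 0]) + 0.3*X[:, 1]                # surrogate performance data

H = 6  # hidden units
def unpack(p):
    W1 = p[:2*H].reshape(H, 2); b1 = p[2*H:3*H]
    W2 = p[3*H:4*H];            b2 = p[4*H]
    return W1, b1, W2, b2

def model(p, X):
    W1, b1, W2, b2 = unpack(p)
    return np.tanh(X @ W1.T + b1) @ W2 + b2        # one-hidden-layer network

p0 = 0.1 * rng.standard_normal(4*H + 1)
fit = least_squares(lambda p: model(p, X) - y, p0, method="lm")
print("RMS error:", np.sqrt(np.mean(fit.fun**2)))
```

Once fitted, `model(fit.x, X_new)` plays the role of the trained model that estimates performance at geometric configurations not tested in the tunnel.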
A Method for Optimal Load Dispatch of a Multi-zone Power System with Zonal Exchange Constraints
NASA Astrophysics Data System (ADS)
Hazarika, Durlav; Das, Ranjay
2018-04-01
This paper presents a method for economic generation scheduling of a multi-zone power system having inter-zonal operational constraints. For this purpose, generator rescheduling for a multi-area power system with inter-zonal operational constraints is represented as a two-step optimal generation scheduling problem. First, optimal generation scheduling is carried out for the zones having surplus or deficient generation, with proper spinning reserve, using the coordination equation. The power exchange required for the deficit zones and the zones having no generation is estimated based on the load demand and generation of each zone. Incremental transmission loss formulas for the transmission lines participating in the power transfer among the zones are formulated. Using these incremental transmission loss expressions in the coordination equation, the optimal generation scheduling for the zonal exchange is determined. Simulation is carried out on the IEEE 118-bus test system to examine the applicability and validity of the method.
A novel method for direct solder bump pull testing using lead-free solders
NASA Astrophysics Data System (ADS)
Turner, Gregory Alan
This thesis focuses on the design, fabrication, and evaluation of a new method for testing the adhesion strength of lead-free solders, named the Isotraction Bump Pull (IBP) method. In order to develop a direct solder joint-strength testing method that does not require customization for different solder types, bump sizes, or specific equipment, or trial-and-error, a combination of two widely used and accepted standards was created. First, solder bumps made from three types of lead-free solder were generated on untreated copper PCB substrates using an in-house fabricated solder bump-on-demand generator. Following this, the newly developed method made use of a polymer epoxy to encapsulate the solder bumps, which could then be tested under tension using a high-precision universal vertical load machine. The tests produced repeatable and predictable results for each of the three alloys tested that were in agreement with the relative behavior of the same alloys under other testing methods in the literature. The median peak stresses at failure for the three solders tested were 2020.52 psi, 940.57 psi, and 2781.0 psi, each within one standard deviation of the mean of all data collected for that solder. The assumption in this work that brittle fracture occurred through the intermetallic compound (IMC) layer was validated with the use of energy-dispersive X-ray spectrometry and high magnification of the fractured surfaces of both newly exposed sides of the test specimens. Following this, an examination of the process of applying the results from the tensile tests to standard material-science equations for the fracture of such systems was performed.
Acoustic Treatment Design Scaling Methods. Volume 1; Overview, Results, and Recommendations
NASA Technical Reports Server (NTRS)
Kraft, R. E.; Yu, J.
1999-01-01
Scale model fan rigs that simulate new generation ultra-high-bypass engines at about 1/5-scale are achieving increased importance as development vehicles for the design of low-noise aircraft engines. Testing at small scale allows the tests to be performed in existing anechoic wind tunnels, which provides an accurate simulation of the important effects of aircraft forward motion on the noise generation. The ability to design, build, and test miniaturized acoustic treatment panels on scale model fan rigs representative of the full-scale engine provides not only cost savings, but also an opportunity to optimize the treatment by allowing tests of different designs. The primary objective of this study was to develop methods that will allow scale model fan rigs to be successfully used as acoustic treatment design tools. The study focuses on finding methods to extend the upper limit of the frequency range of impedance prediction models and acoustic impedance measurement methods for subscale treatment liner designs, and on confirming the predictions by correlation with measured data. This phase of the program had as a goal doubling the upper limit of impedance measurement from 6 kHz to 12 kHz. The program utilizes combined analytical and experimental methods to achieve the objectives.
NASA Astrophysics Data System (ADS)
Wan, Tian
This work is motivated by the lack of a fully coupled computational tool that successfully solves the turbulent chemically reacting Navier-Stokes equations, the electron energy conservation equation and the electric current Poisson equation. In the present work, the abovementioned equations are solved in a fully coupled manner using fully implicit parallel GMRES methods. The system of Navier-Stokes equations is solved using a GMRES method with combined Schwarz and ILU(0) preconditioners. The electron energy equation and the electric current Poisson equation are solved using a GMRES method with combined SOR and Jacobi preconditioners. The fully coupled method has also been implemented successfully in an unstructured solver, US3D, and convergence test results are presented. This new method is shown to be two to five times faster than the original DPLR method. The Poisson solver is validated with analytic test problems. Then, four problems are selected; two of them are computed to explore the possibility of onboard MHD control and power generation, and the other two are simulations of experiments. First, the possibility of onboard reentry shock control by a magnetic field is explored. As part of a previous project, MHD power generation onboard a re-entry vehicle is also simulated. Then, the MHD acceleration experiments conducted at NASA Ames Research Center are simulated. Lastly, the MHD power generation experiments known as the HVEPS project are simulated. For code validation, the scramjet experiments at the University of Queensland are simulated first, and the generator section of the HVEPS test facility is then computed. The main conclusion is that the computational tool is accurate for different types of problems and flow conditions, and its accuracy and efficiency are necessary as the flow complexity increases.
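The solver strategy, GMRES wrapped around an incomplete-LU preconditioner, can be demonstrated at small scale with SciPy; the sketch below applies it to a 2-D Poisson system standing in for the electric-current Poisson equation, not the coupled reacting-flow system itself.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Preconditioned GMRES on a 2-D Laplacian, a stand-in for the Poisson solve.
n = 50
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()        # 2-D Poisson matrix
b = np.ones(A.shape[0])

ilu = spla.spilu(A)                                # incomplete LU factorization
M = spla.LinearOperator(A.shape, ilu.solve)        # apply ILU as preconditioner

x, info = spla.gmres(A, b, M=M)
print("converged:", info == 0, "residual norm:", np.linalg.norm(b - A @ x))
```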
Digital test signal generation: An accurate SNR calibration approach for the DSN
NASA Technical Reports Server (NTRS)
Gutierrez-Luaces, Benito O.
1993-01-01
In support of the on-going automation of the Deep Space Network (DSN), a new method of generating analog test signals with accurate signal-to-noise ratio (SNR) is described. High accuracy is obtained by simultaneous generation of digital noise and signal spectra at the desired bandwidth (base-band or bandpass). The digital synthesis provides a test signal embedded in noise with the statistical properties of a stationary random process. Accuracy is dependent on test integration time and limited only by the system quantization noise (0.02 dB). The monitor and control as well as signal-processing programs reside in a personal computer (PC). Commands are transmitted to properly configure the specially designed high-speed digital hardware. The prototype can generate either two data channels, modulated or not on a subcarrier, or one QPSK channel, or a residual carrier with one biphase data channel. The analog spectrum generated is in the DC to 10 MHz frequency range. These spectra may be up-converted to any desired frequency without loss of the SNR characteristics provided. Test results are presented.
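The core calibration idea, scaling digitally generated noise so the realized SNR matches the request, is sketched below in Python with illustrative parameters; the DSN hardware details (subcarriers, QPSK framing, up-conversion) are omitted.

```python
import numpy as np

# Digital test-signal generation at a calibrated SNR: synthesize a unit-power
# tone plus Gaussian noise whose power is scaled to hit the requested SNR.
rng = np.random.default_rng(4)
fs, f0, n = 1.0e6, 1.0e5, 100_000      # sample rate, tone frequency, samples
target_snr_db = 10.0

t = np.arange(n) / fs
signal = np.sqrt(2.0) * np.sin(2*np.pi*f0*t)       # unit-power tone
noise = rng.standard_normal(n)
noise *= np.sqrt(1.0 / 10**(target_snr_db/10)) / noise.std()  # set noise power

x = signal + noise                                 # calibrated test signal
snr_est = 10*np.log10(signal.var() / noise.var())
print(f"realized SNR: {snr_est:.3f} dB")
```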
Kentucky highway rating system
DOT National Transportation Integrated Search
2003-03-01
This study had two goals: 1. Formulate a new method for generating roadway adequacy ratings; 2. Construct an appropriate data set and then test the method by comparing it to the results of the HPMS-AP method. The recommended methodology builds on the...
NASA Astrophysics Data System (ADS)
Pillsbury, Ralph T.
This research examined an instructional strategy called Diagramming the Never Ending Story: a method called diagramming was taught to sixth-grade students via an outdoor science inquiry ecology unit. Students generated diagrams of the new ecology concepts they encountered, creating explanatory 'captions' for their newly drawn diagrams while connecting them in a memorable story. The diagramming process culminates in 20-30 meter-long murals called the Never Ending Story: months of science instruction are constructed as pictorial scrolls, making sense of all the new science concepts the students encounter. This method was taught at a North Carolina public charter school, Children's Community School, to measure its efficacy in helping students comprehend scientific concepts and retain them, thereby increasing science literacy. There were four demographically similar classes of 20 students each. Two treatment classes, randomly chosen from the four classes, generated their own Never Ending Stories after being taught the diagramming method. A Solomon Four-Group Design was employed: two classes (one control, one treatment) were administered pre- and post-tests; two classes received post-tests only. The tests were comprised of multiple-choice, fill-in and extended-response (open-ended) sections. Multiple-choice and fill-in test data were not statistically significant, whereas extended-response test data confirm that the treatment classes made statistically significant gains.
Antico, A; Platzgummer, S; Bassetti, D; Bizzaro, N; Tozzoli, R; Villalta, D
2010-07-01
The aim of this study was to evaluate the diagnostic performance of four new enzyme immunoassays (EIAs) for anti-double-stranded-DNA (anti-dsDNA) antibodies, in comparison with the Farr assay and the Crithidia luciliae immunofluorescence test (CLIFT). To this purpose, sera from four patient groups were collected: 52 sera from patients with systemic lupus erythematosus (SLE); 28 from patients with other connective tissue diseases (CTD); 36 from patients with hepatitis C virus (HCV) infection; and 24 from those with acute viral infection. All sera were tested for anti-dsDNA antibodies by four EIA methods using a different antigenic DNA source [synthetic oligonucleotide (Method A), circular plasmid (Method B), recombinant (Method C), and purified extracted (Method D)], and by CLIFT and Farr assays. The diagnostic sensitivity of the assays was as follows: 84.6% (Method A), 73% (B), 82.7% (C), 84.6% (D), 55.8% (CLIFT), and 78.8% (Farr). Specificity was 82.9% (A), 97.7% (B), 96.5% (C), 94.3% (D), 96.5% (CLIFT), and 90.9% (Farr). From these data, we can conclude that the new-generation EIA methods evaluated in this study have higher sensitivity than the CLIFT and Farr assays and, with the exception of Method A, have specificity similar to the CLIFT and slightly higher than the Farr assay. These findings suggest that EIA tests may replace CLIFT as a screening test and the Farr assay as a specific test, for anti-dsDNA antibody detection.
NASA Astrophysics Data System (ADS)
Dağlarli, Evren; Temeltaş, Hakan
2008-04-01
In this study, behavior generation and self-learning paradigms are investigated for real-time applications of multi-goal mobile robot tasks. The method is capable of generating new behaviors and combines them in order to achieve multi-goal tasks. The proposed method is composed of three layers: a Behavior Generating Module, a Coordination Level and an Emotion-Motivation Level. The last two levels use Hidden Markov Models to manage the dynamical structure of behaviors. The kinematic and dynamic models of the mobile robot with non-holonomic constraints are considered in the behavior-based control architecture. The proposed method is tested on a four-wheel driven and four-wheel steered mobile robot with constraints in a simulation environment, and results are obtained successfully.
Bertona, E; Radice, M; Rodríguez, C H; Barberis, C; Vay, C; Famiglietti, A; Gutkind, G
2005-01-01
Enterobacter spp. are becoming increasingly frequent nosocomial pathogens with multiple resistance mechanisms to beta-lactam antibiotics. We carried out the phenotypic and genotypic characterization of beta-lactamases in 27 Enterobacter spp. (25 Enterobacter cloacae and 2 Enterobacter aerogenes), as well as an assessment of different extended-spectrum beta-lactamase (ESBL) screening methods. Resistance to third-generation cephalosporins was observed in 15/27 (63%) isolates. Twelve resistant isolates produced high-level chromosomally encoded AmpC beta-lactamase; 6 of them were also producers of PER-2. Resistance to third-generation cephalosporins in the remaining 3 isolates was due to the presence of ESBLs, PER-2 in 2 cases and CTX-M-2 in the other. Only CTX-M-2 production was detected with all tested cephalosporins using diffusion synergy tests, while cefepime improved ESBL detection in 7/8 PER-2 producers: 4/8 in the inhibitor approximation test and 7/8 with the double disk test using cefepime-containing disks with and without clavulanic acid. The dilution method, including cephalosporins with and without the inhibitor, detected 1/9 ESBL producers.
The importance of the keyword-generation method in keyword mnemonics.
Campos, Alfredo; Amor, Angeles; González, María Angeles
2004-01-01
Keyword mnemonics is, under certain conditions, an effective approach for learning foreign-language vocabulary. It appears to be effective for words with high image vividness but not for words with low image vividness. In this study, two experiments were performed to assess the efficacy of a new keyword-generation procedure (peer generation). In Experiment 1, a sample of 363 high-school students was randomly divided into four groups. The subjects were required to learn L1 equivalents of a list of 16 Latin words (8 with high image vividness, 8 with low image vividness), using a) the rote method, or the keyword method with b) keywords and images generated and supplied by the experimenter, c) keywords and images generated by themselves, or d) keywords and images previously generated by peers (i.e., subjects with similar sociodemographic characteristics). Recall was tested immediately and one week later. For high-vividness words, recall was significantly better in the keyword groups than in the rote method group. For low-vividness words, learning method had no significant effect. Experiment 2 was basically identical, except that the word lists comprised 32 words (16 high-vividness, 16 low-vividness). In this experiment, the peer-generated-keyword group showed significantly better recall of high-vividness words than the rote method group and the subject-generated-keyword group; again, however, learning method had no significant effect on recall of low-vividness words.
Measurement environments and testing
NASA Astrophysics Data System (ADS)
Marvin, A. C.
1991-06-01
The various methods used to assess both the emission (interference generation) performance of electronic equipment and the immunity of electronic equipment to external electromagnetic interference are described. The measurement methods attempt to simulate realistic operating conditions for the equipment being tested, yet at the same time they must be repeatable and practical to operate. This has led to the development of a variety of test methods, each of which has its limitations. Concentration is on the most common measurement methods such as open-field test sites, screened enclosures and transverse electromagnetic (TEM) cells. The physical justification for the methods, their limitations, and measurement precision are described. Ways of relating similar measurements made by different methods are discussed, and some thoughts on future measurement improvements are presented.
NASA Technical Reports Server (NTRS)
Hamell, Robert L.; Kuhnle, Paul F.; Sydnor, Richard L.
1992-01-01
Measuring the performance of ultra stable frequency standards such as the Superconducting Cavity Maser Oscillator (SCMO) necessitates improvement of some test instrumentation. The frequency stability test equipment used at JPL includes a 1 Hz Offset Generator to generate a beat frequency between a pair of 100 MHz signals that are being compared. The noise floor of the measurement system using the current Offset Generator is adequate to characterize stability of hydrogen masers, but it is not adequate for the SCMO. A new Offset Generator with improved stability was designed and tested at JPL. With this Offset Generator and a new Zero Crossing Detector, recently developed at JPL, the measurement noise floor was reduced by a factor of 5.5 at 1 second tau, 3.0 at 1000 seconds, and 9.4 at 10,000 seconds, compared against the previous design. In addition to the new circuit designs of the Offset Generator and Zero Crossing Detector, tighter control of the measurement equipment environment was required to achieve this improvement. The design of this new Offset Generator is described, along with details of the environment control methods used.
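The factors quoted at 1, 1000, and 10,000 seconds are averaging-time-dependent stability measures of the Allan-deviation type. As a generic illustration only, not the JPL instrumentation, a minimal non-overlapping Allan deviation over fractional-frequency samples can be sketched as follows; the sampling rate, noise level, and function names are hypothetical.

```python
import numpy as np

def allan_deviation(y, rate_hz, m):
    """Non-overlapping Allan deviation at averaging time tau = m / rate_hz.

    y: 1-D array of fractional-frequency samples taken at rate_hz.
    m: number of consecutive samples averaged per bin.
    """
    n_bins = len(y) // m
    # Average the frequency samples over bins of length m (one bin per tau).
    y_bar = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
    # Allan variance: half the mean squared first difference of bin averages.
    avar = 0.5 * np.mean(np.diff(y_bar) ** 2)
    return np.sqrt(avar)

# Example: white frequency noise sampled at 1 Hz, evaluated at two taus.
rng = np.random.default_rng(0)
y = 1e-13 * rng.standard_normal(100_000)
print(allan_deviation(y, 1.0, 1))     # sigma_y at tau = 1 s
print(allan_deviation(y, 1.0, 1000))  # sigma_y at tau = 1000 s
```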
ERIC Educational Resources Information Center
McCoy, Sarah Louise; Nieland, Martin Nicholas Stephen
2011-01-01
Aims: To investigate whether "binge-drinking" is new by comparing the behaviour and attitudes of two generations at the same age and of one generation at different ages. Methods: Fifty-six student/parent pairs completed questionnaires partially based on the Adolescent version of the Alcohol Expectancy Questionnaire (Brown, S.A.,…
SU-E-T-446: Group-Sparsity Based Angle Generation Method for Beam Angle Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, H
2015-06-15
Purpose: This work develops an effective algorithm for beam angle optimization (BAO), with emphasis on enabling further improvement over existing treatment-dependent templates based on clinical knowledge and experience. Methods: The proposed BAO algorithm uses a priori beam angle templates as the initial guess and iteratively generates angular updates to this initial set (the angle generation method) with improved dose conformality, as measured quantitatively by the objective function. During each iteration, a "test angle" is selected from the initial set, and group-sparsity based fluence map optimization is used to identify a "candidate angle" for updating it: all angles in the initial set except the test angle (the "fixed set") are set free, i.e., carry no group-sparsity penalty, while the remaining angles, including the test angle, form the "working set". The candidate angle is then selected as the angle in the working set with locally maximal group sparsity and the smallest objective function value, and it replaces the test angle if the fixed set plus the candidate angle yields a smaller objective function value under standard fluence map optimization (without group-sparsity regularization). The other angles in the initial set are selected in turn as the test angle, and this chain of updates is iterated until a full loop identifies no new angular update. Results: Tests using the MGH public prostate dataset demonstrated the effectiveness of the proposed BAO algorithm; for example, the optimized angular set was better than the MGH template. Conclusion: A new BAO algorithm is proposed based on the angle generation method via group sparsity, with improved dose conformality over a given template. Hao Gao was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
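The group-sparsity step can be pictured with a toy proximal-gradient solver for a fluence map optimization of the form min ||Aw - d||^2 + lam * sum_g ||w_g||_2, where each candidate angle's beamlets form one group and angles whose group norm collapses to zero are pruned. This is a minimal sketch under those assumptions, not the paper's formulation; the matrix sizes, penalty weight, and names are illustrative.

```python
import numpy as np

def group_sparse_fmo(A, d, groups, lam, iters=500):
    """Toy group-lasso FMO: min ||A w - d||^2 + lam * sum_g ||w_g||_2,
    solved by proximal gradient (ISTA) with group soft-thresholding.

    A: dose matrix (voxels x beamlets); d: prescribed dose;
    groups: list of beamlet index arrays, one group per candidate angle.
    """
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)  # 1 / Lipschitz constant
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        z = w - step * 2.0 * A.T @ (A @ w - d)      # gradient step on data term
        for idx in groups:                           # prox of the group penalty
            norm = np.linalg.norm(z[idx])
            z[idx] = 0.0 if norm == 0 else max(0.0, 1 - step * lam / norm) * z[idx]
        w = z
    return w

# Groups whose norm survives the shrinkage mark the "candidate" angles.
rng = np.random.default_rng(1)
A, d = rng.random((200, 40)), rng.random(200)
groups = [np.arange(i, i + 10) for i in range(0, 40, 10)]  # 4 angles x 10 beamlets
w = group_sparse_fmo(A, d, groups, lam=5.0)
print([round(float(np.linalg.norm(w[g])), 3) for g in groups])
```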
Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S
2015-02-09
A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.
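Look-up-table methods avoid recomputing the per-point fringe pattern; the pattern itself is the paraxial Fresnel zone term. For reference only, a brute-force accumulation rather than the OTM-NLUT or a GPU kernel, a point-cloud Fresnel CGH can be sketched as follows, with resolution, pixel pitch, and wavelength chosen arbitrarily.

```python
import numpy as np

def fresnel_cgh(points, nx=256, ny=256, pitch=8e-6, wavelength=532e-9):
    """Direct CGH: superpose a paraxial Fresnel zone pattern per object point.

    points: iterable of (x, y, z, amplitude), with z the point depth in metres.
    Returns a real-valued hologram (cosine of the Fresnel phase).
    """
    u = (np.arange(nx) - nx / 2) * pitch
    v = (np.arange(ny) - ny / 2) * pitch
    U, V = np.meshgrid(u, v)                      # hologram-plane coordinates
    H = np.zeros((ny, nx))
    for x, y, z, a in points:
        r2 = (U - x) ** 2 + (V - y) ** 2
        H += a * np.cos(np.pi * r2 / (wavelength * z))  # phase k r^2 / (2z)
    return H

pts = [(0.0, 0.0, 0.10, 1.0), (2e-4, -1e-4, 0.12, 0.8)]
print(fresnel_cgh(pts).shape)
```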
NASA Astrophysics Data System (ADS)
Delahoyde, Theresa
Nursing education is experiencing a generational phenomenon, with student enrollment spanning three generations. Classrooms of the 21st century include the occasional Baby Boomer and a large number of Generation X and Generation Y students. Each of these generations has its own unique set of characteristics that have been shaped by values, trends, behaviors, and events in society. These generational characteristics create vast opportunities to learn, as well as challenges. One such challenge is the use of teaching methods that are congruent with nursing student preferences. Although there is a wide range of studies conducted on student learning styles within the nursing education field, there is little research on the preferred teaching methods of nursing students. The purpose of this quantitative, descriptive study was to compare the preferred teaching methods of multi-generational baccalaureate nursing students with faculty use of teaching methods. The research study included 367 participants: 38 nursing faculty and 329 nursing students from five different colleges within the Midwest region. The results of the two-tailed t-test found four statistically significant differences between Generation X and Y students and their preferred teaching methods, including: lecture; listening to the professor lecture versus working in groups; actively participating in group discussion; and the importance of participating in group assignments. The results of the Analysis of Variance (ANOVA) found seventeen statistically significant findings between levels of students (freshmen/sophomores, juniors, and seniors) and their preferred teaching methods. Lecture was found to be the most frequently used teaching method by faculty as well as the most preferred teaching method by students. Overall, support for a variety of teaching methods was also found in the analysis of the data.
Monitoring crack extension in fracture toughness tests by ultrasonics
NASA Technical Reports Server (NTRS)
Klima, S. J.; Fisher, D. M.; Buzzard, R. J.
1975-01-01
An ultrasonic method was used to observe the onset of crack extension and to monitor continued crack growth in fracture toughness specimens during three-point bend tests. A 20 MHz transducer was used with commercially available equipment to detect average crack extension of less than 0.09 mm. The material tested was a 300-grade maraging steel in the annealed condition. A crack extension resistance curve was developed to demonstrate the usefulness of the ultrasonic method for minimizing the number of tests required to generate such curves.
A Process for Reviewing and Evaluating Generated Test Items
ERIC Educational Resources Information Center
Gierl, Mark J.; Lai, Hollis
2016-01-01
Testing organizations need large numbers of high-quality items due to the proliferation of alternative test administration methods and modern test designs. But the current demand for items far exceeds the supply. Test items, as they are currently written, involve a process that is both time-consuming and expensive because each item is written,…
Domain Regeneration for Cross-Database Micro-Expression Recognition
NASA Astrophysics Data System (ADS)
Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying
2018-05-01
In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples are from two different micro-expression databases. Under this setting, the training and testing samples would have different feature distributions, and hence the performance of most existing micro-expression recognition methods may decrease greatly. To solve this problem, we propose a simple yet effective method called Target Sample Re-Generator (TSRG). By using TSRG, we are able to re-generate the samples from the target micro-expression database, and the re-generated target samples share the same or similar feature distributions as the original source samples. For this reason, we can then use the classifier learned on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments designed based on the SMIC and CASME II databases are conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.
Wang, Yong; Wang, Bing-Chuan; Li, Han-Xiong; Yen, Gary G
2016-12-01
When solving constrained optimization problems by evolutionary algorithms, an important issue is how to balance constraints and objective function. This paper presents a new method to address the above issue. In our method, after generating an offspring for each parent in the population by making use of differential evolution (DE), the well-known feasibility rule is used to compare the offspring and its parent. Since the feasibility rule prefers constraints to objective function, the objective function information has been exploited as follows: if the offspring cannot survive into the next generation and if the objective function value of the offspring is better than that of the parent, then the offspring is stored into a predefined archive. Subsequently, the individuals in the archive are used to replace some individuals in the population according to a replacement mechanism. Moreover, a mutation strategy is proposed to help the population jump out of a local optimum in the infeasible region. Note that, in the replacement mechanism and the mutation strategy, the comparison of individuals is based on objective function. In addition, the information of objective function has also been utilized to generate offspring in DE. By the above processes, this paper achieves an effective balance between constraints and objective function in constrained evolutionary optimization. The performance of our method has been tested on two sets of benchmark test functions, namely, 24 test functions at IEEE CEC2006 and 18 test functions with 10-D and 30-D at IEEE CEC2010. The experimental results have demonstrated that our method shows better or at least competitive performance against other state-of-the-art methods. Furthermore, the advantage of our method increases with the increase of the number of decision variables.
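The comparison rule this abstract builds on is the well-known feasibility rule; the archive of discarded-but-objectively-better offspring is the paper's addition. A minimal sketch of one DE/rand/1/bin generation with both ingredients follows; constraints are taken in g(x) <= 0 form, and the control parameters and helper names are illustrative, not the authors' exact settings.

```python
import numpy as np

def violation(x, constraints):
    """Total constraint violation: sum of max(0, g_i(x)) for g_i(x) <= 0."""
    return sum(max(0.0, g(x)) for g in constraints)

def better(f_a, v_a, f_b, v_b):
    """Feasibility rule: feasible beats infeasible; among feasible, lower
    objective wins; among infeasible, lower violation wins."""
    if v_a == 0 and v_b == 0:
        return f_a < f_b
    if (v_a == 0) != (v_b == 0):
        return v_a == 0
    return v_a < v_b

def de_generation(pop, f, constraints, F=0.5, CR=0.9, archive=None, rng=None):
    """One DE/rand/1/bin generation; offspring that lose under the feasibility
    rule but improve the objective go into the archive, as in the abstract."""
    rng = rng if rng is not None else np.random.default_rng()
    archive = archive if archive is not None else []
    n, dim = pop.shape
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True            # guarantee one mutated gene
        child = np.where(mask, mutant, pop[i])
        fc, vc = f(child), violation(child, constraints)
        fp, vp = f(pop[i]), violation(pop[i], constraints)
        if better(fc, vc, fp, vp):
            pop[i] = child
        elif fc < fp:                             # objective-wise promising loser
            archive.append(child)
    return pop, archive

f = lambda x: float(np.sum(x ** 2))
cons = [lambda x: 1.0 - float(np.sum(x))]         # feasible iff sum(x) >= 1
pop = np.random.default_rng(2).random((10, 3))
pop, arch = de_generation(pop, f, cons)
print(len(arch))
```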
RNActive® Technology: Generation and Testing of Stable and Immunogenic mRNA Vaccines.
Rauch, Susanne; Lutz, Johannes; Kowalczyk, Aleksandra; Schlake, Thomas; Heidenreich, Regina
2017-01-01
Developing effective mRNA vaccines poses certain challenges concerning mRNA stability and ability to induce sufficient immune stimulation and requires a specific panel of techniques for production and testing. Here, we describe the production of stabilized mRNA with enhanced immunogenicity, generated using conventional nucleotides only, by introducing changes to the mRNA sequence and by complexation with the nucleotide-binding peptide protamine (RNActive® technology). Methods described here include the synthesis, purification, and protamine complexation of mRNA vaccines as well as a comprehensive panel of in vitro and in vivo methods for evaluation of vaccine quality and immunogenicity.
Encryption key distribution via chaos synchronization
Keuninckx, Lars; Soriano, Miguel C.; Fischer, Ingo; Mirasso, Claudio R.; Nguimdo, Romain M.; Van der Sande, Guy
2017-01-01
We present a novel encryption scheme wherein an encryption key is generated by two distant complex nonlinear units, forced into synchronization by a chaotic driver. The concept is sufficiently generic to be implemented on photonic, optoelectronic, or electronic platforms. The method for generating the key bitstream from the chaotic signals is reconfigurable. Although derived from a deterministic process, the obtained bit series fulfill the randomness conditions defined by the National Institute of Standards and Technology (NIST) test suite. We demonstrate the feasibility of our concept on an electronic delay oscillator circuit and test the robustness against attacks using a state-of-the-art system identification method. PMID:28233876
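The randomness conditions referred to are those of the NIST SP 800-22 statistical test suite. As a small concrete example, one test from that suite rather than the authors' full evaluation, the frequency (monobit) test can be written directly from its definition:

```python
import math

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test.

    bits: sequence of 0/1 values. Returns the p-value; p >= 0.01 is
    conventionally taken as consistent with randomness.
    """
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)       # map 0 -> -1, 1 -> +1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

print(monobit_test([1, 0] * 500))          # perfectly balanced: p = 1.0
```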
Simulated altitude exposure assessment by hyperspectral imaging
NASA Astrophysics Data System (ADS)
Calin, Mihaela Antonina; Macovei, Adrian; Miclos, Sorin; Parasca, Sorin Viorel; Savastru, Roxana; Hristea, Razvan
2017-05-01
Testing the human body's reaction to hypoxia (including the one generated by high altitude) is important in aeronautic medicine. This paper presents a method of monitoring blood oxygenation during experimental hypoxia using hyperspectral imaging (HSI) and a spectral unmixing model based on a modified Beer-Lambert law. A total of 20 healthy volunteers (males) aged 25 to 60 years were included in this study. A line-scan HSI system was used to acquire images of the faces of the subjects. The method generated oxyhemoglobin and deoxyhemoglobin distribution maps from the foreheads of the subjects at 5 and 10 min of hypoxia and after recovery in a high oxygen breathing mixture. The method also generated oxygen saturation maps that were validated using pulse oximetry. An interesting pattern of desaturation on the forehead was discovered during the study, showing one of the advantages of using HSI for skin oxygenation monitoring in hypoxic conditions. This could bring new insight into the physiological response to high altitude and may become a step forward in air crew testing.
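The spectral unmixing step plausibly follows the modified Beer-Lambert model that is standard in tissue spectroscopy; the paper's exact form is not reproduced here, so the following is the conventional expression with illustrative symbols (DPF the differential pathlength factor, G a scattering offset):

```latex
% Modified Beer-Lambert law: attenuation at wavelength \lambda as a linear
% mix of oxy- and deoxyhemoglobin absorption plus a scattering term G.
A(\lambda) = \log_{10}\frac{I_0(\lambda)}{I(\lambda)}
           = \left[\varepsilon_{\mathrm{HbO_2}}(\lambda)\,c_{\mathrm{HbO_2}}
                 + \varepsilon_{\mathrm{Hb}}(\lambda)\,c_{\mathrm{Hb}}\right]
             \mathrm{DPF}\,d + G(\lambda),
\qquad
\mathrm{S_tO_2} = \frac{c_{\mathrm{HbO_2}}}{c_{\mathrm{HbO_2}} + c_{\mathrm{Hb}}}
```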
Hierarchic Agglomerative Clustering Methods for Automatic Document Classification.
ERIC Educational Resources Information Center
Griffiths, Alan; And Others
1984-01-01
Considers classifications produced by application of single linkage, complete linkage, group average, and word clustering methods to Keen and Cranfield document test collections, and studies structure of hierarchies produced, extent to which methods distort input similarity matrices during classification generation, and retrieval effectiveness…
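The methods named are the classical hierarchic agglomerative clustering variants (word clustering aside). A minimal sketch of the first three on a toy document-term matrix, using SciPy's standard implementations rather than anything from the study:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy document-term matrix (rows = documents); the study used the Keen and
# Cranfield test collections with proper term weighting.
rng = np.random.default_rng(0)
docs = rng.random((20, 50))

for method in ("single", "complete", "average"):    # three HACM variants
    Z = linkage(docs, method=method, metric="cosine")
    labels = fcluster(Z, t=4, criterion="maxclust")  # cut dendrogram, 4 clusters
    print(method, labels[:10])
```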
Does rational selection of training and test sets improve the outcome of QSAR modeling?
Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander
2012-10-22
Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
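Of the rational division methods named, Kennard-Stone is the most self-contained: seed with the two most distant samples, then repeatedly add the sample farthest from everything already selected. A minimal sketch under that definition (descriptor matrix and split sizes are illustrative):

```python
import numpy as np

def kennard_stone(X, n_train):
    """Kennard-Stone split: returns (train_idx, test_idx)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)  # two most distant samples
    selected = [i, j]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_train:
        # Pick the remaining sample whose nearest selected sample is farthest.
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        k = remaining[int(np.argmax(min_d))]
        selected.append(k)
        remaining.remove(k)
    return np.array(selected), np.array(remaining)

X = np.random.default_rng(0).random((25, 4))        # toy descriptor matrix
train, test = kennard_stone(X, 20)                  # 80/20 split as in the study
print(sorted(test))
```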
ERDA/Lewis research center photovoltaic systems test facility
NASA Technical Reports Server (NTRS)
Forestieri, A. F.; Johnson, J. A.; Knapp, W. D.; Rigo, H.; Stover, J.; Suhay, R.
1977-01-01
A national photovoltaic power systems test facility (of initial 10-kW peak power rating) is described. It consists of a solar array to generate electrical power, test-hardware for several alternate methods of power conversion, electrical energy storage systems, and an instrumentation and data acquisition system.
Visualization of pass-by noise by means of moving frame acoustic holography.
Park, S H; Kim, Y H
2001-11-01
The noise generated by the pass-by test (ISO 362) was visualized. Moving frame acoustic holography was improved to visualize the pass-by noise and predict its level. The proposed method allowed us to visualize tire and engine noise generated by the pass-by test based on the following assumption: the noise can be assumed to be quasistationary. This holds first because the speed change during the period of interest is negligible, and second because the frequency change of the noise is also negligible. The proposed method was verified by a controlled loudspeaker experiment. Effects of running condition, e.g., accelerating according to ISO 362, cruising at constant speed, and coasting down, on the radiated noise were also visualized. The visualized results show where the tire noise is generated and how it propagates.
Determination of LEDs degradation with entropy generation rate
NASA Astrophysics Data System (ADS)
Cuadras, Angel; Yao, Jiaqiang; Quilez, Marcos
2017-10-01
We propose a method to assess the degradation and aging of light emitting diodes (LEDs) based on the irreversible entropy generation rate. We degraded several LEDs and monitored their entropy generation rate (Ṡ) in accelerated tests. We compared the thermoelectrical results with the evolution of optical light emission during degradation. We find a good relationship between aging and Ṡ(t), because Ṡ is related to both device parameters and optical performance. We propose a threshold of Ṡ(t) as a reliable damage indicator of LED end-of-life that can avoid the need to perform optical measurements to assess optical aging. The method goes beyond the typical statistical laws for lifetime prediction provided by manufacturers. We tested different LED colors and electrical stresses to validate the electrical LED model, and we analyzed the degradation mechanisms of the devices.
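The abstract does not spell out how Ṡ is computed; a common assumption for a resistively dissipating device, offered here only as a plausible reading, is entropy generation from the electrical power not emitted as light, divided by the absolute device temperature:

```latex
% Assumed form of the irreversible entropy generation rate (illustrative;
% the paper's exact expression may differ): electrical power in, minus
% optical power out, over absolute temperature.
\dot{S}(t) \approx \frac{V(t)\,I(t) - P_{\mathrm{opt}}(t)}{T(t)}
```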
Standing wave performance test of IDT-SAW transducer prepared by silk-screen printing
NASA Astrophysics Data System (ADS)
Wang, Ziping; Jiang, Zhengxuan; Chen, Liangbin; Li, Yefei; Li, Meixia; Wang, Shaohan
2018-05-01
With the advantages of high performance and low loss, interdigital surface acoustic wave (IDT-SAW) transducers are widely used in the fields of nondestructive testing, communication, and broadcasting. The production, performance, and application of surface acoustic wave (SAW) actuators have become a research hotspot. Based on the basic principle of SAW, an IDT-SAW transducer is designed and fabricated using silk-screen printing in this work. The experimental results show that, in terms of SAW performance, the fabricated IDT-SAW transducer can generate standing wave fields comparable to those generated using traditional fabrication methods. The resonant frequency response of the IDT-SAW transducer and the SAW attenuation coefficient were obtained by experiment. This provides a method for testing transducer sensing performance using the fabricated IDT-SAW transducer.
Precision medicine for cancer with next-generation functional diagnostics.
Friedman, Adam A; Letai, Anthony; Fisher, David E; Flaherty, Keith T
2015-12-01
Precision medicine is about matching the right drugs to the right patients. Although this approach is technology agnostic, in cancer there is a tendency to make precision medicine synonymous with genomics. However, genome-based cancer therapeutic matching is limited by incomplete biological understanding of the relationship between phenotype and cancer genotype. This limitation can be addressed by functional testing of live patient tumour cells exposed to potential therapies. Recently, several 'next-generation' functional diagnostic technologies have been reported, including novel methods for tumour manipulation, molecularly precise assays of tumour responses and device-based in situ approaches; these address the limitations of the older generation of chemosensitivity tests. The promise of these new technologies suggests a future diagnostic strategy that integrates functional testing with next-generation sequencing and immunoprofiling to precisely match combination therapies to individual cancer patients.
Vibration and noise analysis of a gear transmission system
NASA Technical Reports Server (NTRS)
Choy, F. K.; Qian, W.; Zakrajsek, J. J.; Oswald, F. B.
1993-01-01
This paper presents a comprehensive procedure to predict both the vibration and noise generated by a gear transmission system under normal operating conditions. The gearbox vibrations were obtained from both numerical simulation and experimental studies using a gear noise test rig. In addition, the noise generated by the gearbox vibrations was recorded during the experimental testing. A numerical method was used to develop linear relationships between the gearbox vibration and the generated noise. The hypercoherence function is introduced to correlate the nonlinear relationship between the fundamental noise frequency and its harmonics. A numerical procedure was developed using both the linear and nonlinear relationships generated from the experimental data to predict noise resulting from the gearbox vibrations. The application of this methodology is demonstrated by comparing the numerical and experimental results from the gear noise test rig.
High-precision Non-Contact Measurement of Creep of Ultra-High Temperature Materials for Aerospace
NASA Technical Reports Server (NTRS)
Rogers, Jan R.; Hyers, Robert
2008-01-01
For high-temperature applications (greater than 2,000 C) such as solid rocket motors, hypersonic aircraft, nuclear electric/thermal propulsion for spacecraft, and more efficient jet engines, creep becomes one of the most important design factors to be considered. Conventional creep-testing methods, where the specimen and test apparatus are in contact with each other, are limited to temperatures of approximately 1,700 C. Development of alloys for higher-temperature applications is limited by the availability of testing methods at temperatures above 2,000 C. Development of alloys for applications requiring a long service life at temperatures as low as 1,500 C, such as the next generation of jet turbine superalloys, is limited by the difficulty of accelerated testing at temperatures above 1,700 C. For these reasons, a new, non-contact creep-measurement technique is needed for higher temperature applications. A new non-contact method for creep measurements of ultra-high-temperature metals and ceramics has been developed and validated. Using the electrostatic levitation (ESL) facility at NASA Marshall Space Flight Center, a spherical sample is rotated quickly enough to cause creep deformation due to centrifugal acceleration. Very accurate measurement of the deformed shape through digital image analysis allows the stress exponent n to be determined very precisely from a single test, rather than from numerous conventional tests. Validation tests on single-crystal niobium spheres showed excellent agreement with conventional tests at 1,985 C; however, the non-contact method provides much greater precision while using only about 40 milligrams of material. This method is being applied to materials including metals and ceramics for non-eroding throats in solid rockets and next-generation superalloys for turbine engines. Recent advances in the method and the current state of these new measurements will be presented.
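The stress exponent n mentioned here is the one from standard steady-state (Norton) power-law creep; because the centrifugal stress varies continuously across the spinning sphere, a single deformed shape samples many stress levels at once, which is why one test suffices:

```latex
% Norton power-law creep; n is the slope of log strain rate vs. log stress
% at fixed temperature, Q the activation energy.
\dot{\varepsilon} = A\,\sigma^{n}\exp\!\left(-\frac{Q}{RT}\right),
\qquad
n = \left.\frac{\partial \ln\dot{\varepsilon}}{\partial \ln\sigma}\right|_{T}
```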
Analysis of International Space Station Materials on MISSE-3 and MISSE-4
NASA Technical Reports Server (NTRS)
Finckenor, Miria M.; Golden, Johnny L.; O'Rourke, Mary Jane
2008-01-01
Kremser, Andreas; Dressig, Julia; Grabrucker, Christine; Liepert, Anja; Kroell, Tanja; Scholl, Nina; Schmid, Christoph; Tischer, Johanna; Kufner, Stefanie; Salih, Helmut; Kolb, Hans Jochem; Schmetzer, Helga
2010-01-01
Myeloid-leukemic cells (AML, MDS, CML) can be differentiated to leukemia-derived dendritic cells [DC (DCleu)], potentially presenting the whole leukemic antigen repertoire without knowledge of distinct leukemia antigens, and are regarded as promising candidates for a vaccination strategy. We studied the capability of 6 serum-free DC culture methods, chosen according to different mechanisms, to induce DC differentiation in 137 cases of AML and 52 cases of MDS. DC-stimulating substances were cytokines ("standard-medium", "MCM-Mimic", "cytokine-method"), bacterial lysates ("Picibanil"), double-stranded RNA ["Poly (I:C)"] or a cytokine bypass method ("Ca-ionophore"). The quality/quantity of DC generated was estimated by flow cytometry, studying (co)expressions of "DC" antigens, costimulatory, maturation, and blast antigens. Comparing these methods, on average 15% to 32% DC, depending on the method used, could be obtained from blast-containing mononuclear cells (MNC) in AML/MDS cases, with a DC viability of more than 60%. In all, 39% to 64% of these DC were mature; 31% to 52% of leukemic blasts could be converted to DCleu, and DCleu proportions in the suspension were 2% to 70% (13%). Average results of all culture methods tested were comparable; however, not every given case of AML could be differentiated to DC with 1 selected method. However, performing a pre-analysis with 3 DC-generating methods (MCM-Mimic, Picibanil, Ca-ionophore), we could generate DC in any given case. Functional analyses provided proof that DC primed T cells to antileukemia-directed cytotoxic cells, although an anti-leukemic reaction was not achieved in every case. In summary, our data show that successful, quantitative DC/DCleu generation is possible with the best of 3 previously tested methods in any given case. Reasons for the different functional behaviors of DC-primed T cells must be evaluated to design a practicable DC-based vaccination strategy.
Paini, Dean R.; Bianchi, Felix J. J. A.; Northfield, Tobin D.; De Barro, Paul J.
2011-01-01
Predicting future species invasions presents significant challenges to researchers and government agencies. Simply considering the vast number of potential species that could invade an area can be insurmountable. One method, recently suggested, which can analyse large datasets of invasive species simultaneously is that of a self organising map (SOM), a form of artificial neural network which can rank species by establishment likelihood. We used this method to analyse the worldwide distribution of 486 fungal pathogens and then validated the method by creating a virtual world of invasive species in which to test the SOM. This novel validation method allowed us to test SOM's ability to rank those species that can establish above those that can't. Overall, we found the SOM highly effective, having on average, a 96–98% success rate (depending on the virtual world parameters). We also found that regions with fewer species present (i.e. 1–10 species) were more difficult for the SOM to generate an accurately ranked list, with success rates varying from 100% correct down to 0% correct. However, we were able to combine the numbers of species present in a region with clustering patterns in the SOM, to further refine confidence in lists generated from these sparsely populated regions. We then used the results from the virtual world to determine confidences for lists generated from the fungal pathogen dataset. Specifically, for lists generated for Australia and its states and territories, the reliability scores were between 84–98%. We conclude that a SOM analysis is a reliable method for analysing a large dataset of potential invasive species and could be used by biosecurity agencies around the world resulting in a better overall assessment of invasion risk. PMID:22016773
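In outline, the SOM analysis can be reproduced with the third-party minisom package; the grid size, training schedule, and the way a best-matching unit's weights are turned into a species ranking below are assumptions for illustration, not the study's protocol.

```python
import numpy as np
from minisom import MiniSom   # third-party package: pip install minisom

# Rows = regions, columns = presence/absence of each species (toy data;
# the study used the worldwide distribution of 486 fungal pathogens).
rng = np.random.default_rng(0)
regions = rng.integers(0, 2, size=(100, 486)).astype(float)

som = MiniSom(6, 6, regions.shape[1], sigma=1.0, learning_rate=0.5,
              random_seed=0)
som.train_random(regions, 5000)

# Rank species for one target region via its best-matching unit: a high
# weight for a species not yet present = high establishment likelihood.
target = regions[0]
weights = som.get_weights()[som.winner(target)]
ranked = np.argsort(-weights * (1 - target))   # consider absent species only
print(ranked[:10])
```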
Spacecraft Fire Suppression: Testing and Evaluation
NASA Technical Reports Server (NTRS)
Abbud-Madrid, Angel; McKinnon, J. Thomas; Delplanque, Jean-Pierre; Kailasanath, Kazhikathra; Gokoglu, Suleyman; Wu, Ming-Shin
2004-01-01
The objective of this project is the testing and evaluation of the effectiveness of a variety of fire suppressants and fire-response techniques that will be used in the next generation of spacecraft (Crew Exploration Vehicle, CEV) and planetary habitats. From the many lessons learned in the last 40 years of space travel, there is common agreement in the spacecraft fire safety community that a new fire suppression system will be needed for the various types of fire threats anticipated in new space vehicles and habitats. To date, there is no single fire extinguishing system that can address all possible fire situations in a spacecraft in an effective, reliable, clean, and safe way. The testing conducted under this investigation will not only validate the various numerical models that are currently being developed, but it will provide new design standards on fire suppression that can then be applied to the next generation of spacecraft extinguishment systems. The test program will provide validation of scaling methods by conducting small, medium, and large scale fires. A variety of suppression methods will be tested, such as water mist, carbon dioxide, and nitrogen with single and multiple injection points and direct or distributed agent deployment. These injection methods cover the current ISS fire suppression method of a portable hand-held fire extinguisher spraying through a port in a rack and also next generation spacecraft units that may have a multi-point suppression delivery system built into the design. Consideration will be given to the need of a crew to clean-up the agent and recharge the extinguishers in flight in a long-duration mission. The fire suppression methods mentioned above will be used to extinguish several fire scenarios that have been identified as the most relevant to spaceflight, such as overheated wires, cable bundles, and circuit boards, as well as burning cloth and paper. Further testing will be conducted in which obstructions and ventilation will be added to represent actual spacecraft conditions (e.g., a series of cards in a card rack).
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The objectives of the SNAP 7A program were to design, manufacture, test, and deliver a five-watt electric generation system for a U. S. Coast Guard 8 x 26E light buoy. The 10-watt Sr-90 thermoelectric generator, the d-c-to-d-c converter, the batteries, and the method of installation in the light buoy are described. The SNAP 7A generator was fueled with four capsules containing a total of 40,800 curies of Sr-90 titanate. After fueling and testing, the SNAP 7A electric generating system was installed in the Coast Guard light buoy at Baltimore, Maryland, on December 15, 1961. Operation of the buoy lamp is continuous. (auth)
Drawbar Pull (DP) Procedures for Off-Road Vehicle Testing
NASA Technical Reports Server (NTRS)
Creager, Colin; Asnani, Vivake; Oravec, Heather; Woodward, Adam
2017-01-01
As NASA strives to explore the surface of the Moon and Mars, there is a continued need for improved tire and vehicle development. When tires or vehicles are being designed for off-road conditions where significant thrust generation is required, such as climbing out of craters on the Moon, it is important to use a standard test method for evaluating their tractive performance. The drawbar pull (DP) test is a way of measuring the net thrust generated by tires or a vehicle with respect to performance metrics such as travel reduction, sinkage, or power efficiency. DP testing may be done using a single tire on a traction rig, or with a set of tires on a vehicle; this report focuses on vehicle DP tests. Though vehicle DP tests have been used for decades, there are no standard procedures that apply to exploration vehicles. This report summarizes previous methods employed, shows the sensitivity of certain test parameters, and provides a body of knowledge for developing standard testing procedures. The focus of this work is on lunar applications, but these test methods can be applied to terrestrial and planetary conditions as well. Section 1.0 of this report discusses the utility of DP testing for off-road vehicle evaluation and the metrics used. Section 2.0 focuses on test-terrain preparation, using the example case of lunar terrain. There is a review of lunar terrain analogs implemented in the past and a discussion on the lunar terrain conditions created at the NASA Glenn Research Center, including methods of evaluating the terrain strength variation and consistency from test to test. Section 3.0 provides details of the vehicle test procedures. These consist of a review of past methods, a comprehensive study on the sensitivity of test parameters, and a summary of the procedures used for DP testing at Glenn.
Pseudorandom number generation using chaotic true orbits of the Bernoulli map
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saito, Asaki, E-mail: saito@fun.ac.jp; Yamaguchi, Akihiro
We devise a pseudorandom number generator that exactly computes chaotic true orbits of the Bernoulli map on quadratic algebraic integers. Moreover, we describe a way to select the initial points (seeds) for generating multiple pseudorandom binary sequences. This selection method distributes the initial points almost uniformly (equidistantly) in the unit interval, and latter parts of the generated sequences are guaranteed not to coincide. We also demonstrate through statistical testing that the generated sequences possess good randomness properties.
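The essential trick is exact arithmetic: a rational seed under x -> 2x mod 1 is eventually periodic, while a quadratic irrational such as sqrt(2) - 1 is not, and symbolic arithmetic keeps the computed orbit a true orbit. A minimal sketch using SymPy's exact numbers (the seed and bit count are arbitrary; the paper's implementation works on quadratic algebraic integers directly):

```python
import sympy as sp

def bernoulli_bits(x0, n):
    """Exact (true-orbit) iteration of the Bernoulli map x -> 2x mod 1,
    emitting one bit per step: the integer part of 2x."""
    x = sp.nsimplify(x0)
    bits = []
    for _ in range(n):
        x = 2 * x
        if x >= 1:          # exact symbolic comparison, no rounding error
            bits.append(1)
            x -= 1
        else:
            bits.append(0)
    return bits

print(bernoulli_bits(sp.sqrt(2) - 1, 16))
```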
A Secure Test Technique for Pipelined Advanced Encryption Standard
NASA Astrophysics Data System (ADS)
Shi, Youhua; Togawa, Nozomu; Yanagisawa, Masao; Ohtsuki, Tatsuo
In this paper, we present a Design-for-Secure-Test (DFST) technique for pipelined AES to guarantee both security and test quality during testing. Unlike previous works, the proposed method can keep all the secrets inside while providing high test quality and fault diagnosis ability as well. Furthermore, the proposed DFST technique can significantly reduce test application time, test data volume, and test generation effort as additional benefits.
A Comparison of Methods for Assessing Space Suit Joint Ranges of Motion
NASA Technical Reports Server (NTRS)
Aitchison, Lindsay T.
2012-01-01
Through the Advanced Exploration Systems (AES) Program, NASA is attempting to use the vast collection of space suit mobility data from 50 years' worth of space suit testing to build predictive analysis tools to aid in early architecture decisions for future missions and exploration programs. However, the design engineers must first understand if and how data generated by different methodologies can be compared directly and used in an essentially interchangeable manner. To address this question, the isolated joint range of motion data from two different test series were compared. Both data sets were generated from participants wearing the Mark III Space Suit Technology Demonstrator (MK-III), Waist Entry I-suit (WEI), and minimal clothing. Additionally, the two tests shared a common test subject, which allowed for within-subject comparisons of the methods and greatly reduced the number of variables in play. The tests varied in their methodologies: the Space Suit Comparative Technologies Evaluation used 2-D photogrammetry to analyze isolated ranges of motion, while the Constellation space suit benchmarking and requirements development used 3-D motion capture to evaluate both isolated and functional joint ranges of motion. The isolated data from both test series were compared graphically, as percent differences, and by simple statistical analysis. The results indicated that while the methods generate results that are statistically the same (significance level p = 0.01), the differences are significant enough in the practical sense to make direct comparisons ill-advised. The concluding recommendations propose direction for how to bridge the data gaps and address future mobility data collection to allow for backward compatibility.
Tomasino, Stephen F; Hamilton, Martin A
2007-01-01
Two quantitative carrier-based test methods for determining the efficacy of liquid sporicides and sterilants on a hard surface, the Standard Quantitative Carrier Test Method-ASTM E 2111-00 and an adaptation of a quantitative micro-method as reported by Sagripanti and Bonifacino, were compared in this study. The methods were selected based on their desirable characteristics (e.g., well-developed protocol, previous use with spores, fully quantitative, and use of readily available equipment) for testing liquid sporicides and sterilants on a hard surface. In this paper, the Sagripanti-Bonifacino procedure is referred to as the Three Step Method (TSM). AOAC Official Method 966.04 was included in this study as a reference method. Three laboratories participated in the evaluation. Three chemical treatments were tested: (1) 3000 ppm sodium hypochlorite with pH adjusted to 7.0, (2) a hydrogen peroxide/peroxyacetic acid product, and (3) 3000 ppm sodium hypochlorite with pH unadjusted (pH of approximately 10.0). A fourth treatment, 6000 ppm sodium hypochlorite solution with pH adjusted to 7.0, was included only for Method 966.04 as a positive control (high level of efficacy). The contact time was 10 min for all chemical treatments except the 6000 ppm sodium hypochlorite treatment which was tested at 30 min. Each chemical treatment was tested 3 times using each of the methods. Only 2 of the laboratories performed the AOAC method. Method performance was assessed by the within-laboratory variance, between-laboratory variance, and total variance associated with the log reduction (LR) estimates generated by each quantitative method. The quantitative methods performed similarly, and the LR values generated by each method were not statistically different for the 3 treatments evaluated. Based on feedback from the participating laboratories, compared to the TSM, ASTM E 2111-00 was more resource demanding and required more set-up time. The logistical and resource concerns identified for ASTM E 2111-00 were largely associated with the filtration process and counting bacterial colonies on filters. Thus, the TSM was determined to be the most suitable method.
Designing the Nuclear Energy Attitude Scale.
ERIC Educational Resources Information Center
Calhoun, Lawrence; And Others
1988-01-01
Presents a refined method for designing a valid and reliable Likert-type scale to test attitudes toward the generation of electricity from nuclear energy. Discusses various tests of validity that were used on the nuclear energy scale. Reports results of administration and concludes that the test is both reliable and valid. (CW)
Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo
2016-01-20
A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.
Comparison of two methods for composite score generation in dry eye syndrome.
See, Craig; Bilonick, Richard A; Feuer, William; Galor, Anat
2013-09-19
To compare two methods of composite score generation in dry eye syndrome (DES). Male patients seen in the Miami Veterans Affairs eye clinic with normal eyelid, corneal, and conjunctival anatomy were recruited to participate in the study. Patients filled out the Dry Eye Questionnaire 5 (DEQ5) and underwent measurement of tear film parameters. DES severity scores were generated by independent component analysis (ICA) and latent class analysis (LCA). A total of 247 men were included in the study. Mean age was 69 years (SD 9). Using ICA analysis, osmolarity was found to carry the largest weight, followed by eyelid vascularity and meibomian orifice plugging. Conjunctival injection and tear breakup time (TBUT) carried the lowest weights. Using LCA analysis, TBUT was found to be best at discriminating healthy from diseased eyes, followed closely by Schirmer's test. DEQ5, eyelid vascularity, and conjunctival injection were the poorest at discrimination. The adjusted correlation coefficient between the two generated composite scores was 0.63, indicating that the shared variance was less than 40%. Both ICA and LCA produced composite scores for dry eye severity, with weak to moderate agreement; however, agreement for the relative importance of single diagnostic tests was poor between the two methods.
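One plausible reading of the ICA branch, offered as a sketch only since the paper's pipeline details are not given here, is to standardize the clinical signs, extract one independent component as the composite severity score, and read each sign's weight off the mixing matrix:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.preprocessing import StandardScaler

# Toy stand-in for the clinical data: rows = patients, columns = signs
# (osmolarity, TBUT, Schirmer, lid vascularity, ...); values are random here.
rng = np.random.default_rng(0)
X = rng.random((247, 6))

Z = StandardScaler().fit_transform(X)          # put all signs on one scale
ica = FastICA(n_components=1, random_state=0)
score = ica.fit_transform(Z).ravel()           # one composite score per patient

# Each sign's loading on the component plays the role of its "weight".
print(ica.mixing_.ravel())
```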
Comparing Different Fault Identification Algorithms in Distributed Power System
NASA Astrophysics Data System (ADS)
Alkaabi, Salim
A power system is a huge, complex system that delivers electrical power from the generation units to the consumers. As the demand for electrical power increases, distributed power generation has been introduced into the power system. Faults may occur in the power system at any time and in different locations; they can cause extensive damage and may lead to full failure of the power system, which makes fault location an important area for research. The use of distributed generation makes it even harder to identify the location of faults in the system. The main objective of this work is to test different fault location identification algorithms on a power system with varying amounts of power injected by distributed generators. In this thesis, different fault location identification algorithms were tested and compared while varying the amount of power injected from distributed generators. The algorithms were tested on the IEEE 34 node test feeder using MATLAB, and the results were compared to determine when these algorithms might fail and how reliable these methods are.
Method to Generate Full-Span Ice Shape on Swept Wing Using Icing Tunnel Data
NASA Technical Reports Server (NTRS)
Lee, Sam; Camello, Stephanie
2015-01-01
There is a collaborative research program by NASA, FAA, ONERA, and university partners to improve the fidelity of experimental and computational simulation methods for swept-wing ice accretion formations and the resultant aerodynamic effects on large transport aircraft. This research utilizes a 65%-scale Common Research Model as the baseline configuration. In order to generate the ice shapes for the aerodynamic testing, ice-accretion testing will be conducted in the NASA Icing Research Tunnel (IRT) utilizing hybrid models at the 20%, 64%, and 83% spanwise locations. The models will have full-scale leading edges with truncated chords in order to fit the IRT test section. The ice shapes from the IRT tests will be digitized using a commercially available articulated-arm 3D laser scanning system. The methodology to acquire 3D ice shapes using a laser scanner was developed and validated in a previous research effort. Each of these models will yield a 1.5 ft span of ice that can be used. However, a full-span ice accretion will require a 75 ft span of ice. This means there will be large gaps between these spanwise ice sections that must be filled while maintaining all of the important aerodynamic features. A method was developed to generate a full-span ice shape from the three 1.5 ft span ice shapes from the three models.
Mathematical model of snake-type multi-directional wave generation
NASA Astrophysics Data System (ADS)
Muarif; Halfiani, Vera; Rusdiana, Siti; Munzir, Said; Ramli, Marwan
2018-01-01
Research on extreme wave generation is an intensive area of water wave study, owing to the fact that the occurrence of such waves in the ocean can cause serious damage to ships and offshore structures. One method used to generate such waves is self-correcting, which controls the signal sent to the wavemakers in a wave tank. Some studies also consider nonlinear wave generation in a wave tank using a numerical approach. The study of wave generation is essential to the effectiveness and efficiency of offshore structure model testing before a structure can be operated in the ocean. Generally, two types of wavemakers are implemented in hydrodynamic laboratories: piston-type and flap-type. The flap-type is preferred for testing a ship in deep water. The single-flap wavemaker has been explained in many studies, yet the snake-type wavemaker (with more than one flap) still needs to be examined. Hence, the formulation controlling the wavemaker needs to be precisely analyzed so that the given input can generate the desired wave in the space-limited wave tank. By applying the same analogy and methodology as the previous study, this article presents multi-directional wave generation implemented with snake-type wavemakers.
Parametric analysis of ATM solar array.
NASA Technical Reports Server (NTRS)
Singh, B. K.; Adkisson, W. B.
1973-01-01
The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.
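The third and fourth programs amount to fitting polynomials of cell characteristics against temperature and then evaluating them at off-test conditions; a minimal sketch with invented numbers (the actual coefficients came from the solar simulator test data):

```python
import numpy as np

# Fit a polynomial for one solar-cell characteristic versus temperature
# (illustrative values only), as the third program does.
temps_c = np.array([-50.0, 0.0, 25.0, 60.0, 100.0])
isc_ma = np.array([58.0, 60.5, 61.8, 63.9, 66.4])   # short-circuit current

isc_model = np.poly1d(np.polyfit(temps_c, isc_ma, deg=2))

# The fourth program's role: evaluate the polynomial away from test conditions.
print(isc_model(80.0))    # predicted short-circuit current at 80 C
```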
Embedded object concept with a telepresence robot system
NASA Astrophysics Data System (ADS)
Vallius, Tero; Röning, Juha
2005-10-01
This paper presents the Embedded Object Concept (EOC) and a telepresence robot system that serves as a test case for the EOC. The EOC utilizes common object-oriented methods used in software by applying them to combined Lego-like software-hardware entities. These entities represent objects in object-oriented design methods, and they are the building blocks of embedded systems. The goal of the EOC is to make the design of embedded systems faster and easier. The concept enables people without comprehensive knowledge of electronics design to create new embedded systems, and for experts it shortens the design time of new embedded systems. We present the current status of the EOC, including two generations of embedded objects named Atomi objects. The first generation of Atomi objects has been tested with different applications and found to be functional, but not optimal. The second generation aims to correct the issues found with the first generation, and it is being tested in a relatively complex test case. The test case is a telepresence robot consisting of a two-wheeled, human-height robot and its computer counterpart. The robot has been constructed using incremental device development, which is made possible by the architecture of the EOC. The robot contains video and audio exchange capability and a controlling and balancing system for driving on two wheels. The robot is built in two versions: the first consisting of a PDA device and Atomi objects, and the second consisting of only Atomi objects. The robot is currently incomplete, but for the most part it has been successfully tested.
Generation of Cardiomyocytes from Pluripotent Stem Cells.
Nakahama, Hiroko; Di Pasquale, Elisa
2016-01-01
The advent of pluripotent stem cells (PSCs) enabled a multitude of studies for modeling the development of diseases and testing pharmaceutical therapeutic potential in vitro. These PSCs have been differentiated to multiple cell types to demonstrate their pluripotent potential, including cardiomyocytes (CMs). However, the efficiency and efficacy of differentiation vary greatly between different cell lines and methods. Here, we describe two different methods for acquiring CMs from human pluripotent lines. One method involves the generation of embryoid bodies, which emulates the natural developmental process, while the other method chemically activates the canonical Wnt signaling pathway to induce a monolayer of cardiac differentiation.
Methods for the design and analysis of power optimized finite-state machines using clock gating
NASA Astrophysics Data System (ADS)
Chodorowski, Piotr
2017-11-01
The paper discusses two methods for the design of power-optimized FSMs. Both methods use clock gating techniques. The main objective of the research was to write a program capable of automatically generating hardware descriptions of finite-state machines in VHDL, together with testbenches to support power analysis. The creation of the relevant output files is detailed step by step. The program was tested using the LGSynth91 FSM benchmark package. An analysis of the generated circuits shows that the second method presented in this paper leads to a significant reduction in power consumption.
Model Checking Artificial Intelligence Based Planners: Even the Best Laid Plans Must Be Verified
NASA Technical Reports Server (NTRS)
Smith, Margaret H.; Holzmann, Gerard J.; Cucullu, Gordon C., III; Smith, Benjamin D.
2005-01-01
Automated planning systems (APS) are gaining acceptance for use on NASA missions, as evidenced by APS flown on missions such as Orbiter and Deep Space 1, both of which were commanded by onboard planning systems. The planning system takes high-level goals and expands them onboard into a detailed sequence of actions that the spacecraft executes. The system must be verified to ensure that the automatically generated plans achieve the goals as expected and do not generate actions that would harm the spacecraft or mission. These systems are typically tested using empirical methods. Formal methods, such as model checking, offer exhaustive or measurable test coverage, which leads to much greater confidence in correctness. This paper describes a formal method based on the SPIN model checker. This method guarantees that possible plans meet certain desirable properties. We express the input model in Promela, the language of SPIN, and express the properties of desirable plans formally.
Large-Scale Multiobjective Static Test Generation for Web-Based Testing with Integer Programming
ERIC Educational Resources Information Center
Nguyen, M. L.; Hui, Siu Cheung; Fong, A. C. M.
2013-01-01
Web-based testing has become a ubiquitous self-assessment method for online learning. One useful feature that is missing from today's web-based testing systems is the reliable capability to fulfill different assessment requirements of students based on a large-scale question data set. A promising approach for supporting large-scale web-based…
Random vs. Combinatorial Methods for Discrete Event Simulation of a Grid Computer Network
NASA Technical Reports Server (NTRS)
Kuhn, D. Richard; Kacker, Raghu; Lei, Yu
2010-01-01
This study compared random and t-way combinatorial inputs to a network simulator, to determine if these two approaches produce significantly different deadlock detection results for varying network configurations. Modeling deadlock detection is important for analyzing configuration changes that could inadvertently degrade network operations, or for determining modifications that could be made by attackers to deliberately induce deadlock. Discrete event simulation of a network may be conducted using random generation of inputs. In this study, we compare random with combinatorial generation of inputs. Combinatorial (or t-way) testing requires every combination of any t parameter values to be covered by at least one test. Combinatorial methods can be highly effective because empirical data suggest that nearly all failures involve the interaction of a small number of parameters (1 to 6). Thus, for example, if all deadlocks involve at most 5-way interactions between n parameters, then exhaustive testing of all n-way interactions adds no information that would not be obtained by testing all 5-way interactions. While the maximum degree of interaction between parameters involved in deadlocks clearly cannot be known in advance, covering all t-way interactions may be more efficient than random generation of inputs. In this study we tested this hypothesis for t = 2, 3, and 4 for deadlock detection in a network simulation. Achieving the same degree of coverage provided by 4-way tests would have required approximately 3.2 times as many random tests; thus combinatorial methods were more efficient for detecting deadlocks involving a higher degree of interaction. The paper reviews explanations for these results and implications for modeling and simulation.
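As a concrete illustration of t-way coverage, the following is a minimal sketch of greedy pairwise (t = 2) test generation in Python. The network parameters and values are invented for the example; production tools such as covering-array generators use far more efficient constructions.

from itertools import combinations

params = {
    "topology": ["ring", "star", "mesh"],
    "queue_len": [1, 8, 64],
    "link_rate": ["10M", "100M"],
    "retries": [0, 3],
}
names = list(params)

def pairs_of(test):
    # All (parameter, value) pairs covered by a (possibly partial) test.
    keys = [n for n in names if n in test]
    return {((a, test[a]), (b, test[b])) for a, b in combinations(keys, 2)}

# Every 2-way combination that must be covered at least once.
uncovered = {((a, va), (b, vb))
             for a, b in combinations(names, 2)
             for va in params[a] for vb in params[b]}

tests = []
while uncovered:
    (a, va), (b, vb) = next(iter(uncovered))    # seed from an uncovered pair
    test = {a: va, b: vb}
    for name in names:                          # fill remaining parameters greedily
        if name not in test:
            test[name] = max(params[name], key=lambda v: len(
                pairs_of({**test, name: v}) & uncovered))
    uncovered -= pairs_of(test)
    tests.append(test)

print(len(tests), "tests cover all pairwise interactions")

For the four parameters above, exhaustive testing would require 3 x 3 x 2 x 2 = 36 tests, while the greedy pairwise set typically needs around a dozen.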
Verification and Validation in a Rapid Software Development Process
NASA Technical Reports Server (NTRS)
Callahan, John R.; Easterbrook, Steve M.
1997-01-01
The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.
NASA Technical Reports Server (NTRS)
O'Brien, T. Kevin; Johnston, William M.; Toland, Gregory J.
2010-01-01
Mode II interlaminar fracture toughness and delamination onset and growth characterization data were generated for IM7/8552 graphite epoxy composite materials from two suppliers for use in fracture mechanics analyses. Both the fracture toughness testing and the fatigue testing were conducted using the End-notched Flexure (ENF) test. The ENF test for mode II fracture toughness is currently under review by ASTM as a potential standard test method. This current draft ASTM protocol was used as a guide to conduct the tests on the IM7/8552 material. This report summarizes the test approach, methods, procedures and results of this characterization effort.
Preparation for foam composites. [using polybenzimidazole for fireproofing panels
NASA Technical Reports Server (NTRS)
Maximovich, M. G.
1974-01-01
Methods were developed for the fabrication of fire-resistant panels utilizing polybenzimidazole (PBI) and Kerimid 601 resins along with glass, quartz, and Kevlar reinforcements. Stitched truss structures, both unfilled and filled with PBI foam, were successfully fabricated and tested. Second-generation structures were then selected, fabricated, and tested, with a PBI/glass-skin/PBI-foam sandwich structure emerging as the optimum panel concept. Mechanical properties, smoke generation, and fire resistance were determined for the candidate panels.
Non-animal methods to predict skin sensitization (I): the Cosmetics Europe database.
Hoffmann, Sebastian; Kleinstreuer, Nicole; Alépée, Nathalie; Allen, David; Api, Anne Marie; Ashikaga, Takao; Clouet, Elodie; Cluzel, Magalie; Desprez, Bertrand; Gellatly, Nichola; Goebel, Carsten; Kern, Petra S; Klaric, Martina; Kühnl, Jochen; Lalko, Jon F; Martinozzi-Teissier, Silvia; Mewes, Karsten; Miyazawa, Masaaki; Parakhia, Rahul; van Vliet, Erwin; Zang, Qingda; Petersohn, Dirk
2018-05-01
Cosmetics Europe, the European Trade Association for the cosmetics and personal care industry, is conducting a multi-phase program to develop regulatory-accepted, animal-free testing strategies enabling the cosmetics industry to conduct safety assessments. Based on a systematic evaluation of test methods for skin sensitization, five non-animal test methods (DPRA (Direct Peptide Reactivity Assay), KeratinoSens™, h-CLAT (human cell line activation test), U-SENS™, SENS-IS) were selected for inclusion in a comprehensive database of 128 substances. Existing data were compiled and completed with newly generated data, the latter amounting to one-third of all data. The database was complemented with human and local lymph node assay (LLNA) reference data, physicochemical properties and use categories, and thoroughly curated. Although focused on the availability of human data, the substance selection nevertheless resulted in a high diversity of chemistries in terms of physico-chemical property ranges and use categories. Predictivities of skin sensitization potential and potency, where applicable, were calculated for the LLNA as compared to human data and for the individual test methods compared to both human and LLNA reference data. In addition, various aspects of applicability of the test methods were analyzed. Due to its high level of curation, comprehensiveness, and completeness, we propose our database as a point of reference for the evaluation and development of testing strategies, as done for example in the associated work of Kleinstreuer et al. We encourage the community to use it to meet the challenge of conducting skin sensitization safety assessment without generating new animal data.
Test Scheduling for Core-Based SOCs Using Genetic Algorithm Based Heuristic Approach
NASA Astrophysics Data System (ADS)
Giri, Chandan; Sarkar, Soumojit; Chattopadhyay, Santanu
This paper presents a Genetic Algorithm (GA)-based solution to co-optimize test scheduling and wrapper design for core-based SOCs. Core testing solutions are generated as a set of wrapper configurations, represented as rectangles with width equal to the number of TAM (Test Access Mechanism) channels and height equal to the corresponding testing time. A locally optimal best-fit bin-packing heuristic is used to determine the placement of rectangles, minimizing the overall test time, while the GA is used to generate the sequence of rectangles to be considered for placement. Experimental results on ITC'02 benchmark SOCs show that the proposed method provides better solutions than recent works reported in the literature.
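The placement step can be illustrated with a small sketch: each core test is a rectangle (width = TAM channels, height = test time), and a best-fit rule picks the contiguous channel window that becomes free earliest. The GA would supply the ordering of the rectangles; here a fixed order and invented figures stand in for it.

TOTAL_TAM = 16

# (core, tam_width, test_time) -- one wrapper configuration per core.
rects = [("c1", 8, 120.0), ("c2", 4, 300.0), ("c3", 8, 90.0),
         ("c4", 16, 50.0), ("c5", 4, 260.0)]

free_at = [0.0] * TOTAL_TAM   # time at which each TAM channel becomes free

def place(width):
    """Best-fit: contiguous channel window whose start time is earliest."""
    best_start, best_t = 0, float("inf")
    for s in range(TOTAL_TAM - width + 1):
        t = max(free_at[s:s + width])   # window free only when all channels are
        if t < best_t:
            best_start, best_t = s, t
    return best_start, best_t

for core, w, h in rects:              # order would come from the GA chromosome
    s, t0 = place(w)
    for ch in range(s, s + w):
        free_at[ch] = t0 + h
    print(f"{core}: channels {s}..{s + w - 1}, start {t0}, end {t0 + h}")

print("makespan:", max(free_at))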
Investigation of thermolytic hydrogen generation rate of tank farm simulated and actual waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martino, C.; Newell, D.; Woodham, W.
To support resolution of Potential Inadequacies in the Safety Analysis for the Savannah River Site (SRS) Tank Farm, Savannah River National Laboratory conducted research to determine the thermolytic hydrogen generation rate (HGR) with simulated and actual waste. Gas chromatography methods were developed and used with air-purged flow systems to quantify hydrogen generation from heated simulated and actual waste at rates applicable to the Tank Farm Documented Safety Analysis (DSA). Initial simulant tests with a simple salt solution plus sodium glycolate demonstrated the behavior of the test apparatus by replicating known HGR kinetics. Additional simulant tests with the simple salt solution excluding organics, apart from contaminants, provided measurement of the detection and quantification limits of the apparatus with respect to hydrogen generation. Testing included a measurement of HGR on actual SRS tank waste from Tank 38. A final series of measurements examined HGR for a simulant with the most common SRS Tank Farm organics at temperatures up to 140 °C. The following conclusions result from this testing.
3D Face Modeling Using the Multi-Deformable Method
Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun
2012-01-01
In this paper, we focus on the accuracy of 3D face modeling techniques that use corresponding features in multiple views, which are quite sensitive to feature extraction errors. To address the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method not only shows highly accurate 3D face shape results when compared with the ground truth, but is also robust to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. Using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976
Study and Application of Acoustic Emission Testing in Fault Diagnosis of Low-Speed Heavy-Duty Gears
Gao, Lixin; Zai, Fenlou; Su, Shanbin; Wang, Huaqing; Chen, Peng; Liu, Limei
2011-01-01
Most existing studies on the acoustic emission signals of rotating machinery are experiment-oriented, and few involve on-site applications. In this study, a method of redundant second-generation wavelet transform based on the principle of interpolated subdivision was developed. With this method, subdivision is not needed during decomposition. The lengths of the approximation and detail signals are the same as those of the original signals, so the data volume is twice that of the original signals; this data redundancy also underpins the good analysis performance of the method. The analysis of acoustic emission data from faults of on-site low-speed heavy-duty gears validated the redundant second-generation wavelet transform for the processing and denoising of acoustic emission signals. Furthermore, the analysis illustrated that acoustic emission testing can be used in the fault diagnosis of on-site low-speed heavy-duty gears and can be a significant supplement to vibration-based diagnosis. PMID:22346592
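For illustration, the following is a minimal sketch of one level of a redundant (undecimated) lifting transform with interpolated-subdivision prediction. The (2,2) predict/update weights are a common choice and an assumption here, not taken from the paper.

import numpy as np

def redundant_lifting_level(x, level):
    s = 2 ** level                     # neighbor spacing grows with level (a trous)
    xp = np.pad(x, s, mode="reflect")  # symmetric extension at the borders
    n = len(x)
    idx = np.arange(n) + s
    # Predict: detail = sample minus interpolation of its two neighbors.
    detail = xp[idx] - 0.5 * (xp[idx - s] + xp[idx + s])
    dp = np.pad(detail, s, mode="reflect")
    # Update: approximation preserves the local running average.
    approx = xp[idx] + 0.25 * (dp[idx - s] + dp[idx + s])
    return approx, detail              # both the same length as x -> redundant

x = np.sin(np.linspace(0, 6 * np.pi, 256)) + 0.1 * np.random.randn(256)
a1, d1 = redundant_lifting_level(x, level=0)
a2, d2 = redundant_lifting_level(a1, level=1)  # next level reuses the approximation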
A Statistical Method to Distinguish Functional Brain Networks
Fujita, André; Vidal, Maciel C.; Takahashi, Daniel Y.
2017-01-01
One major problem in neuroscience is the comparison of functional brain networks of different populations, e.g., distinguishing the networks of controls and patients. Traditional algorithms are based on a search for isomorphism between networks, assuming that they are deterministic. However, biological networks present randomness that cannot be well modeled by those algorithms. For instance, functional brain networks of distinct subjects of the same population can differ due to individual characteristics. Moreover, networks of subjects from different populations can be generated by the same stochastic process. Thus, a better hypothesis is that networks are generated by random processes. In this case, subjects from the same group are samples from the same random process, whereas subjects from different groups are generated by distinct processes. Using this idea, we developed a statistical test called ANOGVA to test whether two or more populations of graphs are generated by the same random graph model. Our simulation results demonstrate that we can precisely control the rate of false positives and that the test is powerful in discriminating random graphs generated by different models and parameters. The method also proved robust to unbalanced data. As an example, we applied ANOGVA to an fMRI dataset composed of controls and patients diagnosed with autism or Asperger syndrome. ANOGVA identified the cerebellar functional sub-network as statistically different between controls and autism (p < 0.001). PMID:28261045
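The following sketch conveys the flavor of such a test: two populations of graphs are compared through their eigenvalue (spectral) distributions, and the statistic is calibrated by permuting group labels. It illustrates the general approach only and is not the exact ANOGVA statistic.

import numpy as np

def spectral_density(adj, bins):
    eig = np.linalg.eigvalsh(adj)
    hist, _ = np.histogram(eig, bins=bins, density=True)
    return hist + 1e-12                       # avoid zeros in the divergence

def group_distance(ga, gb, bins):
    pa = np.mean([spectral_density(g, bins) for g in ga], axis=0)
    pb = np.mean([spectral_density(g, bins) for g in gb], axis=0)
    m = 0.5 * (pa + pb)                       # Jensen-Shannon-style divergence
    kl = lambda p, q: np.sum(p * np.log(p / q))
    return 0.5 * kl(pa, m) + 0.5 * kl(pb, m)

def permutation_test(ga, gb, n_perm=499, seed=0):
    rng = np.random.default_rng(seed)
    bins = np.linspace(-1.5, 1.5, 30)
    observed = group_distance(ga, gb, bins)
    pooled, na, count = ga + gb, len(ga), 0
    for _ in range(n_perm):
        order = rng.permutation(len(pooled))
        pa = [pooled[i] for i in order[:na]]
        pb = [pooled[i] for i in order[na:]]
        if group_distance(pa, pb, bins) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)         # permutation p-value

def er_graph(n, p, rng):                      # normalized Erdos-Renyi adjacency
    a = np.triu((rng.random((n, n)) < p).astype(float), 1)
    return (a + a.T) / n

rng = np.random.default_rng(1)
ga = [er_graph(50, 0.2, rng) for _ in range(10)]
gb = [er_graph(50, 0.3, rng) for _ in range(10)]
print("p-value:", permutation_test(ga, gb))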
Systematic evaluation of non-animal test methods for skin sensitisation safety assessment.
Reisinger, Kerstin; Hoffmann, Sebastian; Alépée, Nathalie; Ashikaga, Takao; Barroso, Joao; Elcombe, Cliff; Gellatly, Nicola; Galbiati, Valentina; Gibbs, Susan; Groux, Hervé; Hibatallah, Jalila; Keller, Donald; Kern, Petra; Klaric, Martina; Kolle, Susanne; Kuehnl, Jochen; Lambrechts, Nathalie; Lindstedt, Malin; Millet, Marion; Martinozzi-Teissier, Silvia; Natsch, Andreas; Petersohn, Dirk; Pike, Ian; Sakaguchi, Hitoshi; Schepky, Andreas; Tailhardat, Magalie; Templier, Marie; van Vliet, Erwin; Maxwell, Gavin
2015-02-01
The need for non-animal data to assess the skin sensitisation properties of substances, especially cosmetics ingredients, has spawned the development of many in vitro methods. As it is widely believed that no single method can provide a solution, the Cosmetics Europe Skin Tolerance Task Force has defined a three-phase framework for the development of a non-animal testing strategy for skin sensitisation potency prediction. The results of the first phase – systematic evaluation of 16 test methods – are presented here. This evaluation involved the generation of data on a common set of ten substances in all methods and the systematic collation of information, including the level of standardisation, existing test data, potential for throughput, transferability and accessibility, in cooperation with the test method developers. A workshop was held with the test method developers to review the outcome of this evaluation and to discuss the results. The evaluation informed the prioritisation of test methods for the next phase of the non-animal testing strategy development framework. Ultimately, the testing strategy – combined with bioavailability and skin metabolism data and exposure considerations – is envisaged to allow the establishment of a data integration approach for skin sensitisation safety assessment of cosmetic ingredients.
Network Traffic Generator for Low-rate Small Network Equipment Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lanzisera, Steven
2013-05-28
An application that uses the Python low-level socket interface to pass network traffic between devices on the local side of a NAT router and the WAN side of the NAT router. The application is designed to generate traffic that complies with the Energy Star Small Network Equipment Test Method.
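A minimal sketch of this kind of generator, assuming a LAN-side sender pushing fixed-rate UDP traffic toward a WAN-side receiver; the address, rate, and packet size are illustrative, and the actual tool and the Energy Star test method define their own traffic profiles.

import socket
import time

WAN_HOST, WAN_PORT = "192.0.2.10", 5001   # documentation address (RFC 5737)
PACKET_BYTES = 512
PACKETS_PER_SEC = 100
DURATION_SEC = 10

payload = b"\x00" * PACKET_BYTES
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

interval = 1.0 / PACKETS_PER_SEC
deadline = time.monotonic() + DURATION_SEC
sent = 0
while time.monotonic() < deadline:
    sock.sendto(payload, (WAN_HOST, WAN_PORT))
    sent += 1
    time.sleep(interval)                  # coarse pacing; adequate for low rates

print(f"sent {sent} packets ({sent * PACKET_BYTES} bytes)")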
Automated Testcase Generation for Numerical Support Functions in Embedded Systems
NASA Technical Reports Server (NTRS)
Schumann, Johann; Schnieder, Stefan-Alexander
2014-01-01
We present a tool for the automatic generation of test stimuli for small numerical support functions, e.g., code for trigonometric functions, quaternions, filters, or table lookup. Our tool is based on KLEE to produce a set of test stimuli for full path coverage. We use a method of iterative deepening over abstractions to deal with floating-point values. During actual testing the stimuli exercise the code against a reference implementation. We illustrate our approach with results of experiments with low-level trigonometric functions, interpolation routines, and mathematical support functions from an open source UAS autopilot.
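The final step, exercising the code against a reference implementation, might look like the following sketch; the toy approximation, stimuli, and tolerance are invented here, whereas the described tool would derive the stimuli with KLEE for full path coverage.

import math

def sin_approx(x):
    """Toy 7th-order Taylor approximation, valid near 0."""
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

stimuli = [-1.0, -0.5, -0.1, 0.0, 0.1, 0.5, 1.0]   # would come from the tool
TOL = 1e-4

for x in stimuli:
    got, want = sin_approx(x), math.sin(x)         # reference implementation
    assert abs(got - want) <= TOL, f"mismatch at {x}: {got} vs {want}"
print("all stimuli passed against the reference implementation")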
A Model Based Security Testing Method for Protocol Implementation
Fu, Yu Long; Xin, Xiao Long
2014-01-01
The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases to verify the security of a protocol implementation. PMID:25105163
NASA Technical Reports Server (NTRS)
Olson, S. L.; Beeson, H. D.; Haas, J. P.; Baas, J. S.
2004-01-01
The objective of this research is to modify the well-instrumented standard cone configuration to provide a reproducible bench-scale test environment that simulates the buoyant or ventilation flow that would be generated by or around a burning surface in a spacecraft or extraterrestrial gravity level. We will then develop a standard test method with pass-fail criteria for future use in spacecraft materials flammability screening. (For example, dripping of molten material will be an automatic fail.)
Assessing Multiple Choice Question (MCQ) Tests--A Mathematical Perspective
ERIC Educational Resources Information Center
Scharf, Eric M.; Baldwin, Lynne P.
2007-01-01
The reasoning behind popular methods for analysing the raw data generated by multiple choice question (MCQ) tests is not always appreciated, occasionally with disastrous results. This article discusses and analyses three options for processing the raw data produced by MCQ tests. The article shows that one extreme option is not to penalize a…
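For readers unfamiliar with the options being analysed, the sketch below shows two standard scoring rules such raw data can feed: number-right scoring and formula scoring (negative marking), under which a wrong answer on a k-option item costs 1/(k-1). These are textbook rules, not necessarily the exact options discussed in the article.

def number_right(right, wrong, blank):
    # One extreme: no penalty at all for wrong answers.
    return right

def formula_score(right, wrong, blank, k=4):
    # Negative marking: a pure guesser's expected score is zero.
    return right - wrong / (k - 1)

# 40-item test, 4 options per item: 28 right, 8 wrong, 4 blank.
print(number_right(28, 8, 4))         # 28
print(formula_score(28, 8, 4, k=4))   # 25.33...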
DOT National Transportation Integrated Search
2009-03-21
This study investigates all of the generated soils data in an attempt to use the more 'routine' laboratory tests to determine geotechnical design parameters (such as phi angle, cohesion, wet unit weight, unconfined compression, consolidation character...
A novel orthoimage mosaic method using a weighted A∗ algorithm - Implementation and evaluation
NASA Astrophysics Data System (ADS)
Zheng, Maoteng; Xiong, Xiaodong; Zhu, Junfeng
2018-04-01
The implementation and evaluation of a weighted A∗ algorithm for orthoimage mosaicking with UAV (Unmanned Aircraft Vehicle) imagery are presented. The initial seam-line network is first generated by the standard Voronoi diagram algorithm; an edge diagram is generated from DSM (Digital Surface Model) data; the vertices (junction nodes of seam-lines) of the initial network are relocated if they lie on high objects (buildings, trees and other artificial structures); and the initial seam-lines are refined using the weighted A∗ algorithm based on the edge diagram and the relocated vertices. Our method was tested with three real UAV datasets. Two quantitative measures are introduced to evaluate the results of the proposed method. Preliminary results show that the method is suitable for regularly and irregularly aligned UAV images over most terrain types (flat or mountainous areas), and outperforms the state-of-the-art method in both quality and efficiency on the test datasets.
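A minimal sketch of weighted A∗ in the spirit of the seam-line refinement step is given below: cell costs would come from the edge diagram (high cost on building edges), and a weight w > 1 on the heuristic biases the search toward the goal. The grid and costs are illustrative.

import heapq

def weighted_astar(cost, start, goal, w=2.0):
    rows, cols = len(cost), len(cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    g = {start: 0.0}
    came = {}
    pq = [(w * h(start), start)]
    while pq:
        _, cur = heapq.heappop(pq)
        if cur == goal:                       # reconstruct the seam-line path
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                ng = g[cur] + cost[nxt[0]][nxt[1]]
                if ng < g.get(nxt, float("inf")):
                    g[nxt], came[nxt] = ng, cur
                    heapq.heappush(pq, (ng + w * h(nxt), nxt))
    return None

grid = [[1, 1, 9, 1],     # 9 = high edge-diagram cost (e.g., a building edge)
        [1, 9, 9, 1],
        [1, 1, 1, 1],
        [9, 9, 1, 1]]
print(weighted_astar(grid, (0, 0), (3, 3)))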
The design method of CGH for testing the Φ404, F2 primary mirror
NASA Astrophysics Data System (ADS)
Xie, Nian; Duan, Xueting; Li, Hua
2014-09-01
In order to accurately test the surface quality of large-diameter aspheric mirrors, a kind of binary optical element called a computer-generated hologram (CGH) is widely used. The primary role of the CGH is to generate any desired wavefront to realize phase compensation. In this paper, the CGH design principle and design process are reviewed first. Then an optical testing system for the aspheric mirror that includes a computer-generated hologram (CGH) and an imaging element (IE) is laid out, and a test system that includes only a CGH is proposed as well. The CGH is designed for measurement of an aspheric mirror (diameter = 404 mm, F-number = 2). Interferometric simulation of the test of the aspheric mirror shows that the whole test system achieves the required high accuracy. When the CGH is combined with an imaging element in the aspheric compensator, the smallest feature size required in the CGH decreases. The CGH can also be used to test freeform surfaces with high precision, which is of great significance for the development of freeform optics.
Image encryption using random sequence generated from generalized information domain
NASA Astrophysics Data System (ADS)
Xia-Yan, Zhang; Guo-Ji, Zhang; Xuan, Li; Ya-Zhou, Ren; Jie-Hua, Wu
2016-05-01
A novel image encryption method based on a random sequence generated from the generalized information domain and a permutation-diffusion architecture is proposed. The random sequence is generated by reconstruction from the generalized information file and discrete trajectory extraction from the data stream. The trajectory address sequence is used to generate a P-box to shuffle the plain image, while random sequences are treated as keystreams. A new factor called the drift factor is employed to accelerate and enhance the performance of the random sequence generator. An initial value is introduced to make the encryption method approximate a one-time pad. Experimental results show that the random sequences pass the NIST statistical tests at a high rate, and extensive analysis demonstrates that the new encryption scheme has superior security.
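A minimal sketch of the permutation-diffusion architecture follows: a keystream drives a P-box that shuffles pixel positions, and XOR chaining diffuses values so each cipher byte depends on its predecessor. The paper derives its sequences from the generalized information domain; a seeded numpy PRNG stands in for that source here.

import numpy as np

def encrypt(img, key_seed, iv):
    rng = np.random.default_rng(key_seed)     # stand-in for the paper's generator
    flat = img.flatten()
    perm = rng.permutation(flat.size)         # P-box from the random sequence
    shuffled = flat[perm]
    ks = rng.integers(0, 256, flat.size, dtype=np.uint8)   # keystream
    out = np.empty_like(shuffled)
    prev = np.uint8(iv)                       # initial value -> one-time-pad-like
    for i, v in enumerate(shuffled):          # diffusion: chain through the cipher
        prev = np.uint8(v ^ ks[i] ^ prev)
        out[i] = prev
    return out.reshape(img.shape)

def decrypt(ct, key_seed, iv):
    rng = np.random.default_rng(key_seed)     # regenerate the same sequences
    perm = rng.permutation(ct.size)
    ks = rng.integers(0, 256, ct.size, dtype=np.uint8)
    flat = ct.flatten()
    shuffled = np.empty_like(flat)
    prev = np.uint8(iv)
    for i, c in enumerate(flat):
        shuffled[i] = c ^ ks[i] ^ prev
        prev = c
    out = np.empty_like(flat)
    out[perm] = shuffled                      # invert the P-box
    return out.reshape(ct.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
ct = encrypt(img, key_seed=42, iv=7)
assert np.array_equal(decrypt(ct, key_seed=42, iv=7), img)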
Nitric oxide therapies for local inhibition of platelet activation on blood-contacting surfaces
NASA Astrophysics Data System (ADS)
Amoako, Kagya Agyeman
Blood-contacting devices interact with blood during their function much like the endothelium that modulates hemostasis. The surfaces of these devices, however, lack endothelial-like properties and, consequently, upon blood contact activate clotting factors to form clots. Systemic heparinization for inhibiting clot formation can cause bleeding, and surface coatings show insignificant benefits. This research investigated nitric oxide (NO) production mimicry of the endothelium on artificial lungs (ALs) and pediatric catheters. Their surfaces were functionalized either by (1) entrapping NO donors inside their bulk, (2) incorporating catalysts to generate NO from NO donors, or (3) supplementing NO into the sweep gas of artificial lungs. Pediatric catheters functionalized with NO-donor thin coats using method 1 are limited by short NO release duration. Method 2 had not been applied to large-surface-area, low-flow devices like the AL. In this work, NO-generating silicone membranes were synthesized and characterized to determine the relationship between surface properties, NO flux, and blood clotting time. These outcomes helped develop and optimize NO-generating gas-exchange silicone fibers that represent the majority of the AL's surface area. The first NO-generating AL prototypes using those fibers were manufactured, incorporated into NO-generating circuits, and tested for their non-thrombogenicity. To test for NO-release duration and non-thrombogenicity, catheters were fabricated to incorporate NO donors inside their walls, characterized for NO flux and release duration by chemiluminescence, and tested for patency using a thrombogenicity model in rabbits. Methods 1-2 involve material modification using complicated and expensive chemical formulations and/or manufacturing. Method 3, however, functionalizes ALs by simply adding NO to the sweep gas. A decade of anti-clotting testing using a wide range of NO concentrations has been conducted without knowledge of what concentration yields endothelial NO flux levels in the AL. This concentration was determined for the MC3 Biolung and the Terumo Capiox RX25 ALs in vitro. All these ideas have shown positive results in short-term studies, and each may play a necessary role in inhibiting clot formation in future ALs. The sufficiency, however, of each idea or of a combination for clot inhibition in long-term ALs remains to be determined.
NASA Astrophysics Data System (ADS)
Kitko, Jennifer V.
2011-12-01
Nursing educators face the challenge of meeting the needs of a multi-generational classroom. The reality of having members of the Veteran and Baby Boomer generations in a classroom with Generation X and Y students creates an immediate need for faculty to examine students' teaching-method preferences as well as their own use of teaching methods. Most importantly, faculty must facilitate an effective multi-generational learning environment. Research has shown that the generation to which a person belongs is likely to affect the ways in which he/she learns (Hammill, 2005). Characterized by its own attitudes, behaviors, beliefs, and motivational needs, each generation also has distinct educational expectations. It is imperative, therefore, that nurse educators be aware of these differences and develop skills through which to communicate with the different generations, thereby reducing teaching/learning problems in the classroom. This is a quantitative, descriptive study that compared the teaching methods preferred by different generations of associate degree nursing students with the teaching methods that the instructors actually use. The research study included 289 participants: 244 nursing students and 45 nursing faculty from four nursing departments in colleges in Pennsylvania. Overall, the study produced many statistically significant findings. The ANOVA test revealed eight statistically significant differences among Generation Y, Generation X, and Baby Boomers. The preferred teaching methods included: lecture, self-directed learning, web-based course with no class meetings, importance of faculty knowing my name, classroom structure, knowing why I am learning what I am learning, learning for the sake of learning, and grade is all that matters. Lecture was found to be the most frequently used teaching method by faculty as well as the most preferred teaching method of students. Support for a variety of teaching methods was also found in the analysis of the data.
A New Generation of Leaching Tests – The Leaching Environmental Assessment Framework
Provides an overview of newly released leaching tests that provide a more accurate source term when estimating environmental release of metals and other constituents of potential concern (COPCs). The Leaching Environmental Assessment Framework (LEAF) methods have been (1) develo...
Communicating and Translating EPA's Computational Toxicology Research (WC10)
US EPA’s National Center for Computational Toxicology (NCCT) develops and uses alternative testing methods to accelerate the pace of chemical evaluations, reduce reliance on animal testing, and address the significant lack of chemical data. The chemical data is generated through ...
NEXT GENERATION SEDIMENT TOXICITY TESTING VIA DNA MICROARRAYS - PHASE I
The current SBIR solicitation states that the EPA is seeking “better sampling, analysis, and monitoring technologies” to improve hazardous waste management. Development of new methods for testing contaminated sediments is an area of particular concern because many industri...
METHODS FOR THE SPIRAL SALMONELLA MUTAGENICITY ASSAY INCLUDING SPECIALIZED APPLICATIONS
ABSTRACT
An automated approach to bacterial mutagenicity testing--the spiral Salmonella assay--was developed to simplify testing and to reduce the labor and materials required to generate dose-responsive mutagenicity information. This document provides the reader with an ...
Infrared thermal integrity testing quality assurance test method to detect drilled shaft defects.
DOT National Transportation Integrated Search
2011-06-01
Thermal integrity profiling uses the measured temperature generated in curing concrete to assess the quality of cast in place concrete foundations (i.e. drilled shafts or ACIP piles) which can include effective shaft size (diameter and length), anoma...
NASA Technical Reports Server (NTRS)
Vali, G.; Rogers, D.; Gordon, G.; Saunders, C. P. R.; Reischel, M.; Black, R.
1978-01-01
Tasks performed in the development of an ice nucleus generator which, within the facility concept of the ACPL, would provide a test aerosol suitable for a large number and variety of potential experiments are described. The impact of Atmospheric Cloud Physics Laboratory scientific functional requirements on ice nuclei generation and characterization subsystems was established. Potential aerosol generating systems were evaluated with special emphasis on reliability, repeatability and general suitability for application in Spacelab. Possible contamination problems associated with aerosol generation techniques were examined. The ice nucleating abilities of candidate test aerosols were examined and the possible impact of impurities on the nucleating abilities of those aerosols were assessed as well as the relative merits of various methods of aerosol size and number density measurements.
Development of a high-efficiency motor/generator for flywheel energy storage
NASA Technical Reports Server (NTRS)
Lashley, Christopher; Anand, Dave K.; Kirk, James A.; Zmood, Ronald B.
1991-01-01
This study addresses the design changes and extensions necessary to construct and test a working prototype of a motor/generator for a magnetically suspended flywheel energy storage system. The brushless motor controller for the motor was specified and the electronic commutation arrangement designed. The laminations were redesigned and fabricated using laser machining. Flux density measurements were made and the results used to redesign the armature windings. A test rig was designed and built, and the motor/generator was installed and speed tested to 9000 rpm. Experimental methods of obtaining the machine voltage and torque constants Kv and Kt, obtaining the useful air-gap flux density, and characterizing the motor and other system components are described. The measured Kv and Kt were approximately 40 percent greater than predicted by theory and initial experiment.
NASA Technical Reports Server (NTRS)
Aoyagi, Kiyoshi; Olson, Lawrence E.; Peterson, Randall L.; Yamauchi, Gloria K.; Ross, James C.; Norman, Thomas R.
1987-01-01
Time-averaged aerodynamic loads are estimated for each of the vane sets in the National Full-Scale Aerodynamic Complex (NFAC). The methods used to compute global and local loads are presented. Experimental inputs used to calculate these loads are based primarily on data obtained from tests conducted in the NFAC 1/10-Scale Vane-Set Test Facility and from tests conducted in the NFAC 1/50-Scale Facility. For those vane sets located directly downstream of either the 40- by 80-ft test section or the 80- by 120-ft test section, aerodynamic loads caused by the impingement of model-generated wake vortices and model-generated jet and propeller wakes are also estimated.
Using Apex To Construct CPM-GOMS Models
NASA Technical Reports Server (NTRS)
John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger
2006-01-01
A process for automatically generating computational models of human/computer interactions, as well as graphical and textual representations of the models, has been built on the conceptual foundation of a method known in the art as CPM-GOMS. This method is so named because it combines (1) the task decomposition of an analysis according to an underlying method known in the art as the goals, operators, methods, and selection (GOMS) method with (2) a model of human resource usage at the level of cognitive, perceptual, and motor (CPM) operations. CPM-GOMS models have made accurate predictions about the behaviors of skilled computer users in routine tasks, but heretofore such models have been generated in a tedious, error-prone manual process. In the present process, CPM-GOMS models are generated automatically from a hierarchical task decomposition expressed by use of a computer program, known as Apex, designed previously to model human behavior in complex, dynamic tasks. An inherent capability of Apex for the scheduling of resources automates the difficult task of interleaving the cognitive, perceptual, and motor resources that underlie common task operators (e.g., move and click mouse). The user interface of Apex automatically generates Program Evaluation Review Technique (PERT) charts, which enable modelers to visualize the complex parallel behavior represented by a model. Because interleaving and the generation of displays to aid visualization are automated, it is now feasible to construct arbitrarily long sequences of behaviors. The process was tested by using Apex to create a CPM-GOMS model of a relatively simple human/computer-interaction task and comparing the time predictions of the model with measurements of the times taken by human users in performing the various steps of the task. The task was to withdraw $80 in cash from an automated teller machine (ATM). For the test, a Visual Basic mockup of an ATM was created, with provision for input from (and measurement of the performance of) the user via a mouse. The times predicted by the automatically generated model turned out to approximate the measured times fairly well (see figure). While these results are promising, there is a need for further development of the process. Moreover, it will also be necessary to test other, more complex models: the actions required of the user in the ATM task are too sequential to involve substantial parallelism and interleaving and, hence, do not serve as an adequate test of the unique strength of CPM-GOMS models to accommodate parallelism and interleaving.
NASA Astrophysics Data System (ADS)
Kim, Duk-hyun; Lee, Hyoung-Jin
2018-04-01
A study of an efficient aerodynamic database modeling method was conducted. The creation of a database using the periodicity and symmetry characteristics of missile aerodynamic coefficients was investigated to minimize the number of wind tunnel test cases. In addition, how to generate the aerodynamic database when the periodicity changes due to the installation of a protuberance, and how to conduct zero calibration, were studied. Depending on the missile configuration, the required number of test cases changes, and some tests can be omitted. A database of aerodynamic coefficients over control-surface deflection angles can be constructed using phase shift. The validity of the modeling method was demonstrated by confirming that aerodynamic coefficients calculated with it agreed with wind tunnel test results.
voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.
Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K
2014-02-03
New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.
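A rough gene-level sketch of the idea, assuming numpy and statsmodels: compute log-CPM, fit a lowess trend of sqrt-standard-deviation against mean log-count, and convert the fitted trend into inverse-variance precision weights. The published method assigns observation-level weights inside the limma pipeline; this simplification is for illustration only.

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Toy count matrix: 500 genes x 6 samples.
counts = np.random.default_rng(0).negative_binomial(5, 0.3, size=(500, 6))

lib_size = counts.sum(axis=0)
log_cpm = np.log2((counts + 0.5) / (lib_size + 1.0) * 1e6)

mean_log = log_cpm.mean(axis=1)
sqrt_sd = np.sqrt(log_cpm.std(axis=1, ddof=1))

# Mean-variance trend: sqrt-sd as a smooth function of mean log-CPM.
trend = lowess(sqrt_sd, mean_log, frac=0.5, return_sorted=True)
fitted = np.interp(mean_log, trend[:, 0], trend[:, 1])

weights = fitted ** -4        # precision weight ~ 1 / predicted variance
print(weights[:5])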
Fast solver for large scale eddy current non-destructive evaluation problems
NASA Astrophysics Data System (ADS)
Lei, Naiguang
Eddy current testing plays a very important role in the non-destructive evaluation of conducting test samples. Based on Faraday's law, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically conducting test specimen. The eddy currents generate induced magnetic fields that oppose the direction of the inducing magnetic field in accordance with Lenz's law. In the presence of discontinuities in material properties or defects in the test specimen, the induced eddy current paths are perturbed, and the associated magnetic fields can be detected by coils or magnetic field sensors, such as Hall elements or magneto-resistance sensors. Due to the complexity of test specimens and inspection environments, the availability of theoretical simulation models is extremely valuable for studying the basic field/flaw interactions in order to obtain a fuller understanding of non-destructive testing phenomena. Theoretical models of the forward problem are also useful for the training and validation of automated defect detection systems, as they generate defect signatures that are expensive to replicate experimentally. In general, modeling methods can be classified into two categories: analytical and numerical. Although analytical approaches offer closed-form solutions, these are generally not obtainable, largely due to complex sample and defect geometries, especially in three-dimensional space. Numerical modeling has become popular with advances in computer technology and computational methods. However, due to the huge time consumption in the case of large-scale problems, accelerations/fast solvers are needed to enhance numerical models. This dissertation describes a numerical simulation model for eddy current problems using finite element analysis. The accuracy of this model is validated via comparison with experimental measurements of steam generator tube wall defects. These simulations, which generate two-dimensional raster-scan data, typically take one to two days on a dedicated eight-core PC. A novel direct integral solver for eddy current problems and a GPU-based implementation are also investigated in this research to reduce the computational time.
NASA Astrophysics Data System (ADS)
Roadman, Jason Markos
Modern technology operating in the atmospheric boundary layer can always benefit from more accurate wind tunnel testing. While scaled atmospheric boundary layer tunnels are well developed, tunnels replicating portions of the atmospheric boundary layer turbulence at full scale are a comparatively new concept. Testing at full-scale Reynolds numbers with full-scale turbulence in an "atmospheric wind tunnel" is sought. Many programs could utilize such a tool, including Micro Aerial Vehicle (MAV) development, the wind energy industry, fuel-efficient vehicle design, and the study of bird and insect flight, to name just a few. The small scale of MAVs provides the somewhat unique capability of full-scale Reynolds number testing in a wind tunnel. However, that same small scale creates interactions under real-world flight conditions, atmospheric gusts for example, that lead to a need for testing under more complex flows than the standard uniform flow found in most wind tunnels. It is for these reasons that MAVs are used as the initial testing application for the atmospheric gust tunnel. An analytical model for both discrete gusts and a continuous spectrum of gusts is examined. Then, methods for generating gusts in agreement with that model are investigated. Previously used methods are reviewed and a gust generation apparatus is designed. Expected turbulence and gust characteristics of this apparatus are compared with atmospheric data. The construction of an active "gust generator" for a new atmospheric tunnel is reviewed, and the turbulence it generates is measured using single and cross hot wires. Results from this grid are compared to atmospheric turbulence, and it is shown that various gust strengths can be produced, corresponding to weather ranging from calm to quite gusty. An initial test is performed in the atmospheric wind tunnel whereby the effects of various turbulence conditions on transition and separation on the upper surface of a MAV wing are investigated using the surface oil flow visualization technique.
Comparison of two methods to determine fan performance curves using computational fluid dynamics
NASA Astrophysics Data System (ADS)
Onma, Patinya; Chantrasmi, Tonkid
2018-01-01
This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain the performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The performance curves generated by both methods are compared with those from an experimental setup conducted in accordance with the AMCA fan performance testing standard.
A Three-Stage Enhanced Reactive Power and Voltage Optimization Method for High Penetration of Solar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ke, Xinda; Huang, Renke; Vallem, Mallikarjuna R.
This paper presents a three-stage enhanced volt/var optimization method to stabilize voltage fluctuations in transmission networks by optimizing the usage of reactive power control devices. In contrast with existing volt/var optimization algorithms, the proposed method optimizes the voltage profiles of the system while keeping the voltage and real power output of the generators as close to the original scheduling values as possible. This allows the method to accommodate realistic power system operation and market scenarios, in which the original generation dispatch schedule will not be affected. The proposed method was tested and validated on a modified IEEE 118-bus system with photovoltaic data.
Development of a Novel Quantitative Adverse Outcome Pathway Predictive Model for Lung Cancer
Traditional methods for carcinogenicity testing are resource-intensive, retrospective, and time consuming. An increasing testing burden has generated interest in the adverse outcome pathway (AOP) concept as a tool to evaluate chemical safety in a more efficient, rapid and effecti...
Generating virtual training samples for sparse representation of face images and face recognition
NASA Astrophysics Data System (ADS)
Du, Yong; Wang, Yu
2016-03-01
There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illumination, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot sufficiently convey these possible changes in the training phase, and this has become one of the main obstacles to improving face recognition accuracy. In this article, we treat the multiplication of two face images as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples genuinely reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress noise, so more of the essential facial information is retained. Uncertainty in the training data is also reduced as the number of training samples increases, which benefits the training phase. The devised representation-based classifier uses both the original and the newly generated samples to perform classification. In the classification phase, we first determine the K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and the training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
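A minimal sketch of the two stages, with invented data and parameters: pairwise products within a subject serve as virtual samples, and a test image is classified by a least-squares representation over its K nearest training samples.

import numpy as np

def virtual_samples(imgs):
    """All pairwise products within one subject, rescaled to [0, 1]."""
    out = []
    for i in range(len(imgs)):
        for j in range(i + 1, len(imgs)):
            v = imgs[i] * imgs[j]
            out.append(v / (v.max() + 1e-12))
    return out

def classify(test, train_x, train_y, k=10):
    d = np.linalg.norm(train_x - test, axis=1)     # Euclidean distances
    nn = np.argsort(d)[:k]                         # K nearest training samples
    X = train_x[nn].T                              # columns = selected samples
    coef, *_ = np.linalg.lstsq(X, test, rcond=None)
    best, best_err = None, float("inf")
    for label in set(train_y[nn]):
        mask = train_y[nn] == label
        recon = X[:, mask] @ coef[mask]            # class-wise reconstruction
        err = np.linalg.norm(test - recon)
        if err < best_err:
            best, best_err = label, err
    return best

# Toy usage with random "images": 3 subjects x 3 images of 16x16 pixels.
rng = np.random.default_rng(0)
base = rng.random((3, 256))
X, y = [], []
for s in range(3):
    ims = [np.clip(base[s] + 0.05 * rng.standard_normal(256), 0, 1)
           for _ in range(3)]
    for im in ims + virtual_samples(ims):          # originals plus virtual samples
        X.append(im); y.append(s)
X, y = np.array(X), np.array(y)
probe = np.clip(base[1] + 0.05 * rng.standard_normal(256), 0, 1)
print("predicted subject:", classify(probe, X, y, k=5))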
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abrams, W.T.; Cope, A.W.; Orsulak, R.M.
The overall objective of Task 1 was to demonstrate an effective method for removing tenacious corrosion products in a pressurized water reactor steam generator and thus significantly reduce radiation exposure during subsequent maintenance activities. Various decontamination methods were evaluated, and a multistep, low-concentration chemical process originated by Kraftwerk Union A.G. (KWU) of the Federal Republic of Germany was selected. The process was further developed and tested by C-E and KWU in West Germany and at C-E's facilities in Windsor, Connecticut. C-E designed, fabricated and tested a portable system to apply the process at Millstone Point II. The decontamination of the primary channel heads of the two Millstone steam generators was performed by C-E and NUSCO during the 1983 refueling shutdown of the Millstone Point II plant. Results of the decontamination were very satisfactory. NUSCO determined that a net savings of 3660 man-rem of personnel exposure was realized during the decontamination demonstration and the subsequent maintenance work on the steam generators.
Application of fiber spectrometers for etch depth measurement of binary computer-generated holograms
NASA Astrophysics Data System (ADS)
Korolkov, V. P.; Konchenko, A. S.; Poleshchuk, A. G.
2013-01-01
A novel spectrophotometric method for measuring the depth of computer-generated holograms is presented. It is based on the spectral properties of binary phase multi-order gratings: the intensity of the zero order is a periodic function of the illumination wavenumber, and the groove depth can be calculated from its inverse proportionality to that period. Measurement in reflection doubles the effective phase depth of the grooves and allows shallow phase gratings to be measured more precisely. Binary diffractive structures with depths from several hundred to thousands of nanometers can be measured by the method. Measurement uncertainty is mainly determined by the following parameters: shifts of the spectral maxima that occur due to tilted groove sidewalls, uncertainty in the light incidence angle, and the spectrophotometer wavelength error. It is shown theoretically and experimentally that the method can ensure a 0.25-1% error for desktop spectrophotometers. However, fiber spectrometers are more convenient for building a real measurement system with scanning measurement of large-area computer-generated holograms, which are used for the optical testing of aspheric optics. Diffractive Fizeau null lenses in particular need to be carefully tested for uniformity of etch depth. An experimental system for the characterization of binary computer-generated holograms was developed using the spectrophotometric unit of the confocal sensor CHR-150 (STIL SA).
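A small sketch of the depth-from-spectrum relation, assuming a 50% duty-cycle binary grating measured in reflection at normal incidence: the zero-order intensity oscillates in wavenumber with period delta_nu = 1/(2d), so the groove depth follows from the spacing of successive minima. The simulated spectrum is illustrative.

import numpy as np

d_true = 800e-9                       # groove depth, 800 nm
nu = np.linspace(1.0 / 900e-9, 1.0 / 400e-9, 2000)   # wavenumber grid (1/m)

# Zero-order intensity of a 50% duty-cycle binary grating in reflection:
# I0 ~ cos^2(phase / 2) with phase = 4 * pi * d * nu (reflection doubles the path).
i0 = np.cos(2 * np.pi * d_true * nu) ** 2

# Estimate the oscillation period from successive minima of the spectrum.
minima = nu[1:-1][(i0[1:-1] < i0[:-2]) & (i0[1:-1] < i0[2:])]
delta_nu = np.mean(np.diff(minima))
d_est = 1.0 / (2.0 * delta_nu)
print(f"estimated depth: {d_est * 1e9:.1f} nm")   # ~800 nm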
A prevalence-based association test for case-control studies.
Ryckman, Kelli K; Jiang, Lan; Li, Chun; Bartlett, Jacquelaine; Haines, Jonathan L; Williams, Scott M
2008-11-01
Genetic association is often determined in case-control studies by the differential distribution of alleles or genotypes. Recent work has demonstrated that association can also be assessed by deviations from the expected distributions of alleles or genotypes. Specifically, multiple methods motivated by the principles of Hardy-Weinberg equilibrium (HWE) have been developed. However, these methods do not take into account many of the assumptions of HWE. Therefore, we have developed a prevalence-based association test (PRAT) as an alternative method for detecting association in case-control studies. This method, also motivated by the principles of HWE, uses an estimated population allele frequency to generate expected genotype frequencies instead of using the case and control frequencies separately. Our method often has greater power, under a wide variety of genetic models, to detect association than genotypic, allelic or Cochran-Armitage trend association tests. Therefore, we propose PRAT as a powerful alternative method of testing for association.
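A hedged sketch of the prevalence-based idea: estimate the population allele frequency, derive HWE-expected genotype counts, and compare observed case genotype counts to them with a chi-square statistic. This illustrates the principle only; it is not the exact PRAT statistic.

from scipy.stats import chi2

def hwe_expected(p, n):
    """Expected (AA, Aa, aa) counts for allele frequency p in n subjects."""
    return [n * p * p, n * 2 * p * (1 - p), n * (1 - p) * (1 - p)]

def prevalence_style_test(case_counts, pop_allele_freq):
    n = sum(case_counts)
    exp = hwe_expected(pop_allele_freq, n)
    stat = sum((o - e) ** 2 / e for o, e in zip(case_counts, exp))
    return stat, chi2.sf(stat, df=2)      # 3 genotype categories -> 2 df

# Cases: (AA, Aa, aa) = (30, 90, 80); estimated population frequency of A = 0.30.
stat, p = prevalence_style_test([30, 90, 80], 0.30)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")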
Summary of CPAS Gen II Parachute Analysis
NASA Technical Reports Server (NTRS)
Morris, Aaron L.; Bledsoe, Kristin J.; Fraire, Usbaldo, Jr.; Moore, James W.; Olson, Leah M.; Ray, Eric
2011-01-01
The Orion spacecraft is currently under development by NASA and Lockheed Martin. Like Apollo, Orion will use a series of parachutes to slow its descent and splash down safely. The Orion parachute system, known as the CEV Parachute Assembly System (CPAS), is being designed by NASA, the Engineering and Science Contract Group (ESCG), and Airborne Systems. The first generation (Gen I) of CPAS testing consisted of thirteen tests and was executed in the 2007-2008 timeframe. The Gen I tests provided an initial understanding of the CPAS parachutes. Knowledge gained from Gen I testing was used to plan the second generation of testing (Gen II). Gen II consisted of six tests: three single-parachute tests, designated as Main Development Tests, and three Cluster Development Tests. Gen II required a more thorough investigation of parachute performance than Gen I. Higher-fidelity instrumentation, enhanced analysis methods and tools, and advanced test techniques were developed. The results of the Gen II test series are being incorporated into the CPAS design. Further testing and refinement of the design and of the model of parachute performance will occur during the upcoming third generation of testing (Gen III). This paper provides an overview of the developments in CPAS analysis following the end of Gen I, including descriptions of new tools and techniques as well as overviews of the Gen II tests.
Assessment of microwave-based clinical waste decontamination unit.
Hoffman, P N; Hanley, M J
1994-12-01
A clinical waste decontamination unit that used microwave-generated heat was assessed for operator safety and efficacy. Tests with loads artificially contaminated with aerosol-forming particles showed that no particles were detected outside the machine, provided the seals and covers were correctly seated. Thermometric measurement of a self-generated steam decontamination cycle was used to determine the parameters needed to ensure heat disinfection of the waste reception hopper prior to entry for maintenance or repair. Bacterial and thermometric test pieces were passed through the machine within a full load of clinical waste. These test pieces, designed to represent a worst-case situation, were enclosed in aluminium foil to shield them from direct microwave energy. None of the 100 bacterial test pieces yielded growth on culture, and all 100 thermal test pieces achieved temperatures in excess of 99 degrees C during their passage through the decontamination unit. It was concluded that this method may be used to render safe the bulk of ward-generated clinical waste.
Generating quality word sense disambiguation test sets based on MeSH indexing.
Fan, Jung-Wei; Friedman, Carol
2009-11-14
Word sense disambiguation (WSD) determines the correct meaning of a word that has more than one meaning, and is a critical step in biomedical natural language processing, as interpretation of information in text can be correct only if the meanings of its component terms are correctly identified first. Quality evaluation sets are important to WSD because they can be used as representative samples for developing automatic programs and as referees for comparing different WSD programs. To help create quality test sets for WSD, we developed a MeSH-based automatic sense-tagging method that preferentially annotates terms that are topical to the text. Preliminary results were promising and revealed important issues to be addressed in biomedical WSD research. We also suggest that, by cross-validating with 2 or 3 annotators, the method should be able to efficiently generate quality WSD test sets. An online supplement is available at: http://www.dbmi.columbia.edu/~juf7002/AMIA09.
We tested two methods for dataset generation and model construction, and three tree-classifier variants to identify the most parsimonious and thematically accurate mapping methodology for the SW ReGAP project. Competing methodologies were tested in the East Great Basin mapping un...
NASA Technical Reports Server (NTRS)
Bridges, P. G.; Cross, E. J., Jr.; Boatwright, D. W.
1977-01-01
The overall drag of the aircraft is expressed in terms of the measured increment of power required to overcome a corresponding known increment of drag, which is generated by a towed drogue. The simplest form of the governing equation, D = ΔD · SHP/ΔSHP, is such that all of the parameters on the right side of the equation can be measured in flight. An evaluation of the governing equations has been performed using data generated by flight test of a Beechcraft T-34B. The simplicity of this technique and its proven applicability to sailplanes and small aircraft is well known. However, the method fails to account for airframe-propulsion system interference.
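Set in display form, the governing relation reads (symbols as in the abstract: D is the total aircraft drag, ΔD the known drag increment of the towed drogue, SHP the shaft horsepower required, and ΔSHP its measured increment):

    D = \Delta D \cdot \frac{\mathrm{SHP}}{\Delta \mathrm{SHP}}

As a purely illustrative check with invented numbers: if towing a drogue of known 50 lb drag raises the power required from 200 hp to 220 hp, the total drag is D = 50 × 200/20 = 500 lb.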
Using Innovative Techniques for Manufacturing Rocket Engine Hardware
NASA Technical Reports Server (NTRS)
Betts, Erin M.; Reynolds, David C.; Eddleman, David E.; Hardin, Andy
2011-01-01
Many of the manufacturing techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As we enter into a new space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt new and innovative techniques for manufacturing hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, manufacturing techniques such as Direct Metal Laser Sintering (DMLS) are being adopted and evaluated for their use on J-2X, with hopes of employing this technology on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powder metal manufacturing process in order to produce complex part geometries. Marshall Space Flight Center (MSFC) has recently hot-fire tested a J-2X gas generator discharge duct that was manufactured using DMLS. The duct was inspected and proof tested prior to the hot-fire test. Using the Workhorse Gas Generator (WHGG) test setup at MSFC's East Test Area test stand 116, the duct was subject to extreme J-2X gas generator environments and endured a total of 538 seconds of hot-fire time. The duct survived the testing and was inspected after the test. DMLS manufacturing has proven to be a viable option for manufacturing rocket engine hardware, and further development and use of this manufacturing method is recommended.
Model-independent test for scale-dependent non-Gaussianities in the cosmic microwave background.
Räth, C; Morfill, G E; Rossmanith, G; Banday, A J; Górski, K M
2009-04-03
We present a model-independent method to test for scale-dependent non-Gaussianities in combination with scaling indices as test statistics. To this end, surrogate data sets are generated, in which the power spectrum of the original data is preserved, while the higher-order correlations are partly randomized by applying a scale-dependent shuffling procedure to the Fourier phases. We apply this method to the Wilkinson Microwave Anisotropy Probe data of the cosmic microwave background and find signatures of non-Gaussianities on large scales. Further tests are required to elucidate the origin of the detected anomalies.
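A minimal sketch of the surrogate principle described above, assuming a plain 1-D signal: the Fourier amplitudes (and hence the power spectrum) are preserved exactly while phases are randomized. Restricting the randomization to indices above a cutoff stands in for the paper's scale-dependent shuffling and is an assumption, not the authors' exact procedure:

    import numpy as np

    def scale_dependent_surrogate(x, k_cut, seed=None):
        """Surrogate of 1-D signal x: power spectrum preserved, Fourier
        phases randomized only for frequency indices >= k_cut, so
        higher-order correlations are destroyed on small scales only."""
        rng = np.random.default_rng(seed)
        n = len(x)
        spectrum = np.fft.rfft(x)
        amplitude = np.abs(spectrum)
        phase = np.angle(spectrum)
        random_phase = rng.uniform(0.0, 2.0 * np.pi, size=len(spectrum))
        phase[k_cut:] = random_phase[k_cut:]   # large scales stay intact
        return np.fft.irfft(amplitude * np.exp(1j * phase), n=n)

    # toy usage: randomize all scales above Fourier index 32
    x = np.sin(np.linspace(0.0, 20.0 * np.pi, 1024))
    s = scale_dependent_surrogate(x, k_cut=32, seed=0)

The test statistic (here, scaling indices) would then be evaluated on the original data and on an ensemble of such surrogates.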
Integrating conventional and inverse representation for face recognition.
Xu, Yong; Li, Xuelong; Yang, Jian; Lai, Zhihui; Zhang, David
2014-10-01
Representation-based classification methods are all constructed on the basis of the conventional representation, which first expresses the test sample as a linear combination of the training samples and then exploits the deviation between the test sample and the expression result of every class to perform classification. However, this deviation does not always well reflect the difference between the test sample and each class. In this paper, we propose a novel representation-based classification method for face recognition. This method integrates conventional and inverse representation-based classification to better recognize the face. It first produces the conventional representation of the test sample, i.e., uses a linear combination of the training samples to represent the test sample. Then it obtains the inverse representation, i.e., provides an approximation representation of each training sample of a subject by exploiting the test sample and the training samples of the other subjects. Finally, the proposed method exploits the conventional and inverse representations to generate two kinds of scores of the test sample with respect to each class and combines them to recognize the face. The paper shows the theoretical foundation and rationale of the proposed method. Moreover, this paper for the first time shows that a basic property of the human face, i.e., facial symmetry, can be exploited to generate new training and test samples. As these new samples reflect possible appearances of the face, their use enables higher accuracy. The experiments show that the proposed conventional and inverse representation-based linear regression classification (CIRLRC), an improvement to linear regression classification (LRC), can obtain very high accuracy and greatly outperforms naive LRC and other state-of-the-art conventional representation-based face recognition methods. The accuracy of CIRLRC can be 10% greater than that of LRC.
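A compact sketch of the conventional-representation step (plain linear regression classification) that CIRLRC builds on; variable names are illustrative, and the inverse-representation step and score fusion of the paper are not reproduced here:

    import numpy as np

    def lrc_predict(train, labels, y):
        """Linear regression classification: represent the test sample y
        with each class's training samples and return the class with the
        smallest reconstruction residual.
        train: (d, n) column-stacked training samples; labels: (n,)."""
        best_class, best_residual = None, np.inf
        for c in np.unique(labels):
            Xc = train[:, labels == c]                    # class dictionary
            beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
            residual = np.linalg.norm(y - Xc @ beta)
            if residual < best_residual:
                best_class, best_residual = c, residual
        return best_class

CIRLRC additionally scores each class by how well the test sample helps reconstruct that class's training samples (the inverse representation) and fuses the two scores.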
Moles: Tool-Assisted Environment Isolation with Closures
NASA Astrophysics Data System (ADS)
de Halleux, Jonathan; Tillmann, Nikolai
Isolating test cases from environment dependencies is often desirable, as it increases test reliability and reduces test execution time. However, code that calls non-virtual methods or consumes sealed classes is often impossible to test in isolation. Moles is a new lightweight framework which addresses this problem. For any .NET method, Moles allows test-code to provide alternative implementations, given as .NET delegates, for which C# provides very concise syntax while capturing local variables in a closure object. Using code instrumentation, the Moles framework will redirect calls to provided delegates instead of the original methods. The Moles framework is designed to work together with the dynamic symbolic execution tool Pex to enable automated test generation. In a case study, testing code programmed against the Microsoft SharePoint Foundation API, we achieved full code coverage while running tests in isolation without an actual SharePoint server. The Moles framework integrates with .NET and Visual Studio.
Capacitive charge generation apparatus and method for testing circuits
Cole, E.I. Jr.; Peterson, K.A.; Barton, D.L.
1998-07-14
An electron beam apparatus and method for testing a circuit are disclosed. The electron beam apparatus comprises an electron beam incident on an outer surface of an insulating layer overlying one or more electrical conductors of the circuit for generating a time varying or alternating current electrical potential on the surface; and a measurement unit connected to the circuit for measuring an electrical signal capacitively coupled to the electrical conductors to identify and map a conduction state of each of the electrical conductors, with or without an electrical bias signal being applied to the circuit. The electron beam apparatus can further include a secondary electron detector for forming a secondary electron image for registration with a map of the conduction state of the electrical conductors. The apparatus and method are useful for failure analysis or qualification testing to determine the presence of any open-circuits or short-circuits, and to verify the continuity or integrity of electrical conductors buried below an insulating layer thickness of 1-100 µm or more without damaging or breaking down the insulating layer. The types of electrical circuits that can be tested include integrated circuits, multi-chip modules, printed circuit boards and flexible printed circuits. 7 figs.
Generation of Fullspan Leading-Edge 3D Ice Shapes for Swept-Wing Aerodynamic Testing
NASA Technical Reports Server (NTRS)
Camello, Stephanie C.; Lee, Sam; Lum, Christopher; Bragg, Michael B.
2016-01-01
The deleterious effect of ice accretion on aircraft is often assessed through dry-air flight and wind tunnel testing with artificial ice shapes. This paper describes a method to create fullspan swept-wing artificial ice shapes from partial-span ice segments acquired in the NASA Glenn Icing Research Tunnel for aerodynamic wind-tunnel testing. Full-scale ice accretion segments were laser scanned from the Inboard, Midspan, and Outboard wing station models of the 65% scale Common Research Model (CRM65) aircraft configuration. These were interpolated and extrapolated using a weighted averaging method to generate fullspan ice shapes from the root to the tip of the CRM65 wing. The results showed that this interpolation method was able to preserve many of the highly three-dimensional features typically found on swept-wing ice accretions. The interpolated fullspan ice shapes were then scaled to fit the leading edge of an 8.9% scale version of the CRM65 wing for aerodynamic wind-tunnel testing. Reduced-fidelity versions of the fullspan ice shapes were also created in which most of the local three-dimensional features were removed. The fullspan artificial ice shapes and the reduced-fidelity versions were manufactured using stereolithography.
Capacitive charge generation apparatus and method for testing circuits
Cole, Jr., Edward I.; Peterson, Kenneth A.; Barton, Daniel L.
1998-01-01
An electron beam apparatus and method for testing a circuit. The electron beam apparatus comprises an electron beam incident on an outer surface of an insulating layer overlying one or more electrical conductors of the circuit for generating a time varying or alternating current electrical potential on the surface; and a measurement unit connected to the circuit for measuring an electrical signal capacitively coupled to the electrical conductors to identify and map a conduction state of each of the electrical conductors, with or without an electrical bias signal being applied to the circuit. The electron beam apparatus can further include a secondary electron detector for forming a secondary electron image for registration with a map of the conduction state of the electrical conductors. The apparatus and method are useful for failure analysis or qualification testing to determine the presence of any open-circuits or short-circuits, and to verify the continuity or integrity of electrical conductors buried below an insulating layer thickness of 1-100 µm or more without damaging or breaking down the insulating layer. The types of electrical circuits that can be tested include integrated circuits, multi-chip modules, printed circuit boards and flexible printed circuits.
NASA Astrophysics Data System (ADS)
Ayu Nurul Handayani, Hemas; Waspada, Indra
2018-05-01
Non-formal Early Childhood Education (non-formal ECE) is education provided for children under 4 years old. In the District of Banyumas, non-formal ECE is monitored by the district government, assisted by Sanggar Kegiatan Belajar (SKB) Purwokerto as one of the organizers of non-formal education. The government has a program to extend ECE to all villages in Indonesia; however, the locations for constructing ECE schools in the coming years have not yet been determined. To support that program, a decision support system was built to recommend villages for ECE construction. The data are projected using Brown's double exponential smoothing method, and the Preference Ranking Organization Method for Enrichment Evaluation (Promethee) is applied to generate a priority order. As its recommendation output, the system generates a map visualization colored according to the priority level of each sub-district and village. The system was tested with black-box testing, Promethee testing, and usability testing. The results showed that the system functionality and the Promethee algorithm worked properly, and users were satisfied.
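A minimal sketch of Brown's double exponential smoothing as used for the projections; the smoothing constant and horizon are illustrative, and the Promethee ranking step is not shown:

    def brown_des_forecast(series, alpha, horizon):
        """Brown's double exponential smoothing: two smoothing passes,
        then a linear trend forecast `horizon` steps ahead."""
        s1 = s2 = float(series[0])
        for x in series[1:]:
            s1 = alpha * x + (1.0 - alpha) * s1      # first smoothing
            s2 = alpha * s1 + (1.0 - alpha) * s2     # second smoothing
        level = 2.0 * s1 - s2
        trend = alpha / (1.0 - alpha) * (s1 - s2)
        return level + horizon * trend

    # toy usage: project a village indicator two periods ahead
    print(brown_des_forecast([12, 13, 15, 16, 19], alpha=0.4, horizon=2))

The projected values per village would then feed Promethee's pairwise preference comparisons to produce the priority ranking.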
NASA Technical Reports Server (NTRS)
Schmitz, F. H.; Allmen, J. R.; Soderman, P. T.
1994-01-01
The development of a large-scale anechoic test facility where large models of engine/airframe/high-lift systems can be tested for both improved noise reduction and minimum performance degradation is described. The facility development is part of the effort, now under way in the United States, to investigate economically viable methods of reducing second-generation high-speed civil transport noise during takeoff and climb-out. This new capability will be achieved through acoustic modifications of NASA's second-largest subsonic wind tunnel: the 40- by 80-Foot Wind Tunnel at the NASA Ames Research Center. Three major items are addressed in the design of this large anechoic and quiet wind tunnel: a new deep (42 inch (107 cm)) test section liner, expansion of the wind tunnel drive operating envelope at low rpm to reduce background noise, and other promising methods of improving signal-to-noise levels of inflow microphones. Current testing plans supporting the U.S. high-speed civil transport program are also outlined.
NASA Astrophysics Data System (ADS)
Tian, Jiajun; Zhang, Qi; Han, Ming
2013-05-01
Fiber-optic ultrasonic transducers are an important component of active ultrasonic testing systems for structural health monitoring. Fiber-optic transducers have several advantages, such as small size, light weight, and immunity to electromagnetic interference, that make them much more attractive than currently available piezoelectric transducers, especially as embedded and permanent transducers in active ultrasonic testing for structural health monitoring. In this paper, distributed fiber-optic laser-ultrasound generation based on the ghost mode of tilted fiber Bragg gratings is studied. The influences of the laser power and laser pulse duration on the laser-ultrasound generation are investigated. The results are helpful for understanding the working principle of this laser-ultrasound method and improving the ultrasonic generation efficiency.
How to test herbicides at forest tree nurseries.
Roger E. Sandquist; Peyton W. Owston; Stephen E. McDonald
1981-01-01
Procedures developed in a cooperative westwide study of weed control in forest tree nurseries are described in a form modified for use by nursery managers. The proven, properly designed test and evaluation methods can be used to generate data needed for evaluation and registration of herbicides.
Leak Detection by Acoustic Emission Monitoring. Phase 1. Feasibility Study
1994-05-26
various signal processing and noise discrimination techniques during the Data Processing task. C. TEST DESCRIPTION 1. Laboratory Tests Following normal... success in applying these methods to discriminating between the AE bursts generated by two close AE sources in a section of an aircraft structure
NASA Technical Reports Server (NTRS)
Workman, Gary L.; Davis, Jason; Farrington, Seth; Walker, James
2007-01-01
Low density polyurethane foam has been an important insulation material for space launch vehicles for several decades. The potential for damage from foam breaking away from the NASA External Tank was not realized until the foam impacts on the Columbia Orbiter vehicle caused damage to its Leading Edge thermal protection systems (TPS). Development of improved inspection techniques on the foam TPS is necessary to prevent similar occurrences in the future. Foamed panels with drilled holes for volumetric flaws and Teflon inserts to simulate debonded conditions have been used to evaluate and calibrate nondestructive testing (NDT) methods. Unfortunately, the symmetric edges and dissimilar materials used in the preparation of these simulated flaws provide an artificially large signal while very little signal is generated from the actual defects themselves. In other words, the artificial defects in the foam test panels do not generate the same signals as natural defects in the ET foam TPS. A project to create more realistic voids, similar to what actually occurs during manufacturing operations, was begun in order to improve detection of critical voids during inspections. This presentation describes approaches taken to create more natural voids in foam TPS in order to provide a more realistic evaluation of what the NDT methods can detect. These flaw creation techniques were developed with both sprayed foam and poured foam used for insulation on the External Tank. Test panels with simulated defects have been used to evaluate NDT methods for the inspection of the External Tank. A comparison of images between natural flaws and machined flaws generated from backscatter x-ray radiography, x-ray laminography, terahertz imaging and millimeter wave imaging shows significant differences in identifying defect regions.
Madsen, Daniel Elenius; Nichols, Timothy C.; Merricks, Elizabeth P.; Waters, Emily K.; Wiinberg, Bo
2017-01-01
Introduction Canine models of severe haemophilia resemble their human equivalents both in clinical bleeding phenotype and in response to treatment. Therefore, pre-clinical studies in haemophilia dogs have allowed researchers to make valuable translational predictions regarding the potency and efficacy of new anti-haemophilia drugs (AHDs) in humans. To refine in vivo experiments and reduce the number of animals, such translational studies are ideally preceded by in vitro prediction of compound efficacy using a plasma-based global coagulation method. One such widely used method is the thrombin generation test (TGT). Unfortunately, commercially available TGTs are incapable of distinguishing between normal and haemophilia canine plasma, and therefore in vitro prediction using TGT has so far not been possible in canine plasma material. Aim Establish a modified TGT capable of: 1) distinguishing between normal and haemophilia canine plasma, 2) monitoring the correlation between canine plasma levels of coagulation factor VIII (FVIII) and IX (FIX) and thrombin generation, 3) assessing agreement between compound activity and thrombin generation in ex vivo samples. Methods A modified TGT assay was established in which coagulation was triggered using a commercially available activated partial thromboplastin time reagent. Results With the modified TGT, a significant difference in thrombin generation was observed between normal and haemophilia canine plasma. Dose-dependent thrombin generation was observed when assessing haemophilia A and B plasma spiked with dilution series of FVIII and FIX, respectively. Correlation between FVIII activity and thrombin generation was observed when analyzing samples from haemophilia A dogs dosed with canine FVIII. The limit of detection was 0.1% (v/v) FVIII or FIX. Conclusion A novel modified TGT suitable for monitoring and prediction of replacement therapy efficacy in plasma from haemophilia A and B dogs was established. PMID:28384182
The Ins and Outs of DNA Fingerprinting the Infectious Fungi
Soll, David R.
2000-01-01
DNA fingerprinting methods have evolved as major tools in fungal epidemiology. However, no single method has emerged as the method of choice, and some methods perform better than others at different levels of resolution. In this review, requirements for an effective DNA fingerprinting method are proposed and procedures are described for testing the efficacy of a method. In light of the proposed requirements, the most common methods now being used to DNA fingerprint the infectious fungi are described and assessed. These methods include restriction fragment length polymorphisms (RFLP), RFLP with hybridization probes, randomly amplified polymorphic DNA and other PCR-based methods, electrophoretic karyotyping, and sequencing-based methods. Procedures for computing similarity coefficients, generating phylogenetic trees, and testing the stability of clusters are then described. To facilitate the analysis of DNA fingerprinting data, computer-assisted methods are described. Finally, the problems inherent in the collection of test and control isolates are considered, and DNA fingerprinting studies of strain maintenance during persistent or recurrent infections, microevolution in infecting strains, and the origin of nosocomial infections are assessed in light of the preceding discussion of the ins and outs of DNA fingerprinting. The intent of this review is to generate an awareness of the need to verify the efficacy of each DNA fingerprinting method for the level of genetic relatedness necessary to answer the epidemiological question posed, to use quantitative methods to analyze DNA fingerprint data, to use computer-assisted DNA fingerprint analysis systems to analyze data, and to file data in a form that can be used in the future for retrospective and comparative studies. PMID:10756003
Securing Digital Audio using Complex Quadratic Map
NASA Astrophysics Data System (ADS)
Suryadi, MT; Satria Gunawan, Tjandra; Satria, Yudi
2018-03-01
In this digital era, exchanging data is common and easy, and data are therefore vulnerable to attack and manipulation by unauthorized parties. One vulnerable data type is digital audio, so a fast, robust data-securing method is needed. One method that matches these criteria is securing the data using a chaos function. The chaos function used in this research is the complex quadratic map (CQM). For certain parameter values, the key stream generated by the CQM passes all 15 NIST tests, which shows that the key stream generated using this CQM is statistically random. In addition, samples of the encrypted digital audio pass a goodness-of-fit test for uniformity, so audio secured with this method is not vulnerable to frequency-analysis attacks. The key space is very large, about 8.1×10^31 possible keys, and the key sensitivity is very small, about 10^-10, so this method is also not vulnerable to brute-force attack. Finally, both encryption and decryption run, on average, about 450 times faster than the audio's playback duration.
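A toy sketch of keystream generation from the complex quadratic map z_{n+1} = z_n^2 + c; the byte-extraction rule and the parameter values are illustrative assumptions rather than the paper's exact scheme, and real use would require parameters from the chaotic, NIST-tested region:

    def cqm_keystream(z0: complex, c: complex, n: int) -> bytes:
        """Iterate z <- z*z + c and extract one keystream byte per
        iteration from the fractional part of |z| (illustrative rule)."""
        z, out = z0, bytearray()
        for _ in range(n):
            z = z * z + c
            frac = abs(z) % 1.0
            out.append(int(frac * 256) & 0xFF)
        return bytes(out)

    # XOR-encrypt a short audio buffer (decryption is the same operation)
    audio = bytes([10, 200, 33, 47])
    ks = cqm_keystream(0.1 + 0.2j, -0.75 + 0.1j, len(audio))
    cipher = bytes(a ^ k for a, k in zip(audio, ks))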
Ey, E; Yang, M; Katz, A M; Woldeyohannes, L; Silverman, J L; Leblond, C S; Faure, P; Torquet, N; Le Sourd, A-M; Bourgeron, T; Crawley, J N
2012-11-01
Mutations in NLGN4X have been identified in individuals with autism spectrum disorders and other neurodevelopmental disorders. A previous study reported that adult male mice lacking neuroligin4 (Nlgn4) displayed social approach deficits in the three-chambered test, altered aggressive behaviors and reduced ultrasonic vocalizations. To replicate and extend these findings, independent comprehensive analyses of autism-relevant behavioral phenotypes were conducted in later generations of the same line of Nlgn4 mutant mice at the National Institute of Mental Health in Bethesda, MD, USA and at the Institut Pasteur in Paris, France. Adult social approach was normal in all three genotypes of Nlgn4 mice tested at both sites. Reciprocal social interactions in juveniles were similarly normal across genotypes. No genotype differences were detected in ultrasonic vocalizations in pups separated from the nest or in adults during reciprocal social interactions. Anxiety-like behaviors, self-grooming, rotarod and open field exploration did not differ across genotypes, and measures of developmental milestones and general health were normal. Our findings indicate an absence of autism-relevant behavioral phenotypes in subsequent generations of Nlgn4 mice tested at two locations. Testing environment and methods differed from the original study in some aspects, although the presence of normal sociability was seen in all genotypes when methods taken from Jamain et al. (2008) were used. The divergent results obtained from this study indicate that phenotypes may not be replicable across breeding generations, and highlight the significant roles of environmental, generational and/or procedural factors on behavioral phenotypes. Published 2012. This article is a U.S. Government work and is in the public domain in the USA.
Infrared non-destructive evaluation method and apparatus
Baleine, Erwan; Erwan, James F; Lee, Ching-Pang; Stinelli, Stephanie
2014-10-21
A method of nondestructive evaluation and related system. The method includes arranging a test piece (14) having an internal passage (18) and an external surface (15) and a thermal calibrator (12) within a field of view (42) of an infrared sensor (44); generating a flow (16) of fluid characterized by a fluid temperature; exposing the test piece internal passage (18) and the thermal calibrator (12) to fluid from the flow (16); capturing infrared emission information of the test piece external surface (15) and of the thermal calibrator (12) simultaneously using the infrared sensor (44), wherein the test piece infrared emission information includes emission intensity information, and wherein the thermal calibrator infrared emission information includes a reference emission intensity associated with the fluid temperature; and normalizing the test piece emission intensity information against the reference emission intensity.
Method of and apparatus for testing the integrity of filters
Herman, R.L.
1985-05-07
A method of and apparatus are disclosed for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage. 5 figs.
Method of and apparatus for testing the integrity of filters
Herman, Raymond L [Richland, WA
1985-01-01
A method of and apparatus for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage.
Methods of and apparatus for testing the integrity of filters
Herman, R.L.
1984-01-01
A method of and apparatus for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage.
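Across these three records, the upstream/downstream comparison is conventionally summarized as a penetration fraction; the formula below is standard aerosol-filter practice and is stated here as an assumption, since the patent text does not give it:

    P = \frac{C_{\text{down}}}{C_{\text{up}}}, \qquad \eta = 1 - P

where C_up and C_down are the trace-agent particle concentrations sampled upstream and downstream of the filter stage and η is the stage efficiency; leakage past the stage is indicated when P exceeds the acceptance threshold for that stage.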
Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut
2014-05-01
Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001), despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating that a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved upon or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Medical ultrasonic tomographic system
NASA Technical Reports Server (NTRS)
Heyser, R. C.; Lecroissette, D. H.; Nathan, R.; Wilson, R. L.
1977-01-01
An electro-mechanical scanning assembly was designed and fabricated for the purpose of generating an ultrasound tomogram. A low-cost modality was demonstrated in which analog instrumentation methods formed a tomogram on photographic film. Successful tomogram reconstructions were obtained on in vitro test objects by using the attenuation of the first-path ultrasound signal as it passed through the test object. The nearly half-century-old tomographic methods of X-ray analysis were verified as being useful for ultrasound imaging.
Mitchell, J. T.; Perepelitsa, D. V.; Tannenbaum, M. J.; ...
2016-05-23
Here, several methods of generating three constituent quarks in a nucleon are evaluated which explicitly maintain the nucleon's center of mass and desired radial distribution and can be used within Monte Carlo Glauber frameworks. The geometric models provided by each method are used to generate distributions over the number of constituent quark participants (N_qp) in p+p, d+Au, and Au+Au collisions. The results are compared with each other and to a previous result of N_qp calculations, without this explicit constraint, used in measurements of √s_NN = 200 GeV p+p, d+Au, and Au+Au collisions at the BNL Relativistic Heavy Ion Collider.
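For contrast with the paper's constrained generators, the naive form of the constraint is sketched below: sample three quark positions from a radial profile, then subtract the centroid so the center of mass is exact. The Gaussian profile is an illustrative assumption; note that the recentering itself distorts the sampled radial distribution, which is exactly the subtlety the evaluated methods are designed to handle:

    import numpy as np

    def three_quarks(rng, sigma=0.5):
        """Sample 3 constituent-quark positions (fm) and enforce the
        nucleon center of mass at the origin by subtracting the centroid."""
        q = rng.normal(0.0, sigma, size=(3, 3))   # 3 quarks x 3 coordinates
        q -= q.mean(axis=0)                       # exact center-of-mass constraint
        return q

    rng = np.random.default_rng(1)
    quarks = three_quarks(rng)
    assert np.allclose(quarks.mean(axis=0), 0.0)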
Automated objective characterization of visual field defects in 3D
NASA Technical Reports Server (NTRS)
Fink, Wolfgang (Inventor)
2006-01-01
A method and apparatus for electronically performing a visual field test for a patient. A visual field test pattern is displayed to the patient on an electronic display device and the patient's responses to the visual field test pattern are recorded. A visual field representation is generated from the patient's responses. The visual field representation is then used as an input into a variety of automated diagnostic processes. In one process, the visual field representation is used to generate a statistical description of the rapidity of change of a patient's visual field at the boundary of a visual field defect. In another process, the area of a visual field defect is calculated using the visual field representation. In another process, the visual field representation is used to generate a statistical description of the volume of a patient's visual field defect.
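A small sketch of the kind of post-processing the patent describes, assuming the visual field representation is a 2-D sensitivity grid: defect area by thresholded cell counting, and boundary steepness from the local gradient. The grid spacing and threshold are illustrative, not values from the patent:

    import numpy as np

    def defect_stats(field, threshold, cell_deg=1.0):
        """field: 2-D array of sensitivities (dB). Returns the defect area
        in square degrees and the mean gradient magnitude (dB/deg) over
        the defect boundary."""
        defect = field < threshold
        area = defect.sum() * cell_deg ** 2
        gy, gx = np.gradient(field.astype(float), cell_deg)
        grad = np.hypot(gx, gy)
        # boundary: defect cells with at least one non-defect 4-neighbour
        pad = np.pad(defect, 1, constant_values=False)
        interior = (pad[:-2, 1:-1] & pad[2:, 1:-1]
                    & pad[1:-1, :-2] & pad[1:-1, 2:]) & defect
        boundary = defect & ~interior
        steepness = grad[boundary].mean() if boundary.any() else 0.0
        return area, steepness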
Method and apparatus for nondestructive testing. [using high frequency arc discharges
NASA Technical Reports Server (NTRS)
Hoop, J. M. (Inventor)
1974-01-01
High voltage is applied to an arc gap adjacent to a test specimen to develop a succession of high frequency arc discharges. Those high frequency arc discharges generate pulses of ultrasonic energy within the test specimen without requiring the arc discharges to contact that test specimen and without requiring a coupling medium. Those pulses can be used for detection of flaws and measurements of certain properties and stresses within the test specimen.
Designing Test Suites for Software Interactions Testing
2004-01-01
the annual cost of insufficient software testing methods and tools in the United States is between 22.2 and 59.5 billion US dollars [13, 14]. This study... 10 (2004), 1-29. [21] Cheng, C., Dumitrescu, A., and Schroeder, P. Generating small combinatorial test suites to cover input-output relationships... Proceedings of the Conference on the Future of Software Engineering (May 2000), pp. 61-72. [51] Hartman, A. Software and hardware testing using
Tang, Qi; Li, Qiang; Xie, Dong; Chu, Ketao; Liu, Lidong; Liao, Chengcheng; Qin, Yunying; Wang, Zheng; Su, Danke
2018-05-21
This study aimed to investigate the utility of a volumetric apparent diffusion coefficient (ADC) histogram method for distinguishing non-puerperal mastitis (NPM) from breast cancer (BC) and to compare this method with a traditional 2-dimensional measurement method. Pretreatment diffusion-weighted imaging data at 3.0 T were obtained for 80 patients (NPM, n = 27; BC, n = 53) and were retrospectively assessed. Two readers measured ADC values according to 2 distinct region-of-interest (ROI) protocols. In the first protocol, ADC histograms were generated for each lesion, and various parameters were examined. In the second protocol, 3 freehand (TF) ROIs over local lesions were drawn to obtain a mean ADC value (defined as ADC-ROITF). All of the ADC values were compared by an independent-samples t test or the Mann-Whitney U test. Receiver operating characteristic curves and a leave-one-out cross-validation method were also used to determine the diagnostic performance of the significant parameters. The ADC values for NPM were characterized by significantly higher mean, 5th to 95th percentile, maximum, and mode ADCs compared with the corresponding ADCs for BC (all P < 0.05). However, the minimum, skewness, and kurtosis ADC values, as well as ADC-ROITF, did not significantly differ between the NPM and BC cases. Thus, the generation of volumetric ADC histograms appears superior to the traditional 2-dimensional method examined here, and it seems to represent a promising image analysis method for distinguishing NPM from BC.
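A brief sketch of the volumetric histogram protocol, assuming the whole-lesion ADC voxel values have already been segmented into a 1-D array; the percentile choices follow the abstract, while the binning used for the mode is an illustrative assumption:

    import numpy as np
    from scipy import stats

    def adc_histogram_params(adc_voxels):
        """Whole-lesion ADC histogram metrics (units assumed 10^-3 mm^2/s)."""
        v = np.asarray(adc_voxels, dtype=float)
        p5, p95 = np.percentile(v, [5, 95])
        # the mode of a continuous variable needs binning; 0.01 is illustrative
        mode = stats.mode(np.round(v, 2), keepdims=False).mode
        return {"mean": v.mean(), "min": v.min(), "max": v.max(),
                "p5": p5, "p95": p95, "mode": mode,
                "skewness": stats.skew(v), "kurtosis": stats.kurtosis(v)}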
Development of the GPM Observatory Thermal Vacuum Test Model
NASA Technical Reports Server (NTRS)
Yang, Kan; Peabody, Hume
2012-01-01
A software-based thermal modeling process was documented for generating the thermal panel settings necessary to simulate worst-case on-orbit flight environments in an observatory-level thermal vacuum test setup. The method for creating such a thermal model involved four major steps: (1) determining the major thermal zones for test as indicated by the major dissipating components on the spacecraft, then mapping the major heat flows between these components; (2) finding the flight-equivalent sink temperatures for these test thermal zones; (3) determining the thermal test ground support equipment (GSE) design and initial thermal panel settings based on the equivalent sink temperatures; and (4) adjusting the panel settings in the test model to match heat flows and temperatures with the flight model. The observatory test thermal model developed from this process allows quick predictions of the performance of the thermal vacuum test design. In this work, the method described above was applied to the Global Precipitation Measurement (GPM) core observatory spacecraft, a joint project between NASA and the Japan Aerospace Exploration Agency (JAXA) which is currently being integrated at NASA Goddard Space Flight Center for launch in early 2014. Preliminary results show that the heat flows and temperatures of the test thermal model match fairly well with those of the flight thermal model, indicating that the test model can simulate the on-orbit conditions fairly accurately. However, further analysis is needed to determine the best possible test configuration to validate the GPM thermal design before the start of environmental testing later this year. Also, while this analysis method has been applied solely to GPM, it should be emphasized that the same process can be applied to any mission to develop an effective test setup and panel settings that accurately simulate on-orbit thermal environments.
NASA Technical Reports Server (NTRS)
1972-01-01
The development of nondestructive testing procedures by NASA and the transfer of nondestructive testing to technology to civilian industry are discussed. The subjects presented are: (1) an overview of the nondestructive testing field, (2) NASA contributions to the field of nondestructive testing, (3) dissemination of NASA contributions, and (4) a transfer profile. Attachments are included which provide a brief description of common nondestructive testing methods and summarize the technology transfer reports involving NASA generated nondestructive testing technology.
Real-Time Extended Interface Automata for Software Testing Cases Generation
Yang, Shunkun; Xu, Jiaqi; Man, Tianlong; Liu, Bin
2014-01-01
Testing and verification of the interfaces between software components are particularly important due to the large number of complex interactions, which requires traditional modeling languages to overcome existing shortcomings in describing temporal information and in controlling software testing inputs. This paper presents the real-time extended interface automata (RTEIA), which add a clearer and more detailed description of temporal information through the use of time words. We also establish an input interface automaton for every input, which flexibly addresses input control and interface coverage when applied in the software testing field. Detailed definitions of the RTEIA and the test-case generation algorithm are provided in this paper. The feasibility and efficiency of this method have been verified in the testing of a real aircraft braking system. PMID:24892080
Resonant frequency method for bearing ball inspection
Khuri-Yakub, B. T.; Hsieh, Chung-Kao
1993-01-01
The present invention provides for an inspection system and method for detecting defects in test objects which includes means for generating expansion inducing energy focused upon the test object at a first location, such expansion being allowed to contract, thereby causing pressure waves within and on the surface of the test object. Such expansion inducing energy may be provided by, for example, a laser beam or ultrasonic energy. At a second location, the amplitudes and phases of the acoustic waves are detected and the resonant frequencies' quality factors are calculated and compared to predetermined quality factor data, such comparison providing information on whether the test object contains a defect. The inspection system and method also includes means for mounting the bearing ball for inspection.
Resonant frequency method for bearing ball inspection
Khuri-Yakub, B.T.; Chungkao Hsieh.
1993-11-02
The present invention provides for an inspection system and method for detecting defects in test objects which includes means for generating expansion inducing energy focused upon the test object at a first location, such expansion being allowed to contract, thereby causing pressure waves within and on the surface of the test object. Such expansion inducing energy may be provided by, for example, a laser beam or ultrasonic energy. At a second location, the amplitudes and phases of the acoustic waves are detected and the resonant frequencies' quality factors are calculated and compared to predetermined quality factor data, such comparison providing information on whether the test object contains a defect. The inspection system and method also includes means for mounting the bearing ball for inspection. 5 figures.
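A short sketch of the quality-factor step common to both records, assuming a measured amplitude spectrum: Q is estimated from the half-power (-3 dB) bandwidth around each resonance, with a nearest-sample approximation of the crossing points; the acceptance tolerance is illustrative:

    import numpy as np

    def q_factor(freqs, amps, peak_idx):
        """Estimate Q = f0 / (f_hi - f_lo) from the half-power points
        around the resonance at peak_idx of an amplitude spectrum."""
        half_power = amps[peak_idx] / np.sqrt(2.0)    # -3 dB level
        lo = peak_idx
        while lo > 0 and amps[lo] > half_power:
            lo -= 1
        hi = peak_idx
        while hi < len(amps) - 1 and amps[hi] > half_power:
            hi += 1
        bandwidth = freqs[hi] - freqs[lo]
        return freqs[peak_idx] / bandwidth if bandwidth > 0 else np.inf

    def is_defective(q_measured, q_reference, tol=0.2):
        """Flag a ball when a resonance Q drops below reference by > tol."""
        return q_measured < (1.0 - tol) * q_reference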
AN AUTOMATED SYSTEM FOR PRODUCING UNIFORM SURFACE DEPOSITS OF DRY PARTICLES
A laboratory system has been constructed that uniformly deposits dry particles onto any type of test surface. Devised as a quality assurance tool for the purpose of evaluating surface sampling methods for lead, it also may be used to generate test surfaces for any contaminant ...
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Standard 112-1996, Test Procedure for Polyphase Induction Motors and Generators, and Test Method (1) of CSA... a system, that has its own rules of procedure and management, for giving written assurance that a... operated by an entity independent of both the party seeking the written assurance and the party providing...
Code of Federal Regulations, 2012 CFR
2012-01-01
...) Standard 112-1996, Test Procedure for Polyphase Induction Motors and Generators, and Test Method (1) of CSA... a system, that has its own rules of procedure and management, for giving written assurance that a... operated by an entity independent of both the party seeking the written assurance and the party providing...
NEXT GENERATION LEACHING TESTS FOR EVALUATING LEACHING OF INORGANIC CONSTITUENTS
In the U.S. as in other countries, there is increased interest in using industrial by-products as alternative or secondary materials, helping to conserve virgin or raw materials. The LEAF and associated test methods are being used to develop the source term for leaching or any i...
Small, high-pressure liquid hydrogen turbopump
NASA Technical Reports Server (NTRS)
Csomor, A.; Sutton, R.
1977-01-01
A high pressure, liquid hydrogen turbopump was designed, fabricated, and tested to a maximum speed of 9739 rad/s and a maximum pump discharge pressure of 2861 N/sq. cm. The approaches used in the analysis and design of the turbopump are described, and fabrication methods are discussed. Data obtained from gas generator tests, turbine performance calibration, and turbopump testing are presented.
Laser Synthesis of Supported Catalysts for Carbon Nanotubes
NASA Technical Reports Server (NTRS)
VanderWal, Randall L.; Ticich, Thomas M.; Sherry, Leif J.; Hall, Lee J.; Schubert, Kathy (Technical Monitor)
2003-01-01
Four methods of laser-assisted catalyst generation for carbon nanotube (CNT) synthesis have been tested. These include pulsed laser transfer (PLT), photolytic deposition (PLD), photothermal deposition (PTD) and laser ablation deposition (LABD). Results from each method are compared based on CNT yield, morphology and structure. Under the conditions tested, PLT was the easiest method to implement, required the least time and also yielded the best patterning. The photolytic and photothermal methods required organometallics, extended processing time and partial vacuums. The latter two requirements also held for the ablation deposition approach. In addition to control of the substrate position, controlled deposition duration was necessary to achieve an active catalyst layer. Although all methods were tested on both metal and quartz substrates, only the quartz substrates proved to be inactive towards the deposited catalyst particles.
Generating method-specific Reference Ranges - A harmonious outcome?
Lee, Graham R; Griffin, Alison; Halton, Kieran; Fitzgibbon, Maria C
2017-12-01
When laboratory Reference Ranges (RR) do not reflect analytical methodology, result interpretation can cause misclassification of patients and inappropriate management. This can be mitigated by determining and implementing method-specific RRs, which was the main objective of this study. Serum was obtained from healthy volunteers (Male + Female, n > 120) attending hospital health-check sessions during June and July 2011. Pseudo-anonymised aliquots were stored (at -70 °C) prior to analysis on Abbott ARCHITECT c16000 chemistry and i2000SR immunoassay analysers. Data were stratified by gender where appropriate. Outliers were excluded statistically (Tukey method) to generate non-parametric RRs (2.5th and 97.5th percentiles). RRs were compared to those quoted by Abbott and UK Pathology Harmony (PH) where possible. For 7 selected tests, RRs were verified using a data mining approach. For chemistry tests (n = 23), the Upper or Lower Reference Limit (URL or LRL) was > 20% different from Abbott ranges in 25% of tests (11% from PH ranges), but in 38% of immunoassay tests (n = 13). RRs (mmol/L) for sodium (138-144), potassium (3.8-4.9) and chloride (102-110) were considerably narrower than PH ranges (133-146, 3.5-5.0 and 95-108, respectively). The gender difference for ferritin (M: 29-441, F: 8-193 ng/mL) was more pronounced than reported by Abbott (M: 22-275, F: 5-204 ng/mL). Verification studies showed good agreement for chemistry tests (mean [SD] difference = 0.4% [1.2%]) but less so for immunoassay tests (27% [29%]), particularly for TSH (LRL). Where resource permits, we advocate using method-specific RRs in preference to other sources, particularly where method bias and lack of standardisation limit RR transferability and harmonisation.
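A compact sketch of the RR derivation as described: Tukey outlier exclusion followed by non-parametric 2.5th/97.5th percentiles on the retained values; the 1.5 × IQR fence is the conventional Tukey rule and is assumed here:

    import numpy as np

    def reference_range(values):
        """Tukey outlier exclusion, then a non-parametric reference range
        (2.5th and 97.5th percentiles) on the retained results."""
        v = np.asarray(values, dtype=float)
        q1, q3 = np.percentile(v, [25, 75])
        iqr = q3 - q1
        keep = v[(v >= q1 - 1.5 * iqr) & (v <= q3 + 1.5 * iqr)]
        return np.percentile(keep, [2.5, 97.5])

    # usage: e.g. sodium results (mmol/L) from >120 healthy volunteers,
    # after stratifying by gender where appropriate
    # lower, upper = reference_range(sodium_results)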
Remote Acoustic Emission Monitoring of Metal Ware and Welded Joints
NASA Astrophysics Data System (ADS)
Kapranov, Boris I.; Sutorikhin, Vladimir A.
2017-10-01
An unusual phenomenon was revealed in the metal-ultrasound interaction. A microwave sensor registers surface electric-conductivity oscillations produced by elastic ultrasonic vibrations at defect regions containing micro-defects termed "crack mouths"; these are known as regions of "acoustic activity" in the Acoustic Emission (AE) method. It was established that the high phase-modulation coefficient of the reflected field generates an intense Doppler radar signal with the following parameters: amplitude 1-5 nm, 6-30 dB at 70-180 mm. This phenomenon is termed the "Gorbunov effect," and it is applied as a remote non-destructive testing method replacing ultrasonic flaw detection and acoustic emission methods.
Computer-generated holograms by multiple wavefront recording plane method with occlusion culling.
Symeonidou, Athanasia; Blinder, David; Munteanu, Adrian; Schelkens, Peter
2015-08-24
We propose a novel fast method for full-parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light-wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables in order to reduce the computational complexity of the field calculations. A novel technique for occlusion culling with little additional computational cost is also introduced. Additionally, the method applies a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH a speedup factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.
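A stripped-down sketch of the wavefront recording plane (WRP) idea underlying the paper: each point's spherical wave is accumulated on a plane close to the point cloud, and the WRP field is then propagated to the hologram plane with the angular spectrum method. The look-up tables, occlusion culling, and per-point Gaussian weighting of the paper are omitted, and all parameters are illustrative:

    import numpy as np

    wavelength = 633e-9
    k = 2.0 * np.pi / wavelength
    N, pitch = 512, 8e-6                               # grid size, pixel pitch
    coords = (np.arange(N) - N // 2) * pitch
    X, Y = np.meshgrid(coords, coords)

    def wrp_field(points):
        """Accumulate spherical waves from (x, y, z) points on the WRP at z=0."""
        U = np.zeros((N, N), dtype=complex)
        for px, py, pz in points:
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
            U += np.exp(1j * k * r) / r                # point-source term
        return U

    def angular_spectrum(U, z):
        """Propagate field U over distance z (evanescent part discarded)."""
        fx = np.fft.fftfreq(N, pitch)
        FX, FY = np.meshgrid(fx, fx)
        kz = 2.0 * np.pi * np.sqrt(
            np.maximum(0.0, (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2))
        return np.fft.ifft2(np.fft.fft2(U) * np.exp(1j * kz * z))

    points = [(0.0, 0.0, 2e-3), (1e-4, -5e-5, 2.5e-3)]  # toy point cloud
    hologram_phase = np.angle(angular_spectrum(wrp_field(points), 0.1))

Because the WRP sits near the points, each point contributes to only a small region of the plane, which is what makes the look-up-table acceleration described in the paper effective.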
A Novel Polygonal Finite Element Method: Virtual Node Method
NASA Astrophysics Data System (ADS)
Tang, X. H.; Zheng, C.; Zhang, J. H.
2010-05-01
Polygonal finite element methods (PFEM), which construct shape functions on polygonal elements, provide greater flexibility in mesh generation. However, the non-polynomial form of traditional PFEMs, such as the Wachspress method and the Mean Value method, leads to inexact numerical integration, since integration techniques for non-polynomial functions are immature. To overcome this shortcoming, a great number of integration points have to be used to obtain sufficiently exact results, which increases computational cost. In this paper, a novel polygonal finite element method, called the virtual node method (VNM), is proposed. Its features can be listed as follows: (1) it is a PFEM with polynomial form, so Hammer and Gauss quadrature can be used naturally to obtain exact numerical integration; (2) the shape functions of VNM satisfy all the requirements of the finite element method. To test the performance of VNM, intensive numerical tests were carried out. It was found that, in the standard patch test, VNM achieves significantly better results than the Wachspress method and the Mean Value method. Moreover, VNM achieves better results than triangular 3-node elements in the accuracy test.
Design and analysis of the federal aviation administration next generation fire test burner
NASA Astrophysics Data System (ADS)
Ochs, Robert Ian
The United States Federal Aviation Administration makes use of threat-based fire test methods for the certification of aircraft cabin materials to enhance the level of safety in the event of an in-flight or post-crash fire on a transport airplane. The global nature of the aviation industry results in these test methods being performed at hundreds of laboratories around the world, in some cases testing identical materials at multiple labs but yielding different results. Maintaining this standard for an elevated level of safety requires that the test methods be as well defined as possible, necessitating a comprehensive understanding of critical test method parameters. The tests have evolved from simple Bunsen burner material tests to larger, more complicated apparatuses, requiring greater understanding of the device for proper application. The FAA specifies a modified home heating oil burner to simulate the effects of large, intense fires for testing of aircraft seat cushions, cargo compartment liners, power plant components, and thermal acoustic insulation. Recently, the FAA has developed a Next Generation (NexGen) Fire Test burner to replace the original oil burner, which has become commercially unavailable. The NexGen burner design is based on the original oil burner but offers more precise control of the air and fuel flow rates through the addition of a sonic nozzle and a pressurized fuel system. Knowledge of the fundamental flow properties created by various burner configurations is desired to develop an updated and standardized burner configuration for use around the world for aircraft materials fire testing and airplane certification. To that end, the NexGen fire test burner was analyzed with Particle Image Velocimetry (PIV) to resolve the non-reacting exit flow field and determine the influence of the configuration of burner components. The correlation between the measured flow fields and the standard burner performance metrics of flame temperature and burnthrough time was studied. Potential design improvements that could simplify burner set-up and operation were also evaluated.
Sauer, Ursula G; Hill, Erin H; Curren, Rodger D; Raabe, Hans A; Kolle, Susanne N; Teubner, Wera; Mehling, Annette; Landsiedel, Robert
2016-07-01
In general, no single non-animal method can cover the complexity of any given animal test. Therefore, fixed sets of in vitro (and in chemico) methods have been combined into testing strategies for skin and eye irritation and skin sensitisation testing, with pre-defined prediction models for substance classification. Many of these methods have been adopted as OECD test guidelines. Various testing strategies have been successfully validated in extensive in-house and inter-laboratory studies, but they have not yet received formal acceptance for substance classification. Therefore, under the European REACH Regulation, data from testing strategies can, in general, only be used in so-called weight-of-evidence approaches. While animal testing data generated under the specific REACH information requirements are per se sufficient, the sufficiency of weight-of-evidence approaches can be questioned under the REACH system, and further animal testing can be required. This constitutes an imbalance between the regulatory acceptance of data from approved non-animal methods and animal tests that is not justified on scientific grounds. To ensure that testing strategies for local tolerance testing truly serve to replace animal testing for the REACH registration 2018 deadline (when the majority of existing chemicals have to be registered), clarity on their regulatory acceptance as complete replacements is urgently required. 2016 FRAME.
Application of Laser Based Ultrasound for NDE of Damage in Thick Stitched Composites
NASA Technical Reports Server (NTRS)
Anastasi, Robert F.; Friedman, Adam D.; Hinders, Mark K.; Madaras, Eric I.
1997-01-01
As design engineers implement new composite systems such as thick, load-bearing composite structures, they must have certifiable confidence in the structure's durability and worthiness. This confidence builds from understanding the structural response and failure characteristics of simple components loaded in testing machines through tests on full-scale sections. Nondestructive evaluation is an important element that can provide quantitative information on the damage initiation, propagation, and final failure modes of composite structural components. Although ultrasound is generally accepted as a test method, the use of conventional ultrasound for in-situ monitoring of damage during tests of large structures is not practical. The use of lasers to both generate and detect ultrasound extends the application of ultrasound to in-situ sensing of damage in a deformed structure, remotely and in a non-contact manner. The goal of the present research is to utilize this technology to monitor damage progression during testing. The present paper describes the application of laser-based ultrasound to quantify damage in thick stitched composite structural elements to demonstrate the method. This method involves using a Q-switched laser to generate a rapid, local linear thermal strain on the surface of the structure. This local strain causes the generation of ultrasonic waves in the material. A second laser used with a Fabry-Perot interferometer detects the surface deflections. The use of fiber optics provides for eye safety and a convenient method of delivering the laser light over long distances to the specimens. The material for these structural elements is composed of several stacks of composite material assembled by stitching through the laminate thickness, which ranges from 0.5 to 0.8 inches. The specimens used for these nondestructive evaluation studies had either impact damage or skin/stiffener interlaminar failure. Although little or no visible surface damage existed, internal damage was detected by laser-based ultrasound.
Hypothesis test for synchronization: twin surrogates revisited.
Romano, M Carmen; Thiel, Marco; Kurths, Jürgen; Mergenthaler, Konstantin; Engbert, Ralf
2009-03-01
The method of twin surrogates has been introduced to test for phase synchronization of complex systems in the case of passive experiments. In this paper we derive new analytical expressions for the number of twins depending on the size of the neighborhood, as well as on the length of the trajectory. This allows us to determine the optimal parameters for the generation of twin surrogates. Furthermore, we determine the quality of the twin surrogates with respect to several linear and nonlinear statistics depending on the parameters of the method. In the second part of the paper we perform a hypothesis test for phase synchronization in the case of experimental data from fixational eye movements. These miniature eye movements have been shown to play a central role in neural information processing underlying the perception of static visual scenes. The high number of data sets (21 subjects and 30 trials per person) allows us to compare the generated twin surrogates with the "natural" surrogates that correspond to the different trials. We show that the generated twin surrogates reproduce very well all linear and nonlinear characteristics of the underlying experimental system. The synchronization analysis of fixational eye movements by means of twin surrogates reveals that the synchronization between the left and right eye is significant, indicating that either the centers in the brain stem generating fixational eye movements are closely linked, or, alternatively that there is only one center controlling both eyes.
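A schematic sketch of twin-surrogate generation, assuming the time series has already been embedded as state vectors: twins are states whose recurrence-matrix columns coincide, and a surrogate trajectory jumps uniformly among twins before stepping forward in time. The threshold and norm are illustrative, and the paper's analytical expressions for the number of twins are not reproduced:

    import numpy as np

    def twin_surrogate(states, eps, seed=None):
        """Generate one twin surrogate from an embedded trajectory.
        states: (n, d) array of state vectors; eps: recurrence threshold."""
        rng = np.random.default_rng(seed)
        n = len(states)
        dist = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=2)
        R = dist < eps                                  # recurrence matrix
        # twins of i: all j whose recurrence column equals column i
        twins = [np.flatnonzero((R == R[:, i:i + 1]).all(axis=0))
                 for i in range(n)]
        path = [int(rng.integers(n))]
        while len(path) < n:
            j = int(rng.choice(twins[path[-1]]))        # jump to a random twin
            path.append((j + 1) % n)                    # then step forward
        return states[np.array(path)]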
Poptani, Bruhvi; Gohil, K. S.; Ganjiwale, Jaishree; Shukla, Manisha
2012-01-01
Objectives: The objective of this in vitro study was to compare the microtensile dentin bond strength (μTBS) of five seventh-generation dentin bonding agents (DBA) with a fifth-generation DBA before and after thermocycling. Materials and Methods: Ten extracted teeth were assigned to a fifth-generation control group (Optibond Solo) and to each of the five experimental groups, namely Group I (G-Bond), Group II (Clearfil S3), Group III (One Coat 7.0), Group IV (Xeno V), and Group V (Optibond All-in-One). The crown portions of the teeth were horizontally sectioned below the central groove to expose the dentin. The adhesive resins from all groups were bonded to the teeth with their respective composites. Specimens of size 1 × 1 × 6 mm³ were obtained. Fifty specimens bonded to dentin from each group were selected. Twenty-five of the specimens were tested for debonding without thermocycling and the remainder were subjected to thermocycling followed by μTBS testing. The data were analyzed with one-way ANOVA and Dunnett's test for comparison with the reference group (fifth generation). Results: There was no significant difference (P > 0.05) between the fifth- and seventh-generation adhesives before and after thermocycling, except that the seventh-generation Group II (Clearfil S3) showed a significantly higher μTBS (P < 0.05) than the fifth-generation DBA both before and after thermocycling. Conclusion: The study demonstrated that the Clearfil S3 bond had the highest μTBS values. In addition, all five tested seventh-generation adhesive resins were comparable to the fifth-generation DBA. PMID:23230355
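For readers who want to reproduce this style of analysis, the following sketch runs a one-way ANOVA followed by Dunnett's comparison of each experimental group against a control. The μTBS values are invented for illustration only, and `scipy.stats.dunnett` requires SciPy 1.11 or later.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical μTBS values (MPa) for illustration -- not the study's data.
control = rng.normal(30, 4, 25)            # fifth-generation reference (Optibond Solo)
groups = {name: rng.normal(mu, 4, 25)
          for name, mu in [("G-Bond", 28), ("Clearfil S3", 35), ("One Coat 7.0", 29),
                           ("Xeno V", 30), ("Optibond All-in-One", 31)]}

# One-way ANOVA across all six groups.
f, p = stats.f_oneway(control, *groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

# Dunnett's test: each seventh-generation group vs. the fifth-generation control.
res = stats.dunnett(*groups.values(), control=control)
for name, pval in zip(groups, res.pvalue):
    print(f"{name:>20s} vs control: p = {pval:.4f}")
```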
Hsieh, Chi-Wen; Liu, Tzu-Chiang; Jong, Tai-Lang; Tiu, Chui-Mei
2013-01-01
The Tanner-Whitehouse (TW) method is one of the best-known techniques for determining bone age. Given the objectivity of TW3, the secular trend was investigated to discover whether the skeletal maturation of Taiwanese children differed between two generations. A large-scale database from Taiwan was collected. The first group, called mid-1960s, included 265 boys and 295 girls from the agricultural generation (between 1966 and 1967). The second group, called mid-2000s, included 114 boys and 616 girls from the contemporary generation (after 2000). Bone age was determined by three radiologists using the carpals-only system of the TW3 method and by two physicians using the Greulich and Pyle method. A comparison of means (independent-samples t-test) was applied to examine the difference in skeletal maturation between children of the two generations at the same chronological age. Differences were considered significant when the p-value was 0.05 or less (95% confidence interval). A significant difference in mean bone age (averaged over the three radiologists using the TW3 method) between the mid-1960s and mid-2000s groups of the same gender and chronological age was shown by the independent-samples t-test (p<0.001 with 95% confidence interval), and the bone age determined by the TW3 method was higher in the mid-2000s group than in the mid-1960s group. This finding corresponded with the children's bone age as determined by the pediatricians. Moreover, it is worth noting that the bone age of boys in the mid-2000s was larger than that of girls in the mid-1960s. Furthermore, by comparing environmental conditions, we suspect that the difference in bone age between the two generations is attributable to discrepancies in nutrition and socioeconomic conditions over the four decades in Taiwan. The study shows that the secular trend of skeletal maturation of children in the mid-2000s is faster than that in the mid-1960s.
NASA Astrophysics Data System (ADS)
Hyun, Jae-Sang; Li, Beiwen; Zhang, Song
2017-07-01
This paper presents our research findings on high-speed high-accuracy three-dimensional shape measurement using digital light processing (DLP) technologies. In particular, we compare two different sinusoidal fringe generation techniques using the DLP projection devices: direct projection of computer-generated 8-bit sinusoidal patterns (a.k.a., the sinusoidal method), and the creation of sinusoidal patterns by defocusing binary patterns (a.k.a., the binary defocusing method). This paper mainly examines their performance on high-accuracy measurement applications under precisely controlled settings. Two different projection systems were tested in this study: a commercially available inexpensive projector and the DLP development kit. Experimental results demonstrated that the binary defocusing method always outperforms the sinusoidal method if a sufficient number of phase-shifted fringe patterns can be used.
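The two fringe generation techniques under comparison are easy to illustrate. The sketch below builds three phase-shifted 8-bit sinusoidal patterns and their binary counterparts, with projector defocus crudely modeled as a Gaussian blur (a simplification; real defocus is the projector optics' point-spread function), then recovers the wrapped phase with the standard three-step formula.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

W, periods = 1920, 24                        # projector width, fringes per frame
x = np.arange(W)
shifts = [0, 2*np.pi/3, 4*np.pi/3]           # three-step phase shifting

# Sinusoidal method: computer-generated 8-bit sinusoidal patterns.
sinusoidal = [np.uint8(127.5*(1 + np.cos(2*np.pi*periods*x/W + d))) for d in shifts]

# Binary defocusing method: 1-bit square patterns; defocus (here a Gaussian
# blur) suppresses the higher harmonics, leaving a quasi-sinusoid.
binary = [np.where(np.cos(2*np.pi*periods*x/W + d) >= 0, 255, 0) for d in shifts]
defocused = [gaussian_filter(b.astype(float), sigma=W/periods/6) for b in binary]

# Three-step phase retrieval (identical for either pattern set), for shifts
# 0, 2*pi/3, 4*pi/3: tan(phi) = sqrt(3)*(I3 - I2) / (2*I1 - I2 - I3).
I1, I2, I3 = defocused
phi = np.arctan2(np.sqrt(3)*(I3 - I2), 2*I1 - I2 - I3)   # wrapped phase
```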
High-speed 3D imaging using digital binary defocusing method vs sinusoidal method
NASA Astrophysics Data System (ADS)
Zhang, Song; Hyun, Jae-Sang; Li, Beiwen
2017-02-01
This paper presents our research findings on high-speed 3D imaging using digital light processing (DLP) technologies. In particular, we compare two different sinusoidal fringe generation techniques using DLP projection devices: direct projection of computer-generated 8-bit sinusoidal patterns (a.k.a., the sinusoidal method), and the creation of sinusoidal patterns by defocusing binary patterns (a.k.a., the binary defocusing method). This paper mainly examines their performance in high-accuracy measurement applications under precisely controlled settings. Two different projection systems were tested in this study: a commercially available inexpensive projector and the DLP development kit. Experimental results demonstrated that the binary defocusing method always outperforms the sinusoidal method if a sufficient number of phase-shifted fringe patterns can be used.
NASA Astrophysics Data System (ADS)
Guerra, J. E.; Ullrich, P. A.
2015-12-01
Tempest is a next-generation global climate and weather simulation platform designed to allow experimentation with numerical methods at very high spatial resolutions. The atmospheric fluid equations are discretized by continuous / discontinuous finite elements in the horizontal and by a staggered nodal finite element method (SNFEM) in the vertical, coupled with implicit/explicit time integration. At global horizontal resolutions below 10km, many important questions remain on optimal techniques for solving the fluid equations. We present results from a suite of meso-scale test cases to validate the performance of the SNFEM applied in the vertical. Internal gravity wave, mountain wave, convective, and Cartesian baroclinic instability tests will be shown at various vertical orders of accuracy and compared with known results.
Real time testing of intelligent relays for synchronous distributed generation islanding detection
NASA Astrophysics Data System (ADS)
Zhuang, Davy
As electric power systems continue to grow to meet ever-increasing energy demand, their security, reliability, and sustainability requirements also become more stringent. The deployment of distributed energy resources (DER), including generation and storage, in conventional passive distribution feeders gives rise to integration problems involving protection and unintentional islanding. Distributed generators need to be disconnected for safety reasons when islanded or isolated from the main feeder, as distributed generator islanding may create hazards to utility and third-party personnel and possibly damage the distribution system infrastructure, including the distributed generators themselves. This thesis compares several key performance indicators of a newly developed intelligent islanding detection relay against islanding detection devices currently used by the industry. The intelligent relay employs multivariable analysis and data mining methods to arrive at decision trees that contain both the protection handles and the settings. A test methodology is developed to assess the performance of these intelligent relays in a real time simulation environment using a generic model based on a real-life distribution feeder. The methodology demonstrates the applicability and potential advantages of the intelligent relay by running a large number of tests reflecting a multitude of system operating conditions. The testing indicates that the intelligent relay often outperforms the frequency, voltage and rate of change of frequency relays currently used for islanding detection, while respecting the islanding detection time constraints imposed by existing distributed generator interconnection guidelines.
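As a point of reference for what the intelligent relay is compared against, a conventional rate-of-change-of-frequency (ROCOF) element reduces to a threshold test on df/dt. The sketch below is a toy version with illustrative settings, not settings taken from the thesis or from any interconnection standard.

```python
import numpy as np

def rocof_trip(freq, t, df_dt_max=1.0, window=0.1):
    """Toy ROCOF islanding check.

    freq: measured frequency samples (Hz); t: time stamps (s).
    Trips when |df/dt| averaged over `window` seconds exceeds df_dt_max (Hz/s).
    """
    dt = t[1] - t[0]
    n = max(1, int(window / dt))
    df = np.gradient(freq, t)                      # instantaneous df/dt
    avg = np.convolve(df, np.ones(n) / n, mode="same")
    trip_idx = np.flatnonzero(np.abs(avg) > df_dt_max)
    return (True, t[trip_idx[0]]) if trip_idx.size else (False, None)

# Example: a 60 Hz system drifting after an islanding event at t = 1 s.
t = np.linspace(0, 3, 3000)
f = 60 - 2.0 * np.clip(t - 1.0, 0, None)           # 2 Hz/s decay once islanded
print(rocof_trip(f, t))                            # -> (True, shortly after 1 s)
```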
Cummings, Kevin J.; Warnick, Lorin D.; Schukken, Ynte H.; Siler, Julie D.; Gröhn, Yrjo T.; Davis, Margaret A.; Besser, Tom E.; Wiedmann, Martin
2011-01-01
Data generated using different antimicrobial testing methods often have to be combined, but the equivalence of such results is difficult to assess. Here we compared two commonly used antimicrobial susceptibility testing methods, automated microbroth dilution and agar disk diffusion, for 8 common drugs, using 222 Salmonella isolates of serotypes Newport, Typhimurium, and 4,5,12:i-, which had been isolated from clinical salmonellosis cases among cattle and humans. Isolate classification corresponded well between tests, with 95% overall category agreement. Test results were significantly negatively correlated, and Spearman's correlation coefficients ranged from −0.98 to −0.38. Using Cox's proportional hazards model we determined that for most drugs, a 1 mm increase in zone diameter resulted in an estimated 20%–40% increase in the hazard of growth inhibition. However, additional parameters such as isolation year or serotype often impacted the hazard of growth inhibition as well. Comparison of economic feasibility showed that agar disk diffusion is clearly more cost-effective if the average sample throughput is small but that both methods are comparable at high sample throughput. In conclusion, for the Salmonella serotypes and antimicrobial drugs analyzed here, antimicrobial susceptibility data generated based on either test are qualitatively very comparable, and the current published breakpoints for both methods are in excellent agreement. Economic feasibility clearly depends on the specific laboratory settings, and disk diffusion might be an attractive alternative for certain applications such as surveillance studies. PMID:21877930
Generating moment matching scenarios using optimization techniques
Mehrotra, Sanjay; Papp, Dávid
2013-05-16
An optimization based method is proposed to generate moment matching scenarios for numerical integration and its use in stochastic programming. The main advantage of the method is its flexibility: it can generate scenarios matching any prescribed set of moments of the underlying distribution rather than matching all moments up to a certain order, and the distribution can be defined over an arbitrary set. This allows for a reduction in the number of scenarios and allows the scenarios to be better tailored to the problem at hand. The method is based on a semi-infinite linear programming formulation of the problem that is shown to be solvable with polynomial iteration complexity. A practical column generation method is implemented. The column generation subproblems are polynomial optimization problems; however, they need not be solved to optimality. It is found that the columns in the column generation approach can be efficiently generated by random sampling. The number of scenarios generated matches a lower bound of Tchakaloff's. The rate of convergence of the approximation error is established for continuous integrands, and an improved bound is given for smooth integrands. Extensive numerical experiments are presented in which variants of the proposed method are compared to Monte Carlo and quasi-Monte Carlo methods on both numerical integration problems and stochastic optimization problems. The benefit of being able to match any prescribed set of moments, rather than all moments up to a certain order, is also demonstrated using optimization problems with 100-dimensional random vectors. Here, empirical results show that the proposed approach outperforms Monte Carlo and quasi-Monte Carlo based approaches on the tested problems.
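The essence of the approach, nonnegative scenario weights reproducing a prescribed set of moments with candidate scenarios (columns) generated by random sampling, can be illustrated with an ordinary finite LP. The sketch below matches the first four moments of a standard normal; the zero objective makes it a pure feasibility problem, and the basic solution returned by the solver is automatically sparse, echoing the Tchakaloff-type bound mentioned in the abstract. With enough random candidates the LP is typically feasible.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Prescribed moments of the underlying distribution (standard normal):
# E[1] = 1, E[x] = 0, E[x^2] = 1, E[x^3] = 0, E[x^4] = 3.
moment_fns = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2,
              lambda x: x**3, lambda x: x**4]
m = np.array([1.0, 0.0, 1.0, 0.0, 3.0])

# Candidate scenarios generated by random sampling, as in the paper's
# column-generation step.
cand = rng.standard_normal(500)
A = np.vstack([g(cand) for g in moment_fns])       # (n_moments, n_candidates)

# Feasibility LP: nonnegative weights reproducing the prescribed moments.
res = linprog(c=np.zeros(len(cand)), A_eq=A, b_eq=m, bounds=(0, None))
idx = np.flatnonzero(res.x > 1e-9)
print(f"{idx.size} scenarios carry weight")        # basic solutions are sparse
print(np.c_[cand[idx], res.x[idx]])                # scenario values and weights
```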
Russell, Steven M; Doménech-Sánchez, Antonio; de la Rica, Roberto
2017-06-23
Colorimetric tests are becoming increasingly popular in point-of-need analyses due to the possibility of detecting the signal with the naked eye, which eliminates the utilization of bulky and costly instruments only available in laboratories. However, colorimetric tests may be interpreted incorrectly by nonspecialists due to disparities in color perception or a lack of training. Here we solve this issue with a method that not only detects colorimetric signals but also interprets them so that the test outcome is understandable for anyone. It consists of an augmented reality (AR) app that uses a camera to detect the colored signals generated by a nanoparticle-based immunoassay, and that yields a warning symbol or message when the concentration of analyte is higher than a certain threshold. The proposed method detected the model analyte mouse IgG with a limit of detection of 0.3 μg mL⁻¹, which was comparable to the limit of detection afforded by classical densitometry performed with a nonportable device. When adapted to the detection of E. coli, the app always yielded a "hazard" warning symbol when the concentration of E. coli in the sample was above the infective dose (10⁶ cfu mL⁻¹ or higher). The proposed method could help nonspecialists make a decision about drinking from a potentially contaminated water source by yielding an unambiguous message that is easily understood by anyone. The widespread availability of smartphones along with the inexpensive paper test that requires no enzymes to generate the signal makes the proposed assay promising for analyses in remote locations and developing countries.
Development of a PC-based diabetes simulator in collaboration with teenagers with type 1 diabetes.
Nordfeldt, S; Hanberger, L; Malm, F; Ludvigsson, J
2007-02-01
The main aim of this study was to develop and test in a pilot study a PC-based interactive diabetes simulator prototype as a part of future Internet-based support systems for young teenagers and their families. A second aim was to gain experience in user-centered design (UCD) methods applied to such subjects. Using UCD methods, a computer scientist participated in iterative user group sessions involving teenagers with Type 1 diabetes 13-17 years old and parents. Input was transformed into a requirements specification by the computer scientist and advisors. This was followed by gradual prototype development based on a previously developed mathematical core. Individual test sessions were followed by a pilot study with five subjects testing a prototype. The process was evaluated by registration of flow and content of input and opinions from expert advisors. It was initially difficult to motivate teenagers to participate. User group discussion topics ranged from concrete to more academic matters. The issue of a simulator created active discussions among parents and teenagers. A large amount of input was generated from discussions among the teenagers. Individual test runs generated useful input. A pilot study suggested that the gradually elaborated software was functional. A PC-based diabetes simulator may create substantial interest among teenagers and parents, and the prototype seems worthy of further development and studies. UCD methods may generate significant input for computer support system design work and contribute to a functional design. Teenager involvement in design work may require time, patience, and flexibility.
2014-09-01
Screen 10/Final Review Redesign: the redesigned screen provides the user with a chance to review his or her inputs and send the request by his or her preferred method (digital or voice).
Design, processing and testing of LSI arrays: Hybrid microelectronics task
NASA Technical Reports Server (NTRS)
Himmel, R. P.; Stuhlbarg, S. M.; Ravetti, R. G.; Zulueta, P. J.
1979-01-01
Mathematical cost factors were generated for both hybrid microcircuit and printed wiring board packaging methods. A mathematical cost model was created for analysis of microcircuit fabrication costs. The costing factors were refined and reduced to formulae for computerization. Efficient methods were investigated for low-cost packaging of LSI devices as a function of density and reliability. Technical problem areas such as wafer bumping, inner/outer lead bonding, testing on tape, and tape processing were investigated.
Improved pressure measurement system for calibration of the NASA LeRC 10x10 supersonic wind tunnel
NASA Technical Reports Server (NTRS)
Blumenthal, Philip Z.; Helland, Stephen M.
1994-01-01
This paper discusses a method used to provide a significant improvement in the accuracy of the Electronically Scanned Pressure (ESP) Measurement System by means of a fully automatic floating pressure generating system for the ESP calibration and reference pressures. This system was used to obtain test section Mach number and flow angularity measurements over the full envelope of test conditions for the 10 x 10 Supersonic Wind Tunnel. The uncertainty analysis and actual test data demonstrated that, for most test conditions, this method could reduce errors to about one-third to one-half that obtained with the standard system.
Omori, Satoshi; Kitao, Akio
2013-06-01
We propose a fast clustering and reranking method, CyClus, for protein-protein docking decoys. This method enables comprehensive clustering of whole decoy sets generated by rigid-body docking, using a cylindrical approximation of the protein-protein interface and hierarchical clustering procedures. We demonstrate the clustering and reranking of 54,000 decoy structures generated by ZDOCK for each complex within a few minutes. After parameter tuning on the test set in ZDOCK benchmark 2.0 with the ZDOCK and ZRANK scoring functions, blind tests on the incremental data in ZDOCK benchmarks 3.0 and 4.0 were conducted. CyClus successfully generated smaller subsets of decoys containing near-native decoys. For example, the number of decoys required to create subsets containing near-native decoys with 80% probability was reduced to 22%–50% of the number required by the original ZDOCK. Although specific ZDOCK and ZRANK results were demonstrated, the CyClus algorithm was designed to be more general and can be applied to a wide range of decoys and scoring functions by adjusting just two parameters, p and T. CyClus results were also compared to those from ClusPro. Copyright © 2013 Wiley Periodicals, Inc.
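The cluster-then-rerank pattern the abstract describes is generic enough to sketch. The code below is not CyClus — it uses plain Euclidean features in place of the cylindrical interface approximation — but it shows the mechanic of hierarchical clustering followed by picking the best-scored representative per cluster.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_rerank(features, scores, thresh=2.0):
    """Cluster docking decoys and rerank clusters (generic sketch, not CyClus).

    features: (n_decoys, d) descriptors of each decoy's interface geometry;
    scores:   (n_decoys,) scoring-function values (lower = better, e.g. ZRANK);
    thresh:   distance cutoff for flat clusters.
    Returns one representative decoy index per cluster, best cluster first.
    """
    Z = linkage(features, method="average")
    labels = fcluster(Z, t=thresh, criterion="distance")
    reps = []
    for c in np.unique(labels):
        members = np.flatnonzero(labels == c)
        reps.append(members[np.argmin(scores[members])])   # best-scored member
    return sorted(reps, key=lambda i: scores[i])

# Example with synthetic decoys in three loose clusters:
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(c, 0.3, (20, 3)) for c in (0, 3, 6)])
scores = rng.uniform(size=60)
print(cluster_rerank(feats, scores)[:3])
```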
The Alzheimer's Disease Knowledge Scale: Development and Psychometric Properties
ERIC Educational Resources Information Center
Carpenter, Brian D.; Balsis, Steve; Otilingam, Poorni G.; Hanson, Priya K.; Gatz, Margaret
2009-01-01
Purpose: This study provides preliminary evidence for the acceptability, reliability, and validity of the new Alzheimer's Disease Knowledge Scale (ADKS), a content and psychometric update to the Alzheimer's Disease Knowledge Test. Design and Methods: Traditional scale development methods were used to generate items and evaluate their psychometric…
Does Active Learning Improve Students' Knowledge of and Attitudes toward Research Methods?
ERIC Educational Resources Information Center
Campisi, Jay; Finn, Kevin E.
2011-01-01
We incorporated an active, collaborative-based research project in our undergraduate Research Methods course for first-year sports medicine majors. Working in small groups, students identified a research question, generated a hypothesis to be tested, designed an experiment, implemented the experiment, analyzed the data, and presented their…
Razalas' Grouping Method and Mathematics Achievement
ERIC Educational Resources Information Center
Salazar, Douglas A.
2015-01-01
This study aimed to raise the achievement level of students in Integral Calculus using Direct Instruction with Razalas' Method of Grouping. The study employed qualitative and quantitative analysis relative to data generated by the Achievement Test and Math journal with follow-up interview. Within the framework of the limitations of the study, the…
A Comparison of Four Methods of IRT Subscoring
ERIC Educational Resources Information Center
de la Torre, Jimmy; Song, Hao; Hong, Yuan
2011-01-01
Lack of sufficient reliability is the primary impediment for generating and reporting subtest scores. Several current methods of subscore estimation do so either by incorporating the correlational structure among the subtest abilities or by using the examinee's performance on the overall test. This article conducted a systematic comparison of four…
A Review of Treatment Adherence Measurement Methods
ERIC Educational Resources Information Center
Schoenwald, Sonja K.; Garland, Ann F.
2013-01-01
Fidelity measurement is critical for testing the effectiveness and implementation in practice of psychosocial interventions. Adherence is a critical component of fidelity. The purposes of this review were to catalogue adherence measurement methods and assess existing evidence for the valid and reliable use of the scores that they generate and the…
Testing quantum gravity through dumb holes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pourhassan, Behnam, E-mail: b.pourhassan@du.ac.ir; Faizal, Mir, E-mail: f2mir@uwaterloo.ca; Irving K. Barber School of Arts and Sciences, University of British Columbia - Okanagan, Kelowna, BC V1V 1V7
We propose a method to test the effects of quantum fluctuations on black holes by analyzing the effects of thermal fluctuations on dumb holes, the analogs of black holes. The proposal is based on the Jacobson formalism, where the Einstein field equations are viewed as thermodynamical relations, and so the quantum fluctuations are generated from the thermal fluctuations. It is well known that all approaches to quantum gravity generate logarithmic corrections to the entropy of a black hole and the coefficient of this term varies according to the different approaches to quantum gravity. It is possible to demonstrate that such logarithmic terms are also generated from thermal fluctuations in dumb holes. In this paper, we claim that it is possible to experimentally test such corrections for dumb holes, and also obtain the correct coefficient for them. This fact can then be used to predict the effects of quantum fluctuations on realistic black holes, and so it can also be used, in principle, to experimentally test the different approaches to quantum gravity.
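The logarithmic correction referred to here is conventionally parameterized as follows, where S_0 = A/4 is the Bekenstein-Hawking entropy of a horizon of area A (in Planck units) and the coefficient α is what distinguishes the different approaches to quantum gravity — and what the dumb-hole measurement would pin down:

```latex
S \;=\; S_0 \;-\; \alpha \,\ln S_0 \;+\; \mathcal{O}\!\left(S_0^{-1}\right),
\qquad S_0 = \frac{A}{4}.
```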
Kamble, Suresh S; Kandasamy, Baburajan; Thillaigovindan, Ranjani; Goyal, Nitin Kumar; Talukdar, Pratim; Seal, Mukut
2015-05-01
Newer dentin bonding agents have been developed to improve the quality of composite restorations and to reduce application time. The aim of the present study was to evaluate the tensile bond strength of 6th-, 7th- and 8th-generation bonding agents by an in vitro method. Sixty selected permanent teeth were assigned, 20 to each group (Group I: 6th-generation bonding agent, Adper SE Plus, 3M ESPE; Group II: 7th-generation bonding agent, G-Bond, GC Corp, Japan; Group III: 8th-generation dentin adhesive, Futurabond DC, Voco, Germany). With a high-speed diamond disc, coronal dentin was exposed, and the selected dentin bonding agents were applied, followed by composite restoration. All samples were stored in saline for 24 h and tensile bond strength testing was done using a universal testing machine. The obtained data were tabulated and statistically analyzed using the ANOVA test. The tensile bond strength readings were 32.2465 for the 6th-generation bonding agent, 31.6734 for the 7th generation, and 34.74431 for the 8th-generation dentin bonding agent. The highest tensile bond strength was seen in the 8th-generation bonding agent compared to the 6th- and 7th-generation bonding agents. From the present study it can be concluded that the 8th-generation dentin adhesive (Futurabond DC, Voco, Germany) resulted in the highest tensile bond strength compared to the 6th-generation (Adper SE Plus, 3M ESPE) and 7th-generation (G-Bond) dentin bonding agents.
Preliminary results of 3D dose calculations with MCNP-4B code from a SPECT image.
Rodríguez Gual, M; Lima, F F; Sospedra Alfonso, R; González González, J; Calderón Marín, C
2004-01-01
Interface software was developed to generate the input file to run the Monte Carlo MCNP-4B code from a medical image in Interfile format version 3.3. The software was tested using a spherical phantom of tomography slices with a known cumulated activity distribution in Interfile format, generated with the IMAGAMMA medical image processing system. The 3D dose calculation obtained with the Monte Carlo MCNP-4B code was compared with the voxel S factor method. The results show a relative error between the two methods of less than 1%.
Hontinfinde, Régis; Coulibaly, Saliya; Megret, Patrice; Taki, Majid; Wuilpart, Marc
2017-05-01
Supercontinuum generation (SCG) in optical fibers arises from the spectral broadening of an intense light, which results from the interplay of both linear and nonlinear optical effects. In this Letter, a nondestructive optical time domain reflectometry method is proposed for the first time, to the best of our knowledge, to measure the spatial (longitudinal) evolution of the SC induced along an optical fiber. The method was experimentally tested on highly nonlinear fibers. The experimental results are in a good agreement with the optical spectra measured at the fiber outputs.
Fatigue testing of weldable high strength steels under simulated service conditions
NASA Astrophysics Data System (ADS)
Tantbirojn, Natee
There have been concerns over the effect of Cathodic Protection (CP) on weldable high strength steels employed in jack-up production platforms. The guidance provided by the Department of Energy HSE on higher strength steels, based on previous work, was to avoid overprotection as this could cause hydrogen embrittlement. However, the tests conducted so far at UCL on SE702-type high strength steels (yield strength around 690 MPa) have shown that the effect of overprotection on high strength steels may not be as severe as previously thought. For this thesis, SE702 high strength steels were investigated in more detail. Thick (85 mm) parent and ground welded plates were tested under constant amplitude loading in air and in seawater with CP. Tests were also conducted on thick (40 mm) T-butt welded plates under variable amplitude loading in air and in seawater at two CP levels (-800 mV and -1050 mV). Different backing materials (ceramic and metallic) for the welding process of the T-butt plates were also investigated. The variable amplitude sequences employed were generated using the Jack-up Offshore Standard load History (JOSH). The fatigue results are presented as crack growth and S/N curves. They were compared to the conventional offshore steel (BS 4360 50D). The results suggested that the fatigue life of the high strength steels was comparable to that of BS 4360 50D steel. The effect of increasing the CP was found to be detrimental to fatigue life, but the effect was not large, and it was less noticeable in the T-butt welded plates. In general, the effect of overprotection is not as detrimental to jack-up steels as previously thought. The load histories generated by JOSH were found to have some unfavourable characteristics. The framework is based on the Markov chain method and a pseudo-random number generator for selecting sea-states. A study was carried out on the sequences generated by JOSH, which were analysed for their validity for fatigue testing. This has resulted in recommendations on methods for generating standard load histories.
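The sea-state selection mechanism underlying JOSH can be sketched directly from its description as a Markov chain driven by a pseudo-random number generator. The transition matrix and states below are invented for illustration; the real JOSH calibration is not reproduced here.

```python
import numpy as np

def sea_state_sequence(P, n, rng=None, start=0):
    """Markov-chain sea-state selection in the spirit of JOSH (illustrative only).

    P: row-stochastic transition matrix between sea states;
    n: number of sea states to draw.
    """
    rng = np.random.default_rng() if rng is None else rng
    states = [start]
    for _ in range(n - 1):
        states.append(int(rng.choice(len(P), p=P[states[-1]])))
    return np.array(states)

# Toy 3-state example: calm, moderate, storm.
P = np.array([[0.8, 0.2, 0.0],
              [0.3, 0.6, 0.1],
              [0.0, 0.5, 0.5]])
seq = sea_state_sequence(P, 1000, rng=np.random.default_rng(42))
# Each sea state would then be expanded into a block of stress cycles whose
# amplitudes follow that state's spectrum, concatenated into the load history.
```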
Rodnick, Melissa E.; Brooks, Allen F.; Hockley, Brian G.; Henderson, Bradford D.; Scott, Peter J. H.
2013-01-01
Introduction: A novel one-pot method for preparing [18F]fluoromethylcholine ([18F]FCH) via in situ generation of [18F]fluoromethyl tosylate ([18F]FCH2OTs), and subsequent [18F]fluoromethylation of dimethylaminoethanol (DMAE), has been developed. Methods: [18F]FCH was prepared using a GE TRACERlab FXFN, although the method should be readily adaptable to any other fluorine-18 synthesis module. Initially ditosylmethane was fluorinated to generate [18F]FCH2OTs. DMAE was then added and the reaction was heated at 120°C for 10 min to generate [18F]FCH. After this time, the reaction solvent was evaporated, and the crude reaction mixture was purified by solid-phase extraction using C18-Plus and CM-Light Sep-Pak cartridges to provide [18F]FCH formulated in USP saline. The formulated product was passed through a 0.22 μm filter into a sterile dose vial and submitted for quality control testing. Total synthesis time was 1.25 hours from end-of-bombardment. Results: Typical non-decay-corrected yields of [18F]FCH prepared using this method were 91 mCi (7% non-decay-corrected, based upon ~1.3 Ci [18F]fluoride), and doses passed all other quality control (QC) tests. Conclusion: A one-pot liquid-phase synthesis of [18F]FCH has been developed. Doses contain extremely low levels of residual DMAE (31.6 μg / 10 mL dose, or ~3 ppm) and passed all other requisite QC testing, confirming their suitability for use in clinical imaging studies. PMID:23665261
Aerodynamic stability analysis of NASA J85-13/planar pressure pulse generator installation
NASA Technical Reports Server (NTRS)
Chung, K.; Hosny, W. M.; Steenken, W. G.
1980-01-01
A digital computer simulation model for the J85-13/Planar Pressure Pulse Generator (P3G) test installation was developed by modifying an existing General Electric compression system model. This modification included the incorporation of a novel method for describing the unsteady blade lift force. This approach significantly enhanced the capability of the model to handle unsteady flows. In addition, the frequency response characteristics of the J85-13/P3G test installation were analyzed in support of selecting instrumentation locations to avoid standing wave nodes within the test apparatus and, thus, low signal levels. The feasibility of employing an explicit analytical expression for surge prediction was also studied.
Cornelia Pinchot; Stacy Clark; Scott Schlarbaum; Arnold Saxton; Shi-Jean Sung; Frederick. Hebard
2015-01-01
Blight-resistant American chestnut (Castanea dentata) may soon be commercially available, but few studies have tested methods to produce high quality seedlings that will be competitive after planting. This study evaluated the performance of one American, one Chinese (C. mollissima), one second-generation backcross (BC3...
Student-Generated Visualization as a Study Strategy for Science Concept Learning
ERIC Educational Resources Information Center
Hsieh, Yi-Chuan Jane; Cifuentes, Lauren
2006-01-01
Mixed methods were adopted to explore the effects of student-generated visualization on paper and on computers as a study strategy for middle school science concept learning. In a post-test-only-control-group design, scores were compared among a control-group (n=28), a group that was trained to visualize on paper (n=30), and a group that was…
NASA Astrophysics Data System (ADS)
Korolkov, Victor P.; Konchenko, Alexander S.; Cherkashin, Vadim V.; Mironnikov, Nikolay G.; Poleshchuk, Alexander G.
2013-09-01
Detailed analysis of the etch depth map of phase binary computer-generated holograms intended for testing aspheric optics is a very important task. In particular, diffractive Fizeau null lenses need to be carefully tested for uniformity of etch depth. We offer a simplified version of the specular spectroscopic scatterometry method, based on the spectral properties of binary phase multi-order gratings. The intensity of the zero order is a periodic function of the illumination wavenumber, and the groove depth can be calculated because it is inversely proportional to this period. Measurement in reflection increases the phase depth of the grooves by a factor of 2 and allows shallow phase gratings to be measured more precisely. Measurement uncertainty is mainly defined by the following parameters: shifts of the spectrum maxima that occur due to tilted groove sidewalls, uncertainty of the light incidence angle measurement, and the spectrophotometer wavelength error. It is theoretically and experimentally shown that the method we describe can ensure 1% error. However, fiber spectrometers are more convenient for scanning measurements of large-area computer-generated holograms. Our experimental system for characterization of binary computer-generated holograms was developed using a fiber spectrometer.
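The depth recovery step can be sketched in a few lines: for an ideal 50/50 binary phase grating measured in reflection at normal incidence, the zero-order intensity goes as cos²(2πdν) in wavenumber ν = 1/λ, so the oscillation period in ν is 1/(2d). The example below fits this model to a synthetic spectrum; it deliberately ignores the tilted-sidewall and incidence-angle effects that the paper identifies as the dominant error sources.

```python
import numpy as np
from scipy.optimize import curve_fit

# I0(nu) ~ a*cos^2(2*pi*d*nu) + b, with nu = 1/lambda; period in nu is 1/(2d).
d_true = 550.0                                  # assumed groove depth, nm
lam = np.linspace(400, 800, 500)                # spectrometer range, nm
nu = 1.0 / lam                                  # wavenumber, 1/nm
rng = np.random.default_rng(0)
I0 = np.cos(2*np.pi*d_true*nu)**2 + rng.normal(0, 0.01, nu.size)

model = lambda nu, d, a, b: a*np.cos(2*np.pi*d*nu)**2 + b
(d_fit, a_fit, b_fit), _ = curve_fit(model, nu, I0, p0=(500.0, 1.0, 0.0))
print(f"fitted depth: {d_fit:.1f} nm (true {d_true:.0f} nm)")
```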
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Shaobu; Lu, Shuai; Zhou, Ning
In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest to mitigate the computational cost of transient stability studies. This paper presents an approach for deriving the reduced dynamic model of the external area based on dynamic response measurements, which comprises three steps: dynamic-feature extraction, attribution, and reconstruction (DEAR). In the DEAR approach, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step by matching the extracted dynamic features with the highest similarity, forming a suboptimal 'basis' of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated by a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original external system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method achieves a better reduction ratio and smaller response errors than the traditional coherency aggregation methods.
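A minimal numerical sketch of the three DEAR steps, assuming the measured responses are available as a generators-by-samples matrix (the paper's actual attribution metric and network coupling are not reproduced here):

```python
import numpy as np

def dear_reduce(Y, n_feat=3):
    """Sketch of DEAR: extract dominant dynamic features via SVD, attribute
    them to "characteristic" generators, and reconstruct the rest.

    Y: (n_generators, n_samples) measured post-disturbance responses
       (e.g. rotor angle deviations). Illustrative only.
    """
    # 1) Feature extraction: dominant temporal modes of the response ensemble.
    U, s, Vt = np.linalg.svd(Y - Y.mean(axis=1, keepdims=True), full_matrices=False)
    features = Vt[:n_feat]
    # 2) Feature attribution: pick the generator best correlated with each mode.
    corr = np.abs(np.corrcoef(Y, features)[:Y.shape[0], Y.shape[0]:])
    chars = list(dict.fromkeys(corr.argmax(axis=0)))     # characteristic generators
    # 3) Reconstruction: least-squares fit of every generator as a linear
    #    combination of the characteristic generators' responses.
    B = Y[chars]                                         # (n_char, n_samples)
    W, *_ = np.linalg.lstsq(B.T, Y.T, rcond=None)
    Y_hat = (B.T @ W).T                                  # reduced-model approximation
    return chars, W, Y_hat
```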
Refatul Haq, Muhammad; Kim, Youngkyu; Kim, Jun; Oh, Pyoung-hwa; Ju, Jonghyun; Kim, Seok-Min; Lim, Jiseok
2017-01-01
This study reports a cost-effective method of replicating glass microfluidic chips using a vitreous carbon (VC) stamp. A glass replica with the required microfluidic microstructures was synthesized without etching. The replication method uses a VC stamp fabricated by combining thermal replication, using a furan-based thermally-curable polymer, with carbonization. To test the feasibility of this method, a flow-focusing droplet generator with flow-focusing and channel widths of 50 µm and 100 µm, respectively, was successfully fabricated in a soda-lime glass substrate. Deviation between the geometries of the initial shape and the vitreous carbon mold occurred because of shrinkage during the carbonization process; however, this effect could be predicted and compensated for. Finally, the monodispersity of the droplets generated by the fabricated microfluidic device was evaluated. PMID:29286341
Optimal placement and sizing of wind / solar based DG sources in distribution system
NASA Astrophysics Data System (ADS)
Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng
2017-06-01
Proper placement and sizing of Distributed Generation (DG) in a distribution system can obtain the maximum potential benefits. This paper proposes a quantum particle swarm optimization (QPSO) based wind turbine generation unit (WTGU) and photovoltaic (PV) array placement and sizing approach for real power loss reduction and voltage stability improvement of a distribution system. Performance models of wind and solar generation systems are described and classified into PQ/PQ(V)/PI type models for power flow analysis. Because the placement of WTGU and PV based DGs in a distribution system is geographically restricted, the optimal area and the DG capacity limits of each bus in the candidate area need to be set before optimization; an area optimization method is therefore proposed. The method has been tested on the IEEE 33-bus radial distribution system to demonstrate its performance and effectiveness.
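The optimizer itself is standard. Below is a generic quantum-behaved PSO loop; for the DG problem the decision vector would encode bus locations and WTGU/PV capacities within the pre-set area limits, and the loss function would run a power flow returning real power loss plus a voltage-stability penalty — neither of which is reproduced here.

```python
import numpy as np

def qpso(loss, lb, ub, n_particles=30, iters=200, beta=0.75, seed=0):
    """Generic quantum-behaved PSO minimizer (sketch)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.uniform(lb, ub, (n_particles, lb.size))
    pbest, pval = x.copy(), np.array([loss(p) for p in x])
    for _ in range(iters):
        gbest = pbest[pval.argmin()]
        mbest = pbest.mean(axis=0)                      # mean best position
        phi = rng.uniform(size=x.shape)
        p = phi * pbest + (1 - phi) * gbest             # local attractors
        u = rng.uniform(size=x.shape)
        sign = np.where(rng.uniform(size=x.shape) < 0.5, -1.0, 1.0)
        x = np.clip(p + sign * beta * np.abs(mbest - x) * np.log(1/(u + 1e-12)),
                    lb, ub)
        val = np.array([loss(q) for q in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
    return pbest[pval.argmin()], pval.min()

# Toy usage: a quadratic bowl standing in for the loss/stability objective.
sol, best = qpso(lambda z: np.sum((z - 3)**2), lb=[0, 0], ub=[10, 10])
```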
ERIC Educational Resources Information Center
ALTMANN, BERTHOLD
After a brief summary of the test program (described more fully in LI 000 318), the statistical results, tabulated as overall "ABC (Approach by Concept)-relevance ratios" and "ABC-recall figures," are presented and reviewed. An abstract model developed in accordance with Max Weber's "Idealtypus" ("Die Objektivitaet…
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft®'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of the imaging sensor, data collection, and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479
Effect of accuracy of wind power prediction on power system operator
NASA Technical Reports Server (NTRS)
Schlueter, R. A.; Sigari, G.; Costi, T.
1985-01-01
This research project proposed a modified unit commitment that schedules connection and disconnection of generating units in response to load. A modified generation control is also proposed that controls steam units under automatic generation control, fast-responding diesels, gas turbines and hydro units under a feedforward control, and wind turbine array output under a closed-loop array control. This modified generation control and unit commitment require prediction of the trend wind power variation one hour ahead and prediction of the error in this trend wind power prediction one half hour ahead. An improved meter for predicting trend wind speed variation was developed. Methods for accurately simulating the wind array power from a limited number of wind speed prediction records were developed. Finally, two methods for predicting the error in the trend wind power prediction were developed. This research provides a foundation for testing and evaluating the modified unit commitment and generation control that was developed to maintain operating reliability at a greatly reduced overall production cost for utilities with wind generation capacity.
Aeroservoelastic Modeling of Body Freedom Flutter for Control System Design
NASA Technical Reports Server (NTRS)
Ouellette, Jeffrey
2017-01-01
The modeling method communicated here is being used by NASA in ongoing collaborations with groups interested in the X-56A flight test program. It covers model generation for body freedom flutter, addressing issues in state consistency, low-frequency dynamics, and unsteady aerodynamics. The approach was applied to the X-56A MUTT and compared to flight test data.
40 CFR 53.64 - Test procedure: Static fractionator test.
Code of Federal Regulations, 2012 CFR
2012-07-01
... particles of a given size reaching the sampler filter to the mass concentration of particles of the same.... Methods for generating aerosols shall be identical to those prescribed in § 53.62(c)(2). (2) Particle... (with or without an in-line mixing chamber). Validation particle size and quality shall be conducted at...
40 CFR 53.64 - Test procedure: Static fractionator test.
Code of Federal Regulations, 2013 CFR
2013-07-01
... particles of a given size reaching the sampler filter to the mass concentration of particles of the same.... Methods for generating aerosols shall be identical to those prescribed in § 53.62(c)(2). (2) Particle... (with or without an in-line mixing chamber). Validation particle size and quality shall be conducted at...
40 CFR 53.64 - Test procedure: Static fractionator test.
Code of Federal Regulations, 2014 CFR
2014-07-01
... particles of a given size reaching the sampler filter to the mass concentration of particles of the same.... Methods for generating aerosols shall be identical to those prescribed in § 53.62(c)(2). (2) Particle... (with or without an in-line mixing chamber). Validation particle size and quality shall be conducted at...
Tribological Technology. Volume II.
1982-09-01
rolling bearings, gears, and sliding bearings produce distinctive particles. An atlas of such particles is available [29]. Atlases of characteristic... Gravitational methods cover both sedimentation and elutriation techniques. Inertial-type separators perform cyclonic classification. Ferrography is the... generated after each size exposure of contaminant. This can be done today using Ferrography. Standard contaminant sensitivity tests require test
A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...
40 CFR 52.128 - Rule for unpaved parking lots, unpaved roads and vacant lots.
Code of Federal Regulations, 2014 CFR
2014-07-01
.... Research Triangle Park, N.C. May 1982. 3. “Method 9—Visible Determination of the Opacity of Emissions from... other dust generating operations which have been terminated for over eight months. (3) The test methods... than or equal to a nominal 10 micrometers as measured by reference or equivalent methods that meet the...
40 CFR 52.128 - Rule for unpaved parking lots, unpaved roads and vacant lots.
Code of Federal Regulations, 2012 CFR
2012-07-01
.... Research Triangle Park, N.C. May 1982. 3. “Method 9—Visible Determination of the Opacity of Emissions from... other dust generating operations which have been terminated for over eight months. (3) The test methods... than or equal to a nominal 10 micrometers as measured by reference or equivalent methods that meet the...
40 CFR 52.128 - Rule for unpaved parking lots, unpaved roads and vacant lots.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... Research Triangle Park, N.C. May 1982. 3. “Method 9—Visible Determination of the Opacity of Emissions from... other dust generating operations which have been terminated for over eight months. (3) The test methods... than or equal to a nominal 10 micrometers as measured by reference or equivalent methods that meet the...
Generation of a monodispersed aerosol
NASA Technical Reports Server (NTRS)
Schenck, H.; Mikasa, M.; Devicariis, R.
1974-01-01
The identity and laboratory test methods for the generation of a monodispersed aerosol are reported, subject to the following constraints and parameters: (1) size distribution; (2) specific gravity; (3) scattering properties; (4) costs; (5) production. The procedure called for the collection of information from the literature, commercially available products, and experts working in the field. The following topics were investigated: (1) aerosols; (2) air pollution -- analysis; (3) atomizers; (4) dispersion; (5) particles -- optics, size analysis; (6) smoke -- generators, density measurements; (7) sprays; (8) wind tunnels -- visualization.
Fracture Test Methods for Plastically Responding COPV Liners
NASA Technical Reports Server (NTRS)
Dawicke, David S.; Lewis, Joseph C.
2009-01-01
An experimental procedure is described for evaluating the validity of using uniaxial tests to provide a conservative bound on the fatigue crack growth rate behavior of small cracks in biaxially loaded Composite Overwrapped Pressure Vessel (COPV) liners. The experimental procedure included the use of a laser notch to quickly generate small surface fatigue cracks with the desired size and aspect ratios. An out-of-plane constraint system was designed to allow fully reversed, fully plastic testing of thin-sheet uniaxial coupons. Finally, a method was developed to initiate small cracks in the liners of COPVs.
NASA Technical Reports Server (NTRS)
Rouff, Christopher A. (Inventor); Sterritt, Roy (Inventor); Truszkowski, Walter F. (Inventor); Hinchey, Michael G. (Inventor); Gracanin, Denis (Inventor); Rash, James L. (Inventor)
2011-01-01
Described herein is a method that produces fully (mathematically) tractable development of policies for autonomic systems from requirements through to code generation. This method is illustrated through an example showing how user-formulated policies can be translated into a formal model which can then be converted to code. The requirements-based programming method described provides faster, higher-quality development and maintenance of autonomic systems based on user formulation of policies. Further, the systems, methods and apparatus described herein provide a way of analyzing policies for autonomic systems and facilitate the generation of provably correct implementations automatically, which in turn provides reduced development time, reduced testing requirements, guarantees of correctness of the implementation with respect to the policies specified at the outset, and a higher degree of confidence that the policies are both complete and reasonable. The ability to specify the policy for the management of a system and then automatically generate an equivalent implementation greatly improves the quality of software and the survivability of future missions, in particular when the system will operate untended in very remote environments, and greatly reduces development lead times and costs.
Yucel, Deniz Sanliyuksel; Baba, Alper
2016-08-01
The Etili neighborhood in Can County (northwestern Turkey) has large reserves of coal and has been the site of many small- to medium-scale mining operations since the 1980s. Some of these have ceased working while others continue to operate. Once activities cease, the mining facilities and fields are usually abandoned without rehabilitation. The most significant environmental problem is acid mine drainage (AMD). This study was carried out to determine the acid generation potential of various lithological units in the Etili coal mine using static test methods. Seventeen samples were selected from areas with high acidic water concentrations: from different alteration zones belonging to volcanic rocks, from sedimentary rocks, and from coals and mine wastes. Static tests (paste pH, standard acid-base accounting, and net acid generation tests) were performed on these samples. The consistency of the static test results showed that oxidation of sulfide minerals, especially pyrite, which is widely found not only in the alteration zones of volcanic rocks but also in the coals and mine wastes, is the main factor controlling the generation of AMD in this mine. Lack of carbonate minerals in the region also increases the occurrence of AMD.
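The arithmetic behind acid-base accounting static tests is simple enough to show. The sketch below computes the net acid producing potential (NAPP) from total sulfur and acid neutralizing capacity (ANC); the 30.6 factor is the usual stoichiometric conversion assuming all sulfur occurs as pyrite, and the sample values are invented, not taken from this study.

```python
def napp(total_s_percent, anc):
    """NAPP (kg H2SO4/t) = MPA - ANC, with MPA = 30.6 * total S (%).

    MPA is the maximum potential acidity; a positive NAPP flags the sample
    as potentially acid forming.
    """
    mpa = 30.6 * total_s_percent
    return mpa - anc

# Illustrative (invented) sample values: (total S %, ANC in kg H2SO4/t).
samples = {"altered volcanic": (4.2, 5.0), "coal": (1.8, 2.0), "mine waste": (2.5, 1.0)}
for name, (s, anc) in samples.items():
    print(f"{name:>16s}: NAPP = {napp(s, anc):+.1f} kg H2SO4/t")
```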
Automated encoding of clinical documents based on natural language processing.
Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George
2004-01-01
The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI .72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.
Nonlinear model for offline correction of pulmonary waveform generators.
Reynolds, Jeffrey S; Stemple, Kimberly J; Petsko, Raymond A; Ebeling, Thomas R; Frazer, David G
2002-12-01
Pulmonary waveform generators consisting of motor-driven piston pumps are frequently used to test respiratory-function equipment such as spirometers and peak expiratory flow (PEF) meters. Gas compression within these generators can produce significant distortion of the output flow-time profile. A nonlinear model of the generator was developed along with a method to compensate for gas compression when testing pulmonary function equipment. The model and correction procedure were tested on an Assess Full Range PEF meter and a Micro DiaryCard PEF meter. The tests were performed using the 26 American Thoracic Society standard flow-time waveforms as the target flow profiles. Without correction, the pump loaded with the higher resistance Assess meter resulted in ten waveforms having a mean square error (MSE) higher than 0.001 L²/s². Correction of the pump for these ten waveforms resulted in a mean decrease in MSE of 87.0%. When loaded with the Micro DiaryCard meter, the uncorrected pump outputs included six waveforms with MSE higher than 0.001 L²/s². Pump corrections for these six waveforms resulted in a mean decrease in MSE of 58.4%.
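A simplified version of the compensation idea: treating the cylinder gas as an adiabatic ideal gas, the gas compliance absorbs part of the piston flow, so the commanded piston flow must be the target flow plus a compression term. This sketch is a linearized stand-in for the paper's full nonlinear model, with all parameter values assumed.

```python
import numpy as np

def corrected_piston_flow(q_target, p_meas, t, V0=2.0e-3, gamma=1.4):
    """Offline gas-compression correction for a piston-pump waveform generator.

    For an adiabatic ideal gas, the delivered flow is approximately
        q_out = q_piston - (V / (gamma * p)) * dp/dt,
    so the piston is driven with the target flow plus the compression term.

    q_target: desired output flow (m^3/s); p_meas: cylinder pressure (Pa, abs);
    t: time stamps (s); V0: mean gas volume in the cylinder (m^3).
    """
    dpdt = np.gradient(p_meas, t)
    return q_target + (V0 / (gamma * p_meas)) * dpdt
```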
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
In this report, the scope of the tests, the method of analysis, the results, and the conclusions are discussed. The first test indicated that the requirements generated by the Standard procedures and formulae appear to yield reasonable results, although some of the cost data provided as defaults in the Standard should be reevaluated. The second test provided experience that was useful in modifying the points compliance format, but did not uncover any procedural issues that would lead to unreasonable results. These conclusions are based on analysis using the Automated Residential Energy Standard (ARES) computer program, developed to simplify the process of standards generation.
Determination of Thermoelectric Module Efficiency A Survey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hsin; McCarty, Robin; Salvador, James R.
2014-01-01
The development of thermoelectrics (TE) for energy conversion is in the transition phase from laboratory research to device development. There is an increasing demand to accurately determine the module efficiency, especially for the power generation mode. For many thermoelectrics, the figure of merit, ZT, of the material sometimes cannot be fully realized at the device level. Reliable efficiency testing of thermoelectric modules is important to assess the device ZT and provide the end-users with realistic values on how much power can be generated under specific conditions. We conducted a general survey of efficiency testing devices and their performance. The results indicated the lack of industry standards and test procedures. This study included a commercial test system and several laboratory systems. Most systems are based on the heat flow meter method and some are based on the Harman method. They are usually reproducible in evaluating thermoelectric modules. However, cross-checking among different systems often showed large errors that are likely caused by unaccounted heat loss and thermal resistance. Efficiency testing is an important area for the thermoelectric community to focus on. A follow-up international standardization effort is planned.
Parametric vs. non-parametric daily weather generator: validation and comparison
NASA Astrophysics Data System (ADS)
Dubrovsky, Martin
2016-04-01
As the climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. Stochastic weather generators are among the most popular downscaling methods, capable of producing realistic (observation-like) meteorological inputs for agricultural, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. Among their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run at various time steps and for multiple weather variables (the generators reproduce the correlations among variables), and (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution compares two stochastic daily weather generators in terms of their ability to reproduce various features of daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a 1st-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on a nearest-neighbours resampling technique, making no assumptions about the distribution of the variables being generated. Various settings of both weather generators are assumed in the present validation tests. The generators are validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maxima of duration of hot/cold/dry/wet spells); and (b) selected validation statistics developed within the frame of the VALUE project. The tests are based on observational weather series from several European stations available from the ECA&D database.
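The parametric (M&Rfi-style) generator chain described above — a Markov chain for precipitation occurrence, Gamma-distributed wet-day amounts, and an AR(1) process for the remaining variables — can be sketched in a few lines. Parameter values here are illustrative only; a non-parametric generator like GoMeZ would instead resample nearest-neighbour days from the observed record.

```python
import numpy as np

def daily_weather(n_days, p_wd=0.3, p_ww=0.6, gam_shape=0.8, gam_scale=6.0,
                  t_mean=10.0, t_ar=0.7, t_sd=3.0, seed=0):
    """Minimal parametric daily weather generator (illustrative parameters).

    p_wd / p_ww: P(wet | previous day dry / wet) -- two-state Markov chain;
    wet-day precipitation amounts ~ Gamma; temperature follows an AR(1).
    """
    rng = np.random.default_rng(seed)
    wet = np.zeros(n_days, bool)
    for i in range(1, n_days):
        wet[i] = rng.uniform() < (p_ww if wet[i-1] else p_wd)
    precip = np.where(wet, rng.gamma(gam_shape, gam_scale, n_days), 0.0)
    temp = np.empty(n_days)
    temp[0] = t_mean
    for i in range(1, n_days):
        temp[i] = t_mean + t_ar*(temp[i-1] - t_mean) + rng.normal(0, t_sd)
    return precip, temp

precip, temp = daily_weather(365 * 30)     # a 30-year synthetic series
```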
Zhang, Z; Jewett, D L
1994-01-01
Due to model misspecification, currently-used Dipole Source Localization (DSL) methods may contain Multiple-Generator Errors (MulGenErrs) when fitting simultaneously-active dipoles. The size of the MulGenErr is a function of both the model used, and the dipole parameters, including the dipoles' waveforms (time-varying magnitudes). For a given fitting model, by examining the variation of the MulGenErrs (or the fit parameters) under different waveforms for the same generating-dipoles, the accuracy of the fitting model for this set of dipoles can be determined. This method of testing model misspecification can be applied to evoked potential maps even when the parameters of the generating-dipoles are unknown. The dipole parameters fitted in a model should only be accepted if the model can be shown to be sufficiently accurate.
Requirements-Driven Log Analysis Extended Abstract
NASA Technical Reports Server (NTRS)
Havelund, Klaus
2012-01-01
Imagine that you are tasked to help a project improve their testing effort. In a realistic scenario it will quickly become clear that having an impact is difficult. First of all, it will likely be a challenge to suggest an alternative approach which is significantly more automated and/or more effective than current practice. The reality is that an average software system has a complex input/output behavior. An automated testing approach will have to auto-generate test cases, each being a pair (i, o) consisting of a test input i and an oracle o. The test input i has to be somewhat meaningful, and the oracle o can be very complicated to compute. Second, even in cases where some testing technology has been developed that might improve current practice, it is likely difficult to completely change the current behavior of the testing team unless the technique is obviously superior and does everything already done by existing technology. So is there an easier way to incorporate formal methods-based approaches than a full-fledged test revolution? Fortunately, the answer is affirmative. A relatively simple approach is to benefit from possibly already existing logging infrastructure, which after all is part of most systems put in production. A log is a sequence of events, generated by special log recording statements, most often manually inserted in the code by the programmers. An event can be considered as a data record: a mapping from field names to values. We can analyze such a log using formal methods, for example checking it against a formal specification. This separates running the system from analyzing its behavior. It is not meant as an alternative to testing since it does not address the important input generation problem. However, it offers a solution which testing teams might accept since it has low impact on the existing process. A single person might be assigned to perform such log analysis, compared to the entire testing team changing behavior.
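The following sketch illustrates the kind of log analysis described above: events are plain mappings from field names to values, and a simple checker verifies a temporal property over the sequence. The event types and the "open before write" rule are hypothetical; they stand in for whatever formal specification a project would actually check.

```python
# Each event is a mapping from field names to values, as in the abstract.
log = [
    {"type": "open", "file": "a.txt"},
    {"type": "write", "file": "a.txt"},
    {"type": "close", "file": "a.txt"},
    {"type": "write", "file": "b.txt"},   # violates the rule: write before open
]

def check_open_before_write(events):
    """Report every 'write' event whose file was not previously opened
    (or was already closed) at that point in the log."""
    open_files, violations = set(), []
    for i, e in enumerate(events):
        if e["type"] == "open":
            open_files.add(e["file"])
        elif e["type"] == "close":
            open_files.discard(e["file"])
        elif e["type"] == "write" and e["file"] not in open_files:
            violations.append((i, e))
    return violations

print(check_open_before_write(log))   # -> [(3, {'type': 'write', 'file': 'b.txt'})]
```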
Steimke, John L.; Steeper, Timothy J.; Colon-Mercado, Hector R.; ...
2015-09-02
The hybrid sulfur (HyS) cycle is being developed as a technology to generate hydrogen by splitting water, using heat and electrical power from a nuclear or solar power plant. A key component is the SO2-depolarized electrolysis (SDE) cell, which reacts SO2 and water to form hydrogen and sulfuric acid. SDE could also be used in once-through operation to consume SO2 and generate hydrogen and sulfuric acid for sale. A proton exchange membrane (PEM) SDE cell based on a PEM fuel cell design was fabricated and tested. Measured cell potential as a function of anolyte pressure and flow rate, sulfuric acid concentration, and cell temperature is presented for this cell. Sulfur accumulation was observed inside the cell, which could have been a serious impediment to further development. A method to prevent sulfur formation was subsequently developed. This was made possible by a testing facility that allowed unattended operation for extended periods.
Methods in virus diagnostics: from ELISA to next generation sequencing.
Boonham, Neil; Kreuze, Jan; Winter, Stephan; van der Vlugt, René; Bergervoet, Jan; Tomlinson, Jenny; Mumford, Rick
2014-06-24
Despite the seemingly continuous development of newer and ever more elaborate methods for detecting and identifying viruses, very few of these new methods get adopted for routine use in testing laboratories, often despite the many and varied claimed advantages they possess. Understanding why the rate of uptake of new technologies is so low requires a strong understanding of what makes a good routine diagnostic tool to begin with. This can be done by looking at the two most successfully established plant virus detection methods: the enzyme-linked immunosorbent assay (ELISA) and the more recently introduced real-time polymerase chain reaction (PCR). By examining the characteristics of this pair of technologies, it becomes clear that they share many benefits, such as an industry-standard format and high levels of repeatability and reproducibility. These combine to make methods that are accessible to testing labs, easy to establish and robust in their use, even with new and inexperienced users. Hence, to ensure the establishment of new techniques it is necessary not only to provide benefits not found with ELISA or real-time PCR, but also to provide a platform that is easy to establish and use. In plant virus diagnostics, recent developments can be clustered into three core areas: (1) techniques that can be performed in the field or in resource-poor locations (e.g., loop-mediated isothermal amplification, LAMP); (2) multiplex methods that are able to detect many viruses in a single test (e.g., Luminex bead arrays); and (3) methods suited to virus discovery (e.g., next generation sequencing, NGS). Field-based methods are not new, with Lateral Flow Devices (LFDs) for virus detection having been available for a number of years now. However, the widespread uptake of this technology remains poor. LAMP does offer significant advantages over LFDs, in terms of sensitivity and generic application, but still faces challenges in terms of establishment. It is likely that the main barrier to the uptake of field-based technologies is behavioural influences, rather than specific concerns about the performance of the technologies themselves. To overcome this, a new relationship will need to develop between centralised testing laboratories offering services and those requiring tests; a relationship which is currently in its infancy. Looking further into the future, virus discovery and multiplex methods seem to converge as NGS becomes ever cheaper, easier to perform, and able to provide high levels of multiplexing without the use of virus-specific reagents. So ultimately the key challenge from a routine testing lab perspective will not be one of investment in platforms (which could even be outsourced to commercial sequencing services) but one of having the skills and expertise to analyse the large datasets generated and to interpret them. In conclusion, only time will tell which of the next generation of methods currently in development will become the routine diagnostics of the future. This will be determined through a combination of factors: while the technology itself will have to offer performance advantages over existing methods in order to supplant them, it is likely to be human factors (e.g., the behaviours of end users, laboratories and policy makers, and the availability of appropriate expertise) that ultimately determine which ones become established. Hence human factors cannot be ignored, and early engagement with diagnostic stakeholders is essential. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.
Evaluating Gene Set Enrichment Analysis Via a Hybrid Data Model
Hua, Jianping; Bittner, Michael L.; Dougherty, Edward R.
2014-01-01
Gene set enrichment analysis (GSA) methods have been widely adopted by biological labs to analyze data and generate hypotheses for validation. Most existing comparison studies focus on whether GSA methods can produce accurate P-values; however, practitioners are often more concerned with whether the methods generate the correct gene-set ranking. Ranking performance is closely related to two critical goals associated with GSA methods: revealing biological themes and ensuring reproducibility, especially for small-sample studies. We have conducted a comprehensive simulation study focusing on the ranking performance of seven representative GSA methods. We overcome the limitation on the availability of real data sets by creating hybrid data models from existing large data sets. To build the data model, we pick a master gene from the data set to form the ground truth and artificially generate the phenotype labels. Multiple hybrid data models can be constructed from one data set, and multiple data sets of smaller sizes can be generated by resampling the original data set. This approach enables us to generate a large batch of data sets to check the ranking performance of GSA methods. Our simulation study reveals that, for the proposed data model, the Q2-type GSA methods in general perform better than other GSA methods, and the global test has the most robust results. The properties of a data set play a critical role in performance: for data sets with highly connected genes, all GSA methods suffer significantly in performance. PMID:24558298
Dynamic Loads Generation for Multi-Point Vibration Excitation Problems
NASA Technical Reports Server (NTRS)
Shen, Lawrence
2011-01-01
A random-force method has been developed to predict dynamic loads produced by rocket-engine random vibrations for new rocket-engine designs. The method develops random forces at multiple excitation points based on random vibration environments scaled from accelerometer data obtained during hot-fire tests of existing rocket engines. It applies random forces to the model and creates the expected dynamic response in a manner that simulates the way an operating engine applies self-generated random vibration forces (random pressure acting on an area), producing the responses that are measured with accelerometers. This innovation includes the methodology (implementation sequence), the computer code, two methods to generate the random-force vibration spectra, and two methods to reduce some of the inherent conservatism in the dynamic loads. The methodology generates the random-force spectra at excitation nodes without requiring artificial boundary conditions in a finite element model. More accurate random dynamic loads than those predicted by current industry methods can then be generated using the random-force spectra. The scaling method used to develop the initial power spectral density (PSD) environments for deriving the random forces for the rocket-engine case is based on the Barrett Criteria developed at Marshall Space Flight Center in 1963. This approach can be applied in the aerospace, automotive, and other industries to obtain reliable dynamic loads and responses from a finite element model for any structure subject to multi-point random vibration excitations.
How to test validity in orthodontic research: a mixed dentition analysis example.
Donatelli, Richard E; Lee, Shin-Jae
2015-02-01
The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
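A minimal sketch of the leave-1-out cross-validation procedure favoured above, applied to an ordinary least-squares prediction model. The mixed-dentition-style data are simulated for illustration; only the validation logic reflects the method described.

```python
import numpy as np

def loo_cv_mean_abs_error(X, y):
    """Leave-one-out cross-validation for an ordinary least-squares model:
    each observation is predicted from a model fitted to all the others."""
    n = len(y)
    errors = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        # Fit on everything except observation i, then predict the held-out case.
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errors[i] = abs(X[i] @ coef - y[i])
    return errors.mean()

# Simulated mixed-dentition-style data: predict an unerupted tooth dimension
# from the sum of erupted incisor widths (intercept plus one predictor).
rng = np.random.default_rng(0)
incisors = rng.normal(23.0, 1.5, size=40)
target = 10.0 + 0.45 * incisors + rng.normal(0.0, 0.5, size=40)
X = np.column_stack([np.ones_like(incisors), incisors])
print(loo_cv_mean_abs_error(X, target))
```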
Improved Conflict Detection for Reducing Operational Errors in Air Traffic Control
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Erzberger, Hainz
2003-01-01
An operational error is an incident in which an air traffic controller allows the separation between two aircraft to fall below the minimum separation standard. The rates of such errors in the US have increased significantly over the past few years. This paper proposes new detection methods that can help correct this trend by improving on the performance of Conflict Alert, the existing software in the Host Computer System that is intended to detect and warn controllers of imminent conflicts. In addition to the usual trajectory based on the flight plan, a "dead-reckoning" trajectory (current velocity projection) is also generated for each aircraft and checked for conflicts. Filters for reducing common types of false alerts were implemented. The new detection methods were tested in three different ways. First, a simple flightpath command language was developed to generate precisely controlled encounters for the purpose of testing the detection software. Second, written reports and tracking data were obtained for actual operational errors that occurred in the field, and these were "replayed" to test the new detection algorithms. Finally, the detection methods were used to shadow live traffic, and performance was analyzed, particularly with regard to the false-alert rate. The results indicate that the new detection methods can provide timely warnings of imminent conflicts more consistently than Conflict Alert.
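A toy sketch of the dead-reckoning check described above: each aircraft's current velocity is projected forward, and the pairwise separation is compared against a minimum standard. The coordinates, units, and thresholds are illustrative assumptions, not those of the Host Computer System.

```python
import numpy as np

def dead_reckoning_conflict(p1, v1, p2, v2, horizon_s, min_sep):
    """Project both aircraft along their current velocities and return the first
    time (seconds) at which separation falls below min_sep, or None."""
    for t in np.arange(0.0, horizon_s, 1.0):
        d = (p1 + v1 * t) - (p2 + v2 * t)
        if np.linalg.norm(d) < min_sep:
            return t
    return None

# Positions in nautical miles, velocities in nmi/s; 5 nmi separation standard.
p1, v1 = np.array([0.0, 0.0]), np.array([0.12, 0.0])
p2, v2 = np.array([40.0, 3.0]), np.array([-0.12, 0.0])
print(dead_reckoning_conflict(p1, v1, p2, v2, horizon_s=300.0, min_sep=5.0))
```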
Vertical decomposition with Genetic Algorithm for Multiple Sequence Alignment
2011-01-01
Background: Many bioinformatics studies begin with a multiple sequence alignment as the foundation for their research. This is because multiple sequence alignment can be a useful technique for studying molecular evolution and analyzing sequence structure relationships.
Results: In this paper, we have proposed a Vertical Decomposition with Genetic Algorithm (VDGA) for Multiple Sequence Alignment (MSA). In VDGA, we divide the sequences vertically into two or more subsequences, and then solve them individually using a guide tree approach. Finally, we combine all the subsequences to generate a new multiple sequence alignment. This technique is applied on the solutions of the initial generation and of each child generation within VDGA. We have used two mechanisms to generate an initial population in this research: the first mechanism is to generate guide trees with randomly selected sequences and the second is shuffling the sequences inside such trees. Two different genetic operators have been implemented with VDGA. To test the performance of our algorithm, we have compared it with existing well-known methods, namely PRRP, CLUSTALX, DIALIGN, HMMT, SB_PIMA, ML_PIMA, MULTALIGN, and PILEUP8, and also other methods based on Genetic Algorithms (GA), such as SAGA, MSA-GA and RBT-GA, by solving a number of benchmark datasets from BAliBase 2.0.
Conclusions: The experimental results showed that the VDGA with three vertical divisions was the most successful variant for most of the test cases in comparison to other divisions considered with VDGA. The experimental results also confirmed that VDGA outperformed the other methods considered in this research. PMID:21867510
Measurement of Capillary Radius and Contact Angle within Porous Media.
Ravi, Saitej; Dharmarajan, Ramanathan; Moghaddam, Saeed
2015-12-01
The pore radius (i.e., capillary radius) and contact angle determine the capillary pressure generated in a porous medium. The most common method to determine these two parameters is through measurement of the capillary pressure generated by a reference liquid (i.e., a liquid with near-zero contact angle) and a test liquid. The rate of rise technique, commonly used to determine the capillary pressure, results in significant uncertainties. In this study, we utilize a recently developed technique for independently measuring the capillary pressure and permeability to determine the equivalent minimum capillary radii and contact angle of water within micropillar wick structures. In this method, the experimentally measured dryout threshold of a wick structure at different wicking lengths is fit to Darcy's law to extract the maximum capillary pressure generated by the test liquid. The equivalent minimum capillary radii of different wick geometries are determined by measuring the maximum capillary pressures generated using n-hexane as the working fluid. It is found that the equivalent minimum capillary radius is dependent on the diameter of pillars and the spacing between pillars. The equivalent capillary radii of micropillar wicks determined using the new method are found to be up to 7 times greater than the current geometry-based first-order estimates. The contact angle subtended by water at the walls of the micropillars is determined by measuring the capillary pressure generated by water within the arrays and the measured capillary radii for the different geometries. This mean contact angle of water is determined to be 54.7°.
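The reference-liquid logic described above reduces to two applications of the Young-Laplace relation, sketched below. The surface tensions are standard handbook values; the capillary pressures are invented so that the example lands near the contact angle reported in the abstract.

```python
import math

def capillary_radius(p_ref, sigma_ref):
    """Young-Laplace with cos(theta) ~ 1 for a perfectly wetting reference liquid."""
    return 2.0 * sigma_ref / p_ref

def contact_angle_deg(p_test, sigma_test, radius):
    """Invert Young-Laplace for the test liquid, using the radius found above."""
    return math.degrees(math.acos(p_test * radius / (2.0 * sigma_test)))

# Handbook surface tensions (N/m); the capillary pressures (Pa) are illustrative.
sigma_hexane, sigma_water = 0.0184, 0.0728
p_hexane, p_water = 3000.0, 6860.0

r = capillary_radius(p_hexane, sigma_hexane)      # equivalent minimum capillary radius
theta = contact_angle_deg(p_water, sigma_water, r)
print(f"r = {r * 1e6:.1f} um, theta = {theta:.1f} deg")   # ~12.3 um, ~54.7 deg
```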
HOMAR: A computer code for generating homotopic grids using algebraic relations: User's manual
NASA Technical Reports Server (NTRS)
Moitra, Anutosh
1989-01-01
A computer code for fast automatic generation of quasi-three-dimensional grid systems for aerospace configurations is described. The code employs a homotopic method to algebraically generate two-dimensional grids in cross-sectional planes, which are stacked to produce a three-dimensional grid system. Implementation of the algebraic equivalents of the homotopic relations for generating body geometries and grids are explained. Procedures for controlling grid orthogonality and distortion are described. Test cases with description and specification of inputs are presented in detail. The FORTRAN computer program and notes on implementation and use are included.
A new method for testing the scale-factor performance of fiber optical gyroscope
NASA Astrophysics Data System (ADS)
Zhao, Zhengxin; Yu, Haicheng; Li, Jing; Li, Chao; Shi, Haiyang; Zhang, Bingxin
2015-10-01
The fiber optic gyro (FOG) is a kind of solid-state optical gyroscope with good environmental adaptability, which has been widely used in national defense, aviation, aerospace and other civilian areas. In some applications, a FOG will experience environmental conditions such as vacuum, radiation and vibration, and the scale-factor performance is an important accuracy indicator. However, the scale-factor performance of a FOG under these environmental conditions is difficult to test using conventional methods, as a turntable cannot operate under them. Based on the phenomenon that the physical effect produced in a FOG by a sawtooth voltage signal under static conditions is consistent with the physical effect produced by a turntable in uniform rotation, a new method for testing the scale-factor performance of a FOG without a turntable is proposed in this paper. In this method, the scale-factor test system is constituted by an external operational amplifier circuit and a FOG in which the modulation signal and the Y waveguide are disconnected. The external operational amplifier circuit is used to superimpose the externally generated sawtooth voltage signal and the modulation signal of the FOG, and to apply the superimposed signal to the Y waveguide of the FOG. The test system can produce different equivalent angular velocities by changing the period of the sawtooth signal during the scale-factor performance test. In this paper, the system model of a FOG superimposed with an externally generated sawtooth signal is analyzed, and it is concluded that the equivalent input angular velocity produced by the sawtooth voltage signal is consistent with the input angular velocity produced by a turntable. The relationship between the equivalent angular velocity and parameters such as the sawtooth period is presented, and a correction method for the equivalent angular velocity is also presented, derived by analyzing the influence of each parameter error on the equivalent angular velocity. A comparative experiment between the method proposed in this paper and turntable calibration was conducted, and the scale-factor performance test results of the same FOG using the two methods were consistent. With the proposed method, the input angular velocity is the equivalent effect produced by a sawtooth voltage signal and no turntable is needed to produce mechanical rotation, so the method can be used to test FOG performance under environmental conditions in which a turntable cannot operate.
From empirical data to time-inhomogeneous continuous Markov processes.
Lencastre, Pedro; Raischel, Frank; Rogers, Tim; Lind, Pedro G
2016-03-01
We present an approach for testing for the existence of continuous generators of discrete stochastic transition matrices. Typically, existing methods to ascertain the existence of continuous Markov processes are based on the assumption that only time-homogeneous generators exist. Here, a systematic extension to time inhomogeneity is presented, based on new mathematical propositions incorporating necessary and sufficient conditions, which are then implemented computationally and applied to numerical data. A discussion concerning the bridge between rigorous mathematical results on the existence of generators and their computational implementation is presented. Our detection algorithm proves effective in more than 60% of tested matrices, typically 80% to 90%, and for those an estimate of the (non-homogeneous) generator matrix follows. We also solve the embedding problem analytically for the particular case of three-dimensional circulant matrices. Finally, possible applications of our framework to problems in different fields are briefly discussed.
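For the time-homogeneous baseline case, the existence test amounts to taking a matrix logarithm and checking the generator conditions, as sketched below; the paper's contribution is the extension of such tests to time-inhomogeneous processes. The example matrix is arbitrary.

```python
import numpy as np
from scipy.linalg import logm

def has_markov_generator(P, tol=1e-8):
    """Check whether a stochastic matrix P admits a valid generator Q = log(P):
    Q real, rows summing to zero, off-diagonal entries non-negative."""
    Q = logm(P)
    if np.max(np.abs(np.imag(Q))) > tol:
        return False, None
    Q = np.real(Q)
    off_diag = Q[~np.eye(len(Q), dtype=bool)]
    ok = np.all(off_diag >= -tol) and np.allclose(Q.sum(axis=1), 0.0, atol=1e-6)
    return ok, Q

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
ok, Q = has_markov_generator(P)
print(ok)   # True: this matrix is embeddable in a continuous-time chain
print(Q)    # the (time-homogeneous) generator estimate
```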
NASA Astrophysics Data System (ADS)
Guerra, Jorge; Ullrich, Paul
2016-04-01
Tempest is a next-generation global climate and weather simulation platform designed to allow experimentation with numerical methods for a wide range of spatial resolutions. The atmospheric fluid equations are discretized by continuous / discontinuous finite elements in the horizontal and by a staggered nodal finite element method (SNFEM) in the vertical, coupled with implicit/explicit time integration. At horizontal resolutions below 10km, many important questions remain on optimal techniques for solving the fluid equations. We present results from a suite of idealized test cases to validate the performance of the SNFEM applied in the vertical with an emphasis on flow features and dynamic behavior. Internal gravity wave, mountain wave, convective bubble, and Cartesian baroclinic instability tests will be shown at various vertical orders of accuracy and compared with known results.
Reich, Christian G; Ryan, Patrick B; Schuemie, Martijn J
2013-10-01
A systematic risk identification system has the potential to test marketed drugs for important Health Outcomes of Interest (HOI). For each HOI, multiple definitions are used in the literature, and some of them are validated for certain databases. However, little is known about the effect of different definitions on the ability of methods to estimate their association with medical products. Alternative definitions of HOI were studied for their effect on the performance of analytical methods in observational outcome studies. A set of alternative definitions for three HOI was defined based on literature review and clinical diagnosis guidelines: acute kidney injury, acute liver injury and acute myocardial infarction. The definitions varied by the choice of diagnostic codes and the inclusion of procedure codes and lab values. They were then used to empirically study an array of analytical methods with various analytical choices in four observational healthcare databases. The methods were executed against predefined drug-HOI pairs to generate an effect estimate and standard error for each pair. These test cases included positive controls (active ingredients with evidence to suspect a positive association with the outcome) and negative controls (active ingredients with no evidence to expect an effect on the outcome). Three different performance metrics were used: (i) Area Under the Receiver Operating Characteristic (ROC) curve (AUC) as a measure of a method's ability to distinguish between positive and negative test cases; (ii) a measure of bias estimated from the distribution of observed effect estimates for the negative test pairs, where the true effect can be assumed to be one (no relative risk); and (iii) Minimal Detectable Relative Risk (MDRR) as a measure of whether there is sufficient power to generate effect estimates. For the three outcomes studied, different definitions show comparable ability to differentiate true from false control cases (AUC) and similar bias estimates. However, broader definitions generating larger outcome cohorts allowed more drugs to be studied with sufficient statistical power. Broader definitions are therefore preferred, since they allow studying drugs with lower prevalence than the more precise or narrow definitions while showing comparable performance in differentiating signal from no signal and in effect size estimation.
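Performance metric (i) can be computed directly from the effect estimates for the positive and negative controls; a minimal sketch, with invented relative-risk estimates, follows.

```python
import numpy as np

def auc(positive_estimates, negative_estimates):
    """Mann-Whitney AUC: the probability that a randomly chosen positive control
    receives a larger effect estimate than a randomly chosen negative control."""
    pos = np.asarray(positive_estimates)
    neg = np.asarray(negative_estimates)
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

# Relative-risk estimates for hypothetical drug-HOI test cases.
positives = [2.1, 1.8, 3.0, 1.4]   # drugs with evidence of an association
negatives = [1.0, 0.9, 1.2, 1.1]   # drugs with no expected effect
print(auc(positives, negatives))   # 1.0 -> perfect discrimination in this toy case
```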
The neural component-process architecture of endogenously generated emotion
Kanske, Philipp; Singer, Tania
2017-01-01
Despite the ubiquity of endogenous emotions and their role in both resilience and pathology, the processes supporting their generation are largely unknown. We propose a neural component process model of endogenous generation of emotion (EGE) and test it in two functional magnetic resonance imaging (fMRI) experiments (N = 32/293) where participants generated and regulated positive and negative emotions based on internal representations, using self-chosen generation methods. EGE activated nodes of the salience (SN), default mode (DMN) and frontoparietal control (FPCN) networks. Component processes implemented by these networks were established by investigating their functional associations, activation dynamics and integration. SN activation correlated with subjective affect, with midbrain nodes exclusively distinguishing between positive and negative affect intensity, showing dynamics consistent with the generation of core affect. Dorsomedial DMN, together with ventral anterior insula, formed a pathway supporting multiple generation methods, with activation dynamics suggesting it is involved in the generation of elaborated experiential representations. SN and DMN both coupled to left frontal FPCN, which in turn was associated with both subjective affect and representation formation, consistent with FPCN supporting the executive coordination of the generation process. These results provide a foundation for research into endogenous emotion in normal, pathological and optimal function. PMID:27522089
NASA Shuttle Orbiter Reinforced Carbon Carbon (RCC) Crack Repair Arc-Jet Testing
NASA Technical Reports Server (NTRS)
Clark, ShawnDella; Larin, Max; Rochelle, Bill
2007-01-01
This NASA study demonstrates the capability for testing NOAX-repaired RCC crack models in high-temperature environments representative of the Shuttle Orbiter during reentry. Analysis methods have provided correlation of test data with flight predictions. NOAX repair material for RCC is flown on every STS flight in the event such a repair is needed. Two final test reports are being generated on arc-jet results (both calibration model runs and repaired model runs).
Automated Generation and Assessment of Autonomous Systems Test Cases
NASA Technical Reports Server (NTRS)
Barltrop, Kevin J.; Friberg, Kenneth H.; Horvath, Gregory A.
2008-01-01
This slide presentation reviews issues concerning the verification and validation testing of autonomous spacecraft, which routinely culminates in the exploration of anomalous or faulted mission-like scenarios, using the work involved during the Dawn mission's tests as examples. Prioritizing which scenarios to develop usually comes down to focusing on the most vulnerable areas and ensuring the best return on investment of test time. Rules-of-thumb strategies often come into play, such as injecting applicable anomalies prior to, during, and after system state changes; or creating cases that ensure good safety-net algorithm coverage. Although experience and judgment in test selection can lead to high levels of confidence about the majority of a system's autonomy, it is likely that important test cases are overlooked. One method to fill in potential test coverage gaps is to automatically generate and execute test cases using algorithms that ensure desirable properties about the coverage, for example, generating cases for all possible fault monitors and across all state change boundaries. Of course, the scope of coverage is determined by the test environment capabilities, where a faster-than-real-time, high-fidelity, software-only simulation would allow the broadest coverage. Even real-time systems that can be replicated and run in parallel, and that have reliable set-up and operations features, provide an excellent resource for automated testing. Making detailed predictions for the outcome of such tests can be difficult, and when algorithmic means are employed to produce hundreds or even thousands of cases, generating predicts individually is impractical, and generating predicts with tools requires executable models of the design and environment that themselves require a complete test program. Therefore, evaluating the results of a large number of mission scenario tests poses special challenges. A good approach to this problem is to automatically score the results based on a range of metrics. Although the specific means of scoring depends highly on the application, the use of formal scoring metrics has high value in identifying and prioritizing anomalies, and in presenting an overall picture of the state of the test program. In this paper we present a case study based on automatic generation and assessment of faulted test runs for the Dawn mission, and discuss its role in optimizing the allocation of resources for completing the test program.
NASA Astrophysics Data System (ADS)
Guo, Jun; Lu, Siliang; Zhai, Chao; He, Qingbo
2018-02-01
An automatic bearing fault diagnosis method is proposed for permanent magnet synchronous generators (PMSGs), which are widely installed in wind turbines subjected to low rotating speeds, speed fluctuations, and electrical device noise interferences. The mechanical rotating angle curve is first extracted from the phase current of a PMSG by sequentially applying a series of algorithms. The synchronous sampled vibration signal of the fault bearing is then resampled in the angular domain according to the obtained rotating phase information. Considering that the resampled vibration signal is still overwhelmed by heavy background noise, an adaptive stochastic resonance filter is applied to the resampled signal to enhance the fault indicator and facilitate bearing fault identification. Two types of fault bearings with different fault sizes in a PMSG test rig are subjected to experiments to test the effectiveness of the proposed method. The proposed method is fully automated and thus shows potential for convenient, highly efficient and in situ bearing fault diagnosis for wind turbines subjected to harsh environments.
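The angular-domain resampling step described above can be sketched as interpolation of the synchronously sampled vibration signal at uniform increments of the extracted rotating angle. The simulated speed profile and fault order below are illustrative only.

```python
import numpy as np

def angular_resample(signal, phase, samples_per_rev):
    """Resample a time-domain vibration signal at uniform shaft-angle increments,
    given the instantaneous rotating angle (rad) extracted for each sample."""
    total_angle = phase[-1]
    uniform_angles = np.arange(0.0, total_angle, 2.0 * np.pi / samples_per_rev)
    return np.interp(uniform_angles, phase, signal)

# Simulated fluctuating-speed shaft: the phase grows nonlinearly with time.
fs, duration = 5000, 2.0
t = np.arange(0.0, duration, 1.0 / fs)
phase = 2.0 * np.pi * (10.0 * t + 1.5 * np.sin(0.5 * np.pi * t))   # rad, monotonic
vibration = np.sin(7.0 * phase) + 0.3 * np.random.default_rng(1).normal(size=t.size)
order_domain = angular_resample(vibration, phase, samples_per_rev=128)
```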
Fine-scale patterns of population stratification confound rare variant association tests.
O'Connor, Timothy D; Kiezun, Adam; Bamshad, Michael; Rich, Stephen S; Smith, Joshua D; Turner, Emily; Leal, Suzanne M; Akey, Joshua M
2013-01-01
Advances in next-generation sequencing technology have enabled systematic exploration of the contribution of rare variation to Mendelian and complex diseases. Although it is well known that population stratification can generate spurious associations with common alleles, its impact on rare variant association methods remains poorly understood. Here, we performed exhaustive coalescent simulations with demographic parameters calibrated from exome sequence data to evaluate the performance of nine rare variant association methods in the presence of fine-scale population structure. We find that all methods have an inflated spurious association rate for parameter values that are consistent with levels of differentiation typical of European populations. For example, at a nominal significance level of 5%, some test statistics have a spurious association rate as high as 40%. Finally, we empirically assess the impact of population stratification in a large data set of 4,298 European American exomes. Our results have important implications for the design, analysis, and interpretation of rare variant genome-wide association studies.
Eye Dominance Predicts fMRI Signals in Human Retinotopic Cortex
Mendola, Janine D.; Conner, Ian P.
2009-01-01
There have been many attempts to define eye dominance in normal subjects, but limited consensus exists, and relevant physiological data is scarce. In this study, we consider two different behavioral methods for assignment of eye dominance, and how well they predict fMRI signals evoked by monocular stimulation. Sighting eye dominance was assessed with two standard tests, the Porta Test, and a ‘hole in hand’ variation of the Miles Test. Acuity dominance was tested with a standard eye chart and with a computerized test of grating acuity. We found limited agreement between the sighting and acuity methods for assigning dominance in our individual subjects. We then compared the fMRI response generated by dominant eye stimulation to that generated by non-dominant eye, according to both methods, in 7 normal subjects. The stimulus consisted of a high contrast hemifield stimulus alternating with no stimulus in a blocked paradigm. In separate scans, we used standard techniques to label the borders of visual areas V1, V2, V3, VP, V4, V3A, and MT. These regions of interest (ROIs) were used to analyze each visual area separately. We found that percent change in fMRI BOLD signal was stronger for the dominant eye as defined by the acuity method, and this effect was significant for areas located in the ventral occipital territory (V1v, V2v, VP, V4). In contrast, assigning dominance based on sighting produced no significant interocular BOLD differences. We conclude that interocular BOLD differences in normal subjects exist, and may be predicted by acuity measures. PMID:17194544
Noncontact methods for optical testing of convex aspheric mirrors for future large telescopes
NASA Astrophysics Data System (ADS)
Goncharov, Alexander V.; Druzhin, Vladislav V.; Batshev, Vladislav I.
2009-06-01
Non-contact methods for testing of large rotationally symmetric convex aspheric mirrors are proposed. These methods are based on non-null testing with side illumination schemes, in which a narrow collimated beam is reflected from the meridional aspheric profile of a mirror. The figure error of the mirror is deduced from the intensity pattern from the reflected beam obtained on a screen, which is positioned in the tangential plane (containing the optical axis) and perpendicular to the incoming beam. Testing of the entire surface is carried out by rotating the mirror about its optical axis and registering the characteristics of the intensity pattern on the screen. The intensity pattern can be formed using three different techniques: modified Hartman test, interference and boundary curve test. All these techniques are well known but have not been used in the proposed side illumination scheme. Analytical expressions characterizing the shape and location of the intensity pattern on the screen or a CCD have been developed for all types of conic surfaces. The main advantage of these testing methods compared with existing methods (Hindle sphere, null lens, computer generated hologram) is that the reference system does not require large optical components.
Keever, Allison; McGowan, Conor P.; Ditchkoff, Stephen S.; Acker, S.A.; Grand, James B.; Newbolt, Chad H.
2017-01-01
Automated cameras have become increasingly common for monitoring wildlife populations and estimating abundance. Most analytical methods, however, fail to account for incomplete and variable detection probabilities, which biases abundance estimates. Methods that do account for detection have not been thoroughly tested, and those that have were compared to other methods of abundance estimation. The goal of this study was to evaluate the accuracy and effectiveness of the N-mixture method, which explicitly incorporates detection probability, to monitor white-tailed deer (Odocoileus virginianus) by using camera surveys and a known, marked population to collect data and estimate abundance. Motion-triggered camera surveys were conducted at Auburn University's deer research facility in 2010. Abundance estimates were generated using N-mixture models and compared to the known number of marked deer in the population. We compared abundance estimates generated from a decreasing number of survey days used in analysis and by time period (DAY, NIGHT, SUNRISE, SUNSET, CREPUSCULAR, ALL TIMES). Accurate abundance estimates were generated using 24-h data and nighttime-only data. Accuracy of abundance estimates increased with an increasing number of survey days until day 5, with no improvement from additional data. This suggests that, for our system, 5-day camera surveys conducted at night were adequate for abundance estimation and population monitoring. Further, our study demonstrates that camera surveys and N-mixture models may be a highly effective method for estimating and monitoring ungulate populations.
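As a simplified illustration of the N-mixture idea, assume a single site with constant abundance N and per-survey detection probability p, so that repeated counts are binomial and (N, p) can be profiled over a grid. The full method additionally places a prior (e.g., Poisson) on abundance across sites; the counts below are invented.

```python
import numpy as np
from scipy.stats import binom

def n_mixture_loglik(counts, N, p):
    """Log-likelihood of repeated-count data for fixed abundance N and
    per-survey detection probability p (single site, T survey occasions)."""
    return np.sum(binom.logpmf(counts, N, p))

counts = np.array([38, 41, 35, 40, 37])   # deer detected on 5 survey days

# Profile the likelihood over a grid of (N, p) and keep the maximum.
best = max(((N, p, n_mixture_loglik(counts, N, p))
            for N in range(int(max(counts)), 200)
            for p in np.linspace(0.05, 0.95, 91)),
           key=lambda x: x[2])
print(f"N-hat = {best[0]}, p-hat = {best[1]:.2f}")
```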
Quality control for quantitative PCR based on amplification compatibility test.
Tichopad, Ales; Bar, Tzachi; Pecen, Ladislav; Kitchen, Robert R; Kubista, Mikael; Pfaffl, Michael W
2010-04-01
Quantitative PCR (qPCR) is a routinely used method for the accurate quantification of nucleic acids. Yet it may generate erroneous results if the amplification process is obscured by inhibition or by the generation of aberrant side-products such as primer dimers. Several methods have been established to control for pre-processing performance that rely on the introduction of a co-amplified reference sequence; however, there is currently no method that allows reliable control of the amplification process without directly modifying the sample mix. Herein we present a statistical approach based on multivariate analysis of the amplification response data generated in real time. The amplification trajectory in its most resolved and dynamic phase is fitted with a suitable model. Two parameters of this model, related to amplification efficiency, are then used to calculate Z-score statistics. Each studied sample is compared to a predefined reference set of reactions, typically calibration reactions. A probabilistic decision for each individual Z-score is then used to identify the majority of inhibited reactions in our experiments. We compare this approach to univariate methods using only the sample-specific amplification efficiency as a reporter of compatibility. We demonstrate improved identification performance using the multivariate approach compared to the univariate approach. Finally, we stress that the performance of the amplification compatibility test as a quality control procedure depends on the quality of the reference set. Copyright 2010 Elsevier Inc. All rights reserved.
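A minimal sketch of the Z-score screening described above, using a single efficiency-related parameter per reaction and a calibration set as the reference; the published method fits a model to the resolved phase of the amplification trajectory and uses two such parameters. All values are invented.

```python
import numpy as np

def z_scores(test_efficiencies, reference_efficiencies):
    """Z-score of each test reaction's amplification-efficiency parameter
    relative to a reference (calibration) set of reactions."""
    ref = np.asarray(reference_efficiencies)
    return (np.asarray(test_efficiencies) - ref.mean()) / ref.std(ddof=1)

reference = [1.92, 1.95, 1.90, 1.94, 1.93, 1.91]   # calibration reactions
samples = [1.93, 1.74, 1.90]                       # 1.74 suggests inhibition

z = z_scores(samples, reference)
flagged = [i for i, zi in enumerate(z) if abs(zi) > 1.96]   # ~5% two-sided cutoff
print(z, flagged)   # sample 1 is flagged as incompatible
```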
Generating and using truly random quantum states in Mathematica
NASA Astrophysics Data System (ADS)
Miszczak, Jarosław Adam
2012-01-01
The problem of generating random quantum states is of great interest from the quantum information theory point of view. In this paper we present a package for the Mathematica computing system harnessing a specific piece of hardware, namely the Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states, which use the Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing.
Program summary:
Program title: TRQS
Catalogue identifier: AEKA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 7924
No. of bytes in distributed program, including test data, etc.: 88 651
Distribution format: tar.gz
Programming language: Mathematica, C
Computer: Requires a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) and a recent version of Mathematica
Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit)
RAM: Case dependent
Classification: 4.15
Nature of problem: Generation of random density matrices.
Solution method: Use of a physical quantum random number generator.
Running time: Generating 100 random numbers takes about 1 second; generating 1000 random density matrices takes more than a minute.
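The package itself is written in Mathematica and draws its randomness from the Quantis hardware; as a language-neutral illustration of what generating a random density matrix involves, the following Python sketch samples from the Ginibre ensemble, with a pseudorandom source standing in for the QRNG.

```python
import numpy as np

rng = np.random.default_rng()   # stand-in; the package uses hardware randomness

def random_density_matrix(dim):
    """Random density matrix from the Ginibre ensemble (Hilbert-Schmidt measure):
    rho = G G^dagger / Tr(G G^dagger), with G a complex Gaussian matrix."""
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

rho = random_density_matrix(4)
print(np.allclose(rho, rho.conj().T))              # Hermitian
print(np.isclose(np.trace(rho).real, 1.0))         # unit trace
print(np.all(np.linalg.eigvalsh(rho) >= -1e-12))   # positive semidefinite
```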
NASA Astrophysics Data System (ADS)
Liu, Xiaofei; Wang, Enyuan
2018-06-01
A rockburst is a dynamic disaster that occurs during underground excavation or mining and is a serious threat to safety. Rockburst prediction and control are as important as any other aspect of underground engineering in deep mines. In this study, we measured the electromagnetic radiation (EMR) signals generated during the deformation and fracture of rock samples from a copper mine under uniaxial compression, tension, and cyclic-loading experiments, analyzed the changes in EMR intensity, pulse number, and frequency corresponding to the loading, and observed a high correlation between these EMR parameters and the applied loading. EMR thus clearly reflects the deformation and fracture status of the loaded rock. Based on this experimental work, we developed the KBD5-type EMR monitor and used it to measure EMR signals generated in the rock surrounding the Hongtoushan copper mine. From the test results, we determined the response characteristics of EMR signals generated by changes in mining-induced stresses and stress concentrations, and we propose that this EMR monitoring method can be used to provide early warning of rockbursts.
Automatically generated acceptance test: A software reliability experiment
NASA Technical Reports Server (NTRS)
Protzel, Peter W.
1988-01-01
This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.
Scavuzzo-Duggan, Tess R.; Chaves, Arielle M.; Roberts, Alison W.
2015-07-14
Here, a method for rapid in vivo functional analysis of engineered proteins was developed using Physcomitrella patens. A complementation assay was designed for testing structure/function relationships in cellulose synthase (CESA) proteins. The components of the assay include (1) construction of test vectors that drive expression of epitope-tagged PpCESA5 carrying engineered mutations, (2) transformation of a ppcesa5 knockout line that fails to produce gametophores with test and control vectors, (3) scoring the stable transformants for gametophore production, (4) statistical analysis comparing complementation rates for test vectors to positive and negative control vectors, and (5) analysis of transgenic protein expression by Western blotting. The assay distinguished mutations that generate fully functional, nonfunctional, and partially functional proteins. In conclusion, compared with existing methods for in vivo testing of protein function, this complementation assay provides a rapid method for investigating protein structure/function relationships in plants.
Testa, Maria; Livingston, Jennifer A; VanZile-Tamsen, Carol
2011-02-01
A mixed methods approach, combining quantitative with qualitative data methods and analysis, offers a promising means of advancing the study of violence. Integrating semi-structured interviews and qualitative analysis into a quantitative program of research on women's sexual victimization has resulted in valuable scientific insight and generation of novel hypotheses for testing. This mixed methods approach is described and recommendations for integrating qualitative data into quantitative research are provided.
Al-Ghatani, Ali M; Obonsawin, Marc C; Binshaig, Basmah A; Al-Moutaery, Khalaf R
2011-01-01
There are 2 aims for this study: first, to collect normative data for the Wisconsin Card Sorting Test (WCST), Stroop test, Test of Non-verbal Intelligence (TONI-3), and the Picture Completion (PC) and Vocabulary (VOC) sub-tests of the Wechsler Adult Intelligence Scale-Revised for use in a Saudi Arabian culture, and second, to use the normative data to generate regression equations. To collect the normative data and generate the regression equations, 198 healthy individuals were selected to provide a representative distribution for age, gender, years of education, and socioeconomic class. The WCST, Stroop test, TONI-3, PC, and VOC were administered to the healthy individuals. This study was carried out at the Department of Clinical Neurosciences, Riyadh Military Hospital, Riyadh, Kingdom of Saudi Arabia from January 2000 to July 2002. Normative data were obtained for all tests, and tables were constructed to interpret scores for different age groups. Regression equations to predict performance on the 3 tests of frontal function from scores on tests of fluid (TONI-3) and premorbid intelligence were generated from the data from the healthy individuals. The data collected in this study provide normative tables for 3 tests of frontal lobe function and for tests of general intellectual ability for use in Saudi Arabia. The data also provide a method to estimate pre-injury ability without the use of verbally based tests.
Shepertycky, Michael; Li, Qingguo
2015-01-01
Background: Much research in the field of energy harvesting has sought to develop devices capable of generating electricity during daily activities with minimum user effort. No previous study has considered the metabolic cost of carrying the harvester when determining the energetic effects it has on the user. When considering device carrying costs, no energy harvester to date has demonstrated the ability to generate a substantial amount of electricity (> 5 W) while maintaining user effort at the same level as, or lower than, conventional power generation methods (e.g. a hand crank generator).
Methodology/Principal Findings: We developed a lower limb-driven energy harvester that is able to generate approximately 9 W of electricity. To quantify the performance of the harvester, we introduced a new performance measure, total cost of harvesting (TCOH), which evaluates a harvester's overall efficiency in generating electricity, including the device carrying cost. The new harvester captured the motion of both lower limbs and operated in generative braking mode to assist the knee flexor muscles in slowing the lower limbs. From testing on 10 participants under different walking conditions, the harvester achieved an average TCOH of 6.1, which is comparable to the estimated TCOH of 6.2 for a conventional power generation method. When generating 5.2 W of electricity, the TCOH of the lower limb-driven energy harvester (4.0) is lower than that of conventional power generation methods.
Conclusions/Significance: These results demonstrated that the lower limb-driven energy harvester is an energetically effective option for generating electricity during daily activities. PMID:26039493
Martinek, Radek; Kelnar, Michal; Koudelka, Petr; Vanus, Jan; Bilik, Petr; Janku, Petr; Nazeran, Homer; Zidek, Jan
2016-02-01
This paper describes the design, construction, and testing of a multi-channel fetal electrocardiogram (fECG) signal generator based on LabVIEW. Special attention is paid to the fetal heart development in relation to the fetus' anatomy, physiology, and pathology. The non-invasive signal generator enables many parameters to be set, including fetal heart rate (FHR), maternal heart rate (MHR), gestational age (GA), fECG interferences (biological and technical artifacts), as well as other fECG signal characteristics. Furthermore, based on the change in the FHR and in the T wave-to-QRS complex ratio (T/QRS), the generator enables manifestations of hypoxic states (hypoxemia, hypoxia, and asphyxia) to be monitored while complying with clinical recommendations for classifications in cardiotocography (CTG) and fECG ST segment analysis (STAN). The generator can also produce synthetic signals with defined properties for 6 input leads (4 abdominal and 2 thoracic). Such signals are well suited to the testing of new and existing methods of fECG processing and are effective in suppressing maternal ECG while non-invasively monitoring abdominal fECG. They may also contribute to the development of a new diagnostic method, which may be referred to as non-invasive trans-abdominal CTG + STAN. The functional prototype is based on virtual instrumentation using the LabVIEW developmental environment and its associated data acquisition measurement cards (DAQmx). The generator also makes it possible to create synthetic signals and measure actual fetal and maternal ECGs by means of bioelectrodes.
Statistical complexity measure of pseudorandom bit generators
NASA Astrophysics Data System (ADS)
González, C. M.; Larrondo, H. A.; Rosso, O. A.
2005-08-01
Pseudorandom number generators (PRNGs) are extensively used in Monte Carlo simulations, gambling machines and cryptography as substitutes for ideal random number generators (RNGs). Each application imposes different statistical requirements on PRNGs. As L'Ecuyer clearly states, "the main goal for Monte Carlo methods is to reproduce the statistical properties on which these methods are based whereas for gambling machines and cryptology, observing the sequence of output values for some time should provide no practical advantage for predicting the forthcoming numbers better than by just guessing at random". In accordance with different applications, several statistical test suites have been developed to analyze the sequences generated by PRNGs. In a recent paper, a new statistical complexity measure [Phys. Lett. A 311 (2003) 126] was defined. Here we propose this measure as a randomness quantifier for PRNGs. The test is applied to three very well known and widely tested PRNGs available in the literature, all of them based on mathematical algorithms. Another PRNG, based on the 3D Lorenz chaotic dynamical system, is also analyzed. PRNGs based on chaos may be considered as a model for physical noise sources, and important new results have recently been reported. All the design steps of this PRNG are described, and each stage increases the PRNG randomness using a different strategy. It is shown that the MPR statistical complexity measure is capable of quantifying this randomness improvement. The PRNG based on the chaotic 3D Lorenz dynamical system is also evaluated using traditional digital signal processing tools for comparison.
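A sketch of an MPR-style complexity, computed here on the distribution of non-overlapping bit blocks: normalized Shannon entropy multiplied by a normalized Jensen-Shannon disequilibrium against the uniform distribution. The block-coding construction of the probability distribution is an assumption for illustration; the paper's exact construction may differ.

```python
import numpy as np

def mpr_complexity(bits, block_size=8):
    """MPR-style statistical complexity C = H * Q: normalized Shannon entropy H
    of the block distribution, times the normalized Jensen-Shannon
    disequilibrium Q between that distribution and the uniform one."""
    n_blocks = len(bits) // block_size
    blocks = np.reshape(bits[: n_blocks * block_size], (n_blocks, block_size))
    codes = blocks @ (1 << np.arange(block_size))        # bit block -> integer code
    p = np.bincount(codes, minlength=2 ** block_size) / n_blocks
    u = np.full(2 ** block_size, 2.0 ** -block_size)     # uniform reference

    def shannon(q):
        q = q[q > 0]
        return -np.sum(q * np.log2(q))

    H = shannon(p) / block_size                          # max entropy is block_size bits
    js = shannon(0.5 * (p + u)) - 0.5 * shannon(p) - 0.5 * shannon(u)
    delta = np.zeros_like(u)
    delta[0] = 1.0                                       # JS divergence peaks at a delta
    js_max = shannon(0.5 * (delta + u)) - 0.5 * shannon(u)
    return H * js / js_max

bits = np.random.default_rng(7).integers(0, 2, size=1 << 16)
print(mpr_complexity(bits))   # near 0 for a good PRNG (H ~ 1 but Q ~ 0)
```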
The Analysis of Fluorescence Decay by a Method of Moments
Isenberg, Irvin; Dyson, Robert D.
1969-01-01
The fluorescence decay of the excited state of most biopolymers, and biopolymer conjugates and complexes, is not, in general, a simple exponential. The method of moments is used to establish a means of analyzing such multi-exponential decays. The method is tested by the use of computer simulated data, assuming that the limiting error is determined by noise generated by a pseudorandom number generator. Multi-exponential systems with relatively closely spaced decay constants may be successfully analyzed. The analyses show the requirements, in terms of precision, that data must meet. The results may be used both as an aid in the design of equipment and in the analysis of data subsequently obtained. PMID:5353139
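For a two-component decay f(t) = a1 exp(-t/tau1) + a2 exp(-t/tau2), the reduced moments G_k = (1/k!) ∫ t^k f(t) dt equal a1 tau1^(k+1) + a2 tau2^(k+1), so the lifetimes are roots of a quadratic fixed by a linear recurrence on the G_k. The noise-free sketch below illustrates only that algebra; the paper's contribution concerns behaviour under realistic noise.

```python
import numpy as np
from math import factorial

def moments_two_exponential(t, f):
    """Recover lifetimes and amplitudes of f(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)
    from the reduced moments G_k = (1/k!) * integral of t^k f(t) dt."""
    dt = t[1] - t[0]
    G = [np.sum(t ** k * f) * dt / factorial(k) for k in range(4)]
    # Both lifetimes satisfy x^2 - s*x + q = 0, which makes the moments obey the
    # linear recurrence G_{k+2} = s*G_{k+1} - q*G_k; solve for s and q.
    s, q = np.linalg.solve(np.array([[G[1], -G[0]], [G[2], -G[1]]]), [G[2], G[3]])
    tau = np.roots([1.0, -s, q])
    # Amplitudes from G_0 = a1*tau1 + a2*tau2 and G_1 = a1*tau1^2 + a2*tau2^2.
    a = np.linalg.solve(np.vstack([tau, tau ** 2]), [G[0], G[1]])
    return tau, a

t = np.linspace(0.0, 100.0, 200_001)
decay = 3.0 * np.exp(-t / 2.0) + 1.0 * np.exp(-t / 5.0)
print(moments_two_exponential(t, decay))   # recovers tau = 5, 2 and a = 1, 3
```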
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lissenden, Cliff; Hassan, Tasnin; Rangari, Vijaya
The research built upon a prior investigation to develop a unified constitutive model for design-by-analysis of the intermediate heat exchanger (IHX) for a very high temperature reactor (VHTR) design of next generation nuclear plants (NGNPs). Model development requires a set of failure data from complex mechanical experiments to characterize the material behavior. Therefore uniaxial and multiaxial creep-fatigue and creep-ratcheting tests were conducted on the nickel-base Alloy 617 at 850 and 950°C. The time dependence of material behavior, and the interaction of time dependent behavior (e.g., creep) with ratcheting, which is an increase in the cyclic mean strain under load-controlled cycling, are major concerns for NGNP design. This research project aimed at characterizing the microstructure evolution mechanisms activated in Alloy 617 by mechanical loading and dwell times at elevated temperature. The acoustic harmonic generation method was researched for microstructural characterization. It is a nonlinear acoustics method with excellent potential for nondestructive evaluation, and even online continuous monitoring once high temperature sensors become available. It is unique because it has the ability to quantitatively characterize microstructural features well before macroscale defects (e.g., cracks) form. The nonlinear acoustics beta parameter was shown to correlate with microstructural evolution using a systematic approach to handle the complexity of multiaxial creep-fatigue and creep-ratcheting deformation. Mechanical testing was conducted to provide a full spectrum of data for: thermal aging, tensile creep, uniaxial fatigue, uniaxial creep-fatigue, uniaxial creep-ratcheting, multiaxial creep-fatigue, and multiaxial creep-ratcheting. Transmission Electron Microscopy (TEM), Scanning Electron Microscopy (SEM), and Optical Microscopy were conducted to correlate the beta parameter with individual microstructure mechanisms. We researched application of the harmonic generation method to tubular mechanical test specimens and pipes for nondestructive evaluation. Tubular specimens and pipes act as waveguides, thus we applied the acoustic harmonic generation method to guided waves in both plates and shells. Magnetostrictive transducers were used to generate and receive guided wave modes in the shell sample and the received signals were processed to show the sensitivity of higher harmonic generation to microstructure evolution. Modeling was initiated to correlate higher harmonic generation with the microstructure that will lead to development of a life prediction model that is informed by the nonlinear acoustics measurements.
Comparing fire spread algorithms using equivalence testing and neutral landscape models
Brian R. Miranda; Brian R. Sturtevant; Jian Yang; Eric J. Gustafson
2009-01-01
We demonstrate a method to evaluate the degree to which a meta-model approximates spatial disturbance processes represented by a more detailed model across a range of landscape conditions, using neutral landscapes and equivalence testing. We illustrate this approach by comparing burn patterns produced by a relatively simple fire spread algorithm with those generated by...
The Future Value of Serious Games for Assessment: Where Do We Go Now?
ERIC Educational Resources Information Center
de Klerk, Sebastiaan; Kato, Pamela M.
2017-01-01
Game-based assessments will most likely be an increasing part of testing programs in future generations because they provide promising possibilities for more valid and reliable measurement of students' skills as compared to the traditional methods of assessment like paper-and-pencil tests or performance-based assessments. The current status of…
ERIC Educational Resources Information Center
Gligorovic, Milica; Buha, Natasa
2013-01-01
Background: The ability to generate and flexibly change concepts is of great importance for the development of academic and adaptive skills. This paper analyses the conceptual reasoning ability of children with mild intellectual disability (MID) by their achievements on the Wisconsin Card Sorting Test (WCST). Method: The sample consisted of 95…
A novel Python program for implementation of quality control in the ELISA.
Wetzel, Hanna N; Cohen, Cinder; Norman, Andrew B; Webster, Rose P
2017-09-01
The use of semi-quantitative assays such as the enzyme-linked immunosorbent assay (ELISA) requires stringent quality control of the data. However, such quality control is often lacking in academic settings due to unavailability of software and knowledge. Therefore, our aim was to develop methods to easily implement Levey-Jennings quality control methods. For this purpose, we created a program written in Python (a programming language with an open-source license) and tested it using a training set of ELISA standard curves quantifying the Fab fragment of an anti-cocaine monoclonal antibody in mouse blood. A colorimetric ELISA was developed using a goat anti-human anti-Fab capture method. Mouse blood samples spiked with the Fab fragment were tested against a standard curve of known concentrations of Fab fragment in buffer over a period of 133 days stored at 4°C to assess stability of the Fab fragment and to generate a test dataset to assess the program. All standard curves were analyzed using our program to batch process the data and to generate Levey-Jennings control charts and statistics regarding the datasets. The program was able to identify values outside of two standard deviations, and this identification of outliers was consistent with the results of a two-way ANOVA. This program is freely available, which will help laboratories implement quality control methods, thus improving reproducibility within and between labs. We report here successful testing of the program with our training set and development of a method for quantification of the Fab fragment in mouse blood.
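The core Levey-Jennings check is simple enough to sketch. The fragment below is not the paper's published program; the function name and the 2-SD rule are illustrative of the general technique of flagging control values outside mean ± k·SD.

```python
import numpy as np

def levey_jennings_flags(values, mean=None, sd=None, k=2.0):
    """Flag control values falling outside mean +/- k*SD (illustrative sketch)."""
    values = np.asarray(values, dtype=float)
    mean = values.mean() if mean is None else mean
    sd = values.std(ddof=1) if sd is None else sd
    z = (values - mean) / sd
    return np.abs(z) > k  # True marks an out-of-control point

# Example: day-to-day readings of one ELISA standard (made-up numbers)
readings = [0.82, 0.85, 0.79, 0.91, 0.65, 0.84]
print(levey_jennings_flags(readings))
```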
A study on scattering correction for γ-photon 3D imaging test method
NASA Astrophysics Data System (ADS)
Xiao, Hui; Zhao, Min; Liu, Jiantang; Chen, Hao
2018-03-01
A pair of 511 keV γ-photons is generated during a positron annihilation, and their directions differ by 180°. The moving-path and energy information can be utilized to form a 3D imaging test method in the industrial domain. However, scattered γ-photons are the major factor limiting the imaging precision of the test method. This study proposes a γ-photon single-scattering correction method from the perspective of spatial geometry. The method first determines the possible scattering points when a scattered γ-photon pair hits the detector pair. The range of scattering angles can then be calculated according to the energy window. Finally, the number of scattered γ-photons is estimated from the attenuation of the total scattered γ-photons along their moving paths. The corrected γ-photons are obtained by deducting the scattered γ-photons from the original ones. Two experiments were conducted to verify the effectiveness of the proposed scattering correction method. The results show that the proposed method can efficiently correct scattered γ-photons and improve the test accuracy.
PWR steam generator chemical cleaning, Phase I. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rothstein, S.
1978-07-01
United Nuclear Industries (UNI) entered into a subcontract with Consolidated Edison Company of New York (Con Ed) on August 8, 1977, for the purpose of developing methods to chemically clean the secondary side tube to tube support crevices of the steam generators of Indian Point Nos. 1 and 2 PWR plants. This document represents the first reporting on activities performed for Phase I of this effort. Specifically, this report contains the results of a literature search performed by UNI for the purpose of determining state-of-the-art chemical solvents and methods for decontaminating nuclear reactor steam generators. The results of the search sought to accomplish two objectives: (1) identify solvents beyond those proposed at present by UNI and Con Ed for the test program, and (2) confirm the appropriateness of solvents and methods of decontamination currently in use by UNI.
NASA Technical Reports Server (NTRS)
Bentley, Nicole L.; Thomas, Evan A.; VanWie, Michael; Morrison, Chad; Stinson, Richard G.
2010-01-01
The Total Organic Carbon Analyzer (TOCA) is designed to autonomously determine recovered water quality as a function of TOC. The current TOCA has been on the International Space Station since November 2008. Functional checkout and operations revealed complex operating considerations. Specifically, failure of the hydrogen catalyst resulted in the development of an innovative oxidation analysis method. This method reduces the activation time and limits the hydrogen produced during analysis, while retaining the ability to indicate TOC concentrations within 25% accuracy. Subsequent testing and comparison to archived samples returned from the Station and tested on the ground yield high confidence in this method and in the quality of the recovered water.
NASA Astrophysics Data System (ADS)
Oliveira, Sérgio C.; Zêzere, José L.; Lajas, Sara; Melo, Raquel
2017-07-01
Approaches used to assess shallow slide susceptibility at the basin scale are conceptually different depending on the use of statistical or physically based methods. The former are based on the assumption that the same causes are more likely to produce the same effects, whereas the latter are based on the comparison between forces which tend to promote movement along the slope and the counteracting forces that are resistant to motion. Within this general framework, this work tests two hypotheses: (i) although conceptually and methodologically distinct, the statistical and deterministic methods generate similar shallow slide susceptibility results regarding the model's predictive capacity and spatial agreement; and (ii) the combination of shallow slide susceptibility maps obtained with statistical and physically based methods, for the same study area, generates a more reliable susceptibility model for shallow slide occurrence. These hypotheses were tested at a small test site (13.9 km²) located north of Lisbon (Portugal), using a statistical method (the information value method, IV) and a physically based method (the infinite slope method, IS). The landslide susceptibility maps produced with the statistical and deterministic methods were combined into a new landslide susceptibility map. The latter was based on a set of integration rules defined by the cross tabulation of the susceptibility classes of both maps and analysis of the corresponding contingency tables. The results demonstrate a higher predictive capacity of the new shallow slide susceptibility map, which combines the independent results obtained with statistical and physically based models. Moreover, the combination of the two models allowed the identification of areas where the results of the information value and the infinite slope methods are contradictory. Thus, these areas were classified as uncertain and deserve additional investigation at a more detailed scale.
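For readers unfamiliar with the information value method, the per-class score is IVᵢ = ln((Sᵢ/Nᵢ)/(S/N)), where Sᵢ/Nᵢ is the landslide density within predictor class i and S/N is the density over the whole area. A minimal sketch follows; the function and variable names are ours, not the paper's.

```python
import numpy as np

def information_value(class_ids, landslide_mask):
    """Information value per predictor class: IV_i = ln((S_i/N_i)/(S/N))."""
    class_ids = np.asarray(class_ids).ravel()
    slides = np.asarray(landslide_mask, dtype=bool).ravel()
    prior = slides.sum() / slides.size          # overall landslide density S/N
    iv = {}
    for c in np.unique(class_ids):
        in_class = class_ids == c
        cond = slides[in_class].sum() / in_class.sum()  # density S_i/N_i in class c
        iv[c] = np.log(cond / prior) if cond > 0 else -np.inf
    return iv

# Toy raster: 3 slope classes, a few landslide cells
classes = np.array([[0, 0, 1], [1, 2, 2], [2, 2, 1]])
slides = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 0]], dtype=bool)
print(information_value(classes, slides))
```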
Skin friction drag reduction in turbulent flow using spanwise traveling surface waves
NASA Astrophysics Data System (ADS)
Musgrave, Patrick F.; Tarazaga, Pablo A.
2017-04-01
A major technological driver in current aircraft and other vehicles is the improvement of fuel efficiency. One way to increase efficiency is to reduce the skin friction drag on these vehicles. This experimental study presents an active drag reduction technique that decreases the skin friction using spanwise traveling waves. A novel method is introduced for generating traveling waves that is low-profile, non-intrusive, and operates under various flow conditions. This wave generation method is discussed and the resulting traveling waves are presented. These waves are then tested in a low-speed wind tunnel to determine their drag reduction potential. To calculate the drag reduction, the momentum integral method is applied to turbulent boundary layer data collected using a pitot tube and traversing system. The skin friction coefficients are then calculated and the drag reduction determined. Preliminary results yielded a drag reduction of ≈5% for 244 Hz traveling waves. Thus, this novel wave generation method possesses the potential to yield an easily implementable, non-invasive drag reduction technology.
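The quantity at the heart of the momentum integral method is the momentum thickness θ = ∫ (u/U)(1 − u/U) dy, which summarizes the momentum deficit of a measured profile; changes in its streamwise growth map to skin friction. A small Python sketch of that bookkeeping follows — the power-law profiles and numbers are invented for illustration, not wind-tunnel data from the study.

```python
import numpy as np

def momentum_thickness(y, u, U_inf):
    """theta = integral of (u/U)(1 - u/U) dy across the boundary layer."""
    ratio = np.asarray(u, float) / U_inf
    return np.trapz(ratio * (1.0 - ratio), y)

# Invented pitot-style profiles: baseline vs. hypothetical actuated case
y = np.linspace(0.0, 0.05, 200)             # wall-normal positions [m]
u_base = 10.0 * (y / 0.05) ** (1 / 7.0)     # 1/7th-power-law baseline, U = 10 m/s
u_wave = 10.0 * (y / 0.05) ** (1 / 7.5)     # fuller profile, smaller deficit
t_b = momentum_thickness(y, u_base, 10.0)
t_w = momentum_thickness(y, u_wave, 10.0)
print("relative change in momentum deficit: %.1f%%" % (100 * (t_w - t_b) / t_b))
```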
Development of dynamic Bayesian models for web application test management
NASA Astrophysics Data System (ADS)
Azarnova, T. V.; Polukhin, P. V.; Bondarenko, Yu V.; Kashirina, I. L.
2018-03-01
The mathematical apparatus of dynamic Bayesian networks is an effective and technically proven tool that can be used to model complex stochastic dynamic processes. According to the results of the research, mathematical models and methods of dynamic Bayesian networks provide a high coverage of stochastic tasks associated with error testing in multiuser software products operated in a dynamically changing environment. Formalized representation of the discrete test process as a dynamic Bayesian model allows us to organize the logical connection between individual test assets for multiple time slices. This approach gives an opportunity to present testing as a discrete process with set structural components responsible for the generation of test assets. Dynamic Bayesian network-based models allow us to combine in one management area individual units and testing components with different functionalities and a direct influence on each other in the process of comprehensive testing of various groups of computer bugs. The application of the proposed models provides an opportunity to use a consistent approach to formalize test principles and procedures, methods used to treat situational error signs, and methods used to produce analytical conclusions based on test results.
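To illustrate the two-slice mechanics the abstract describes, here is a minimal forward-filtering sketch over a two-state hidden process. All states, probabilities, and names below are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Hidden state = {stable, buggy}; observation = test outcome {pass, fail}.
T = np.array([[0.9, 0.1],     # P(next state | current state), rows = current
              [0.3, 0.7]])
E = np.array([[0.9, 0.1],     # P(observation | state), cols = pass/fail
              [0.4, 0.6]])

def filter_beliefs(obs_seq, prior=(0.8, 0.2)):
    """Standard forward algorithm: predict with T, condition on each outcome."""
    b = np.asarray(prior, float)
    for o in obs_seq:             # o: 0 = pass, 1 = fail
        b = E[:, o] * (T.T @ b)   # one time slice of the dynamic Bayesian model
        b /= b.sum()
        yield b.copy()

for b in filter_beliefs([0, 1, 1]):
    print(b)  # P(stable), P(buggy) after each test cycle
```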
Effect of geometrical parameters on pressure distributions of impulse manufacturing technologies
NASA Astrophysics Data System (ADS)
Brune, Ryan Carl
Impulse manufacturing techniques constitute a growing field of methods that utilize high-intensity pressure events to conduct useful mechanical operations. As interest in applying this technology continues to grow, greater understanding must be achieved with respect to output pressure events in both magnitude and distribution. In order to address this need, a novel pressure measurement has been developed called the Profile Indentation Pressure Evaluation (PIPE) method that systematically analyzes indentation patterns created with impulse events. Correlation with quasi-static test data and use of software-assisted analysis techniques allows for colorized pressure maps to be generated for both electromagnetic and vaporizing foil actuator (VFA) impulse forming events. Development of this technique aided introduction of a design method for electromagnetic path actuator systems, where key geometrical variables are considered using a newly developed analysis method, which is called the Path Actuator Proximal Array (PAPA) pressure model. This model considers key current distribution and proximity effects and interprets generated pressure by considering the adjacent conductor surfaces as proximal arrays of individual conductors. According to PIPE output pressure analysis, the PAPA model provides a reliable prediction of generated pressure for path actuator systems as local geometry is changed. Associated mechanical calculations allow for pressure requirements to be calculated for shearing, flanging, and hemming operations, providing a design process for such cases. Additionally, geometry effect is investigated through a formability enhancement study using VFA metalworking techniques. A conical die assembly is utilized with both VFA high velocity and traditional quasi-static test methods on varied Hasek-type sample geometries to elicit strain states consistent with different locations on a forming limit diagram. Digital image correlation techniques are utilized to measure major and minor strains for each sample type to compare limit strain results. Overall testing indicated decreased formability at high velocity for 304 DDQ stainless steel and increased formability at high velocity for 3003-H14 aluminum. Microstructural and fractographic analysis helped dissect and analyze the observed differences in these cases. Overall, these studies comprehensively explore the effects of geometrical parameters on magnitude and distribution of impulse manufacturing generated pressure, establishing key guidelines and models for continued development and implementation in commercial applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, Marie-Paule, E-mail: marie-paule.garcia@univ-brest.fr; Villoing, Daphnée; McKay, Erin
Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software allows handling the whole pipeline from virtual patient generation to resulting planar and SPECT images and dosimetry calculations. The originality of their approach relies on the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. Resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two sample software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for Octreoscan™ therapeutic treatment was implemented using the 4D XCAT model. Whole-body "step and shoot" acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry computation performed on the ICRP 110 model is also presented. Conclusions: The proposed platform offers a generic framework to implement any scintigraphic imaging protocol and voxel/organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities could be supported in the future, such as positron emission tomography.
A Hydrogen Peroxide Hot-Jet Simulator for Wind-Tunnel Tests of Turbojet-Exit Models
NASA Technical Reports Server (NTRS)
Runckel, Jack F.; Swihart, John M.
1959-01-01
A turbojet-engine-exhaust simulator which utilizes a hydrogen peroxide gas generator has been developed for powered-model testing in wind tunnels with air exchange. Catalytic decomposition of concentrated hydrogen peroxide provides a convenient and easily controlled method of providing a hot jet with characteristics that correspond closely to the jet of a gas turbine engine. The problems associated with simulation of jet exhausts in a transonic wind tunnel which led to the selection of a liquid monopropellant are discussed. The operation of the jet simulator consisting of a thrust balance, gas generator, exit nozzle, and auxiliary control system is described. Static-test data obtained with convergent nozzles are presented and shown to be in good agreement with ideal calculated values.
Intrusion detection using rough set classification.
Zhang, Lian-hua; Zhang, Guan-hua; Zhang, Jie; Bai, Ying-cai
2004-09-01
Recently, machine learning-based intrusion detection approaches have been the subject of extensive research because they can detect both misuse and anomalies. In this paper, rough set classification (RSC), a modern learning algorithm, is used to rank the features extracted for detecting intrusions and to generate intrusion detection models. Feature ranking is a very critical step when building the model. RSC performs feature ranking before generating rules, and converts the feature ranking to a minimal hitting set problem addressed by using a genetic algorithm (GA). In classical approaches using the Support Vector Machine (SVM), this is done by executing many iterations, each of which removes one useless feature. Compared with those methods, our method can avoid many iterations. In addition, a hybrid genetic algorithm is proposed to increase the convergence speed and decrease the training time of RSC. The models generated by RSC take the form of "IF-THEN" rules, which have the advantage of interpretability. Tests and comparison of RSC with SVM on DARPA benchmark data showed that for Probe and DoS attacks both RSC and SVM yielded highly accurate results (greater than 99% accuracy on the testing set).
Hwang, Wei-Chin
2010-01-01
How do we culturally adapt psychotherapy for ethnic minorities? Although there has been growing interest in doing so, few therapy adaptation frameworks have been developed. The majority of these frameworks take a top-down theoretical approach to adapting psychotherapy. The purpose of this paper is to introduce a community-based developmental approach to modifying psychotherapy for ethnic minorities. The Formative Method for Adapting Psychotherapy (FMAP) is a bottom-up approach that involves collaborating with consumers to generate and support ideas for therapy adaptation. It involves five phases that target developing, testing, and reformulating therapy modifications. These phases include: (a) generating knowledge and collaborating with stakeholders, (b) integrating generated information with theory and empirical and clinical knowledge, (c) reviewing the initial culturally adapted clinical intervention with stakeholders and revising the culturally adapted intervention, (d) testing the culturally adapted intervention, and (e) finalizing the culturally adapted intervention. Application of the FMAP is illustrated using examples from a study adapting psychotherapy for Chinese Americans, but it can also be readily applied to modify therapy for other ethnic groups. PMID:20625458
NASA Technical Reports Server (NTRS)
Culpepper, William X.; ONeill, Pat; Nicholson, Leonard L.
2000-01-01
An internuclear cascade and evaporation model has been adapted to estimate the LET spectrum generated during testing with 200 MeV protons. The model-generated heavy ion LET spectrum is compared to the heavy ion LET spectrum seen on orbit. This comparison is the basis for predicting single event failure rates from heavy ions using results from a single proton test. Of equal importance, this comparison of spectra also establishes an estimate of the risk of encountering a failure mode on orbit that was not detected during proton testing. Verification of the general results of the model is presented based on experiments, individual part test results, and flight data. Acceptance of this model and its estimate of remaining risk opens the hardware verification philosophy to the consideration of radiation testing with high energy protons at the board and box level instead of the more standard method of individual part testing with low energy heavy ions.
NASA Astrophysics Data System (ADS)
Kreger, Stephen T.; Sang, Alex K.; Garg, Naman; Michel, Julia
2013-05-01
Fiber-optic ultrasonic transducers are an important component of an active ultrasonic testing system for structural health monitoring. Fiber-optic transducers have several advantages, such as small size, light weight, and immunity to electromagnetic interference, that make them much more attractive than the currently available piezoelectric transducers, especially as embedded and permanent transducers in active ultrasonic testing for structural health monitoring. In this paper, a distributed fiber-optic laser-ultrasound generation method based on the ghost mode of tilted fiber Bragg gratings is studied. The influences of the laser power and laser pulse duration on the laser-ultrasound generation are investigated. The results of this paper are helpful for understanding the working principle of this laser-ultrasound method and improving the ultrasonic generation efficiency.
SW-846 Test Method 3511: Organic Compounds in Water by Microextraction
This method provides a procedure for extracting selected volatile and semivolatile organic compounds from water. The microscale approach minimizes sample size and solvent usage, thereby reducing the supply costs, health and safety risks, and waste generated.
Patterned thin metal film for the lateral resolution measurement of photoacoustic tomography
2012-01-01
Background: Image quality assessment methods for photoacoustic tomography have not yet been completely standardized. Due to the combined nature of photonic signal generation and ultrasonic signal transmission in biological tissue, neither traditional optical nor ultrasonic methods can be used without modification. An optical resolution measurement technique was investigated for its feasibility for resolution measurement of photoacoustic tomography. Methods: A patterned thin metal film deposited on silica glass provides high contrast in optical imaging due to high reflectivity from the metal film and high transmission from the glass. It also provides high contrast when used for photoacoustic tomography because the thin metal film can absorb pulsed laser energy. A US Air Force 1951 resolution target was used to generate a patterned photoacoustic signal to measure the lateral resolution. A transducer with 2.25 MHz bandwidth was tested for lateral resolution measurement with samples submerged in water and in a gelatinous block. Results: A photoacoustic signal generated from a thin metal film deposited on a glass can propagate along the surface or through the surrounding medium. First, a series of experiments with a tilted sample confirmed that the measured photoacoustic signal is the one propagating through the medium. The lateral resolution of the photoacoustic tomography system was successfully measured for water and a gelatinous block as media: 0.33 mm and 0.35 mm in water and gelatinous material, respectively, when the 2.25 MHz transducer was used. A chicken embryo was tested for biomedical applications. Conclusions: A patterned thin metal film sample was tested for its feasibility of measuring the lateral resolution of a photoacoustic tomography system. Lateral resolutions in water and gelatinous material were successfully measured using the proposed method. Measured resolutions agreed well with theoretical values. PMID:22794510
NASA Astrophysics Data System (ADS)
McClanahan, James Patrick
Eddy Current Testing (ECT) is a Non-Destructive Examination (NDE) technique that is widely used in power generating plants (both nuclear and fossil) to test the integrity of heat exchanger (HX) and steam generator (SG) tubing. Specifically for this research, laboratory-generated, flawed tubing data were examined. The purpose of this dissertation is to develop and implement an automated method for the classification and an advanced characterization of defects in HX and SG tubing. These two improvements enhanced the robustness of characterization as compared to traditional bobbin-coil ECT data analysis methods. A more robust classification and characterization of the tube flaw in-situ (while the SG is on-line but not when the plant is operating) should provide valuable information to the power industry. The following are the conclusions reached from this research. A feature extraction program acquiring relevant information from both the mixed, absolute and differential data was successfully implemented. The continuous wavelet transform (CWT) was utilized to extract more information from the mixed, complex differential data. Image processing techniques used to extract the information contained in the generated CWT classified the data with a high success rate. The data were accurately classified, utilizing the compressed feature vector and using a Bayes classification system. An estimation of the upper bound for the probability of error, using the Bhattacharyya distance, was successfully applied to the Bayesian classification. The classified data were separated according to flaw type (classification) to enhance characterization. The characterization routine used dedicated, flaw-type-specific ANNs that made the characterization of the tube flaw more robust. The inclusion of outliers may help complete the feature space so that classification accuracy is increased. Given that the eddy current test signals appear very similar, there may not be sufficient information to make an extremely accurate (>95%) classification or an advanced characterization using this system. It is necessary to have a larger database for more accurate system learning.
Cost analysis in the toxicology laboratory.
Travers, E M
1990-09-01
The process of determining laboratory sectional and departmental costs and test costs for instrument-generated and manually generated reportable results for toxicology laboratories has been outlined in this article. It is hoped that the basic principles outlined in the preceding text will clarify and elucidate one of the most important areas needed for laboratory fiscal integrity and its survival in these difficult times for health care providers. The following general principles derived from this article are helpful aids for managers of toxicology laboratories. 1. To manage a cost-effective, efficient toxicology laboratory, several factors must be considered: the laboratory's instrument configuration, test turnaround time needs, the test menu offered, the analytic methods used, the cost of labor based on time expended and the experience and educational level of the staff, and logistics that determine specimen delivery time and costs. 2. There is a wide variation in costs for toxicologic methods, which requires that an analysis of capital (equipment) purchase and operational (test performance) costs be performed to avoid waste, purchase wisely, and determine which tests consume the majority of the laboratory's resources. 3. Toxicologic analysis is composed of many complex steps. Each step must be individually cost-accounted. Screening test results must be confirmed, and the cost of both steps must be included in the cost per reportable result. 4. Total costs will vary in the same laboratory and between laboratories based on differences in salaries paid to technical staff, differences in reagent/supply costs, the number of technical staff needed to operate the analyzer or perform the method, and the inefficient use of highly paid staff to operate the analyzer or perform the method. 5. Since direct test costs vary directly with the type and number of analyzers or methods and are dependent on the operational mode designed by the manufacturer, laboratory managers should construct an actual test-cost database for each instrument or method in use to accurately compare costs using the "bottom-up" approach. 6. Laboratory expenses can be examined from three perspectives: total laboratory, laboratory section, and subsection workstation. The objective is to track all laboratory expenses through each of these levels. 7. In the final analysis, a portion of total laboratory expenses must be allocated to each unit of laboratory output--the billable procedure or, in laboratories where tests are not billed, the tests produced. (ABSTRACT TRUNCATED AT 400 WORDS)
NASA Technical Reports Server (NTRS)
Lohner, Kevin A. (Inventor); Mays, Jeffrey A. (Inventor); Sevener, Kathleen M. (Inventor)
2004-01-01
A method for designing and assembling a high performance catalyst bed gas generator for use in decomposing propellants, particularly hydrogen peroxide propellants, for use in target, space, and on-orbit propulsion systems and low-emission terrestrial power and gas generation. The gas generator utilizes a sectioned catalyst bed system, and incorporates a robust, high temperature mixed metal oxide catalyst. The gas generator requires no special preheat apparatus or special sequencing to meet start-up requirements, enabling a fast overall response time. The high performance catalyst bed gas generator system has consistently demonstrated high decomposition efficiency, extremely low decomposition roughness, and long operating life on multiple test articles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pitarka, A.
In this project we developed GEN_SRF4, a computer program for generating kinematic rupture models, compatible with the SRF format, using the Irikura and Miyake (2011) asperity-based earthquake rupture model (IM2011 hereafter). IM2011, also known as Irikura's recipe, has been widely used to model and simulate ground motion from earthquakes in Japan. An essential part of the method is its kinematic rupture generation technique, which is based on a deterministic rupture asperity modeling approach. The source model simplicity and efficiency of IM2011 at reproducing ground motion from earthquakes recorded in Japan make it attractive to developers and users of the Southern California Earthquake Center Broadband Platform (SCEC BB platform). Besides writing the code, the objective of our study was to test the transportability of IM2011 to broadband simulation methods used by the SCEC BB platform. Here we test it using the Graves and Pitarka (2010) method, implemented in the platform. We performed broadband (0.1-10 Hz) ground motion simulations for an M6.7 scenario earthquake using rupture models produced with both GEN_SRF4 and the rupture generator of Graves and Pitarka (2016) (GP2016 hereafter). In the simulations we used the same Green's functions and the same approaches for calculating the low-frequency and high-frequency parts of the ground motion.
On the design of Henon and logistic map-based random number generator
NASA Astrophysics Data System (ADS)
Magfirawaty; Suryadi, M. T.; Ramli, Kalamullah
2017-10-01
The key sequence is one of the main elements of a cryptosystem. The True Random Number Generator (TRNG) approach is one way of generating the key sequence. The randomness sources of TRNGs are divided into three main groups: electrical-noise based, jitter based, and chaos based. The chaos-based group utilizes a non-linear dynamic system (continuous-time or discrete-time) as an entropy source. In this study, a new design of TRNG based on a discrete-time chaotic system is proposed and then simulated in LabVIEW. The principle of the design consists of combining 2D and 1D chaotic systems. A mathematical model is implemented for numerical simulations. We used a comparator process as a harvester method to obtain the series of random bits. Without any post-processing, the proposed design generated a random bit sequence with a high entropy value that passed all NIST SP 800-22 statistical tests.
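The combination of a 1D and a 2D map with a comparator harvester can be sketched in a few lines. The toy below illustrates the general idea only; the seeds, map parameters, and comparator rescaling are arbitrary example values, not the authors' design, and a real TRNG would draw entropy from physical measurements rather than fixed seeds.

```python
import numpy as np

def chaotic_bits(n, x=0.631, h=(0.1, 0.3), r=3.99, a=1.4, b=0.3):
    """Toy bit harvester: compare a logistic-map state against a rescaled
    Henon-map state each iteration (illustrative parameters throughout)."""
    hx, hy = h
    bits = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)                       # 1D logistic map
        hx, hy = 1.0 - a * hx * hx + hy, b * hx     # 2D Henon map
        bits[i] = 1 if x > (hx + 1.5) / 3.0 else 0  # comparator harvesting
    return bits

bits = chaotic_bits(10000)
p1 = bits.mean()
entropy = -(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))
print("ones fraction %.3f, per-bit entropy %.3f bits" % (p1, entropy))
```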
An extension of the directed search domain algorithm to bilevel optimization
NASA Astrophysics Data System (ADS)
Wang, Kaiqiang; Utyuzhnikov, Sergey V.
2017-08-01
A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, which is a classical approach for generation of a quasi-evenly distributed Pareto set in multiobjective optimization. The approach contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution rather than generate the whole Pareto frontier on the lower level in order to improve the optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a relevant comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.
VCSEL fault location apparatus and method
Keeler, Gordon A [Albuquerque, NM; Serkland, Darwin K [Albuquerque, NM
2007-05-15
An apparatus for locating a fault within an optical fiber is disclosed. The apparatus, which can be formed as a part of a fiber-optic transmitter or as a stand-alone instrument, utilizes a vertical-cavity surface-emitting laser (VCSEL) to generate a test pulse of light which is coupled into an optical fiber under test. The VCSEL is subsequently reconfigured by changing a bias voltage thereto and is used as a resonant-cavity photodetector (RCPD) to detect a portion of the test light pulse which is reflected or scattered from any fault within the optical fiber. A time interval Δt between an instant in time when the test light pulse is generated and the time the reflected or scattered portion is detected can then be used to determine the location of the fault within the optical fiber.
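The time-of-flight relation behind this scheme is the standard OTDR formula d = c·Δt/(2·n_g): halve the round trip and divide by the fiber's group index. A tiny sketch, where the group index value is an assumption typical of silica fiber:

```python
# Round-trip time-of-flight to distance along the fiber.
C = 299_792_458.0   # speed of light in vacuum [m/s]
N_GROUP = 1.468     # assumed group index of silica fiber

def fault_distance(dt_seconds: float) -> float:
    """d = c * dt / (2 * n_g); dt is the emit-to-detect interval."""
    return C * dt_seconds / (2.0 * N_GROUP)

print(fault_distance(1e-6))  # ~102 m for a 1 microsecond round trip
```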
From systems biology to dynamical neuropharmacology: proposal for a new methodology.
Erdi, P; Kiss, T; Tóth, J; Ujfalussy, B; Zalányi, L
2006-07-01
The concepts and methods of systems biology are extended to neuropharmacology in order to test and design drugs for the treatment of neurological and psychiatric disorders. Computational modelling that integrates compartmental neural modelling techniques and detailed kinetic descriptions of pharmacological modulation of transmitter-receptor interaction is offered as a method to test the electrophysiological and behavioural effects of putative drugs. Moreover, an inverse method is suggested for controlling a neural system to realise a prescribed temporal pattern. In particular, as an application of the proposed new methodology, a computational platform is offered to analyse the generation and pharmacological modulation of theta rhythm related to anxiety.
Kania, Dramane; Truong, Tam Nguyen; Montoya, Ana; Nagot, Nicolas; Van de Perre, Philippe; Tuaillon, Edouard
2015-01-01
Point-of-care testing and diagnosis of acute HIV infections play important roles in preventing transmission, but HIV rapid diagnostic tests have poor capacity to detect early infections. Filter paper can be used for capillary blood collection and HIV testing using 4th generation immunoassays. Antigen/antibody combined immunoassays were evaluated for their capacity to identify early HIV infections using filter paper, in comparison with a rapid test. Thirty-nine serum samples collected from HIV seroconverters were spotted onto filter paper and tested by the Roche Elecsys® HIV Combi PT test and the DiaSorin Liaison XL Murex HIV Ab/Ag assay. Fourth generation immunoassays identified 34 out of 39 early HIV infections using dried serum spots, whereas the Determine™ HIV-1/2 rapid test detected 24 out of 39 HIV positive sera (87.2% vs 61.5%, respectively; p = 0.009). p24 antigen was detected by the Liaison XL in 19 dried serum samples (48.7%). In the group characterized by a negative western blot, 7 out of 8 (87.5%) and 6 out of 8 (75.0%) samples were found positive for HIV using the Elecsys and the Liaison XL, respectively. None of the eight samples classified in this group of early acute infections were found positive by the rapid test. Fourth generation Ag/Ab immunoassays performed on dried serum spots had good performance for HIV testing during the early phases of HIV infection. This method may be useful for detecting early HIV infections in hard-to-reach populations and individuals living in remote areas before rapid tests become positive.
Cai, Bin; Dolly, Steven; Kamal, Gregory; Yaddanapudi, Sridhar; Sun, Baozhou; Goddu, S Murty; Mutic, Sasa; Li, Hua
2018-04-28
To investigate the feasibility of using the kV flat panel detector on a linac for consistency evaluations of kV X-ray generator performance, an in-house designed aluminum (Al) array phantom with six 9×9 cm² square regions of various thickness was used in this study. Through XML script-driven image acquisition, kV images with various acquisition settings were obtained using the kV flat panel detector. Utilizing pre-established baseline curves, the consistency of X-ray tube output characteristics, including tube voltage accuracy, exposure accuracy, and exposure linearity, was assessed through image quality assessment metrics including ROI mean intensity, ROI standard deviation (SD), and noise power spectra (NPS). The robustness of this method was tested on two linacs over a three-month period. With the proposed method, tube voltage accuracy can be verified through a consistency check with a 2% tolerance and 2 kVp intervals for forty different kVp settings. The exposure accuracy can be tested with a 4% consistency tolerance for three mAs settings over forty kVp settings. The exposure linearity tested with three mAs settings achieved a coefficient of variation (CV) of 0.1. We propose a novel approach that uses the kV flat panel detector available on the linac for X-ray generator testing. This approach eliminates the inefficiencies and variability associated with using third-party QA detectors while enabling an automated process.
Self-correcting random number generator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S.; Pooser, Raphael C.
2016-09-06
A system and method for generating random numbers. The system may include a random number generator (RNG), such as a quantum random number generator (QRNG), configured to self-correct or adapt in order to substantially achieve randomness from the output of the RNG. By adapting, the RNG may generate a random number that may be considered random regardless of whether the random number itself is tested as such. As an example, the RNG may include components to monitor one or more characteristics of the RNG during operation, and may use the monitored characteristics as a basis for adapting, or self-correcting, to provide a random number according to one or more performance criteria.
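The patent abstract leaves the correction mechanism open; one classic example of output self-correction that monitors and removes bias is the von Neumann extractor. The sketch below illustrates that general idea only and is not the patented design.

```python
import numpy as np

def von_neumann_correct(bits):
    """Classic debiasing step: map bit pairs 01 -> 0, 10 -> 1, drop 00/11.
    Illustrates output self-correction in general, not the patented method."""
    pairs = np.asarray(bits[: len(bits) // 2 * 2]).reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]   # discard the agreeing pairs
    return pairs[keep, 0]               # first bit of each disagreeing pair

raw = (np.random.random(100_000) < 0.6).astype(np.uint8)  # biased source, P(1)=0.6
corrected = von_neumann_correct(raw)
print("raw bias %.3f -> corrected bias %.3f" % (raw.mean(), corrected.mean()))
```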
Gray correlation analysis and prediction models of living refuse generation in Shanghai city.
Liu, Gousheng; Yu, Jianguo
2007-01-01
A better understanding of the factors that affect the generation of municipal living refuse (MLF) and the accurate prediction of its generation are crucial for municipal planning projects and city management. Up to now, most design efforts have been based on a rough prediction of MLF without any actual support. In this paper, based on published data on socioeconomic variables and MLF generation from 1990 to 2003 in the city of Shanghai, the main factors that affect MLF generation have been quantitatively studied using the gray correlation coefficient method. Several gray models, such as GM(1,1), GIM(1), GPPM(1), and GLPM(1), have been studied, and the predicted results are verified with a subsequent residual test. Results show that, among the seven selected factors, consumption of gas, water, and electricity are the three largest factors affecting MLF generation, and GLPM(1) is the optimal model for predicting MLF generation. Through this model, the predicted MLF generation in 2010 in Shanghai is 7.65 million tons. The methods and results developed in this paper can provide valuable information for MLF management and related municipal planning projects.
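The GM(1,1) model underlying this family of gray predictors is compact enough to sketch: accumulate the series, fit the whitening equation dx⁽¹⁾/dt + a·x⁽¹⁾ = b by least squares on the background values, and difference the fitted accumulation to forecast. A minimal sketch in the standard textbook formulation follows; the input series is invented, not the Shanghai data.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Minimal GM(1,1) grey forecasting sketch (textbook formulation)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # developing coeff., grey input
    def x1_hat(k):                                    # fitted accumulation at index k
        return (x0[0] - b / a) * np.exp(-a * k) + b / a
    n = len(x0)
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

# Example with made-up annual refuse tonnages (million tons)
print(gm11_forecast([5.2, 5.5, 5.9, 6.1, 6.4, 6.8], steps=3))
```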
Jahn, B; Stüben, A; Bhakdi, S
1996-01-01
Two colorimetric methods that use Alamar Blue or 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) for assaying the in vitro activities of antifungal agents have been described. We report that both tests performed similarly when the antifungal activity of amphotericin B against Candida albicans was determined. However, only the MTT test generated interpretable data when Aspergillus fumigatus was used. PMID:8818910
Negeri, Zelalem F; Shaikh, Mateen; Beyene, Joseph
2018-05-11
Diagnostic or screening tests are widely used in medical fields to classify patients according to their disease status. Several statistical models for meta-analysis of diagnostic test accuracy studies have been developed to synthesize test sensitivity and specificity of a diagnostic test of interest. Because of the correlation between test sensitivity and specificity, modeling the two measures using a bivariate model is recommended. In this paper, we extend the current standard bivariate linear mixed model (LMM) by proposing two variance-stabilizing transformations: the arcsine square root and the Freeman-Tukey double arcsine transformation. We compared the performance of the proposed methods with the standard method through simulations using several performance measures. The simulation results showed that our proposed methods performed better than the standard LMM in terms of bias, root mean square error, and coverage probability in most of the scenarios, even when data were generated assuming the standard LMM. We also illustrated the methods using two real data sets.
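Both transformations are one-liners: for a proportion p = x/n the arcsine square root is asin(√p), and the Freeman-Tukey double arcsine averages asin(√(x/(n+1))) and asin(√((x+1)/(n+1))). A sketch with invented example counts:

```python
import numpy as np

def arcsine_sqrt(p):
    """Arcsine square-root transform of a proportion p."""
    return np.arcsin(np.sqrt(p))

def freeman_tukey(x, n):
    """Freeman-Tukey double-arcsine transform of x events out of n."""
    return 0.5 * (np.arcsin(np.sqrt(x / (n + 1)))
                  + np.arcsin(np.sqrt((x + 1) / (n + 1))))

# Example: per-study sensitivities as true positives / diseased subjects
tp = np.array([45, 30, 88])
n = np.array([50, 40, 95])
print(arcsine_sqrt(tp / n))
print(freeman_tukey(tp, n))
```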
Experimental Investigation of Project Orion Crew Exploration Vehicle Aeroheating in AEDC Tunnel 9
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Horvath, Thomas J.; Berger, Karen T.; Lillard, Randolph P.; Kirk, Benjamin S.; Coblish, Joseph J.; Norris, Joseph D.
2008-01-01
An investigation of the aeroheating environment of the Project Orion Crew Entry Vehicle has been performed in the Arnold Engineering Development Center Tunnel 9. The goals of this test were to measure turbulent heating augmentation levels on the heat shield and to obtain high-fidelity heating data for assessment of computational fluid dynamics methods. Laminar and turbulent predictions were generated for all wind tunnel test conditions and comparisons were performed with the data for the purpose of helping to define uncertainty margins for the computational method. Data from both the wind tunnel test and the computational study are presented herein.
Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L
2018-01-01
Aims: A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods: We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results: Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions: The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of the performance characteristics of quantitative molecular assays prior to implementation in the clinical molecular laboratory. PMID:28747393
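Both analyses are short enough to sketch. The fragment below computes Bland-Altman bias and 95% limits of agreement, plus a Deming regression fit with error-variance ratio λ; the paired values are invented for illustration, not the paper's validation data.

```python
import numpy as np

def bland_altman(x, y):
    """Mean bias and 95% limits of agreement between two methods."""
    d = np.asarray(y, float) - np.asarray(x, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def deming(x, y, lam=1.0):
    """Deming regression slope/intercept; lam = ratio of error variances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = x.var(ddof=1), y.var(ddof=1)
    sxy = np.cov(x, y)[0, 1]
    slope = (syy - lam * sxx
             + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, y.mean() - slope * x.mean()

x = [10, 22, 35, 41, 58]   # invented paired quantitative values
y = [12, 21, 37, 44, 60]
print(bland_altman(x, y))
print(deming(x, y))
```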
Electrostatic testing of thin plastic materials
NASA Technical Reports Server (NTRS)
Skinner, S. Ballou
1988-01-01
Ten thin plastic materials (Velostat, RCAS 1200, Llumalloy, Herculite 80, RCAS 2400, Wrightlon 7000, PVC, Aclar 22A, Mylar, and Polyethylene) were tested for electrostatic properties by four different devices: (1) The static decay meter, (2) the manual triboelectric testing device, (3) the robotic triboelectric testing device, and (4) the resistivity measurement adapter device. The static decay meter measured the electrostatic decay rates in accordance with the Federal Test Method Standard 101B, Method 4046. The manual and the robotic triboelectric devices measured the triboelectric generated peak voltages and the five-second decay voltages in accordance with the criteria for acceptance standards at Kennedy Space Center. The resistivity measurement adapter measured the surface resistivity of each material. An analysis was made to correlate the data among the four testing devices. For the material tested the pass/fail results were compared for the 4046 method and the triboelectric testing devices. For the limited number of materials tested, the relationship between decay rate and surface resistivity was investigated as well as the relationship between triboelectric peak voltage and surface resistivity.
Vadnjal, Ana Laura; Etchepareborda, Pablo; Federico, Alejandro; Kaufmann, Guillermo H
2013-03-20
We present a method to determine micro- and nano-scale in-plane displacements based on the phase singularities generated by applying directional wavelet transforms to speckle pattern images. The spatial distribution of the phase singularities obtained by the wavelet transform forms a network, which is characterized by two quasi-orthogonal directions. The displacement value is determined by identifying the intersection points of the network before and after the displacement produced by the tested object. The performance of this method is evaluated using simulated speckle patterns and experimental data. The proposed approach is compared with the optical vortex metrology and digital image correlation methods in terms of performance and noise robustness, and the advantages and limitations associated with each method are also discussed.
de la Calle, Maria B; Devesa, Vicenta; Fiamegos, Yiannis; Vélez, Dinoraz
2017-09-01
The European Food Safety Authority (EFSA) underlined in its Scientific Opinion on Arsenic in Food that, in order to support a sound assessment of dietary exposure to inorganic arsenic, information about the distribution of arsenic species in various food types must be generated. A method previously validated in a collaborative trial has been applied to determine inorganic arsenic (iAs) in a wide variety of food matrices, covering grains, mushrooms, and food of marine origin (31 samples in total). The method is based on detection by flow injection-hydride generation-atomic absorption spectrometry of the iAs selectively extracted into chloroform after digestion of the proteins with concentrated HCl. The method is characterized by a limit of quantification of 10 µg/kg dry weight, which allowed quantification of inorganic arsenic in a large number of food matrices. Information is provided about the performance scores given to results obtained with this method, as reported by different laboratories in several proficiency tests. The percentage of satisfactory results obtained with the discussed method is higher than that of results obtained with other analytical approaches.
[Dental arch form reverting by four-point method].
Pan, Xiao-Gang; Qian, Yu-Fen; Weng, Si-En; Feng, Qi-Ping; Yu, Quan
2008-04-01
To explore a simple method of deriving an individual dental arch form template for wire bending, the individual dental arch form was generated by a four-point method: by defining the central point of the bracket on the bilateral lower second premolars and first molars, a certain individual dental arch form could be generated. The arch-form-generating procedure was then developed into computer software for printing the arch form. The four-point-method arch form was evaluated by comparison with direct model measurement on linear and angular parameters. The accuracy and reproducibility were assessed by paired t test and concordance correlation coefficient with the Medcalc 9.3 software package. The arch form produced by the four-point method showed good accuracy and reproducibility (the linear concordance correlation coefficient was 0.9909 and the angular concordance correlation coefficient was 0.8419). The dental arch form derived by the four-point method can reproduce the individual dental arch form.
Vexler, Albert; Tanajian, Hovig; Hutson, Alan D
In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
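The classical Monte Carlo p-value in method 1 is typically computed with the add-one correction p = (1 + #{T_b ≥ T_obs})/(B + 1). A Python sketch of that bookkeeping follows; the null statistic here is a toy stand-in, not the paper's density-based empirical likelihood statistic.

```python
import numpy as np

def mc_pvalue(t_obs, simulate_stat, B=9999, rng=None):
    """Monte Carlo p-value: p = (1 + #{T_b >= t_obs}) / (B + 1).
    simulate_stat(rng) must draw one statistic under the null (user-supplied)."""
    rng = np.random.default_rng() if rng is None else rng
    exceed = sum(simulate_stat(rng) >= t_obs for _ in range(B))
    return (1 + exceed) / (B + 1)

# Toy null: statistic = max |value| in a standard normal sample of size 20
sim = lambda rng: np.abs(rng.standard_normal(20)).max()
print(mc_pvalue(3.2, sim, B=2000))
```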
Wang, Jinke; Cheng, Yuanzhi; Guo, Changyong; Wang, Yadong; Tamura, Shinichi
2016-05-01
We propose a fully automatic 3D segmentation framework to segment the liver in challenging cases that contain low contrast with adjacent organs and the presence of pathologies, from abdominal CT images. First, all of the atlases in the selected training datasets are weighted by calculating the similarities between the atlases and the test image, to dynamically generate a subject-specific probabilistic atlas for the test image. The most likely liver region of the test image is further determined based on the generated atlas. A rough segmentation is obtained by a maximum a posteriori classification of the probability map, and the final liver segmentation is produced by a shape-intensity prior level set in the most likely liver region. Our method is evaluated and demonstrated on 25 test CT datasets from our partner site, and its results are compared with two state-of-the-art liver segmentation methods. Moreover, our performance results on 10 MICCAI test datasets were submitted to the organizers for comparison with the other automatic algorithms. Using the 25 test CT datasets, the average symmetric surface distance is [Formula: see text] mm (range 0.62-2.12 mm), the root mean square symmetric surface distance error is [Formula: see text] mm (range 0.97-3.01 mm), and the maximum symmetric surface distance error is [Formula: see text] mm (range 12.73-26.67 mm) by our method. Our method on the 10 MICCAI test data sets ranks 10th among all 47 automatic algorithms on the site as of July 2015. Quantitative results, as well as qualitative comparisons of segmentations, indicate that our method is a promising tool to improve the efficiency of both techniques. The applicability of the proposed method to some challenging clinical problems and to the segmentation of the liver is demonstrated with good results in both quantitative and qualitative experiments. This study suggests that the proposed framework can be good enough to replace the time-consuming and tedious slice-by-slice manual segmentation approach.
Dr. Thresher's group of 35 employees was responsible for next-generation wind turbine research; the group also created new codes, new methods of analysis for wind turbine testing, and new development methods. Thresher was asked to work for two years with DOE in Washington, D.C. to manage the innovative wind turbine program.
ERIC Educational Resources Information Center
Alsuwaileh, Bader Ghannam; Russ-Eft, Darlene F.; Alshurai, Saad R.
2016-01-01
The research herein used a sequential mixed methods design to investigate why academic dishonesty is widespread among the students at the College of Basic Education in Kuwait. Qualitative interviews were conducted to generate research hypotheses. Then, using questionnaire survey, the research hypotheses were quantitatively tested. The findings…
ERIC Educational Resources Information Center
Huitema, Bradley E.; McKean, Joseph W.
2007-01-01
Regression models used in the analysis of interrupted time-series designs assume statistically independent errors. Four methods of evaluating this assumption are the Durbin-Watson (D-W), Huitema-McKean (H-M), Box-Pierce (B-P), and Ljung-Box (L-B) tests. These tests were compared with respect to Type I error and power under a wide variety of error…
Assembly, Integration, and Test Methods for Operationally Responsive Space Satellites
2010-03-01
like assembly and vibration tests, to ensure there have been no failures induced by the activities. External thermal control blankets and radiator… configuration of the satellite post-vibration test and adds time to the process. • Thermal blanketing is not realistic with current technology or… patterns for thermal blankets and radiator tape. The computer aided drawing (CAD) solid model was used to generate patterns that were cut and applied real
NASA Astrophysics Data System (ADS)
Razali Hanipah, M.; Razul Razali, Akhtar
2017-10-01
A free-piston engine generator (FPEG) provides a novel method for electrical power generation in hybrid electric vehicle applications, though prototype development and testing are scarcely reported. This paper investigates the motion control strategy for motoring the FPEG during starting. Two motion profiles are investigated, namely trapezoidal velocity and S-curve velocity. Both motion profiles were investigated numerically, and the results show that the S-curve motion can only achieve 80% of the stroke when operated at the proposed motoring speed of 10 Hz.
Testa, Maria; Livingston, Jennifer A.; VanZile-Tamsen, Carol
2011-01-01
A mixed methods approach, combining quantitative with qualitative data methods and analysis, offers a promising means of advancing the study of violence. Integrating semi-structured interviews and qualitative analysis into a quantitative program of research on women’s sexual victimization has resulted in valuable scientific insight and generation of novel hypotheses for testing. This mixed methods approach is described and recommendations for integrating qualitative data into quantitative research are provided. PMID:21307032
Roberts, D J; Spellman, R A; Sanok, K; Chen, H; Chan, M; Yurt, P; Thakur, A K; DeVito, G L; Murli, H; Stankowski, L F
2012-05-01
A flow cytometric procedure for determining mitotic index (MI) as part of the metaphase chromosome aberrations assay, developed and utilized routinely at Pfizer as part of their standard assay design, has been adopted successfully by Covance laboratories. This method, using antibodies against phosphorylated histone tails (H3PS10) and nucleic acid stain, has been evaluated by the two independent test sites and compared to manual scoring. Primary human lymphocytes were treated with cyclophosphamide, mitomycin C, benzo(a)pyrene, and etoposide at concentrations inducing dose-dependent cytotoxicity. Deming regression analysis indicates that the results generated via flow cytometry (FCM) were more consistent between sites than those generated via microscopy. Further analysis using the Bland-Altman modification of the Tukey mean difference method supports this finding, as the standard deviations (SDs) of differences in MI generated by FCM were less than half of those generated manually. Decreases in scoring variability owing to the objective nature of FCM, and the greater number of cells analyzed, make FCM a superior method for MI determination. In addition, the FCM method has proven to be transferable and easily integrated into standard genetic toxicology laboratory operations. Copyright © 2012 Wiley Periodicals, Inc.
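The Bland-Altman (Tukey mean-difference) comparison used in this study is simple to reproduce in outline. The Python sketch below computes the bias and 95% limits of agreement for two sets of mitotic-index measurements; the sample values are invented for illustration.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Tukey mean-difference (Bland-Altman) statistics for two methods
    measuring the same samples: bias and 95% limits of agreement."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    mean = (a + b) / 2.0
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return mean, diff, bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative mitotic-index values (percent) from two test sites:
site1 = np.array([4.1, 3.8, 2.9, 1.7, 0.9])
site2 = np.array([4.0, 3.6, 3.1, 1.8, 1.1])
_, _, bias, limits = bland_altman(site1, site2)
print(f"bias={bias:.3f}, 95% limits of agreement={limits}")
```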
Waste Characterization Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vigil-Holterman, Luciana R.; Naranjo, Felicia Danielle
2016-02-02
This report discusses ways to classify waste as outlined by LANL. Waste Generators must make a waste determination and characterize regulated waste by appropriate analytical testing or use of acceptable knowledge (AK). Use of AK for characterization requires several source documents. Waste characterization documentation must be accurate, sufficient, and current (i.e., updated); relevant and traceable to the waste stream’s generation, characterization, and management; and not merely a list of information sources.
Spectral turning bands for efficient Gaussian random fields generation on GPUs and accelerators
NASA Astrophysics Data System (ADS)
Hunger, L.; Cosenza, B.; Kimeswenger, S.; Fahringer, T.
2015-11-01
A random field (RF) is a set of correlated random variables associated with different spatial locations. RF generation algorithms are of crucial importance for many scientific areas, such as astrophysics, geostatistics, computer graphics, and many others. Current approaches commonly make use of the 3D fast Fourier transform (FFT), which does not scale well for RFs larger than the available memory; they are also limited to regular rectilinear meshes. We introduce random field generation with the turning band method (RAFT), an RF generation algorithm based on the turning band method that is optimized for massively parallel hardware such as GPUs and accelerators. Our algorithm replaces the 3D FFT with a lower-order, one-dimensional FFT followed by a projection step and is further optimized with loop unrolling and blocking. RAFT can easily generate RFs on non-regular (non-uniform) meshes and efficiently produce fields with mesh sizes larger than the available device memory by using a streaming, out-of-core approach. Our algorithm generates RFs with the correct statistical behavior and is tested on a variety of modern hardware, such as NVIDIA Tesla, AMD FirePro and Intel Phi. RAFT is faster than the traditional methods on regular meshes and has been successfully applied to two real case scenarios: planetary nebulae and cosmological simulations.
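For readers unfamiliar with the turning band idea, the hedged Python sketch below superposes 1D FFT-generated line processes along random directions on a 2D grid. It is a simplified CPU illustration with an assumed spectral density, not the RAFT implementation or its GPU optimizations.

```python
import numpy as np

def turning_bands_2d(xs, ys, n_lines=64, n_fft=1024, dx=0.05, seed=None):
    """Simplified turning-bands sketch: sum 1D Gaussian line processes
    (generated by FFT with an assumed squared-exponential spectrum) along
    random directions. Illustrative only; the exact 1D spectrum needed to
    match a target 2D/3D covariance is not derived here."""
    rng = np.random.default_rng(seed)
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros_like(X)
    freqs = np.fft.rfftfreq(n_fft, d=dx)
    spectrum = np.exp(-(freqs * 2.0) ** 2)          # assumed spectral density
    for _ in range(n_lines):
        theta = rng.uniform(0.0, np.pi)
        u = np.array([np.cos(theta), np.sin(theta)])
        phases = rng.uniform(0, 2 * np.pi, freqs.size)
        coeffs = np.sqrt(spectrum) * np.exp(1j * phases)
        line = np.fft.irfft(coeffs, n=n_fft) * np.sqrt(n_fft)
        proj = X * u[0] + Y * u[1]                  # project grid onto the line
        idx = np.clip(((proj - proj.min()) / dx).astype(int), 0, n_fft - 1)
        field += line[idx]
    return field / np.sqrt(n_lines)

grid = turning_bands_2d(np.linspace(0, 10, 128), np.linspace(0, 10, 128))
```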
Visual Persons Behavior Diary Generation Model based on Trajectories and Pose Estimation
NASA Astrophysics Data System (ADS)
Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li
2018-03-01
The behavior patterns of persons are an important output of surveillance analysis. This paper focuses on a generation model for a visual person behavior diary. The pipeline includes person detection, tracking, and person behavior classification. The deep convolutional neural model YOLOv2 (You Only Look Once) is adopted for the person detection module. Multi-person tracking is built on this detection framework, with the Hungarian assignment algorithm used for matching. The person appearance model integrates an HSV color model and a hash code model, and the person's motion is estimated by a Kalman filter. Multiple objects are matched with existing tracklets through appearance and motion-location distances using the Hungarian assignment method. A long continuous trajectory for one person is obtained by a spatial-temporal continual linking algorithm, and face recognition information is used to identify the trajectory. The identified trajectories can then be used to generate a visual diary of person behavior based on scene context information and person action estimation. The relevant modules were tested on public datasets and our own captured video sets. The test results show that the method can generate a visual person behavior diary with reasonable accuracy.
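The Hungarian assignment step described above can be sketched briefly. The following Python fragment builds a combined appearance-and-motion cost matrix and solves the assignment with SciPy; the weights, distance normalization, and gating threshold are illustrative assumptions, and feature vectors are assumed unit-normalized.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_detections_to_tracklets(det_feats, det_centers, trk_feats, trk_pred,
                                  w_app=0.5, w_mot=0.5, gate=0.8):
    """Assign detections to tracklets using a combined appearance + motion
    cost; trk_pred holds Kalman-predicted box centers. All weights and the
    gating threshold are illustrative, not from the paper."""
    n_det, n_trk = len(det_feats), len(trk_feats)
    cost = np.zeros((n_det, n_trk))
    for i in range(n_det):
        for j in range(n_trk):
            app = 1.0 - det_feats[i] @ trk_feats[j]       # cosine-style distance
            mot = np.linalg.norm(det_centers[i] - trk_pred[j]) / 100.0
            cost[i, j] = w_app * app + w_mot * mot
    rows, cols = linear_sum_assignment(cost)              # Hungarian solution
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < gate]
```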
Kim, Young-Gon; Song, Kuk-Hyun; Lee, Dong-Hoon; Joo, Sung-Min
2018-03-01
The crack tip opening displacement (CTOD) test, which evaluates the fracture toughness of a cracked material, is very important for ensuring the stability of structures under severe service environments. The validity of a CTOD test result is judged against several criteria in the specification standards, one of which concerns the length of the fatigue pre-crack artificially generated inside the specimen. For acceptable CTOD test results, the fatigue pre-crack must have a reasonably sharp crack front. The fatigue crack propagates from the tip of the machined notch and may propagate irregularly due to the residual stress field. To overcome this problem, test codes suggest the local compression method, the reversed bending method, and the stepwise high-R ratio method to reduce the disparity of the residual stress distribution inside the specimen. In this paper, the relation between the degree of local compression and the distribution of welding residual stress is analyzed by finite element analysis in order to determine the amount of effective local compression of the test piece. Analysis results show that the initial welding residual stress varies dramatically in three dimensions during cutting, notch machining, and local compression, owing to changes in the internal restraint force. From the simulation results, the authors find that there is an optimum amount of local compression for obtaining regular fatigue pre-crack propagation; local compression of 0.5% of the model width is the most effective for making the residual stress distribution uniform.
Using pattern enumeration to accelerate process development and ramp yield
NASA Astrophysics Data System (ADS)
Zhuang, Linda; Pang, Jenny; Xu, Jessy; Tsai, Mengfeng; Wang, Amy; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh; Ding, Hua
2016-03-01
During the process setup phase of a new technology node, foundries do not initially have enough product chip designs to conduct exhaustive process development. Different operational teams use manually designed simple test keys to set up their process flows and recipes. When the very first version of the design rule manual (DRM) is ready, foundries enter the process development phase, where new experimental design data is manually created based on these design rules. However, these IP/test keys contain very uniform or simple design structures. Such designs normally do not contain critical design structures or process-unfriendly design patterns that pass design rule checks but are found to be less manufacturable. A method is therefore desired to generate, at the development stage, exhaustive test patterns allowed by the design rules in order to verify the gap between design rules and process. This paper presents a novel method for generating test key patterns that contain known problematic patterns as well as any constructs that designers could possibly draw based on current design rules. The enumerated test key patterns contain the most critical design structures allowed by any particular design rule. A layout profiling method is used to analyze design chips in order to find potential weak points on new incoming products, so the fab can take preemptive action to avoid yield loss. This is achieved by comparing different products and leveraging the knowledge learned from previously manufactured chips to find possible yield detractors.
Investigating pyrolysis/incineration as a method of resource recovery from solid waste
NASA Technical Reports Server (NTRS)
Robertson, Bobby J.; Lemay, Christopher S.
1993-01-01
Pyrolysis/incineration (P/I) is a physicochemical method for the generation of recoverable resources from solid waste materials such as inedible plant biomass (IPB), paper, plastics, cardboard, etc. P/I permits the collection of numerous gases with a minimal amount of solid residue. Pyrolysis, also known as starved-air incineration, is usually conducted at relatively high temperatures (greater than 500 deg C) in the absence of oxygen. Incineration is conducted at lower temperatures in the presence of oxygen. The primary purpose of this study was to design, construct, and test a model P/I system. The system design includes safety requirements for temperature and pressure. The objectives of this study were: (1) to design and construct a P/I system for incorporation with the Hybrid Regenerative Water Recovery System; (2) to initiate testing of the P/I system; (3) to collect and analyze P/I system data; (4) to consider test variables; and (5) to determine the feasibility of P/I as an effective method of resource recovery. A P/I system for the recovery of reusable resources from solid waste materials was designed, constructed, and tested. Since a large amount of IPB will be generated in a space-based habitat on the lunar surface and Mars, IPB was the primary waste material tested in the system. Analysis of the effluent gases was performed to determine which gases could be used in a life support system.
Rapid 3D bioprinting from medical images: an application to bone scaffolding
NASA Astrophysics Data System (ADS)
Lee, Daniel Z.; Peng, Matthew W.; Shinde, Rohit; Khalid, Arbab; Hong, Abigail; Pennacchi, Sara; Dawit, Abel; Sipzner, Daniel; Udupa, Jayaram K.; Rajapakse, Chamith S.
2018-03-01
Bioprinting of tissue has applications throughout medicine. Recent advances in medical imaging allow the generation of 3-dimensional models that can then be 3D printed. However, the conventional method of converting medical images to 3D-printable G-Code instructions has several limitations, namely significant processing time for large, high-resolution images and the loss of microstructural surface information through surface triangulation and subsequent reslicing. We have overcome these issues by creating a Java program that skips the intermediate triangulation and reslicing steps and directly converts binary DICOM images into G-Code. In this study, we tested the two methods of G-Code generation on the application of synthetic bone graft scaffold generation. We imaged human cadaveric proximal femurs at an isotropic resolution of 0.03 mm using a high-resolution peripheral quantitative computed tomography (HR-pQCT) scanner. These images, in the Digital Imaging and Communications in Medicine (DICOM) format, were then processed through two methods. In each method, slices and regions of print were selected, filtered to generate a smoothed image, and thresholded. In the conventional method, these processed images are converted to the STereoLithography (STL) format and then resliced to generate G-Code. In the new, direct method, these processed images are run through our Java program and directly converted to G-Code. File size, processing time, and print time were measured for each. We found that the new method produced a significant reduction in G-Code file size as well as processing time (92.23% reduction). This allows for more rapid 3D printing from medical images.
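The direct image-to-G-Code idea (skipping STL triangulation and reslicing) can be sketched in a few lines. The hypothetical Python sketch below rasterizes each binary slice into extrusion moves; the original tool is a Java program, and all printer parameters and the raster strategy here are assumptions.

```python
import numpy as np

def slices_to_gcode(binary_slices, pixel_mm=0.03, layer_mm=0.03,
                    feed=1200, extrude_per_mm=0.05):
    """Emit raster G-Code directly from a stack of binary masks (z, y, x),
    one extrusion move per contiguous run of 'on' pixels."""
    lines, e = ["G21 ; mm units", "G90 ; absolute positioning"], 0.0
    for z, mask in enumerate(binary_slices):
        lines.append(f"G0 Z{(z + 1) * layer_mm:.3f}")
        for y, row in enumerate(mask):
            on = np.flatnonzero(row)
            if on.size == 0:
                continue
            # split the row into contiguous runs of filled pixels
            runs = np.split(on, np.where(np.diff(on) > 1)[0] + 1)
            for run in runs:
                x0, x1 = run[0] * pixel_mm, (run[-1] + 1) * pixel_mm
                e += (x1 - x0) * extrude_per_mm
                lines.append(f"G0 X{x0:.3f} Y{y * pixel_mm:.3f}")
                lines.append(f"G1 X{x1:.3f} E{e:.4f} F{feed}")
    return "\n".join(lines)
```

Because each pixel run maps to one move, the output size scales with the segmented geometry rather than with an intermediate triangle mesh, which is consistent with the file-size reduction the study reports.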
Wangsness, Kathryn; Salfinger, Yvonne; Randolph, Robyn; Shea, Shari; Larson, Kirsten
2017-07-01
Laboratory accreditation provides a level of standardization in laboratories and confidence in generated food and feed testing results. For some laboratories, ISO/IEC 17025:2005 accreditation may not be fiscally viable, or a requested test method may be out of the scope of the laboratory's accreditation. To assist laboratories for whom accreditation is not feasible, the Association of Public Health Laboratories Data Acceptance Work Group developed a white paper entitled "Best Practices for Submission of Actionable Food and Feed Testing Data Generated in State and Local Laboratories." The basic elements of a quality management system, along with other best practices that state and local food and feed testing laboratories should follow, are included in the white paper. It also covers program-specific requirements that may need to be addressed. Communication with programs and end data users is regarded as essential for establishing the reliability and accuracy of laboratory data. Following these suggested best practices can facilitate the acceptance of laboratory data, which can result in swift regulatory action and the quick removal of contaminated product from the food supply, improving public health nationally.
NASA Astrophysics Data System (ADS)
Yuliasmi, S.; Pardede, T. R.; Nerdy; Syahputra, H.
2017-03-01
Oil palm midrib is one of the wastes generated by oil palm plants and contains 34.89% cellulose. This cellulose has the potential to produce microcrystalline cellulose, which can be used as an excipient in tablet formulations made by direct compression. Microcrystalline cellulose is the result of a controlled hydrolysis of alpha cellulose, so the alpha cellulose extraction process from oil palm midrib greatly affects the quality of the resulting microcrystalline cellulose. The purpose of this study was to compare microcrystalline cellulose produced from alpha cellulose extracted from oil palm midrib by two different methods. The first delignification method uses sodium hydroxide. The second method uses a mixture of nitric acid and sodium nitrite, followed by sodium hydroxide and sodium sulfite. Microcrystalline cellulose obtained by each method was characterized separately, including an organoleptic test, color reagent tests, a dissolution test, a pH test, and determination of functional groups by FTIR. The results were compared with microcrystalline cellulose available on the market. The characterization results showed that microcrystalline cellulose obtained by the first method has characteristics most similar to the microcrystalline cellulose available on the market.
Finck, Rachel; Lui-Deguzman, Carrie; Teng, Shih-Mao; Davis, Rebecca; Yuan, Shan
2013-04-01
Titration is a semiquantitative method used to estimate red blood cell (RBC) alloantibody reactivity. The conventional tube test (CTT) technique is the traditional method for performing titration studies. The gel microcolumn assay (GMA) is also a sensitive method to detect RBC alloantibodies. The aim of this study was to compare a GMA with the CTT technique in the performance of Rh and K alloantibody titration. Patient serum samples that contained an RBC alloantibody of a single specificity were identified through routine blood bank workflow. Parallel titration studies were performed on these samples by both the CTT method and a GMA (ID-Micro Typing System anti-IgG gel card, Micro Typing Systems, Inc., an Ortho-Clinical Diagnostics Company). Forty-eight samples were evaluated: 11 anti-D, five anti-c, 13 anti-E, one anti-C, three anti-e, and 15 anti-K. Overall, the two methods generated identical results in 21 of 48 samples. For 42 samples (87.5%), the two methods generated results within one serial dilution, and for the remaining six samples, results were within two dilutions. GMA systems may perform comparably to the CTT in titrating alloantibodies to Rh and Kell antigens. © 2012 American Association of Blood Banks.
Optimizing Associative Experimental Design for Protein Crystallization Screening
Dinç, Imren; Pusey, Marc L.; Aygün, Ramazan S.
2016-01-01
The goal of protein crystallization screening is the determination of the main factors of importance to crystallizing the protein under investigation. One of the major issues in determining these factors is that screening is often expanded to many hundreds or thousands of conditions to maximize combinatorial chemical space coverage and thereby the chances of a successful (crystalline) outcome. In this paper, we propose an experimental design method called "Associative Experimental Design (AED)" and an optimization method that includes eliminating prohibited combinations and prioritizing reagents based on AED analysis of results from protein crystallization experiments. AED generates candidate cocktails based on these initial screening results. These results are analyzed to determine those screening factors in chemical space that are most likely to lead to higher-scoring outcomes, i.e., crystals. We tested AED on three proteins derived from the hyperthermophile Thermococcus thioreducens and applied the optimization method to these proteins. Our AED method generated novel cocktails (count provided in parentheses) leading to crystals for three proteins as follows: Nucleoside diphosphate kinase (4), HAD superfamily hydrolase (2), Nucleoside kinase (1). After obtaining promising results, we tested our optimization method on four different proteins. The AED method with optimization yielded 4, 3, and 20 crystalline conditions for holo Human Transferrin, archaeal exosome protein, and Nucleoside diphosphate kinase, respectively. PMID:26955046
TTCN-3 Based Conformance Testing of Mobile Broadcast Business Management System in 3G Networks
NASA Astrophysics Data System (ADS)
Wang, Zhiliang; Yin, Xia; Xiang, Yang; Zhu, Ruiping; Gao, Shirui; Wu, Xin; Liu, Shijian; Gao, Song; Zhou, Li; Li, Peng
Mobile broadcast service is one of the most important emerging services in 3G networks. To better operate and manage mobile broadcast services, a mobile broadcast business management system (MBBMS) should be designed and developed. Such a system, with its distributed nature, complicated XML data, and security mechanism, faces many challenges in testing technology. In this paper, we study the conformance testing methodology of MBBMS and design and implement an MBBMS protocol conformance testing tool based on TTCN-3, a standardized test description language that can be used in black-box testing of reactive and distributed systems. Within this methodology and testing tool, we present a semi-automatic XML test data generation method for TTCN-3 test suites and use the HMSC model to aid the design of test suites. In addition, we propose an integrated testing method for the hierarchical MBBMS security architecture. This testing tool has been used in industrial-level testing.
Next Generation of Leaching Tests
A corresponding abstract has been cleared for this presentation. The four methods comprising the Leaching Environmental Assessment Framework are described along with the tools to support implementation of the more rigorous and accurate source terms that are developed using LEAF ...
Test Guidelines for Pesticides and Toxic Substances
Documents that specify methods EPA recommends to generate data submitted to EPA to support the registration of a pesticide, setting of a tolerance or tolerance exemption for pesticide residues, or the decision making process for an industrial chemical.
NASA Technical Reports Server (NTRS)
Krist, Steven E.; Ghaffari, Farhad
2015-01-01
Computational simulations for a Space Launch System configuration at liftoff conditions for incidence angles from 0 to 90 degrees were conducted in order to generate integrated force and moment data and longitudinal lineloads. While the integrated force and moment coefficients can be obtained from wind tunnel testing, computational analyses are indispensable in obtaining the extensive amount of surface information required to generate proper lineloads. However, beyond an incidence angle of about 15 degrees, the effects of massive flow separation on the leeward pressure field are not well captured with state-of-the-art Reynolds-Averaged Navier-Stokes methods, necessitating the employment of a Detached Eddy Simulation method. Results from these simulations are compared to the liftoff force and moment database and surface pressure data derived from a test in the NASA Langley 14- by 22-Foot Subsonic Wind Tunnel.
Nondestructive Evaluation of Carbon Fiber Bicycle Frames Using Infrared Thermography
Ibarra-Castanedo, Clemente; Klein, Matthieu; Maldague, Xavier; Sanchez-Beato, Alvaro
2017-01-01
Bicycle frames made of carbon fibre are extremely popular for high-performance cycling due to their stiffness-to-weight ratio, which enables greater power transfer. However, products manufactured using carbon fibre are sensitive to impact damage. Therefore, intelligent nondestructive evaluation is a required step to prevent failures and ensure safe usage of the bicycle. This work proposes an inspection method based on active thermography, a proven technique successfully applied to other materials. Different configurations for the inspection are tested, including power and heating time. Moreover, experiments are applied to a real bicycle frame with impact damage generated at different energies. Tests show excellent results, detecting the generated damage during the inspection. When the results are combined with advanced image post-processing methods, the signal-to-noise ratio (SNR) is greatly increased, and the size and localization of the defects are clearly visible in the images. PMID:29156650
Vision Algorithm for the Solar Aspect System of the HEROES Mission
NASA Technical Reports Server (NTRS)
Cramer, Alexander; Christe, Steven; Shih, Albert
2014-01-01
This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun for the High Energy Replicated Optics to Explore the Sun (HEROES) mission. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small fiducial markers. Images of this plate were processed in real time to determine relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an Average Intersection method, fiducial detection by a matched filter approach, identification with an ad-hoc method based on the spacing between fiducials, and image registration with a simple least squares fit. Performance is verified on a combination of artificially generated images, test data recorded on the ground, and images from the 2013 flight.
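The final registration step lends itself to a compact illustration. Below is a minimal Python sketch of a least-squares rigid fit (a standard Procrustes/Kabsch solution) mapping the known fiducial pattern onto detected image positions; the function name, example points, and the rotation-plus-translation model are assumptions, since the abstract specifies only "a simple least squares fit".

```python
import numpy as np

def fit_rigid(known_pts, detected_pts):
    """Least-squares rigid fit (rotation + translation) mapping the known
    fiducial pattern onto detected image positions (Kabsch algorithm)."""
    P = np.asarray(known_pts, float)
    Q = np.asarray(detected_pts, float)
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t                          # pointing offsets follow from t

# Example: detected fiducials shifted by (2.0, -1.5) pixels from the pattern
pattern = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)
R, t = fit_rigid(pattern, pattern + [2.0, -1.5])
```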
Applications of Automation Methods for Nonlinear Fracture Test Analysis
NASA Technical Reports Server (NTRS)
Allen, Phillip A.; Wells, Douglas N.
2013-01-01
As fracture mechanics material testing evolves, the governing test standards continue to be refined to better reflect the latest understanding of the physics of the fracture processes involved. The traditional format of ASTM fracture testing standards, utilizing equations expressed directly in the text of the standard to assess the experimental result, is self-limiting in the complexity that can be reasonably captured. The use of automated analysis techniques to draw upon a rich, detailed solution database for assessing fracture mechanics tests provides a foundation for a new approach to testing standards that enables routine users to obtain highly reliable assessments of tests involving complex, non-linear fracture behavior. Herein, the case for automating the analysis of tests of surface cracks in tension in the elastic-plastic regime is utilized as an example of how such a database can be generated and implemented for use in the ASTM standards framework. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.
Developing the Persian version of the homophone meaning generation test
Ebrahimipour, Mona; Motamed, Mohammad Reza; Ashayeri, Hassan; Modarresi, Yahya; Kamali, Mohammad
2016-01-01
Background: Finding the right word is a necessity in communication, and its evaluation has always been a challenging clinical issue, suggesting the need for valid and reliable measurements. The Homophone Meaning Generation Test (HMGT) can measure the ability to switch between verbal concepts, which is required in word retrieval. The purpose of this study was to adapt and validate the Persian version of the HMGT. Methods: The first phase involved the adaptation of the HMGT to the Persian language. The second phase concerned the psychometric testing. Word-finding performance was assessed in 90 Persian-speaking healthy individuals (20-50 years old; 45 males and 45 females) through three naming tasks: Semantic Fluency, Phonemic Fluency, and the Homophone Meaning Generation Test. The participants had no history of neurological or psychiatric diseases, alcohol abuse, severe depression, or history of speech, language, or learning problems. Results: The internal consistency coefficient was larger than 0.8 for all the items, with a total Cronbach's alpha of 0.80. Interrater and intrarater reliability were also excellent. The validity of all items was above 0.77, and the content validity index (0.99) was appropriate. The Persian HMGT had strong convergent validity with semantic and phonemic switching and adequate divergent validity with semantic and phonemic clustering. Conclusion: The Persian version of the Homophone Meaning Generation Test is an appropriate, valid, and reliable test to evaluate the ability to switch between verbal concepts in the assessment of word-finding performance. PMID:27390705
The Changing Role of the Clinical Microbiology Laboratory in Defining Resistance in Gram-negatives.
Endimiani, Andrea; Jacobs, Michael R
2016-06-01
The evolution of resistance in Gram-negatives has challenged the clinical microbiology laboratory to implement new methods for their detection. Multidrug-resistant strains present major challenges to conventional and new detection methods. More rapid pathogen identification and antimicrobial susceptibility testing have been developed for use directly on specimens, including fluorescence in situ hybridization tests, automated polymerase chain reaction systems, microarrays, mass spectroscopy, next-generation sequencing, and microfluidics. Review of these methods shows the advances that have been made in rapid detection of resistance in cultures, but limited progress in direct detection from specimens. Copyright © 2016 Elsevier Inc. All rights reserved.
A Solution Adaptive Technique Using Tetrahedral Unstructured Grids
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2000-01-01
An adaptive unstructured grid refinement technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The method is based on a combination of surface mesh subdivision and local remeshing of the volume grid. Simple functions of flow quantities are employed to detect dominant features of the flowfield. The method is designed for modular coupling with various error/feature analyzers and flow solvers. Several steady-state, inviscid flow test cases are presented to demonstrate the applicability of the method for solving practical three-dimensional problems. In all cases, accurate solutions featuring complex, nonlinear flow phenomena such as shock waves and vortices have been generated automatically and efficiently.
Waste Analysis Plan and Waste Characterization Survey, Barksdale AFB, Louisiana
1991-03-01
review to assess if analysis is needed, any analyses that are to be provided by generators, and methods to be used to meet specific waste analysis… sampling method, sampling frequency, parameters of analysis, SW-846 test methods, Department of Transportation (DOT) shipping name and hazard class… APPENDIX B, Waste Analysis Plan Rationale: 1. SAMPLING METHOD RATIONALE: Composite Liquid
ATLAS Test Program Generator II (AGEN II). Volume I. Executive Software System.
1980-08-01
features. C. To provide detailed descriptions of each of the system components and modules and their corresponding flowcharts. D. To describe methods of… contains the FORTRAN source code listings to enable programmers to do the expansions and modifications. The methods and details of adding another… characteristics of the network. The top-down implementation method is therefore suggested. This method starts at the top by designing the IVT modules in
Method and apparatus for automatically generating airfoil performance tables
NASA Technical Reports Server (NTRS)
van Dam, Cornelis P. (Inventor); Mayda, Edward A. (Inventor); Strawn, Roger Clayton (Inventor)
2006-01-01
One embodiment of the present invention provides a system that facilitates automatically generating a performance table for an object, wherein the object is subject to fluid flow. The system operates by first receiving a description of the object and testing parameters for the object. The system executes a flow solver using the testing parameters and the description of the object to produce an output. Next, the system determines if the output of the flow solver indicates negative density or pressure. If not, the system analyzes the output to determine if the output is converging. If converging, the system writes the output to the performance table for the object.
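The control flow described in this patent-style abstract can be sketched as a small driver loop. In the hypothetical Python sketch below, the `solve` callable, its output fields, and the convergence heuristic are all assumptions standing in for the actual flow solver interface.

```python
# Hypothetical driver illustrating the described control flow; the solver
# interface (the fields on `out`) is an assumption, not an actual API.
def build_performance_table(solve, geometry, test_points):
    """Run `solve` over the testing parameters, keeping only physical,
    converged outputs. `solve(geometry, params)` must return an object
    exposing min_density, min_pressure, residual_history, lift, drag, moment."""
    table = []
    for params in test_points:                # e.g. (Mach, alpha, Re) tuples
        out = solve(geometry, params)
        if out.min_density <= 0.0 or out.min_pressure <= 0.0:
            continue                          # non-physical state: discard
        if not is_converging(out.residual_history):
            continue                          # run did not converge: discard
        table.append((params, out.lift, out.drag, out.moment))
    return table

def is_converging(residuals, window=50, factor=0.1):
    """Crude convergence check: the mean residual over the last `window`
    iterations must fall below `factor` times the initial-window mean."""
    if len(residuals) < 2 * window:
        return False
    head = sum(residuals[:window]) / window
    tail = sum(residuals[-window:]) / window
    return tail < factor * head
```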
Parametric Methods for Dynamic 11C-Phenytoin PET Studies.
Mansor, Syahir; Yaqub, Maqsood; Boellaard, Ronald; Froklage, Femke E; de Vries, Anke; Bakker, Esther D M; Voskuyl, Rob A; Eriksson, Jonas; Schwarte, Lothar A; Verbeek, Joost; Windhorst, Albert D; Lammertsma, Adriaan A
2017-03-01
In this study, the performance of various methods for generating quantitative parametric images of dynamic 11C-phenytoin PET studies was evaluated. Methods: Double-baseline 60-min dynamic 11C-phenytoin PET studies, including online arterial sampling, were acquired for 6 healthy subjects. Parametric images were generated using Logan plot analysis, a basis function method, and spectral analysis. Parametric distribution volume (VT) and influx rate (K1) were compared with those obtained from nonlinear regression analysis of time-activity curves. In addition, global and regional test-retest (TRT) variability was determined for parametric K1 and VT values. Results: Biases in VT observed with all parametric methods were less than 5%. For K1, spectral analysis showed a negative bias of 16%. The mean TRT variabilities of VT and K1 were less than 10% for all methods. Shortening the scan duration to 45 min provided similar VT and K1, with comparable TRT performance, compared with 60-min data. Conclusion: Among the various parametric methods tested, the basis function method provided parametric VT and K1 values with the least bias compared with nonlinear regression data and showed TRT variabilities lower than 5%, also for smaller volume-of-interest sizes (i.e., higher noise levels) and shorter scan durations. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
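Logan plot analysis, one of the parametric methods named above, admits a short illustration. The Python sketch below estimates VT as the late-time slope of the Logan plot; the t* value and the trapezoidal integration are illustrative choices, not the study's settings.

```python
import numpy as np

def logan_vt(t, ct, cp, t_star=20.0):
    """Logan graphical analysis sketch: for t > t_star, the plot of
    int(ct)/ct versus int(cp)/ct becomes linear with slope ~ VT.
    t is in minutes; ct and cp are tissue and plasma activity curves."""
    int_ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (ct[1:] + ct[:-1]))))
    int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))
    mask = (t > t_star) & (ct > 0)
    x = int_cp[mask] / ct[mask]
    y = int_ct[mask] / ct[mask]
    slope, intercept = np.polyfit(x, y, 1)   # linear fit over the late frames
    return slope                              # distribution volume estimate VT
```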
ERIC Educational Resources Information Center
Pavlik, Philip I. Jr.; Cen, Hao; Koedinger, Kenneth R.
2009-01-01
This paper describes a novel method to create a quantitative model of an educational content domain of related practice item-types using learning curves. By using a pairwise test to search for the relationships between learning curves for these item-types, we show how the test results in a set of pairwise transfer relationships that can be…
A robust, efficient equidistribution 2D grid generation method
NASA Astrophysics Data System (ADS)
Chacon, Luis; Delzanno, Gian Luca; Finn, John; Chung, Jeojin; Lapenta, Giovanni
2007-11-01
We present a new cell-area equidistribution method for two-dimensional grid adaptation [1]. The method is able to satisfy the equidistribution constraint to arbitrary precision while optimizing desired grid properties (such as isotropy and smoothness). The method is based on the minimization of the grid smoothness integral, constrained to producing a given positive-definite cell volume distribution. The procedure gives rise to a single, non-linear scalar equation with no free parameters. We solve this equation numerically with the Newton-Krylov technique. The ellipticity property of the linearized scalar equation allows multigrid preconditioning techniques to be used effectively. We demonstrate that a solution exists and is unique. Therefore, once the solution is found, the adapted grid cannot be folded, due to the positivity of the constraint on the cell volumes. We present several challenging tests to show that our new method produces optimal grids in which the constraint is satisfied numerically to arbitrary precision. We also compare the new method to the deformation method [2] and show that our new method produces better quality grids. [1] G.L. Delzanno, L. Chacón, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, in preparation. [2] G. Liao and D. Anderson, A new approach to grid generation, Appl. Anal. 44, 285-297 (1992).
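The equidistribution idea is easiest to see in one dimension. The following Python sketch places nodes by inverting the cumulative integral of a weight function; it is only a 1D analogue under simplifying assumptions, whereas the paper's 2D method enforces the cell-volume constraint through a constrained minimization solved with Newton-Krylov.

```python
import numpy as np

def equidistribute_1d(weight, x, n_nodes=33):
    """1D analogue of equidistribution: place nodes so that each cell holds
    an equal share of the integral of `weight`. Sketch only; the paper's 2D
    method is a constrained smoothness minimization, not this construction."""
    w = weight(x)
    W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    targets = np.linspace(0.0, W[-1], n_nodes)
    return np.interp(targets, W, x)     # invert the cumulative weight

# Cluster nodes near a steep feature at x = 0.5:
x = np.linspace(0.0, 1.0, 2001)
nodes = equidistribute_1d(lambda s: 1.0 + 50.0 * np.exp(-200 * (s - 0.5) ** 2), x)
```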
Refinetti, Paulo; Morgenthaler, Stephan; Ekstrøm, Per O
2016-07-01
Cycling temperature capillary electrophoresis has been optimised for mutation detection in 76% of the mitochondrial genome. The method was tested on a mixed sample and compared to mutation detection by next-generation sequencing. Out of 152 fragments, 90 were concordant, 51 discordant, and 11 semi-concordant. Dilution experiments show that cycling capillary electrophoresis has a detection limit of 1-3%. The detection limit of routine next-generation sequencing was in the range of 15 to 30%. Cycling temperature capillary electrophoresis detects and accurately quantifies mutations at a fraction of the cost and time required to perform a next-generation sequencing analysis. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Corrosion Performance of New Generation Aluminum-Lithium Alloys for Aerospace Applications
NASA Astrophysics Data System (ADS)
Moran, James P.; Bovard, Francine S.; Chrzan, James D.; Vandenburgh, Peter
Over the past several years, a new generation of aluminum-lithium alloys has been developed. These alloys are characterized by excellent strength, low density, and a high modulus of elasticity, and they are therefore of interest as lightweight structural materials, particularly for construction of current and future aircraft. These new alloys have also demonstrated significant improvements in corrosion resistance when compared with the legacy and incumbent alloys. This paper documents the superior corrosion resistance of the current commercial tempers of these materials and also discusses the corrosion performance as a function of the degree of artificial aging. Results from laboratory corrosion tests are compared with results from exposures in a seacoast atmosphere to assess the predictive capability of the laboratory tests. The correlations that have been developed between the laboratory tests and the seacoast exposures provide confidence that a set of available methods can provide an accurate assessment of the corrosion performance of this new generation of alloys.
Quiet Sonic Booms: A NASA and Industry Progress Report
NASA Technical Reports Server (NTRS)
Larson, David Nils; Martin, Roy; Haering, Edward A.
2011-01-01
The purpose of this Oral Presentation is to present a progress report on NASA and Industry efforts related to Quiet Sonic Boom Program activities. This presentation will review changes in aircraft shaping to produce quiet supersonic booms and associated supersonic flight test methods and results. In addition, new flight test profiles have been recently developed that have allowed for the generation of sonic booms of varying intensity. These new flight test profiles have allowed for ground testing of the response of various building structures to sonic booms and the associated public acceptability to various sonic boom intensities. The new flight test profiles and associated ground measurement test methods will be reviewed. Finally, this Oral Presentation will review the International Regulatory requirements that would be involved to change aviation regulation and allow for overland quiet supersonic flight.
NASA Technical Reports Server (NTRS)
Bailey, G. D.; Tenoso, H. J.
1975-01-01
An attempt was made to develop a test requiring no preadsorption steps for the assessment of antibodies to rubella and mumps viruses using the passive immune agglutination (PIA) method. Both rubella and mumps antigens and antibodies were prepared. Direct PIA tests, using rubella antigen-coated beads, and indirect PIA tests, using rubella antibody-coated beads, were investigated. Attempts using either method were unsuccessful. Serum interference, along with nonspecific agglutination of beads by the rubella antigen, resulted in no specific response under the test conditions investigated. A new, highly sensitive approach, the enzyme immunoassay (EIA) test system, is recommended to overcome the nonspecificity. This system is a logical outgrowth of some of the solid-phase work done on MEMS and represents the next-generation test system that can be directly applied to early disease detection and monitoring.
Yuan, Jintao; Yu, Shuling; Zhang, Ting; Yuan, Xuejie; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu
2016-06-01
Octanol/water (K(OW)) and octanol/air (K(OA)) partition coefficients are two important physicochemical properties of organic substances. In current practice, K(OW) and K(OA) values of some polychlorinated biphenyls (PCBs) are measured using the generator column method. Quantitative structure-property relationship (QSPR) models can serve as a valuable alternative for replacing or reducing experimental steps in the determination of K(OW) and K(OA). In this paper, two different methods, i.e., multiple linear regression based on Dragon descriptors and hologram quantitative structure-activity relationships, were used to predict generator-column-derived log K(OW) and log K(OA) values of PCBs. The predictive ability of the developed models was validated using a test set, and the performance of all generated models was compared with that of three previously reported models. All results indicated that the proposed models are robust and satisfactory and can thus be used as alternative models for the rapid assessment of the K(OW) and K(OA) of PCBs. Copyright © 2016 Elsevier Inc. All rights reserved.
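As a rough illustration of the multiple-linear-regression branch of such a QSPR workflow, the hedged Python sketch below fits and externally validates an MLR model; the descriptor matrix and coefficients are synthetic placeholders, not Dragon descriptors or the paper's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a descriptor matrix X (n_compounds x n_descriptors)
# and generator-column log K(OW) values y; illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))
y = X @ np.array([1.2, -0.4, 0.8, 0.0, 0.3]) + rng.normal(0, 0.1, 80)

# External validation: hold out a test set, as the paper does.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("external R^2 on the held-out test set:", model.score(X_te, y_te))
```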
Validation of two (parametric vs non-parametric) daily weather generators
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Skalak, P.
2015-12-01
As the climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. Stochastic weather generators are among the most favoured downscaling methods, capable of producing realistic (observed-like) meteorological inputs for agrological, hydrological, and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. Among their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run at various time steps and for multiple weather variables (the generators reproduce the correlations among variables), and (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a first-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on the nearest-neighbours resampling technique, making no assumption about the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maxima of the duration of hot/cold/dry/wet spells); and (b) selected validation statistics developed within the frame of the VALUE project. The tests will be based on observational weather series from several European stations available from the ECA&D database. Acknowledgements: The weather generator is developed and validated within the frame of the projects WG4VALUE (sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES 1102 action).
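The parametric design attributed to M&Rfi can be illustrated compactly: a first-order Markov chain for wet/dry occurrence and Gamma-distributed amounts on wet days. The Python sketch below is a minimal sketch of that idea with illustrative parameter values; it omits the autoregressive model for non-precipitation variables.

```python
import numpy as np

def generate_precip(n_days, p_wd=0.3, p_ww=0.6, shape=0.8, scale=6.0, seed=None):
    """Minimal parametric daily precipitation generator: a first-order
    Markov chain for wet/dry occurrence (p_wd: wet after dry, p_ww: wet
    after wet) and Gamma-distributed wet-day amounts in mm. All parameter
    values here are illustrative, not calibrated."""
    rng = np.random.default_rng(seed)
    wet = np.zeros(n_days, dtype=bool)
    precip = np.zeros(n_days)
    for d in range(1, n_days):
        p = p_ww if wet[d - 1] else p_wd      # transition probability
        wet[d] = rng.random() < p
        if wet[d]:
            precip[d] = rng.gamma(shape, scale)
    return precip

series = generate_precip(365 * 30)            # a 30-year synthetic series
print("wet-day fraction:", (series > 0).mean())
```

Validation of such a generator would compare statistics like the wet-day fraction, spell durations, and annual extremes of the synthetic series against the observed record, in the spirit of the tests described above.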
Dooley, Christopher J; Tenore, Francesco V; Gayzik, F Scott; Merkle, Andrew C
2018-04-27
Biological tissue testing is inherently susceptible to wide specimen-to-specimen variability. A primary resource for encapsulating this range of variability is the biofidelity response corridor (BRC). In the field of injury biomechanics, BRCs are often used for the development and validation of both physical models, such as anthropomorphic test devices, and computational models. For the purpose of generating corridors, post-mortem human surrogates were tested across a range of loading conditions relevant to under-body blast events. To sufficiently cover the wide range of input conditions, a relatively small number of tests was performed across a large spread of conditions. The high volume of required testing called for leveraging the capabilities of multiple impact test facilities, all with slight variations in test devices. A method for assessing the similitude of responses between test devices was created as a metric for inclusion of a response in the resulting BRC. The goal of this method was to supply a statistically sound, objective means of assessing the similitude of an individual response against a set of responses, ensuring that the BRC created from the set is affected primarily by biological variability, not by anomalies or differences stemming from test devices. Copyright © 2018 Elsevier Ltd. All rights reserved.
Generation of Murine Monoclonal Antibodies by Hybridoma Technology.
Holzlöhner, Pamela; Hanack, Katja
2017-01-02
Monoclonal antibodies are universal binding molecules and are widely used in biomedicine and research. Nevertheless, the generation of these binding molecules is time-consuming and laborious due to the complicated handling and lack of alternatives. The aim of this protocol is to provide one standard method for the generation of monoclonal antibodies using hybridoma technology. This technology combines two steps. Step 1 is an appropriate immunization of the animal and step 2 is the fusion of B lymphocytes with immortal myeloma cells in order to generate hybrids possessing both parental functions, such as the production of antibody molecules and immortality. The generated hybridoma cells were then recloned and diluted to obtain stable monoclonal cell cultures secreting the desired monoclonal antibody in the culture supernatant. The supernatants were tested in enzyme-linked immunosorbent assays (ELISA) for antigen specificity. After the selection of appropriate cell clones, the cells were transferred to mass cultivation in order to produce the desired antibody molecule in large amounts. The purification of the antibodies is routinely performed by affinity chromatography. After purification, the antibody molecule can be characterized and validated for the final test application. The whole process takes 8 to 12 months of development, and there is a high risk that the antibody will not work in the desired test system.
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Meana-Pañeda, Rubén; Truhlar, Donald G.
2013-08-01
We present an improved version of the MSTor program package, which calculates partition functions and thermodynamic functions of complex molecules involving multiple torsions; the method is based on either a coupled or an uncoupled torsional potential. The program can also carry out calculations in the multiple-structure local harmonic approximation. The package further includes seven utility codes that can be used as stand-alone programs: to calculate reduced moment-of-inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate (either analytically or by Monte Carlo sampling) volumes of the torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files for the MSTor and Voronoi calculations, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method.
Restrictions: There is no limit on the number of torsions that can be included in either the Voronoi calculation or the full MS-T calculation. In practice, the range of problems that can be addressed with the present method consists of all multitorsional problems for which one can afford to calculate all the conformational structures and their frequencies.
Unusual features: The method can be applied to transition states as well as stable molecules. The package also includes the hull program for the calculation of Voronoi volumes, the symmetry program for determining the point group symmetry of a molecule, and the seven utility codes described above.
Additional comments: The program package includes a manual, an installation script, and input and output files for a test suite.
Running time: There are 26 test runs. The running time of the test runs on a single processor of the Itasca computer is less than 2 s.
References: [1] MS-T(C) method: Quantum Thermochemistry: Multi-Structural Method with Torsional Anharmonicity Based on a Coupled Torsional Potential, J. Zheng and D.G. Truhlar, Journal of Chemical Theory and Computation 9 (2013) 1356-1367, DOI: http://dx.doi.org/10.1021/ct3010722. [2] MS-T(U) method: Practical Methods for Including Torsional Anharmonicity in Thermochemical Calculations of Complex Molecules: The Internal-Coordinate Multi-Structural Approximation, J. Zheng, T. Yu, E. Papajak, I.M. Alecu, S.L. Mielke, and D.G. Truhlar, Physical Chemistry Chemical Physics 13 (2011) 10885-10907.
Yu, Zhaoyuan; Yuan, Linwang; Luo, Wen; Feng, Linyao; Lv, Guonian
2015-01-01
Passive infrared (PIR) motion detectors, which can support long-term continuous observation, are widely used for human motion analysis. Extracting all possible trajectories from PIR sensor networks is important. Because PIR sensors do not log location and individual information, none of the existing methods can generate all possible human motion trajectories that satisfy various spatio-temporal constraints from the sensor activation log data. In this paper, a geometric algebra (GA)-based approach is developed to generate all possible human trajectories from PIR sensor network data. First, the geographical network, sensor activation response sequences, and human motion are represented as algebraic elements using GA. The human motion status of each sensor activation is labeled using GA-based trajectory tracking. Then, a matrix multiplication approach is developed to dynamically generate the human trajectories according to the sensor activation log and the spatio-temporal constraints. The method is tested with the MERL motion database. Experiments show that our method can flexibly extract the major statistical pattern of the human motion. Compared with direct statistical analysis and the tracklet graph method, our method can effectively extract all possible trajectories of the human motion, which makes it more accurate. Our method also likely provides a new way to filter other passive sensor log data in sensor networks. PMID:26729123
Tai, Meng Wei; Chong, Zhen Feng; Asif, Muhammad Khan; Rahmat, Rabiah A; Nambiar, Phrabhakaran
2016-09-01
This study compared the suitability and precision of xerographic and computer-assisted methods for bite mark investigations. Eleven subjects were asked to bite on their forearm, and the bite marks were photographically recorded. Alginate impressions of the subjects' dentition were taken, and casts were made using dental stone. The overlays in the xerographic method were generated by photocopying the subjects' casts and transferring the incisal edge outlines onto a transparent sheet. The bite mark images were imported into Adobe Photoshop® software and printed to life-size. Bite mark analyses with xerographically generated overlays were performed by manually comparing an overlay to the corresponding printed bite mark image. In the computer-assisted method, the subjects' casts were scanned into Adobe Photoshop®, and analyses with computer-assisted overlay generation were performed by digitally matching an overlay to the corresponding bite mark image in Adobe Photoshop®. A further comparison method superimposed the cast images on the corresponding bite mark images using Adobe Photoshop® CS6 and GIF-Animator©. During analysis, a score ranging from 0 to 3 was given to each precision-determining criterion, with higher scores indicating better matching. The Kruskal-Wallis H test showed a significant difference between the three sets of data (H=18.761, p<0.05). In conclusion, bite mark analysis using the computer-assisted animated-superimposition method was the most accurate, followed by computer-assisted overlay generation and lastly the xerographic method. The superior precision of the digital methods is discernible despite human skin being a poor recording medium for bite marks. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Bayesian estimation of the transmissivity spatial structure from pumping test data
NASA Astrophysics Data System (ADS)
Demir, Mehmet Taner; Copty, Nadim K.; Trinchero, Paolo; Sanchez-Vila, Xavier
2017-06-01
Estimating the statistical parameters (mean, variance, and integral scale) that define the spatial structure of the transmissivity or hydraulic conductivity field is a fundamental step for the accurate prediction of subsurface flow and contaminant transport. In practice, the determination of the spatial structure is a challenge because of spatial heterogeneity and data scarcity. In this paper, we describe a novel approach that uses time-drawdown data from multiple pumping tests to determine the transmissivity statistical spatial structure. The method builds on the pumping test interpretation procedure of Copty et al. (2011) (Continuous Derivation method, CD), which uses the time-drawdown data and its time derivative to estimate apparent transmissivity values as a function of radial distance from the pumping well. A Bayesian approach is then used to infer the statistical parameters of the transmissivity field by combining prior information about the parameters with the likelihood function expressed in terms of the radially dependent apparent transmissivities determined from pumping tests. A major advantage of the proposed Bayesian approach is that the likelihood function is readily determined from multiple randomly generated realizations of the transmissivity field, without the need to solve the groundwater flow equation. Applying the method to synthetically generated pumping test data, we demonstrate that, through a relatively simple procedure, information on the spatial structure of the transmissivity may be inferred from pumping test data. It is also shown that the prior parameter distribution has a significant influence on the estimation procedure, given the non-uniqueness of the estimation problem. Results also indicate that the reliability of the estimated transmissivity statistical parameters increases with the number of available pumping tests.
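The key computational trick, evaluating the likelihood from random field realizations without solving the flow equation, can be sketched as follows. In this hypothetical Python sketch, `simulate_field` and `apparent_T_of` stand in for user-supplied helpers (the latter playing the role of the CD-method apparent-transmissivity curve), and the Gaussian ensemble kernel is an assumed simplification.

```python
import numpy as np

def mc_log_likelihood(obs_apparent_T, candidate_params, simulate_field,
                      apparent_T_of, n_real=200, seed=None):
    """Monte Carlo likelihood sketch: for a candidate (mean, variance,
    integral scale), generate random transmissivity fields, compute each
    field's radially dependent apparent-T curve, and score the observed
    curve against the ensemble with a Gaussian kernel."""
    rng = np.random.default_rng(seed)
    curves = np.array([apparent_T_of(simulate_field(candidate_params, rng))
                       for _ in range(n_real)])
    mu, sd = curves.mean(0), curves.std(0) + 1e-12
    resid = (obs_apparent_T - mu) / sd
    return -0.5 * np.sum(resid ** 2 + np.log(2 * np.pi * sd ** 2))

# Posterior ~ prior(params) * exp(mc_log_likelihood(...)), evaluated over a
# grid of candidate parameter triplets.
```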
Rodnick, Melissa E; Brooks, Allen F; Hockley, Brian G; Henderson, Bradford D; Scott, Peter J H
2013-08-01
A novel one-pot method for preparing [(18)F]fluoromethylcholine ([(18)F]FCH) via in situ generation of [(18)F]fluoromethyl tosylate ([(18)F]FCH2OTs), and subsequent [(18)F]fluoromethylation of dimethylaminoethanol (DMAE), has been developed. [(18)F]FCH was prepared using a GE TRACERlab FXFN, although the method should be readily adaptable to any other fluorine-(18) synthesis module. Initially ditosylmethane was fluorinated to generate [(18)F]FCH2OTs. DMAE was then added and the reaction was heated at 120 °C for 10 min to generate [(18)F]FCH. After this time, reaction solvent was evaporated, and the crude reaction mixture was purified by solid-phase extraction using C(18)-Plus and CM-Light Sep-Pak cartridges to provide [(18)F]FCH formulated in USP saline. The formulated product was passed through a 0.22 µm filter into a sterile dose vial, and submitted for quality control testing. Total synthesis time was 1.25 h from end-of-bombardment. Typical non-decay-corrected yields of [(18)F]FCH prepared using this method were 91 mCi (7% non-decay corrected based upon ~1.3 Ci [(18)F]fluoride), and doses passed all other quality control (QC) tests. A one-pot liquid-phase synthesis of [(18)F]FCH has been developed. Doses contain extremely low levels of residual DMAE (31.6 µg/10 mL dose or ~3 ppm) and passed all other requisite QC testing, confirming their suitability for use in clinical imaging studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
Method and apparatus for electrical cable testing by pulse-arrested spark discharge
Barnum, John R.; Warne, Larry K.; Jorgenson, Roy E.; Schneider, Larry X.
2005-02-08
A method for electrical cable testing by Pulse-Arrested Spark Discharge (PASD) uses the cable response to a short-duration high-voltage incident pulse to determine the location of an electrical breakdown that occurs at a defect site in the cable. The apparatus for cable testing by PASD includes a pulser for generating the short-duration high-voltage incident pulse, at least one diagnostic sensor to detect the incident pulse and the breakdown-induced reflected and/or transmitted pulses propagating from the electrical breakdown at the defect site, and a transient recorder to record the cable response. The method and apparatus are particularly useful to determine the location of defect sites in critical but inaccessible electrical cabling systems in aging aircraft, ships, nuclear power plants, and industrial complexes.
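The localization step rests on a standard time-domain relation: the reflection from the breakdown arrives one round-trip time after the incident pulse. A minimal sketch, with an assumed velocity factor and illustrative names:

```python
# Minimal sketch of locating a breakdown site from recorded waveforms,
# assuming a TDR-style relation: the reflected pulse from the spark arrives
# one round-trip time after the incident pulse. All names are illustrative.
C0 = 299_792_458.0          # speed of light in vacuum, m/s

def breakdown_distance(t_incident_s, t_reflected_s, velocity_factor=0.66):
    """Distance from the sensor to the defect site along the cable."""
    v_p = velocity_factor * C0              # propagation velocity in the cable
    round_trip = t_reflected_s - t_incident_s
    return v_p * round_trip / 2.0

# Example: reflection observed 150 ns after the incident pulse
print(f"defect at ~{breakdown_distance(0.0, 150e-9):.1f} m")
```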
Bayesian Tracking of Emerging Epidemics Using Ensemble Optimal Statistical Interpolation
Cobb, Loren; Krishnamurthy, Ashok; Mandel, Jan; Beezley, Jonathan D.
2014-01-01
We present a preliminary test of the Ensemble Optimal Statistical Interpolation (EnOSI) method for the statistical tracking of an emerging epidemic, with a comparison to its popular relative for Bayesian data assimilation, the Ensemble Kalman Filter (EnKF). The spatial data for this test was generated by a spatial susceptible-infectious-removed (S-I-R) epidemic model of an airborne infectious disease. Both tracking methods in this test employed Poisson rather than Gaussian noise, so as to handle epidemic data more accurately. The EnOSI and EnKF tracking methods worked well on the main body of the simulated spatial epidemic, but the EnOSI was able to detect and track a distant secondary focus of infection that the EnKF missed entirely. PMID:25113590
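A minimal sketch of an ensemble analysis step with Poisson-perturbed observations, in the spirit of the stochastic EnKF used here for comparison; the observation operator, the variance model, and all names are illustrative assumptions rather than the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_analysis(ensemble, H, y_counts):
    """One stochastic EnKF analysis step with Poisson-perturbed observations.

    ensemble : (n_state, n_members) forecast ensemble
    H        : (n_obs, n_state) linear observation operator
    y_counts : (n_obs,) observed case counts
    """
    X = ensemble
    n = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)       # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)    # observation-space anomalies
    # Poisson observation noise: each member sees a count-perturbed datum
    Y = rng.poisson(lam=np.maximum(y_counts, 1e-9)[:, None],
                    size=(len(y_counts), n)).astype(float)
    R = np.diag(np.maximum(y_counts, 1.0))      # Poisson variance ~ mean
    P_yy = HA @ HA.T / (n - 1) + R
    K = (A @ HA.T / (n - 1)) @ np.linalg.inv(P_yy)   # Kalman gain
    return X + K @ (Y - HX)

# Tiny demo: 3-cell state observed directly as counts in each cell
ens = rng.normal(50, 10, size=(3, 100))
print(enkf_analysis(ens, np.eye(3), np.array([40.0, 55.0, 60.0])).mean(axis=1))
```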
A Study of Economical Incentives for Voltage Profile Control Method in Future Distribution Network
NASA Astrophysics Data System (ADS)
Tsuji, Takao; Sato, Noriyuki; Hashiguchi, Takuhei; Goda, Tadahiro; Tange, Seiji; Nomura, Toshio
In a future distribution network, it will be difficult to maintain system voltage because a large number of distributed generators will be introduced to the system. The authors have proposed a “voltage profile control method” using power factor control of distributed generators in previous work. However, an economic disbenefit is caused by the decrease in active power when the power factor is adjusted to increase the reactive power output. Therefore, proper incentives must be given to the customers who cooperate with the voltage profile control method. Thus, in this paper, we develop new rules that determine the economic incentives for these customers. The method is tested on a one-feeder distribution network model and its effectiveness is shown.
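The paper's specific incentive rules are not reproduced here, but the underlying accounting, compensating a DG owner for active power foregone when the power factor is lowered, can be sketched under simple assumed tariffs:

```python
# Illustrative incentive rule (not the authors' exact formula): compensate a
# DG owner for active power foregone when the power factor is reduced to
# supply reactive power for voltage control. All values are assumptions.
def incentive_payment(s_rating_kva, pf_requested, pf_nominal=1.0,
                      tariff_per_kwh=0.10, hours=1.0):
    p_nominal = s_rating_kva * pf_nominal      # kW at the nominal power factor
    p_actual = s_rating_kva * pf_requested     # kW after the reactive dispatch
    curtailed_kwh = max(p_nominal - p_actual, 0.0) * hours
    return curtailed_kwh * tariff_per_kwh

print(incentive_payment(100.0, 0.9))  # 100 kVA unit held at PF 0.9 for 1 h
```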
External Magnetic Field Reduction Techniques for the Advanced Stirling Radioisotope Generator
NASA Technical Reports Server (NTRS)
Niedra, Janis M.; Geng, Steven M.
2013-01-01
Linear alternators coupled to high efficiency Stirling engines are strong candidates for thermal-to-electric power conversion in space. However, the magnetic field emissions, both AC and DC, of these permanent magnet excited alternators can interfere with sensitive instrumentation onboard a spacecraft. Effective methods to mitigate the AC and DC electromagnetic interference (EMI) from solenoidal type linear alternators (like that used in the Advanced Stirling Convertor) have been developed for potential use in the Advanced Stirling Radioisotope Generator. The methods developed avoid the complexity and extra mass inherent in data extraction from multiple sensors or the use of shielding. This paper discusses these methods, and also provides experimental data obtained during breadboard testing of both AC and DC external magnetic field devices.
Mpindi, John-Patrick; Swapnil, Potdar; Dmitrii, Bychkov; Jani, Saarela; Saeed, Khalid; Wennerberg, Krister; Aittokallio, Tero; Östling, Päivi; Kallioniemi, Olli
2015-12-01
Most data analysis tools for high-throughput screening (HTS) seek to uncover interesting hits for further analysis. They typically assume a low hit rate per plate. Hit rates can be dramatically higher in secondary screening, RNAi screening and in drug sensitivity testing using biologically active drugs. In particular, drug sensitivity testing on primary cells is often based on dose-response experiments, which pose a more stringent requirement for data quality and for intra- and inter-plate variation. Here, we compared common plate normalization and noise-reduction methods, including the B-score and Loess, a local polynomial fit method, under the high hit-rate scenarios of drug sensitivity testing. We generated simulated 384-well plate HTS datasets, each with 71 plates having a range of 20 (5%) to 160 (42%) hits per plate, with controls placed either at the edge of the plates or in a scattered configuration. We identified 20% (77/384) as the critical hit rate beyond which the normalizations started to perform poorly. Results from real drug testing experiments supported this estimate. In particular, the B-score resulted in incorrect normalization of high hit-rate plates, leading to poor data quality, which could be attributed to its dependency on the median polish algorithm. We conclude that a combination of a scattered layout of controls per plate and normalization using a polynomial least squares fit method such as Loess helps to reduce column, row and edge effects in HTS experiments with high hit rates, and is optimal for generating accurate dose-response curves. Contact: john.mpindi@helsinki.fi. Supplementary information: R code and Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
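A minimal sketch of the recommended approach, fitting a smooth surface to scattered control wells and dividing it out; a simple least-squares polynomial stands in for the authors' Loess implementation, and all data below are synthetic:

```python
import numpy as np

def polyfit_normalize(plate, control_mask, degree=2):
    """Normalize a 384-well plate (16x24) by a low-order polynomial surface
    fitted to control wells only, then dividing it out.

    plate        : (16, 24) raw signal
    control_mask : (16, 24) boolean, True where a control well sits
    """
    rows, cols = np.indices(plate.shape)
    r = rows.ravel() / 15.0
    c = cols.ravel() / 23.0
    # design matrix with all monomials up to total order `degree`
    terms = [r**i * c**j for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    X = np.column_stack(terms)
    m = control_mask.ravel()
    coef, *_ = np.linalg.lstsq(X[m], plate.ravel()[m], rcond=None)
    surface = (X @ coef).reshape(plate.shape)
    return plate / surface          # ratio-to-surface normalization

# Example: scattered controls on a plate with a synthetic column-wise drift
rng = np.random.default_rng(2)
plate = rng.normal(100, 5, (16, 24)) * np.linspace(0.8, 1.2, 24)
mask = rng.random((16, 24)) < 0.1
print(polyfit_normalize(plate, mask).std())
```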
Validation of the tablet-administered Brief Assessment of Cognition (BAC App).
Atkins, Alexandra S; Tseng, Tina; Vaughan, Adam; Twamley, Elizabeth W; Harvey, Philip; Patterson, Thomas; Narasimhan, Meera; Keefe, Richard S E
2017-03-01
Computerized tests benefit from automated scoring procedures and standardized administration instructions. These methods can reduce the potential for rater error. However, especially in patients with severe mental illnesses, the equivalency of traditional and tablet-based tests cannot be assumed. The Brief Assessment of Cognition in Schizophrenia (BACS) is a pen-and-paper cognitive assessment tool that has been used in hundreds of research studies and clinical trials, and has normative data available for generating age- and gender-corrected standardized scores. A tablet-based version of the BACS, called the BAC App, has been developed. This study compared performance on the BACS and the BAC App in patients with schizophrenia and healthy controls. Test equivalency was assessed, and the applicability of paper-based normative data was evaluated. Results demonstrated that the distributions of standardized composite scores for the tablet-based BAC App and the pen-and-paper BACS were indistinguishable, and that the between-methods mean differences were not statistically significant. The discrimination between patients and controls was similarly robust. The between-methods correlations for individual measures in patients were r>0.70 for most subtests. When data from the Token Motor Test were omitted, the between-methods correlation of composite scores was r=0.88 (df=48; p<0.001) in healthy controls and r=0.89 (df=46; p<0.001) in patients, consistent with the test-retest reliability of each measure. Taken together, these results indicate that the tablet-based BAC App generates results consistent with the traditional pen-and-paper BACS, and support the notion that the BAC App is appropriate for use in clinical trials and clinical practice. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
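The between-methods statistics reported above are plain paired correlations; a toy computation on hypothetical paired composite scores (the data below are invented):

```python
import numpy as np
from scipy import stats

# Hypothetical paired composite scores (same subjects, two administrations)
rng = np.random.default_rng(3)
paper = rng.normal(0, 1, 50)                      # pen-and-paper BACS z-scores
tablet = 0.9 * paper + rng.normal(0, 0.3, 50)     # BAC App z-scores

r, p = stats.pearsonr(paper, tablet)
print(f"between-methods r = {r:.2f} (df = {len(paper) - 2}, p = {p:.3g})")
```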
A two-step method for developing a control rod program for boiling water reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taner, M.S.; Levine, S.H.; Hsiao, M.Y.
1992-01-01
This paper reports on a two-step method for generating a long-term control rod program for boiling water reactors (BWRs). The new method assumes a time-variant target power distribution during core depletion. In the new method, BWR control rod programming is divided into two steps. In step 1, a sequence of optimal, exposure-dependent Haling power distribution profiles is generated, utilizing the spectral shift concept. In step 2, a set of exposure-dependent control rod patterns is developed by using the Haling profiles generated in step 1 as a target. The new method is implemented in a computer program named OCTOPUS. The optimization procedure of OCTOPUS is based on the method of approximation programming, in which the SIMULATE-E code is used to determine the nucleonics characteristics of the reactor core state. In a test, the new method achieved a gain in cycle length over a time-invariant target Haling power distribution case because of a moderate application of spectral shift. No thermal limits of the core were violated. The gain in cycle length could be increased further by broadening the extent of the spectral shift.
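The two-step structure can be sketched schematically; the arrays below are stand-ins for the Haling target profiles and for the power shapes that SIMULATE-E would actually evaluate:

```python
import numpy as np

rng = np.random.default_rng(4)

# Step 1 stand-in: exposure-dependent target power profiles (Haling-like)
targets = [rng.random(10) for _ in range(4)]          # 4 exposure steps

# Step 2 stand-in: candidate rod patterns, each yielding a power shape; in
# the real method this evaluation comes from the SIMULATE-E nucleonics code.
candidates = {name: rng.random(10) for name in "ABCDEFGH"}

# Pick, at each exposure step, the pattern whose shape best matches the target
schedule = [min(candidates, key=lambda k: np.linalg.norm(candidates[k] - t))
            for t in targets]
print("rod pattern per exposure step:", schedule)
```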
Aida, Kazuo; Sugie, Toshihiko
2011-12-12
We propose a method of testing transmission fiber lines and distributed amplifiers. Multipath interference (MPI) is detected as a beat spectrum between a multipath signal and a direct signal, using a synthesized chirped test signal with lightwave frequencies f(1) and f(2) periodically emitted from a distributed feedback laser diode (DFB-LD). This chirped test pulse is generated using a directly modulated DFB-LD with a drive signal calculated using a digital signal processing (DSP) technique. A receiver consisting of a photodiode and an electrical spectrum analyzer (ESA) detects a baseband power spectrum peak appearing at the frequency of the test signal frequency deviation (f(1)-f(2)), as a beat spectrum of self-heterodyne detection. The multipath interference level is derived from this spectral peak power. This method improved the minimum detectable MPI to as low as -78 dB. We discuss the detailed design and performance of the proposed test method, including a DFB-LD drive signal calculation algorithm with DSP for synthesis of the chirped test signal, and experiments on single-mode fibers with discrete reflections. © 2011 Optical Society of America
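A sketch of the detection arithmetic, assuming the usual weak-multipath self-heterodyne relation in which the photocurrent is proportional to 1 + 2·sqrt(MPI)·cos(2πΔf·t), so that the beat-band-to-DC power ratio in a one-sided periodogram approximates the MPI; the signal and all parameters below are synthetic:

```python
import numpy as np

# Illustrative MPI estimate from a self-heterodyne beat tone. For a weak
# multipath replica, the photocurrent ~ 1 + 2*sqrt(MPI)*cos(2*pi*df*t), so
# the beat-band-to-DC-bin power ratio in a one-sided periodogram ~= MPI.
def mpi_db_from_spectrum(i_t, fs, f_beat, bw=1e3):
    """i_t: photocurrent samples; fs: sample rate (Hz); f_beat: beat frequency."""
    spec = np.abs(np.fft.rfft(i_t)) ** 2
    freqs = np.fft.rfftfreq(i_t.size, 1.0 / fs)
    beat = spec[np.abs(freqs - f_beat) < bw].sum()   # tone power incl. leakage
    return 10 * np.log10(beat / spec[0])

# Synthetic check: direct signal plus a -60 dB multipath replica
fs, f_beat, mpi = 1e6, 100e3, 1e-6
t = np.arange(2**16) / fs
i_t = 1.0 + 2 * np.sqrt(mpi) * np.cos(2 * np.pi * f_beat * t)
print(f"estimated MPI: {mpi_db_from_spectrum(i_t, fs, f_beat):.1f} dB")
```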
Scaling properties of the aerodynamic noise generated by low-speed fans
NASA Astrophysics Data System (ADS)
Canepa, Edward; Cattanei, Andrea; Mazzocut Zecchin, Fabio
2017-11-01
The spectral decomposition algorithm presented in the paper may be applied to selected parts of the SPL spectrum, i.e. to specific noise generating mechanisms. It yields the propagation and generation functions, as well as the Mach number scaling exponent associated with each mechanism as a function of the Strouhal number. The input data are SPL spectra obtained from measurements taken during speed ramps. Firstly, the basic theory and the implemented algorithm are described. Then, the behaviour of the new method is analysed with reference to numerically generated spectral data, and the results are compared with those of an existing method based on the assumption that the scaling exponent is constant. Guidelines for the employment of both methods are provided. Finally, the method is applied to measurements taken on a cooling fan mounted on a test plenum designed following the ISO 10302 standard. The most common noise generating mechanisms are present, and attention is focused on the low-frequency part of the spectrum, where the mechanisms are superposed. Generally, both the propagation and generation functions are determined with better accuracy than the scaling exponent, whose values are usually consistent with expectations based on the coherence and compactness of the acoustic sources. For periodic noise, the computed exponent is less accurate, as the related SPL data set usually has a limited size. The scaling exponent is very sensitive to the details of the experimental data, e.g. to slight inconsistencies or random errors.
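The decomposition can be emulated on synthetic ramp data: regressing SPL against 10·log10(M) separately in each Strouhal bin recovers a scaling exponent and a generation function. This is a simplified stand-in for the paper's algorithm (no propagation function is modeled, and all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic SPL spectra from a "speed ramp": SPL = G(St) + n*10*log10(M),
# here with a known exponent n = 5 plus measurement noise.
speeds = np.linspace(1000, 3000, 9)            # rpm
M = speeds / speeds.max()                      # proxy for the Mach number
st_grid = np.linspace(0.5, 5.0, 40)            # common Strouhal grid
G = 60 - 3 * st_grid                           # true generation function, dB

spl = G[None, :] + 5 * (10 * np.log10(M))[:, None] + rng.normal(0, 0.3, (9, 40))

# Per-Strouhal linear regression of SPL against 10*log10(M):
# slope -> scaling exponent n(St), intercept -> generation function G(St).
x = 10 * np.log10(M)
A = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, spl, rcond=None)   # all St bins solved at once
n_st, g_st = coef
print("mean fitted exponent:", n_st.mean().round(2))
```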
Intelligent Distribution Voltage Control with Distributed Generation
NASA Astrophysics Data System (ADS)
Castro Mendieta, Jose
In this thesis, three methods for the optimal participation of the reactive power of distributed generations (DGs) in unbalanced distribution networks are proposed, developed, and tested. These methods were developed with the objectives of maintaining voltage within permissible limits and reducing losses. The first method proposes an optimal participation of the reactive power of all devices available in the network. The proposed approach is validated by comparing its results with other methods reported in the literature. It was implemented using Simulink of Matlab and OpenDSS; the optimization techniques and the presentation of results are handled in Matlab. A co-simulation with the Electric Power Research Institute's (EPRI) OpenDSS program solves a three-phase optimal power flow problem on the unbalanced IEEE 13- and 34-node test feeders. The results of this work showed a better loss reduction than the Coordinated Voltage Control (CVC) method. The second method aims to minimize the voltage variation at the pilot bus of a distribution network using DGs. It uses Pareto optimization and fuzzy-PID logic to reduce the voltage variation. Results indicate that the proposed method reduces the voltage variation more than the other methods. Simulink of Matlab and OpenDSS are used in the development of the proposed approach. The performance of the method is evaluated on the IEEE 13-node test feeder with one and three DGs. Variable and unbalanced loads are used, based on real consumption data, over a time window of 48 hours. The third method aims to minimize the reactive losses on distribution networks using DGs. This method analyzes the problem using the IEEE 13-node test feeder with three different loads and the IEEE 123-node test feeder with four DGs. The DGs can be fixed or variable. Results indicate that integrating DGs to optimize the reactive power of the network helps to maintain the voltage within the allowed limits and to reduce the reactive power losses. The thesis is presented in the form of three articles. The first article is published in the journal Electrical Power and Energy Systems, the second is published in the international journal Energies, and the third was submitted to the journal Electrical Power and Energy Systems. Two other articles have been published in conferences with a reviewing committee. The work comprises six chapters, which are detailed in the various sections of the thesis.
NASA Astrophysics Data System (ADS)
Zorila, Alexandru; Stratan, Aurel; Nemes, George
2018-01-01
We compare the ISO-recommended (standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly proposed "cumulative" algorithms (a regular one and a limit-case one), intended to perform better than the standard one in some respects. To avoid the additional errors of real experiments, a simulated test is performed, named the reverse approach. This approach simulates real damage experiments by generating artificial test data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function for inducing the damage. In this work, a database of 12 sets of test data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value of the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as is usual for the S-on-1 test. Each set of test data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare the algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness-of-fit estimator (adjusted R-squared) is almost the same for all three algorithms.
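The reverse approach lends itself to a compact simulation: assume a threshold fluence and a damage probability law, generate damaged and non-damaged sites, and fit the probability curve back. The logistic law and all values below are illustrative assumptions, not the distributions used in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import expit

rng = np.random.default_rng(6)

# "Reverse approach" sketch: generate artificial damaged/non-damaged sites
# from an assumed threshold fluence and probability law, then fit them back.
F_TH, WIDTH = 2.0, 0.2          # assumed threshold (J/cm^2) and law steepness

def p_damage(F):
    return expit((F - F_TH) / WIDTH)      # assumed damage probability law

fluences = rng.uniform(1.0, 3.0, 300)     # site peak fluences across the ramp
damaged = rng.random(300) < p_damage(fluences)

def logistic(F, f_th, w):
    return expit((F - f_th) / w)

popt, _ = curve_fit(logistic, fluences, damaged.astype(float), p0=[1.5, 0.5])
print(f"fitted threshold: {popt[0]:.2f} J/cm^2 (assumed {F_TH})")
```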
Buchalska, Marta; Labuz, Przemysław; Bujak, Łukasz; Szewczyk, Grzegorz; Sarna, Tadeusz; Maćkowski, Sebastian; Macyk, Wojciech
2013-07-14
The generation of singlet oxygen in aqueous colloids of nanocrystalline TiO2 (anatase) modified by organic chelating ligands forming surface Ti(IV) complexes was studied. Detailed studies revealed a plausible and hitherto unappreciated influence of near-infrared irradiation on singlet oxygen generation at the surface of TiO2. To detect (1)O2, direct and indirect methods were applied: a photon counting technique enabling time-resolved measurements of (1)O2 phosphorescence, and fluorescence measurements of a product of singlet oxygen interaction with Singlet Oxygen Sensor Green (SOSG). Both methods confirmed the generation of (1)O2. Nanocrystalline TiO2 modified with salicylic acid appeared to be the most efficient photosensitizer among the tested materials. The measured quantum yield reached a value of 0.012 upon irradiation at 355 nm, while unmodified TiO2 colloids appeared to be substantially less efficient generators of singlet oxygen, with a corresponding quantum yield of ca. 0.003. A photocatalytic degradation of 4-chlorophenol, proceeding through oxidation by OH˙, was also monitored. The influence of irradiation conditions (UV, vis, NIR or any combination of these spectral ranges) on the generation of both singlet oxygen and hydroxyl radicals has been tested and discussed. Simultaneous irradiation with visible and NIR light did not accelerate OH˙ formation; however, for TiO2 modified with catechol it influenced (1)O2 generation. Singlet oxygen is presumably formed according to Nosaka's mechanism, comprising O2˙(-) oxidation by a strong oxidant (a hole or an oxidized ligand); however, energy transfer from NIR-excited titanium(III) centers (trapped electrons) also plays a plausible role.
Development of an Improved Magneto-Optic/Eddy-Current Imager
DOT National Transportation Integrated Search
1997-04-01
Magneto-optic/eddy-current imaging technology has been developed and approved for inspection of cracks in aging aircraft. This relatively new nondestructive test method gives the inspector the ability to quickly generate real-time eddy-current images...
Intelligent rover decision-making in response to exogenous events
NASA Technical Reports Server (NTRS)
Chouinard, C.; Estlin, T.; Gaines, D.; Fisher, F.
2005-01-01
This paper presents an introduction to the CLEAR system, which performs rover command generation and re-planning; the challenges faced in maintaining domain-specific information in an uncertain environment; and the successes demonstrated with several methods of system testing.
Ribozyme Mediated gRNA Generation for In Vitro and In Vivo CRISPR/Cas9 Mutagenesis.
Lee, Raymond Teck Ho; Ng, Ashley Shu Mei; Ingham, Philip W
2016-01-01
CRISPR/Cas9 is now regularly used for targeted mutagenesis in a wide variety of systems. Here we report the use of ribozymes for the generation of gRNAs both in vitro and in zebrafish embryos. We show that incorporation of ribozymes increases the types of promoters and the number of target sites available for mutagenesis without compromising mutagenesis efficiency. We tested this by comparing the mutagenesis efficiency of gRNA constructs with and without ribozymes, and also generated a transgenic zebrafish expressing a gRNA from a heat shock promoter (an RNA polymerase II-dependent promoter) that was able to induce mutagenesis of its target. Our method provides a streamlined approach to testing gRNA efficiency, as well as increasing the versatility of conditional gene knockout in zebrafish.
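The construct logic, a spacer flanked by self-cleaving ribozymes so that promoter classes beyond Pol III can be used, can be sketched as sequence assembly; the ribozyme and scaffold strings below are placeholders to be replaced with the published sequences:

```python
# Illustrative assembly of a ribozyme-flanked gRNA cassette. The sequences
# below are placeholders; substitute the published hammerhead and HDV
# ribozyme sequences and the full gRNA scaffold for a real design.
HH_CORE = "NNNNNNCTGATGAGTCCGTGAGGACGAAACGAGTAAGCTCGTC"   # 5' hammerhead (placeholder)
HDV = "GGCCGGCATGGTCCCAGCCTCCTCGCTGGCGCCGGCTGGGCAACAT"    # 3' HDV (placeholder)
SCAFFOLD = "GTTTTAGAGCTAGAAATAGCAAG"                       # gRNA scaffold (truncated placeholder)

def ribozyme_grna(spacer20):
    """Flank a 20-nt spacer with self-cleaving ribozymes so that a Pol II
    transcript releases a clean gRNA after co-transcriptional cleavage."""
    assert len(spacer20) == 20, "spacer must be 20 nt"
    def revcomp(s):
        return s[::-1].translate(str.maketrans("ACGT", "TGCA"))
    # the hammerhead's first 6 nt must base-pair with the spacer's 5' end
    hh = revcomp(spacer20[:6]) + HH_CORE[6:]
    return hh + spacer20 + SCAFFOLD + HDV

print(ribozyme_grna("GATTACAGATTACAGATTAC"))
```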
Dutta, Debashis; Johnson, Samuel; Dalal, Alisha; Deymier, Martin J.; Hunter, Eric
2018-01-01
Traditional restriction endonuclease-based cloning has been routinely used to generate replication-competent simian-human immunodeficiency viruses (SHIV) and simian-tropic HIV (stHIV). This approach requires the existence of suitable restriction sites or the introduction of nucleotide changes to create them. Here, using an In-Fusion cloning technique that involves homologous recombination, we generated SHIVs and stHIVs based on epidemiologically linked clade C transmitted/founder HIV molecular clones from Zambia. Replacing the vif gene of these HIV molecular clones with the vif of SIVmac239 resulted in chimeric genomes used to generate infectious stHIV viruses. Likewise, HIV env genes carrying N375 mutations, introduced to enhance binding to the macaque CD4 receptor, were exchanged and cloned into a SHIVAD8-EO backbone to generate SHIVs. The generated SHIVs and stHIV were infectious in TZMbl and ZB5 cells, as well as in macaque PBMCs. This method can therefore replace traditional methods and serve as a valuable tool for the rapid generation and testing of molecular clones of stHIV and SHIV based on primary clinical isolates, providing novel challenge viruses for HIV vaccine/cure studies. PMID:29758076
Groeber, F; Schober, L; Schmid, F F; Traube, A; Kolbus-Hernandez, S; Daton, K; Hoffmann, S; Petersohn, D; Schäfer-Korting, M; Walles, H; Mewes, K R
2016-10-01
To replace the Draize skin irritation assay (OECD guideline 404), several test methods based on reconstructed human epidermis (RHE) have been developed and were adopted in OECD test guideline 439. However, all validated test methods in the guideline are linked to RHE provided by only three companies. Thus, the availability of these test models depends on the commercial interest of the producers. To overcome this limitation, and thus to increase the accessibility of in vitro skin irritation testing, an open-source reconstructed epidermis (OS-REp) was introduced. To demonstrate the capacity of the OS-REp in regulatory risk assessment, a catch-up validation study was performed. The participating laboratories used in-house generated OS-REp to assess the set of 20 reference substances according to the performance standards amending OECD test guideline 439. Testing was performed under blinded conditions. The within-laboratory reproducibility of 87% and the inter-laboratory reproducibility of 85% prove the high reliability of irritancy testing using the OS-REp protocol. In addition, the predictive capacity, with an accuracy of 80%, was comparable to previously published RHE-based test protocols. Taken together, the results indicate that the OS-REp test method can be used as a standalone alternative skin irritation test, replacing OECD test guideline 404. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
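The reported statistics are straightforward to compute from blinded classification calls; a toy computation on invented data:

```python
import numpy as np

rng = np.random.default_rng(7)
truth = np.array([1] * 10 + [0] * 10)             # 20 reference substances
# three independent runs per laboratory, each flipping ~10% of calls
calls = np.array([truth ^ (rng.random(20) < 0.1).astype(int) for _ in range(3)])

# within-laboratory reproducibility: fraction of substances with identical
# calls across the three independent runs
wlr = np.mean(calls.min(axis=0) == calls.max(axis=0))

# accuracy of the majority call against the reference classification
majority = (calls.sum(axis=0) >= 2).astype(int)
accuracy = np.mean(majority == truth)
print(f"WLR = {wlr:.0%}, accuracy = {accuracy:.0%}")
```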
NASA Astrophysics Data System (ADS)
Akushevich, I.; Filoti, O. F.; Ilyichev, A.; Shumeiko, N.
2012-07-01
The structure and algorithms of the Monte Carlo generator ELRADGEN 2.0, designed to simulate radiative events in polarized ep-scattering, are presented. The full set of analytical expressions for the QED radiative corrections is presented and discussed in detail. Algorithmic improvements implemented to provide faster simulation of hard real-photon events are described. Numerical tests show the high quality of the generation of photonic variables and of the radiatively corrected cross section. The elastic radiative tail simulated within the kinematical conditions of the BLAST experiment at MIT BATES shows good agreement with experimental data.
Catalogue identifier: AELO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1299
No. of bytes in distributed program, including test data, etc.: 11 348
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: All
Operating system: Any
RAM: 1 MB
Classification: 11.2, 11.4
Nature of problem: Simulation of radiative events in polarized ep-scattering.
Solution method: Monte Carlo simulation according to the distributions of the real photon kinematic variables, which are calculated by the covariant method of QED radiative correction estimation. The approach provides fast and accurate generation.
Running time: The simulation of 10^8 radiative events for itest:=1 takes up to 52 seconds on a Pentium(R) Dual-Core 2.00 GHz processor.
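Generators of this kind ultimately sample photon kinematic variables from computed distributions; a generic rejection-sampling sketch follows (the 1/E spectrum is purely illustrative, not ELRADGEN's actual distributions):

```python
import numpy as np

rng = np.random.default_rng(8)

def rejection_sample(pdf, x_lo, x_hi, pdf_max, n):
    """Draw n samples from an arbitrary 1-D distribution by rejection,
    the generic workhorse behind Monte Carlo event generators."""
    out = []
    while len(out) < n:
        x = rng.uniform(x_lo, x_hi, size=2 * (n - len(out)))
        keep = rng.uniform(0, pdf_max, size=x.size) < pdf(x)
        out.extend(x[keep].tolist())
    return np.array(out[:n])

# Example: a bremsstrahlung-like ~1/E photon energy spectrum on [0.01, 1]
photon_E = rejection_sample(lambda e: 1.0 / e, 0.01, 1.0, 100.0, 10_000)
print(photon_E.mean())
```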