Science.gov

Sample records for algorithms tested include

  1. Effective detection of toxigenic Clostridium difficile by a two-step algorithm including tests for antigen and cytotoxin.

    PubMed

    Ticehurst, John R; Aird, Deborah Z; Dam, Lisa M; Borek, Anita P; Hargrove, John T; Carroll, Karen C

    2006-03-01

    We evaluated a two-step algorithm for detecting toxigenic Clostridium difficile: an enzyme immunoassay for glutamate dehydrogenase antigen (Ag-EIA) and then, for antigen-positive specimens, a concurrent cell culture cytotoxicity neutralization assay (CCNA). Antigen-negative results were ≥99% predictive of CCNA negativity. Because the Ag-EIA reduced cell culture workload by approximately 75 to 80% and two-step testing was complete in ≤3 days, we decided that this algorithm would be effective. Over 6 months, our laboratories' expenses were US $143,000 less than if CCNA alone had been performed on all 5,887 specimens.
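
    A minimal sketch of the two-step reporting logic described above, in Python (the function and result strings are hypothetical, not from the paper): antigen-negative specimens are reported without cell culture, and only antigen-positive specimens proceed to CCNA.

        def two_step_c_difficile(ag_eia_positive, ccna_positive=None):
            """Two-step algorithm: GDH antigen EIA screen, then CCNA only if antigen-positive.

            ccna_positive may be left as None when the antigen screen is negative,
            because the cytotoxin assay is not performed in that case (the source of
            the cell-culture workload savings cited above).
            """
            if not ag_eia_positive:
                return "negative (no CCNA performed)"
            if ccna_positive is None:
                return "antigen-positive, CCNA pending"
            return "toxigenic C. difficile detected" if ccna_positive else "antigen-positive, cytotoxin-negative"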

  2. Yield of stool culture with isolate toxin testing versus a two-step algorithm including stool toxin testing for detection of toxigenic Clostridium difficile.

    PubMed

    Reller, Megan E; Lema, Clara A; Perl, Trish M; Cai, Mian; Ross, Tracy L; Speck, Kathleen A; Carroll, Karen C

    2007-11-01

    We examined the incremental yield of stool culture (with toxin testing on isolates) versus our two-step algorithm for optimal detection of toxigenic Clostridium difficile. Per the two-step algorithm, stools were screened for C. difficile-associated glutamate dehydrogenase (GDH) antigen and, if positive, tested for toxin by a direct (stool) cell culture cytotoxicity neutralization assay (CCNA). In parallel, stools were cultured for C. difficile and tested for toxin by both indirect (isolate) CCNA and conventional PCR if the direct CCNA was negative. The "gold standard" for toxigenic C. difficile was detection of C. difficile by the GDH screen or by culture and toxin production by direct or indirect CCNA. We tested 439 specimens from 439 patients. GDH screening detected all culture-positive specimens. The sensitivity of the two-step algorithm was 77% (95% confidence interval [CI], 70 to 84%), and that of culture was 87% (95% CI, 80 to 92%). PCR results correlated completely with those of CCNA testing on isolates (29/29 positive and 32/32 negative, respectively). We conclude that GDH is an excellent screening test and that culture with isolate CCNA testing detects an additional 23% of toxigenic C. difficile missed by direct CCNA. Since culture is tedious and also detects nontoxigenic C. difficile, we conclude that culture is most useful (i) when the direct CCNA is negative but a high clinical suspicion of toxigenic C. difficile remains, (ii) in the evaluation of new diagnostic tests for toxigenic C. difficile (where the best reference standard is essential), and (iii) in epidemiologic studies (where the availability of an isolate allows for strain typing and antimicrobial susceptibility testing).

  3. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  4. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
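
    As a hedged illustration of the null-hypothesis calculation cited above: once the fraction of space-time covered by random alarms is fixed, the chance of matching at least 8 of 10 earthquakes by luck is a binomial tail. The coverage fraction tau below is an assumed placeholder, not the study's value.

        from math import comb

        def random_prediction_tail(n_quakes, n_hits, tau):
            """P(at least n_hits of n_quakes fall inside random alarms that cover
            a fraction tau of the monitored space-time volume)."""
            return sum(comb(n_quakes, k) * tau**k * (1 - tau)**(n_quakes - k)
                       for k in range(n_hits, n_quakes + 1))

        # Illustrative only: with an assumed 40% alarm coverage, 8-of-10 hits by chance is rare.
        print(round(random_prediction_tail(10, 8, 0.40), 4))  # 0.0123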

  5. A dynamic programming algorithm for RNA structure prediction including pseudoknots.

    PubMed

    Rivas, E; Eddy, S R

    1999-02-01

    We describe a dynamic programming algorithm for predicting optimal RNA secondary structure, including pseudoknots. The algorithm has a worst case complexity of O(N^6) in time and O(N^4) in storage. The description of the algorithm is complex, which led us to adopt a useful graphical representation (Feynman diagrams) borrowed from quantum field theory. We present an implementation of the algorithm that generates the optimal minimum energy structure for a single RNA sequence, using standard RNA folding thermodynamic parameters augmented by a few parameters describing the thermodynamic stability of pseudoknots. We demonstrate the properties of the algorithm by using it to predict structures for several small pseudoknotted and non-pseudoknotted RNAs. Although the time and memory demands of the algorithm are steep, we believe this is the first algorithm to be able to fold optimal (minimum energy) pseudoknotted RNAs with the accepted RNA thermodynamic model.
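
    The Rivas-Eddy pseudoknot recursions are far too long to reproduce here; as a hedged illustration of the nested dynamic-programming paradigm they extend, a Nussinov-style base-pair maximization (no pseudoknots, no thermodynamics) looks like this:

        def nussinov_max_pairs(seq, min_loop=3):
            """Maximum number of nested base pairs (no pseudoknots); the O(N^3)
            recursion that pseudoknot algorithms generalize at much higher cost."""
            pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
            n = len(seq)
            dp = [[0] * n for _ in range(n)]
            for span in range(min_loop + 1, n):
                for i in range(n - span):
                    j = i + span
                    best = dp[i + 1][j]                       # i left unpaired
                    for k in range(i + min_loop + 1, j + 1):  # i paired with k
                        if (seq[i], seq[k]) in pairs:
                            right = dp[k + 1][j] if k + 1 <= j else 0
                            best = max(best, 1 + dp[i + 1][k - 1] + right)
                    dp[i][j] = best
            return dp[0][n - 1]

        print(nussinov_max_pairs("GGGAAAUCC"))  # 3 nested pairs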

  6. A generalized memory test algorithm

    NASA Technical Reports Server (NTRS)

    Milner, E. J.

    1982-01-01

    A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
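
    As a hedged, generic illustration only (a walking-ones check on a simulated memory array, not the specific 384-cycle procedure of the report), the two properties being verified can be written as:

        def walking_ones_test(memory, word_bits=16):
            """Check that every bit of every word can be set and cleared, and that
            writing one word does not disturb its neighbors (simulated memory here)."""
            mask = (1 << word_bits) - 1
            errors = []
            for addr in range(len(memory)):
                neighbors = {a: memory[a] for a in (addr - 1, addr + 1) if 0 <= a < len(memory)}
                for bit in range(word_bits):
                    pattern = 1 << bit
                    memory[addr] = pattern
                    if memory[addr] != pattern:
                        errors.append((addr, bit, "set failed"))
                    memory[addr] = ~pattern & mask
                    if memory[addr] != ~pattern & mask:
                        errors.append((addr, bit, "clear failed"))
                for a, val in neighbors.items():
                    if memory[a] != val:
                        errors.append((a, None, "neighbor disturbed"))
                memory[addr] = 0
            return errors

        print(walking_ones_test([0] * 32))  # [] for an error-free simulated memory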

  7. Component evaluation testing and analysis algorithms.

    SciTech Connect

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  8. Chemical Compatibility Testing Final Report Including Test Plans and Procedures

    SciTech Connect

    NIMITZ,JONATHAN S.; ALLRED,RONALD E.; GORDON,BRENT W.; NIGREY,PAUL J.; MCCONNELL,PAUL E.

    2001-07-01

    This report provides an independent assessment of information on mixed waste streams, chemical compatibility information on polymers, and standard test methods for polymer properties. It includes a technology review of mixed low-level waste (LLW) streams and material compatibilities, validation for the plan to test the compatibility of simulated mixed wastes with potential seal and liner materials, and the test plan itself. Potential packaging materials were reviewed and evaluated for compatibility with expected hazardous wastes. The chemical and physical property measurements required for testing container materials were determined. Test methodologies for evaluating compatibility were collected and reviewed for applicability. A test plan to meet US Department of Energy and Environmental Protection Agency requirements was developed. The expected wastes were compared with the chemical resistances of polymers, the top-ranking polymers were selected for testing, and the most applicable test methods for candidate seal and liner materials were determined. Five recommended solutions to simulate mixed LLW streams are described. The test plan includes descriptions of test materials, test procedures, data collection protocols, safety and environmental considerations, and quality assurance procedures. The recommended order of testing to be conducted is specified.

  9. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  10. Quantum Statistical Testing of a QRNG Algorithm

    SciTech Connect

    Humble, Travis S; Pooser, Raphael C; Britt, Keith A

    2013-01-01

    We present the algorithmic design of a quantum random number generator, the subsequent synthesis of a physical design and its verification using quantum statistical testing. We also describe how quantum statistical testing can be used to diagnose channel noise in QKD protocols.

  11. A blind test of monthly homogenisation algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.

    2012-04-01

    metrics: the root mean square error, the error in (linear and nonlinear) trend estimates and contingency scores. The metrics are computed on the station data and the network average regional climate signal, as well as on monthly data and yearly data, for both temperature and precipitation. Because the test was blind, we can state with confidence that relative homogenisation improves the quality of climate station data. The performance of the contributions depends significantly on the error metric considered. Still, a group of better algorithms can be found that includes Craddock, PRODIGE, MASH, ACMANT and USHCN. Clearly, algorithms developed for solving the multiple breakpoint problem with an inhomogeneous reference perform best. The results suggest that the correction algorithms are currently an important weakness of many methods. For more information on the COST Action on homogenisation, see: http://www.homogenisation.org/
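
    As a hedged sketch (numpy; the array names are hypothetical), two of the error metrics named above can be computed against the benchmark's known truth as follows:

        import numpy as np

        def rmse(homogenised, truth):
            return float(np.sqrt(np.mean((np.asarray(homogenised) - np.asarray(truth)) ** 2)))

        def linear_trend_error(homogenised, truth, years=None):
            """Difference between fitted linear trends (per time step unless years are given)."""
            y_h, y_t = np.asarray(homogenised), np.asarray(truth)
            x = np.asarray(years) if years is not None else np.arange(len(y_h))
            return float(np.polyfit(x, y_h, 1)[0] - np.polyfit(x, y_t, 1)[0])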

  12. Tests for assessing beam propagation algorithms

    NASA Astrophysics Data System (ADS)

    Stone, Bryan D.

    2011-10-01

    Given a beam propagation algorithm, whether it is a commercial implementation or some other in-house or research implementation, it is not trivial to determine whether it is suitable either for a wide range of applications or even for a specific application. In this paper, we describe a range of tests with "known" results; these can be used to exercise beam propagation algorithms and assess their robustness and accuracy. Three different categories of such tests are discussed. One category is tests of self-consistency. Such tests often rely on symmetry to make guarantees about some aspect of the resulting field. While passing such tests does not guarantee correct results in detail, they can nonetheless point towards problems with an algorithm when they fail, and build confidence when they pass. Another category of tests compares the complex field to values that have been experimentally measured. While the experimental data are not always known precisely, and the experimental setup might not always be accessible, these tests can provide reasonable quantitative comparisons that can also point towards problems with the algorithm. The final category of tests discussed is those for which the propagated complex field can be computed independently. The test systems for this category tend to be relatively simple, such as diffraction through apertures in free space or in the pupil of an ideal imaging system. Despite their relative simplicity, there are a number of advantages to these tests. For example, they can provide quantitative measures of accuracy. These tests also allow one to develop an understanding of how the execution time (or similarly, memory usage) scales as the region-of-interest over which one desires the field is changed.

  13. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control inputs, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows that pilot-induced oscillations on the straight-in approach were less prevalent with the nonlinear algorithm than with the optimal algorithm. The augmented turbulence cues increased workload on the offset approach and were deemed more realistic by the pilots than those of the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
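
    A hedged sketch of the power-spectral-density workload measure mentioned above, using scipy's Welch estimator on a pilot control-input channel (the sample rate and signal below are synthetic stand-ins, not VMS data):

        import numpy as np
        from scipy.signal import welch

        def control_activity_psd(control_input, fs_hz):
            """Welch PSD of a control-input time series; higher broadband power is
            commonly read as higher pilot control activity."""
            return welch(control_input, fs=fs_hz, nperseg=min(1024, len(control_input)))

        # Synthetic stand-in for a recorded column-input channel sampled at 50 Hz.
        t = np.arange(0, 60, 1 / 50.0)
        stick = 0.2 * np.sin(2 * np.pi * 0.5 * t) + 0.05 * np.random.randn(t.size)
        freqs, psd = control_activity_psd(stick, fs_hz=50.0)
        print(freqs[np.argmax(psd)])  # dominant activity near 0.5 Hz in this example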

  14. A framework for data-driven algorithm testing

    NASA Astrophysics Data System (ADS)

    Funk, Wolfgang; Kirchner, Daniel

    2005-03-01

    We describe the requirements, design, architecture and implementation of a framework that facilitates the setup, management and realisation of data-driven performance and acceptance tests for algorithms. The framework builds on standard components, supports distributed tests on heterogeneous platforms, is scalable and requires minimum integration efforts for algorithm providers by chaining command line driven applications. We use XML as test specification language, so tests can be set up in a declarative way without any programming effort and the test specification can easily be validated against an XML schema. We consider a test scenario where each test consists of one to many test processes and each process works on a representative set of input data that are accessible as data files. The test process is built up of operations that are executed successively in a predefined sequence. Each operation may be one of the algorithms under test or a supporting functionality (e.g. a file format conversion utility). The test definition and the test results are made persistent in a relational database. We decided to use a J2EE compliant application server as persistence engine, thus the natural choice is to implement the test client as Java application. Java is available for the most important operating systems, provides control of OS-processes, including the input and output channels and has extensive support for XML processing.
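
    As a hedged sketch of the declarative, XML-driven test specification described above (the element and attribute names are invented for illustration, not the framework's actual schema), each operation can be chained as a command-line step:

        import shlex
        import subprocess
        import xml.etree.ElementTree as ET

        SPEC = """
        <test name="resize-then-detect">
          <operation cmd="convert input.png -resize 50% work.png"/>
          <operation cmd="detector work.png --out results.csv"/>
        </test>
        """

        def run_test(spec_xml, dry_run=True):
            """Execute each <operation> in order by chaining command-line applications."""
            for op in ET.fromstring(spec_xml).findall("operation"):
                argv = shlex.split(op.get("cmd"))
                print("step:", argv)
                if not dry_run:
                    subprocess.run(argv, check=True)  # abort the chain if a step fails

        run_test(SPEC)  # dry run; the tool names above are placeholders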

  15. Datasets for radiation network algorithm development and testing

    SciTech Connect

    Rao, Nageswara S; Sen, Satyabrata; Berry, M. L.; Wu, Qishi; Grieme, M.; Brooks, Richard R; Cordone, G.

    2016-01-01

    The Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) program supported the development of networks of commercial-off-the-shelf (COTS) radiation counters for detecting, localizing, and identifying low-level radiation sources. Under this program, a series of indoor and outdoor tests were conducted with multiple source strengths and types, different background profiles, and various types of source and detector movements. Following the tests, network algorithms were replayed in various reconstructed scenarios using sub-networks. These measurements and algorithm traces together provide a rich collection of highly valuable datasets for testing the current and next generation radiation network algorithms, including the ones (to be) developed by broader R&D communities such as distributed detection, information fusion, and sensor networks. From this multi-terabyte IRSS database, we distilled out and packaged the first batch of canonical datasets for public release. They include measurements from ten indoor and two outdoor tests which represent increasingly challenging baseline scenarios for robustly testing radiation network algorithms.

  16. 8. VIEW OF RADIOGRAPHY EQUIPMENT, TEST METHODS INCLUDED RADIOGRAPHY AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. VIEW OF RADIOGRAPHY EQUIPMENT, TEST METHODS INCLUDED RADIOGRAPHY AND BETA BACKSCATTERING. (7/13/56) - Rocky Flats Plant, Non-Nuclear Production Facility, South of Cottonwood Avenue, west of Seventh Avenue & east of Building 460, Golden, Jefferson County, CO

  17. 13. Historic drawing of rocket engine test facility layout, including ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. Historic drawing of rocket engine test facility layout, including Buildings 202, 205, 206, and 206A, February 3, 1984. NASA GRC drawing number CF-101539. On file at NASA Glenn Research Center. - Rocket Engine Testing Facility, NASA Glenn Research Center, Cleveland, Cuyahoga County, OH

  18. Testing Intelligently Includes Double-Checking Wechsler IQ Scores

    ERIC Educational Resources Information Center

    Kuentzel, Jeffrey G.; Hetterscheidt, Lesley A.; Barnett, Douglas

    2011-01-01

    The rigors of standardized testing make for numerous opportunities for examiner error, including simple computational mistakes in scoring. Although experts recommend that test scoring be double-checked, the extent to which independent double-checking would reduce scoring errors is not known. A double-checking procedure was established at a…

  19. An algorithm for computing nucleic acid base-pairing probabilities including pseudoknots.

    PubMed

    Dirks, Robert M; Pierce, Niles A

    2004-07-30

    Given a nucleic acid sequence, a recent algorithm allows the calculation of the partition function over secondary structure space including a class of physically relevant pseudoknots. Here, we present a method for computing base-pairing probabilities starting from the output of this partition function algorithm. The approach relies on the calculation of recursion probabilities that are computed by backtracking through the partition function algorithm, applying a particular transformation at each step. This transformation is applicable to any partition function algorithm that follows the same basic dynamic programming paradigm. Base-pairing probabilities are useful for analyzing the equilibrium ensemble properties of natural and engineered nucleic acids, as demonstrated for a human telomerase RNA and a synthetic DNA nanostructure. PMID:15139042

  20. Reliability based design including future tests and multiagent approaches

    NASA Astrophysics Data System (ADS)

    Villanueva, Diane

    The initial stages of reliability-based design optimization involve the formulation of objective functions and constraints, and building a model to estimate the reliability of the design with quantified uncertainties. However, even experienced hands often overlook important objective functions and constraints that affect the design. In addition, uncertainty reduction measures, such as tests and redesign, are often not considered in reliability calculations during the initial stages. This research considers two areas that concern the design of engineering systems: 1) the trade-off of the effect of a test and post-test redesign on reliability and cost and 2) the search for multiple candidate designs as insurance against unforeseen faults in some designs. In this research, a methodology was developed to estimate the effect of a single future test and post-test redesign on reliability and cost. The methodology uses assumed distributions of computational and experimental errors with redesign rules to simulate alternative future test and redesign outcomes to form a probabilistic estimate of the reliability and cost for a given design. Further, it was explored how modeling a future test and redesign provides a company an opportunity to balance development costs versus performance by simultaneously choosing the design and the post-test redesign rules during the initial design stage. The second area of this research considers the use of dynamic local surrogates, or surrogate-based agents, to locate multiple candidate designs. Surrogate-based global optimization algorithms often require search in multiple candidate regions of design space, expending most of the computation needed to define multiple alternate designs. Thus, focusing solely on locating the best design may be wasteful. We extended adaptive sampling surrogate techniques to locate multiple optima by building local surrogates in sub-regions of the design space to identify optima. The efficiency of this method
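
    A hedged, greatly simplified sketch of the first idea above: sample assumed computational and experimental errors, apply a redesign rule when the simulated future test result looks bad, and tally the resulting reliability and expected redesign cost. All distributions, thresholds, and costs below are invented for illustration, not the dissertation's values.

        import random

        def simulate_future_test(n_sims=100_000, margin=1.10, redesign_boost=1.05,
                                 calc_err_sd=0.05, test_err_sd=0.03, redesign_cost=1.0):
            """Probabilistic estimate of post-test reliability and expected redesign cost."""
            failures = redesigns = 0
            for _ in range(n_sims):
                calc_err = random.gauss(0.0, calc_err_sd)   # analysis-model error
                true_capacity = margin * (1.0 + calc_err)   # what the design really delivers
                measured = true_capacity * (1.0 + random.gauss(0.0, test_err_sd))
                if measured < 1.05:                         # redesign rule: test margin too low
                    redesigns += 1
                    true_capacity *= redesign_boost
                if true_capacity < 1.0:                     # in-service failure
                    failures += 1
            return 1.0 - failures / n_sims, redesigns / n_sims * redesign_cost

        reliability, expected_cost = simulate_future_test()
        print(f"reliability ~ {reliability:.4f}, expected redesign cost ~ {expected_cost:.3f}")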

  1. Variational Algorithms for Test Particle Trajectories

    NASA Astrophysics Data System (ADS)

    Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.

    2015-11-01

    The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
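
    The new variational integrators are not spelled out in the abstract; as a hedged stand-in that illustrates why structure-preserving test-particle pushers matter, here is the standard Boris scheme for the Lorentz force (a well-known volume-preserving method, not the algorithm of this work):

        import numpy as np

        def boris_push(x, v, q_over_m, E, B, dt, steps):
            """Advance a charged test particle in uniform E and B fields with the
            Boris scheme; its bounded long-time energy error is the kind of behavior
            variational/symplectic integrators are designed to guarantee."""
            x, v = np.array(x, float), np.array(v, float)
            for _ in range(steps):
                v_minus = v + 0.5 * dt * q_over_m * E
                t = 0.5 * dt * q_over_m * B
                s = 2.0 * t / (1.0 + np.dot(t, t))
                v_prime = v_minus + np.cross(v_minus, t)
                v_plus = v_minus + np.cross(v_prime, s)
                v = v_plus + 0.5 * dt * q_over_m * E
                x = x + dt * v
            return x, v

        # Gyration in a uniform magnetic field: with E = 0 the speed is conserved exactly.
        xf, vf = boris_push([0, 0, 0], [1, 0, 0], 1.0, np.zeros(3), np.array([0, 0, 1.0]), 0.1, 1000)
        print(np.linalg.norm(vf))  # ~1.0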

  2. Development and Testing of Data Mining Algorithms for Earth Observation

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    The new algorithms developed under this project included a principled procedure for classification of objects, events or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables (called the Markov Blanket) sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD-style algorithms to the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer duration climate measurements of temperature teleconnections.

  3. A Study of a Network-Flow Algorithm and a Noncorrecting Algorithm for Test Assembly.

    ERIC Educational Resources Information Center

    Armstrong, R. D.; And Others

    1996-01-01

    When the network-flow algorithm (NFA) and the average growth approximation algorithm (AGAA) were used for automated test assembly with American College Test and Armed Services Vocational Aptitude Battery item banks, results indicate that reasonable error in item parameters is not harmful for test assembly using NFA or AGAA. (SLD)

  4. An Evaluation of a Flight Deck Interval Management Algorithm Including Delayed Target Trajectories

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Underwood, Matthew C.; Barmore, Bryan; Leonard, Robert D.

    2014-01-01

    NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature air traffic management technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise in-trail spacing. During high demand operations, TMA-TM may produce a schedule and corresponding aircraft trajectories that include delay to ensure that a particular aircraft will be properly spaced from other aircraft at each schedule waypoint. These delayed trajectories are not communicated to the automation onboard the aircraft, forcing the IM aircraft to use the published speeds to estimate the target aircraft's time of arrival. As a result, the aircraft performing IM operations may follow an aircraft whose TMA-TM generated trajectories have substantial speed deviations from the speeds expected by the spacing algorithm. Previous spacing algorithms were not designed to handle this magnitude of uncertainty. A simulation was conducted to examine a modified spacing algorithm with the ability to follow aircraft flying delayed trajectories. The simulation investigated the use of the new spacing algorithm with various delayed speed profiles and wind conditions, as well as several other variables designed to simulate real-life variability. The results and conclusions of this study indicate that the new spacing algorithm generally exhibits good performance; however, some types of target aircraft speed profiles can cause the spacing algorithm to command less than optimal speed control behavior.

  5. Algorithms and Programs for Strong Gravitational Lensing In Kerr Space-time Including Polarization

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie; Maddumage, Prasad

    2015-05-01

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.

  6. ALGORITHMS AND PROGRAMS FOR STRONG GRAVITATIONAL LENSING IN KERR SPACE-TIME INCLUDING POLARIZATION

    SciTech Connect

    Chen, Bin; Maddumage, Prasad; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie

    2015-05-15

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.

  7. A Boundary Condition Relaxation Algorithm for Strongly Coupled, Ablating Flows Including Shape Change

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Johnston, Christopher O.

    2011-01-01

    Implementations of a model for equilibrium, steady-state ablation boundary conditions are tested for the purpose of providing strong coupling with a hypersonic flow solver. The objective is to remove correction factors or film cooling approximations that are usually applied in coupled implementations of the flow solver and the ablation response. Three test cases are considered: the IRV-2, the Galileo probe, and a notional slender, blunted cone launched at 10 km/s from the Earth's surface. Successive substitution is employed, and the order of succession is varied as a function of surface temperature to obtain converged solutions. The implementation is tested on a specified trajectory for the IRV-2 to compute shape change under the approximation of steady-state ablation. Issues associated with stability of the shape change algorithm caused by explicit time step limits are also discussed.

  8. Comparison of presumptive blood test kits including hexagon OBTI.

    PubMed

    Johnston, Emma; Ames, Carole E; Dagnall, Kathryn E; Foster, John; Daniel, Barbara E

    2008-05-01

    Four presumptive blood tests, Hexagon OBTI, Hemastix®, Leucomalachite green (LMG), and Kastle-Meyer (KM), were compared for their sensitivity in the identification of dried bloodstains. Stains of varying blood dilutions were subjected to each presumptive test and the results compared. The Hexagon OBTI buffer volume was also reduced to ascertain whether this increased the sensitivity of the kit. The study found that Hemastix® was the most sensitive test for trace blood detection. Only with the reduced buffer volume was the Hexagon OBTI kit as sensitive as the LMG and KM tests. However, the Hexagon OBTI kit has the advantage of being a primate-specific blood detection kit. This study also investigated whether the OBTI buffer within the kit could be utilized for DNA profiling after presumptive testing. The results show that DNA profiles can be obtained from the Hexagon OBTI kit buffer directly.

  9. A sparse reconstruction algorithm for ultrasonic images in nondestructive testing.

    PubMed

    Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Neves Junior, Flávio; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst

    2015-01-01

    Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, the quality of images depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares using an l1 regularization norm. The method is tested to reconstruct an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT and regularized least squares (RLS). The method demonstrates significant resolution improvement when compared with B-scan (about 91% using real data). The proposed scheme also outperforms traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700
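
    A hedged sketch of the l1-regularized least-squares formulation, solved here with plain iterative soft thresholding (ISTA); the random forward model, regularization weight, and iteration count are illustrative placeholders, not the paper's ultrasound model.

        import numpy as np

        def ista_l1(A, b, lam, n_iter=2000):
            """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by iterative soft thresholding."""
            L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the data-fit gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                z = x - A.T @ (A @ x - b) / L            # gradient step on the least-squares term
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # l1 proximal (soft-threshold) step
            return x

        # Point-like reflectors: a sparse x recovered through a random linear forward model.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((80, 200))
        x_true = np.zeros(200)
        x_true[[30, 120]] = [1.0, -0.7]
        x_hat = ista_l1(A, A @ x_true + 0.01 * rng.standard_normal(80), lam=0.1)
        print(np.flatnonzero(np.abs(x_hat) > 0.2))       # support returned as [30 120]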

  10. Experimental study on subaperture testing with iterative triangulation algorithm.

    PubMed

    Yan, Lisong; Wang, Xiaokun; Zheng, Ligong; Zeng, Xuefeng; Hu, Haixiang; Zhang, Xuejun

    2013-09-23

    Applying the iterative triangulation stitching algorithm, we provide an experimental demonstration by testing a Φ120 mm flat mirror, a Φ1450 mm off-axis parabolic mirror and a convex hyperboloid mirror. Comparison of the stitching results with the self-examined subapertures shows that the reconstruction results are consistent with those of the subaperture testing. As all the experiments are conducted with a 5-dof adjustment platform with large adjustment errors, it proves that, using the above-mentioned algorithm, the subaperture stitching can be easily performed without a precise positioning system. In addition, with the algorithm, we accomplish the coordinate unification between testing and processing, which makes it possible to guide the processing by the stitching result.

  11. Testing of Gyroless Estimation Algorithms for the Fuse Spacecraft

    NASA Technical Reports Server (NTRS)

    Harman, R.; Thienel, J.; Oshman, Yaakov

    2004-01-01

    This paper documents the testing and development of magnetometer-based gyroless attitude and rate estimation algorithms for the Far Ultraviolet Spectroscopic Explorer (FUSE). The results of two approaches are presented, one relies on a kinematic model for propagation, a method used in aircraft tracking, and the other is a pseudolinear Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Both algorithms are tested using flight data collected over a few months after the failure of two of the reaction wheels. The question of closed-loop stability is addressed. The ability of the controller to meet the science slew requirements, without the gyros, is analyzed.

  12. A test sheet generating algorithm based on intelligent genetic algorithm and hierarchical planning

    NASA Astrophysics Data System (ADS)

    Gu, Peipei; Niu, Zhendong; Chen, Xuting; Chen, Wei

    2013-03-01

    In recent years, computer-based testing has become an effective method to evaluate students' overall learning progress so that appropriate guiding strategies can be recommended. Research has been done to develop intelligent test assembling systems which can automatically generate test sheets based on given parameters of test items. A good multisubject test sheet depends on not only the quality of the test items but also the construction of the sheet. Effective and efficient construction of test sheets according to multiple subjects and criteria is a challenging problem. In this paper, a multi-subject test sheet generation problem is formulated and a test sheet generating approach based on intelligent genetic algorithm and hierarchical planning (GAHP) is proposed to tackle this problem. The proposed approach utilizes hierarchical planning to simplify the multi-subject testing problem and adopts genetic algorithm to process the layered criteria, enabling the construction of good test sheets according to multiple test item requirements. Experiments are conducted and the results show that the proposed approach is capable of effectively generating multi-subject test sheets that meet specified requirements and achieve good performance.

  13. A test sheet generating algorithm based on intelligent genetic algorithm and hierarchical planning

    NASA Astrophysics Data System (ADS)

    Gu, Peipei; Niu, Zhendong; Chen, Xuting; Chen, Wei

    2012-04-01

    In recent years, computer-based testing has become an effective method to evaluate students' overall learning progress so that appropriate guiding strategies can be recommended. Research has been done to develop intelligent test assembling systems which can automatically generate test sheets based on given parameters of test items. A good multisubject test sheet depends on not only the quality of the test items but also the construction of the sheet. Effective and efficient construction of test sheets according to multiple subjects and criteria is a challenging problem. In this paper, a multi-subject test sheet generation problem is formulated and a test sheet generating approach based on intelligent genetic algorithm and hierarchical planning (GAHP) is proposed to tackle this problem. The proposed approach utilizes hierarchical planning to simplify the multi-subject testing problem and adopts genetic algorithm to process the layered criteria, enabling the construction of good test sheets according to multiple test item requirements. Experiments are conducted and the results show that the proposed approach is capable of effectively generating multi-subject test sheets that meet specified requirements and achieve good performance.

  14. Development and Application of a Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.

  15. An enhanced bacterial foraging algorithm approach for optimal power flow problem including FACTS devices considering system loadability.

    PubMed

    Belwin Edward, J; Rajasekar, N; Sathiyasekar, K; Senthilnathan, N; Sarjila, R

    2013-09-01

    Obtaining an optimal power flow solution is a strenuous task for any power system engineer. The inclusion of FACTS devices in the power system network adds to its complexity. The dual objective of OPF with fuel cost minimization along with FACTS device location for the IEEE 30-bus system is considered and solved using the proposed Enhanced Bacterial Foraging Algorithm (EBFA). The conventional Bacterial Foraging Algorithm (BFA) has the difficulty of optimal parameter selection. Hence, in this paper, BFA is enhanced by including the Nelder-Mead (NM) algorithm for better performance. A MATLAB code for EBFA is developed and the problem of optimal power flow with inclusion of FACTS devices is solved. After several runs with different initial values, it is found that the inclusion of FACTS devices such as SVC and TCSC in the network reduces the generation cost along with increased voltage stability limits. It is also observed that the proposed algorithm requires less computational time compared to earlier proposed algorithms.
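
    The Nelder-Mead step folded into EBFA is a standard derivative-free simplex search; as a hedged illustration of that building block alone (a toy two-generator dispatch objective, not the IEEE 30-bus OPF with FACTS devices), scipy can be called directly:

        import numpy as np
        from scipy.optimize import minimize

        def dispatch_objective(p, demand=150.0):
            """Toy quadratic fuel cost for two generators plus a penalty enforcing
            the power balance sum(p) = demand (illustrative only, not an OPF model)."""
            a = np.array([0.02, 0.0175])
            b = np.array([2.0, 1.75])
            fuel = np.sum(a * p**2 + b * p)
            return float(fuel + 1e3 * (np.sum(p) - demand) ** 2)

        # Derivative-free simplex search, the same role NM plays inside EBFA.
        result = minimize(dispatch_objective, x0=np.array([75.0, 75.0]), method="Nelder-Mead")
        print(result.x.round(2), round(result.fun, 2))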

  16. Improved ant algorithms for software testing cases generation.

    PubMed

    Yang, Shunkun; Man, Tianlong; Xu, Jiaqi

    2014-01-01

    Existing ant colony optimization (ACO) for software testing cases generation is a very popular domain in software testing engineering. However, the traditional ACO has flaws: early search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and precocity. This paper introduces improved ACO for software testing cases generation: improved local pheromone update strategy for ant colony optimization, improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and improved global path pheromone update strategy for ant colony optimization (IGPACO). At last, we put forward a comprehensive improved ant colony optimization (ACIACO), which is based on all the above three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve the search efficiency, restrain precocity, promote case coverage, and reduce the number of iterations.

  17. Improved Ant Algorithms for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi

    2014-01-01

    Existing ant colony optimization (ACO) for software testing cases generation is a very popular domain in software testing engineering. However, the traditional ACO has flaws: early search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and precocity. This paper introduces improved ACO for software testing cases generation: improved local pheromone update strategy for ant colony optimization, improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and improved global path pheromone update strategy for ant colony optimization (IGPACO). At last, we put forward a comprehensive improved ant colony optimization (ACIACO), which is based on all the above three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve the search efficiency, restrain precocity, promote case coverage, and reduce the number of iterations. PMID:24883391

  18. Testing PEPT Algorithm on a Medical PET Scanner

    NASA Astrophysics Data System (ADS)

    Sadrmomtaz, Alireza

    The basis of Positron Emission Tomography (PET) is the detection of the photons produced when a positron annihilates with an electron. Conservation of energy and momentum then require that two 511 keV gamma rays are emitted almost back to back (180° apart). This method is used to determine the spatial distribution of a positron-emitting fluid. Verifying the position of a single emitting particle in an object, instead of determining the distribution of a positron-emitting fluid, is the basis of another technique, which has been named positron emitting particle tracking (PEPT) and has been developed at Birmingham University. Birmingham University has recently obtained the PET scanner from Hammersmith Hospital, which was installed there in 1987. This scanner consists of 32 detector buckets, each of which includes 128 bismuth germanate detection elements, configured in 8 rings. The scanner has been rebuilt in a flexible geometry and will be used for PEPT studies. Testing the PEPT algorithm on the ECAT scanner gives a high data rate, allows approximately accurate tracking at high speed, and also offers the possibility of making measurements on large vessels.

  19. A New Computer Algorithm for Simultaneous Test Construction of Two-Stage and Multistage Testing.

    ERIC Educational Resources Information Center

    Wu, Ing-Long

    2001-01-01

    Presents two binary programming models with a special network structure that can be explored computationally for simultaneous test construction. Uses an efficient special purpose network algorithm to solve these models. An empirical study illustrates the approach. (SLD)

  20. An Algorithm for Testing the Efficient Market Hypothesis

    PubMed Central

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of the EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied to the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH). PMID:24205148

  1. An algorithm for testing the efficient market hypothesis.

    PubMed

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of the EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied to the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).
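
    A hedged sketch of the indicators the trading rules are built on (pandas; the common 12/26/9 MACD and 14-period RSI settings are assumed here, since the GA-tuned parameters are not given in the abstract):

        import pandas as pd

        def ema(prices, span):
            return prices.ewm(span=span, adjust=False).mean()

        def macd(prices, fast=12, slow=26, signal=9):
            line = ema(prices, fast) - ema(prices, slow)
            return line, line.ewm(span=signal, adjust=False).mean()

        def rsi(prices, period=14):
            delta = prices.diff()
            gain = delta.clip(lower=0).rolling(period).mean()
            loss = (-delta.clip(upper=0)).rolling(period).mean()
            return 100 - 100 / (1 + gain / loss)

        # Synthetic EUR/USD-like series; a GA would then search rule parameters
        # (spans, thresholds) that maximize profit over the training period.
        px = pd.Series([1.10 + 0.001 * i + 0.002 * ((-1) ** i) for i in range(60)])
        macd_line, signal_line = macd(px)
        print(round(rsi(px).iloc[-1], 1), round((macd_line - signal_line).iloc[-1], 5))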

  2. GPS 3-D cockpit displays: Sensors, algorithms, and flight testing

    NASA Astrophysics Data System (ADS)

    Barrows, Andrew Kevin

    Tunnel-in-the-Sky 3-D flight displays have been investigated for several decades as a means of enhancing aircraft safety and utility. However, high costs have prevented commercial development and seriously hindered research into their operational benefits. The rapid development of Differential Global Positioning Systems (DGPS), inexpensive computing power, and ruggedized displays is now changing this situation. A low-cost prototype system was built and flight tested to investigate implementation and operational issues. The display provided an "out the window" 3-D perspective view of the world, letting the pilot see the horizon, runway, and desired flight path even in instrument flight conditions. The flight path was depicted as a tunnel through which the pilot flew the airplane, while predictor symbology provided guidance to minimize path-following errors. Positioning data was supplied by various DGPS sources, including the Stanford Wide Area Augmentation System (WAAS) testbed. A combination of GPS and low-cost inertial sensors provided vehicle heading, pitch, and roll information. Architectural and sensor fusion tradeoffs made during system implementation are discussed. Computational algorithms used to provide guidance on curved paths over the earth geoid are outlined along with display system design issues. It was found that current technology enables low-cost Tunnel-in-the-Sky display systems with a target cost of $20,000 for large-scale commercialization. Extensive testing on Piper Dakota and Beechcraft Queen Air aircraft demonstrated enhanced accuracy and operational flexibility on a variety of complex flight trajectories. These included curved and segmented approaches, traffic patterns flown on instruments, and skywriting by instrument reference. Overlays to existing instrument approaches at airports in California and Alaska were flown and compared with current instrument procedures. These overlays demonstrated improved utility and situational awareness for

  3. Effect of Restricting Perimetry Testing Algorithms to Reliable Sensitivities on Test-Retest Variability

    PubMed Central

    Gardiner, Stuart K.; Mansberger, Steven L.

    2016-01-01

    Purpose We have previously shown that sensitivities obtained at severely damaged visual field locations (<15–19 dB) are unreliable and highly variable. This study evaluates a testing algorithm that does not present very high contrast stimuli (above approximately 1000% contrast) at severely damaged locations, but instead concentrates on more precise estimation at the remaining locations. Methods A trained ophthalmic technician tested 36 eyes of 36 participants twice with each of two different testing algorithms: ZEST0, which allowed sensitivities within the range 0 to 35 dB, and ZEST15, which allowed sensitivities between 15 and 35 dB but was otherwise identical. The difference between the two runs for the same algorithm was used as a measure of test-retest variability. These were compared between algorithms using a random effects model with homoscedastic within-group errors whose variance was allowed to differ between algorithms. Results The estimated test-retest variance for ZEST15 was 53.1% of the test-retest variance for ZEST0, with 95% confidence interval (50.5%–55.7%). Among locations whose sensitivity was ≥17 dB on all tests, the variability of ZEST15 was 86.4% of the test-retest variance for ZEST0, with 95% confidence interval (79.3%–94.0%). Conclusions Restricting the range of possible sensitivity estimates reduced test-retest variability, not only at locations with severe damage but also at locations with higher sensitivity. Future visual field algorithms should avoid high-contrast stimuli in severely damaged locations. Given that low sensitivities cannot be measured reliably enough for most clinical uses, it appears to be more efficient to concentrate on more precise testing of less damaged locations. PMID:27784065

  4. Monte Carlo tests of the ELIPGRID-PC algorithm

    SciTech Connect

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
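
    The kind of Monte Carlo check described above can be sketched as follows: place an elliptical hot spot with a random orientation and a centre uniform over one grid cell, and count how often at least one node of a square sampling grid falls inside it. The geometry and trial count here are illustrative, not the ELIPGRID-PC test cases.

```python
import numpy as np

def detection_probability(a, b, grid_spacing, n_trials=100_000, seed=1):
    """Monte Carlo probability that a square sampling grid hits an elliptical
    hot spot with semi-axes a and b (same units as grid_spacing), random
    orientation, and a centre uniform over one grid cell."""
    rng = np.random.default_rng(seed)
    g = grid_spacing
    reach = int(np.ceil(max(a, b) / g)) + 1
    nodes = np.array([(i * g, j * g)
                      for i in range(-reach, reach + 2)
                      for j in range(-reach, reach + 2)], dtype=float)
    hits = 0
    for _ in range(n_trials):
        cx, cy = rng.uniform(0.0, g, size=2)         # hot-spot centre within one cell
        theta = rng.uniform(0.0, np.pi)              # random orientation
        dx, dy = nodes[:, 0] - cx, nodes[:, 1] - cy
        u = dx * np.cos(theta) + dy * np.sin(theta)  # rotate into the ellipse frame
        v = -dx * np.sin(theta) + dy * np.cos(theta)
        if np.any((u / a) ** 2 + (v / b) ** 2 <= 1.0):
            hits += 1
    return hits / n_trials

# Example: a thin ellipse relative to the grid spacing (illustrative geometry)
print(detection_probability(a=0.6, b=0.1, grid_spacing=1.0))
```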

  5. Oscillation Detection Algorithm Development Summary Report and Test Plan

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang

    2009-10-03

    Measurement-based modal analysis algorithms have been developed. They include Prony analysis, the Regularized Robust Recursive Least Square (R3LS) algorithm, the Yule-Walker algorithm, the Yule-Walker Spectrum algorithm, and the N4SID algorithm. Each has been shown to be effective for certain situations, but not as effective for others. For example, traditional Prony analysis works well for disturbance data but not for ambient data, while Yule-Walker is designed for ambient data only. Even in an algorithm that works for both disturbance and ambient data, such as R3LS, the latency resulting from the time window used in the algorithm is an issue for timely estimation of oscillation modes. For ambient data, the time window needs to be longer to accumulate information for a reasonably accurate estimation, while for disturbance data the time window can be significantly shorter, so the latency in estimation can be much less. In addition, adding a known input signal, such as a noise probing signal, can increase the knowledge of system oscillatory properties and thus improve the quality of mode estimation. System situations change over time. Disturbances can occur at any time, and probing signals can be added for a certain time period and then removed. All these observations point to the need to add intelligence to ModeMeter applications. That is, a ModeMeter needs to adaptively select different algorithms and adjust parameters for various situations. This project aims to develop systematic approaches for algorithm selection and parameter adjustment. The very first step is to detect the occurrence of oscillations so that the algorithm and parameters can be changed accordingly. The proposed oscillation detection approach is based on the signal-to-noise ratio of measurements.
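
    A minimal sketch of the kind of signal-to-noise-ratio check implied by the proposed detection step is given below: estimate the power in a candidate oscillatory band from a window of measurements and flag an oscillation when it stands sufficiently far above the remaining broadband power. The band limits, window length, and threshold are illustrative assumptions.

```python
import numpy as np

def oscillation_detected(x, fs, band=(0.1, 1.0), snr_threshold_db=10.0):
    """Flag a sustained oscillation in a window of measurements x (sampled at
    fs Hz) when the power inside `band` exceeds the remaining broadband power
    by `snr_threshold_db`. Band, window length, and threshold are illustrative."""
    x = np.asarray(x, dtype=float) - np.mean(x)      # remove the operating point
    f = np.fft.rfftfreq(x.size, d=1.0 / fs)
    pxx = np.abs(np.fft.rfft(x)) ** 2                # simple periodogram
    in_band = (f >= band[0]) & (f <= band[1])
    p_band = pxx[in_band].sum()
    p_rest = pxx[~in_band][1:].sum() + 1e-12         # skip DC, avoid divide-by-zero
    snr_db = 10.0 * np.log10(p_band / p_rest)
    return bool(snr_db > snr_threshold_db), float(snr_db)

# Example: a 0.3 Hz inter-area-style mode in measurement noise
fs = 10.0
t = np.arange(0.0, 60.0, 1.0 / fs)
sig = 0.5 * np.sin(2 * np.pi * 0.3 * t) + np.random.default_rng(2).normal(0.0, 0.1, t.size)
print(oscillation_detected(sig, fs))
```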

  6. Tailored Testing Theory and Practice: A Basic Model, Normal Ogive Submodels, and Tailored Testing Algorithms.

    ERIC Educational Resources Information Center

    Urry, Vern W.

    In this report, selection theory is used as a theoretical framework from which mathematical algorithms for tailored testing are derived. The process of tailored, or adaptive, testing is presented as analogous to personnel selection and rejection on a series of continuous variables that are related to ability. Proceeding from a single common-factor…

  7. Extended weighted fair queuing (EWFQ) algorithm for broadband applications including multicast traffic

    NASA Astrophysics Data System (ADS)

    Tufail, Mudassir; Cousin, Bernard

    1997-10-01

    Ensuring end-to-end bounded delay and a fair allocation of bandwidth to a backlogged session are no longer the only criteria for judging a queue service scheme to be good. With the evolution of packet-switched networks, more and more distributed and multimedia applications are being developed. These applications demand that the service offered to them be homogeneously distributed at all instants, in contrast to the back-to-back serving of packets in the WFQ scheme. There are two reasons for this demand for homogeneous service: (1) In feedback-based congestion control algorithms, sources constantly sample the network state using feedback from the receiver. The source modifies its emission rate in accordance with the feedback message. A reliable feedback message is only possible if the packet service is homogeneous. (2) In multicast applications, where packet replication is performed at switches, replicated packets are likely to be served at different rates if the service they receive at different output ports is not homogeneous. This is undesirable because packet replication to different multicast branches at a switch has to be carried out at a homogeneous speed for two important reasons: (1) heterogeneous service rates for replicated multicast packets result in different feedback information from different destinations (of the same multicast session), and thus lead to unstable and less efficient network control; (2) in a switch architecture, the buffer requirement can be reduced if replication and serving of multicast packets are done at a homogeneous rate. Thus, there is a need for a service discipline which not only serves applications at no less than their guaranteed rates but also assures homogeneous service to packets. Homogeneous service to an application may be translated precisely in terms of maintaining good inter-packet spacing. The EWFQ scheme is identical to the WFQ scheme except that a packet is stamped with delayed
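
    The abstract is cut off where it describes how EWFQ delays a packet's timestamp, so the sketch below shows only the standard WFQ virtual-finish-time stamping that EWFQ modifies; the virtual-time bookkeeping is deliberately simplified.

```python
from dataclasses import dataclass, field

@dataclass
class WFQScheduler:
    """Minimal weighted fair queuing stamper: each arriving packet receives a
    virtual finish time and packets are served in increasing finish-time order.
    (EWFQ, per the abstract, delays the stamping to space a session's packets
    more evenly; that modification is omitted here.)"""
    virtual_time: float = 0.0
    last_finish: dict = field(default_factory=dict)   # per-session finish times
    queue: list = field(default_factory=list)

    def enqueue(self, session, length_bits, weight):
        start = max(self.virtual_time, self.last_finish.get(session, 0.0))
        finish = start + length_bits / weight          # classic WFQ timestamp
        self.last_finish[session] = finish
        self.queue.append((finish, session, length_bits))

    def dequeue(self):
        self.queue.sort()                              # smallest finish time served first
        finish, session, length_bits = self.queue.pop(0)
        self.virtual_time = finish                     # crude virtual-time advance
        return session, length_bits

sched = WFQScheduler()
sched.enqueue("A", 1500 * 8, weight=2)
sched.enqueue("B", 500 * 8, weight=1)
print(sched.dequeue())   # session B's shorter packet is served first here
```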

  8. Testing of Gyroless Estimation Algorithms for the FUSE Spacecraft

    NASA Technical Reports Server (NTRS)

    Thienel, Julie; Harman, Rick; Oshman, Yaakov

    2003-01-01

    The Far Ultraviolet Spectroscopic Explorer (FUSE) is equipped with two ring laser gyros on each of the spacecraft body axes. In May 2001 one gyro failed. It is anticipated that all of the remaining gyros will also fail based on intensity warnings. In addition to the gyro failure, two of four reaction wheels failed in late 2001. The spacecraft control now relies heavily on magnetic torque to perform the necessary science maneuvers and hold on target. The only sensor consistently available during slews is a magnetometer. This paper documents the testing and development of magnetometer-based gyroless attitude and rate estimation algorithms for FUSE. The results of two approaches are presented, one relies on a kinematic model for propagation, a method used in aircraft tracking, and the other is a pseudo-linear Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Both algorithms are tested using flight data collected over a few months before and after the reaction wheel failure. Finally, the question of closed-loop stability is addressed. The ability of the controller to meet the science slew requirements, without the gyros, is tested through simulations.
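
    As a point of reference, a minimal sketch of the Euler-equation propagation step that such a gyroless rate-estimation filter relies on is shown below; the inertia tensor and torque values are placeholders, not FUSE parameters.

```python
import numpy as np

def propagate_rate(omega, torque, inertia, dt):
    """One explicit integration step of the rigid-body (Euler) equations
    I*omega_dot = T - omega x (I*omega), the propagation model a gyroless
    rate-estimation filter can use between measurement updates."""
    omega = np.asarray(omega, dtype=float)
    I = np.asarray(inertia, dtype=float)             # 3x3 inertia tensor (kg m^2)
    omega_dot = np.linalg.solve(I, torque - np.cross(omega, I @ omega))
    return omega + omega_dot * dt

inertia = np.diag([1200.0, 1100.0, 900.0])           # placeholder inertia, not FUSE values
omega = np.array([0.001, -0.002, 0.0005])            # current rate estimate (rad/s)
torque = np.array([0.01, 0.0, -0.005])               # e.g. commanded magnetic torque (N m)
print(propagate_rate(omega, torque, inertia, dt=1.0))
```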

  9. Faith in the algorithm, part 1: beyond the turing test

    SciTech Connect

    Rodriguez, Marko A; Pepe, Alberto

    2009-01-01

    Since the Turing test was first proposed by Alan Turing in 1950, the goal of artificial intelligence has been predicated on the ability for computers to imitate human intelligence. However, the majority of uses for the computer can be said to fall outside the domain of human abilities and it is exactly outside of this domain where computers have demonstrated their greatest contribution. Another definition for artificial intelligence is one that is not predicated on human mimicry, but instead, on human amplification, where the algorithms that are best at accomplishing this are deemed the most intelligent. This article surveys various systems that augment human and social intelligence.

  10. MST Fitness Index and implicit data narratives: A comparative test on alternative unsupervised algorithms

    NASA Astrophysics Data System (ADS)

    Buscema, Massimo; Sacco, Pier Luigi

    2016-11-01

    In this paper, we introduce a new methodology, called MST Fitness and based on the notion of the Minimum Spanning Tree (MST), for evaluating how well alternative algorithms capture the deep statistical structure of datasets of different types and natures. We test this methodology on six different databases, some of which are artificial and widely used in similar experiments, and some of which relate to real-world phenomena. Our test set consists of eight different algorithms, including some that are widely known and used, such as Principal Component Analysis, Linear Correlation, and Euclidean Distance. We moreover consider more sophisticated Artificial Neural Network based algorithms, such as the Self-Organizing Map (SOM) and a relatively new algorithm called the Auto-Contractive Map (AutoCM). We find that, for our benchmark of datasets, AutoCM performs consistently better than all other algorithms on all of the datasets, and that its global performance is superior to that of the others by several orders of magnitude. It remains to be checked in future research whether AutoCM can be considered a truly general-purpose algorithm for the analysis of heterogeneous categories of datasets.
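
    The basic ingredient of an MST-based comparison of this kind is the total weight of the minimum spanning tree built over a distance matrix derived from each algorithm's output. A minimal sketch (not the paper's specific MST Fitness scoring) follows.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_weight(distance_matrix):
    """Total weight of the minimum spanning tree of a symmetric matrix of
    positive pairwise distances, the basic quantity an MST-based comparison
    of algorithm outputs is built on."""
    mst = minimum_spanning_tree(np.asarray(distance_matrix, dtype=float))
    return float(mst.sum())

# Example: Euclidean distances between the records of a small synthetic dataset
rng = np.random.default_rng(3)
data = rng.normal(size=(30, 5))
dist = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
print(round(mst_weight(dist), 3))
```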

  11. Empirical Testing of an Algorithm for Defining Somatization in Children

    PubMed Central

    Eisman, Howard D.; Fogel, Joshua; Lazarovich, Regina; Pustilnik, Inna

    2007-01-01

    Introduction A previous article proposed an algorithm for defining somatization in children by classifying them into three categories: well, medically ill, and somatizer; the authors suggested further empirical validation of the algorithm (Postilnik et al., 2006). We use the Child Behavior Checklist (CBCL) to provide this empirical validation. Method Parents of children seen in pediatric clinics completed the CBCL (n=126). The physicians of these children completed specially-designed questionnaires. The sample comprised 62 boys and 64 girls (age range 2 to 15 years). Classification categories included: well (n=53), medically ill (n=55), and somatizer (n=18). Analysis of variance (ANOVA) was used for statistical comparisons. Discriminant function analysis was conducted with the CBCL subscales. Results There were significant differences between the classification categories for the somatic complaints (p<0.001), social problems (p=0.004), thought problems (p=0.01), attention problems (p=0.006), and internalizing (p=0.003) subscales and also the total (p=0.001) and total-t (p=0.001) scales of the CBCL. Discriminant function analysis showed that 78% of somatizers and 66% of well children were accurately classified, while only 35% of medically ill children were accurately classified. Conclusion The somatization classification algorithm proposed by Postilnik et al. (2006) shows promise for classification of children and adolescents with somatic symptoms. PMID:18421368

  12. Testing of Gyroless Estimation Algorithms for the FUSE Spacecraft

    NASA Technical Reports Server (NTRS)

    Harman, Rick; Thienel, Julie; Oshman, Yaakov

    2003-01-01

    The Far Ultraviolet Spectroscopic Explorer (FUSE) is equipped with two ring laser gyros on each of the spacecraft body axes. In May 2001 one gyro failed. It is anticipated that all of the remaining gyros will fail, based on intensity warnings. In addition to the gyro failure, two of four reaction wheels failed in late 2001. The spacecraft control now relies heavily on magnetic torque to perform the necessary science maneuvers and hold on target. The only sensor consistently available during slews is a magnetometer. This paper documents the testing and development of magnetometer-based gyroless attitude and rate estimation algorithms for FUSE. The results of two approaches are presented, one relies on a kinematic model for propagation, a method used in aircraft tracking. The other is a pseudo-linear Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Both algorithms are tested using flight data collected over a few months after the reaction wheel failure. Finally, the question of closed-loop stability is addressed. The ability of the controller to meet the science slew requirements, without the gyros, is analyzed.

  13. Potential for false positive HIV test results with the serial rapid HIV testing algorithm

    PubMed Central

    2012-01-01

    Background Rapid HIV tests provide same-day results and are widely used in HIV testing programs in areas with limited personnel and laboratory infrastructure. The Uganda Ministry of Health currently recommends the serial rapid testing algorithm with Determine, STAT-PAK, and Uni-Gold for diagnosis of HIV infection. Using this algorithm, individuals who test positive on Determine, negative to STAT-PAK and positive to Uni-Gold are reported as HIV positive. We conducted further testing on this subgroup of samples using qualitative DNA PCR to assess the potential for false positive tests in this situation. Results Of the 3388 individuals who were tested, 984 were HIV positive on two consecutive tests, and 29 were considered positive by a tiebreaker (positive on Determine, negative on STAT-PAK, and positive on Uni-Gold). However, when the 29 samples were further tested using qualitative DNA PCR, 14 (48.2%) were HIV negative. Conclusion Although this study was not primarily designed to assess the validity of rapid HIV tests and thus only a subset of the samples were retested, the findings show a potential for false positive HIV results in the subset of individuals who test positive when a tiebreaker test is used in serial testing. These findings highlight a need for confirmatory testing for this category of individuals. PMID:22429706
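
    The decision logic of the serial algorithm as described above can be sketched directly; the function below only encodes the reported flow (Determine screen, STAT-PAK confirmation, Uni-Gold tiebreaker) and is not a clinical tool.

```python
def serial_rapid_hiv_result(determine_pos, statpak_pos=None, unigold_pos=None):
    """Decision logic of the serial rapid-testing algorithm described above:
    screen with Determine; if reactive, confirm with STAT-PAK; if the two
    disagree, Uni-Gold acts as tiebreaker. Returns the reported status only;
    the study's point is that tiebreaker positives may need confirmatory testing."""
    if not determine_pos:
        return "HIV negative"                  # screening test non-reactive
    if statpak_pos:
        return "HIV positive"                  # two concordant reactive tests
    # Discordant screen/confirm: tiebreaker decides (the subgroup re-tested by PCR).
    return "HIV positive (tiebreaker)" if unigold_pos else "HIV negative"

print(serial_rapid_hiv_result(True, statpak_pos=False, unigold_pos=True))
```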

  14. An efficient algorithm for solving coupled Schroedinger type ODEs, whose potentials include δ-functions

    SciTech Connect

    Gousheh, S.S.

    1996-01-01

    I have used the shooting method to find the eigenvalues (bound state energies) of a set of strongly coupled Schroedinger type equations. I have discussed the advantages of the shooting method when the potentials include δ-functions. I have also discussed some points which are universal in these kinds of problems, whose use makes the algorithm much more efficient. These points include mapping the domain of the ODE into a finite one, using the asymptotic form of the solutions, making the best use of the normalization freedom, and converting the δ-functions into boundary conditions.
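
    In the same spirit, a minimal single-equation sketch of a shooting method in which a δ-function in the potential is converted into a derivative-jump boundary condition is shown below; the well, δ strength, and units are illustrative, and the coupled-channel aspect of the paper is omitted.

```python
def shoot(E, g=5.0, x0=0.5, L=1.0, n=2000):
    """Integrate -psi'' + g*delta(x - x0)*psi = E*psi on [0, L] with psi(0) = 0
    and return psi(L). The delta function is replaced by the derivative-jump
    condition psi'(x0+) = psi'(x0-) + g*psi(x0) (units with hbar^2/2m = 1)."""
    h = L / n
    x, psi, dpsi = 0.0, 0.0, 1.0
    jumped = False
    for _ in range(n):
        # Midpoint (RK2) step for psi'' = -E*psi away from the delta function.
        k1p, k1d = dpsi, -E * psi
        k2p = dpsi + 0.5 * h * k1d
        k2d = -E * (psi + 0.5 * h * k1p)
        psi, dpsi, x = psi + h * k2p, dpsi + h * k2d, x + h
        if not jumped and x >= x0:
            dpsi += g * psi                    # boundary condition replacing the delta
            jumped = True
    return psi

def eigenvalue(E_lo, E_hi, tol=1e-8):
    """Bisect on E until the shooting condition psi(L) = 0 is met."""
    f_lo = shoot(E_lo)
    while E_hi - E_lo > tol:
        E_mid = 0.5 * (E_lo + E_hi)
        f_mid = shoot(E_mid)
        if f_mid * f_lo > 0:
            E_lo, f_lo = E_mid, f_mid
        else:
            E_hi = E_mid
    return 0.5 * (E_lo + E_hi)

# Ground state of a unit-width infinite well with a repulsive delta at its centre;
# the eigenvalue lies above the bare-well value pi**2 ~ 9.87.
print(round(eigenvalue(9.0, 25.0), 4))
```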

  15. Development of a Smart Release Algorithm for Mid-Air Separation of Parachute Test Articles

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is currently developing an autonomous method to separate a capsule-shaped parachute test vehicle from an air-drop platform for use in the test program to develop and validate the parachute system for the Orion spacecraft. The CPAS project seeks to perform air-drop tests of an Orion-like boilerplate capsule. Delivery of the boilerplate capsule to the test condition has proven to be a critical and complicated task. In the current concept, the boilerplate vehicle is extracted from an aircraft on top of a Type V pallet and then separated from the pallet in mid-air. The attitude of the vehicles at separation is critical to avoiding re-contact and successfully deploying the boilerplate into a heatshield-down orientation. Neither the pallet nor the boilerplate has an active control system. However, the attitude of the mated vehicle as a function of time is somewhat predictable. CPAS engineers have designed an avionics system to monitor the attitude of the mated vehicle as it is extracted from the aircraft and command a release when the desired conditions are met. The algorithm includes contingency capabilities designed to release the test vehicle before undesirable orientations occur. The algorithm was verified with simulation and ground testing. The pre-flight development and testing is discussed and limitations of ground testing are noted. The CPAS project performed a series of three drop tests as a proof-of-concept of the release technique. These tests helped to refine the attitude instrumentation and software algorithm to be used on future tests. The drop tests are described in detail and the evolution of the release system with each test is described.
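
    A heavily simplified sketch of the kind of attitude-triggered release decision described above follows; all windows, limits, and the timeout are placeholders rather than CPAS values.

```python
def release_command(pitch_deg, roll_deg, pitch_rate_dps, time_since_extraction_s,
                    pitch_window=(-10.0, 10.0), roll_limit=20.0, timeout_s=8.0):
    """Illustrative release-decision logic in the spirit of the smart-release
    approach described above: separate when the mated vehicle's attitude is
    inside a favourable window, or force a contingency release before an
    undesirable orientation (or a timeout) is reached. All limits are
    placeholders, not CPAS values."""
    nominal = (pitch_window[0] <= pitch_deg <= pitch_window[1]
               and abs(roll_deg) <= roll_limit
               and pitch_rate_dps < 0.0)       # e.g. pitching toward heatshield-down
    contingency = time_since_extraction_s >= timeout_s or abs(pitch_deg) > 60.0
    return nominal or contingency

print(release_command(pitch_deg=-4.0, roll_deg=5.0, pitch_rate_dps=-2.0,
                      time_since_extraction_s=3.5))
```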

  16. Testing of the on-board attitude determination and control algorithms for SAMPEX

    NASA Astrophysics Data System (ADS)

    McCullough, Jon D.; Flatley, Thomas W.; Henretty, Debra A.; Markley, F. Landis; San, Josephine K.

    1993-02-01

    Algorithms for on-board attitude determination and control of the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) have been expanded to include a constant gain Kalman filter for the spacecraft angular momentum, pulse width modulation for the reaction wheel command, an algorithm to avoid pointing the Heavy Ion Large Telescope (HILT) instrument boresight along the spacecraft velocity vector, and the addition of digital sun sensor (DSS) failure detection logic. These improved algorithms were tested in a closed-loop environment for three orbit geometries, one with the sun perpendicular to the orbit plane, and two with the sun near the orbit plane - at Autumnal Equinox and at Winter Solstice. The closed-loop simulator was enhanced and used as a truth model for the control systems' performance evaluation and sensor/actuator contingency analysis. The simulations were performed on a VAX 8830 using a prototype version of the on-board software.

  17. Testing of the on-board attitude determination and control algorithms for SAMPEX

    NASA Technical Reports Server (NTRS)

    Mccullough, Jon D.; Flatley, Thomas W.; Henretty, Debra A.; Markley, F. Landis; San, Josephine K.

    1993-01-01

    Algorithms for on-board attitude determination and control of the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) have been expanded to include a constant gain Kalman filter for the spacecraft angular momentum, pulse width modulation for the reaction wheel command, an algorithm to avoid pointing the Heavy Ion Large Telescope (HILT) instrument boresight along the spacecraft velocity vector, and the addition of digital sun sensor (DSS) failure detection logic. These improved algorithms were tested in a closed-loop environment for three orbit geometries, one with the sun perpendicular to the orbit plane, and two with the sun near the orbit plane - at Autumnal Equinox and at Winter Solstice. The closed-loop simulator was enhanced and used as a truth model for the control systems' performance evaluation and sensor/actuator contingency analysis. The simulations were performed on a VAX 8830 using a prototype version of the on-board software.
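
    A minimal sketch of one cycle of a constant-gain Kalman filter for the spacecraft angular momentum, the kind of filter named above, is given below; the gain, torque model, and measurement source are placeholders (body-frame coupling terms are omitted), not the SAMPEX design.

```python
import numpy as np

def constant_gain_momentum_filter(h_est, z_meas, external_torque, dt, gain=0.2):
    """One cycle of a constant-gain Kalman filter for the spacecraft angular
    momentum: propagate with the modeled external torque, then correct toward
    the measurement-derived momentum with a fixed gain. Gain and torque model
    are placeholders, and body-frame coupling terms are omitted."""
    h_pred = h_est + external_torque * dt        # propagation step
    return h_pred + gain * (z_meas - h_pred)     # constant-gain measurement update

rng = np.random.default_rng(9)
h = np.zeros(3)
for _ in range(10):
    z = np.array([0.5, -0.2, 0.1]) + rng.normal(0.0, 0.02, 3)   # noisy momentum "measurement"
    h = constant_gain_momentum_filter(h, z, external_torque=np.zeros(3), dt=1.0)
print(h)    # converges toward the measured momentum
```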

  18. Testing of Gyroless Estimation Algorithms for the FUSE Spacecraft

    NASA Technical Reports Server (NTRS)

    Thienel, Julie; Harman, Rick; Oshman, Yaakov

    2003-01-01

    The Far Ultraviolet Spectroscopic Explorer (FUSE) is equipped with two ring laser gyros on each of the spacecraft body axes. In May 2001 one gyro failed. It is anticipated that all of the remaining gyros will also fail, based on intensity warnings. In addition to the gyro failure, two of four reaction wheels failed in late 2001. The spacecraft control now relies heavily on magnetic torque to perform the necessary science maneuvers. The only sensor available during slews is a magnetometer. This paper documents the testing and development of gyroless attitude and rate estimation algorithms for FUSE. The results of two approaches are presented, one relies on a kinematic model for propagation, a method used in aircraft tracking, and the other is a traditional Extended Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Finally, the question of closed-loop stability is addressed. The ability of the controller to meet the science slew requirements, without the gyros, is tested through simulations.

  19. New algorithms for phase unwrapping: implementation and testing

    NASA Astrophysics Data System (ADS)

    Kotlicki, Krzysztof

    1998-11-01

    In this paper it is shown how regularization theory was used for a new noise-immune algorithm for phase unwrapping. The algorithms were developed by M. Servin, J.L. Marroquin and F.J. Cuevas in Centro de Investigaciones en Optica A.C. and Centro de Investigacion en Matematicas A.C. in Mexico. The theory is presented. The objective of the work was to implement the algorithms in software able to perform off-line unwrapping of fringe patterns. The algorithms are presented, as well as the results and the software developed for the implementation.
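
    For context, the sketch below shows the elementary (non-regularized) Itoh unwrapping step in one dimension, i.e. the noise-sensitive baseline that regularization-based unwrappers such as the one described here are designed to improve upon; it is not the Servin, Marroquin and Cuevas algorithm.

```python
import numpy as np

def unwrap_1d(wrapped):
    """Basic Itoh-style 1-D phase unwrapping: whenever the wrapped phase jumps
    by more than pi between neighbouring samples, add or subtract a multiple
    of 2*pi. This is the noise-sensitive baseline that regularized unwrappers
    are designed to improve upon."""
    wrapped = np.asarray(wrapped, dtype=float)
    diffs = np.diff(wrapped)
    corrections = -2.0 * np.pi * np.round(diffs / (2.0 * np.pi))
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(diffs + corrections)))

# A smooth quadratic phase ramp is wrapped and then recovered exactly.
true_phase = np.linspace(0.0, 6.0 * np.pi, 200) ** 2 / (6.0 * np.pi)
wrapped = np.angle(np.exp(1j * true_phase))
print(np.allclose(unwrap_1d(wrapped), true_phase))
```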

  20. Development of a computer algorithm for the analysis of variable-frequency AC drives: Case studies included

    NASA Technical Reports Server (NTRS)

    Kankam, M. David; Benjamin, Owen

    1991-01-01

    The development of computer software for performance prediction and analysis of voltage-fed, variable-frequency AC drives for space power applications is discussed. The AC drives discussed include the pulse width modulated inverter (PWMI), a six-step inverter and the pulse density modulated inverter (PDMI), each individually connected to a wound-rotor induction motor. Various d-q transformation models of the induction motor are incorporated for user-selection of the most applicable model for the intended purpose. Simulation results of selected AC drives correlate satisfactorily with published results. Future additions to the algorithm are indicated. These improvements should enhance the applicability of the computer program to the design and analysis of space power systems.
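
    The d-q models referred to above are built on the Park transformation; a minimal amplitude-invariant sketch is shown below (sign and scaling conventions vary between formulations).

```python
import numpy as np

def abc_to_dq0(ia, ib, ic, theta):
    """Amplitude-invariant Park transformation of three-phase quantities into
    the rotating d-q-0 frame at electrical angle theta (radians). Sign and
    scaling conventions vary between d-q model formulations."""
    two_thirds = 2.0 / 3.0
    d = two_thirds * (ia * np.cos(theta)
                      + ib * np.cos(theta - 2.0 * np.pi / 3.0)
                      + ic * np.cos(theta + 2.0 * np.pi / 3.0))
    q = -two_thirds * (ia * np.sin(theta)
                       + ib * np.sin(theta - 2.0 * np.pi / 3.0)
                       + ic * np.sin(theta + 2.0 * np.pi / 3.0))
    zero = (ia + ib + ic) / 3.0
    return d, q, zero

# Balanced sinusoidal currents map to constant d-q values when theta tracks them.
t = np.linspace(0.0, 0.04, 5)
theta = 2.0 * np.pi * 50.0 * t
ia = np.cos(theta)
ib = np.cos(theta - 2.0 * np.pi / 3.0)
ic = np.cos(theta + 2.0 * np.pi / 3.0)
print(abc_to_dq0(ia, ib, ic, theta))   # d ~ 1, q ~ 0, zero ~ 0
```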

  1. A Test of Genetic Algorithms in Relevance Feedback.

    ERIC Educational Resources Information Center

    Lopez-Pujalte, Cristina; Guerrero Bote, Vicente P.; Moya Anegon, Felix de

    2002-01-01

    Discussion of information retrieval, query optimization techniques, and relevance feedback focuses on genetic algorithms, which are derived from artificial intelligence techniques. Describes an evaluation of different genetic algorithms using a residual collection method and compares results with the Ide dec-hi method (Salton and Buckley, 1990…

  2. Evaluating Knowledge Structure-Based Adaptive Testing Algorithms and System Development

    ERIC Educational Resources Information Center

    Wu, Huey-Min; Kuo, Bor-Chen; Yang, Jinn-Min

    2012-01-01

    In recent years, many computerized test systems have been developed for diagnosing students' learning profiles. Nevertheless, it remains a challenging issue to find an adaptive testing algorithm to both shorten testing time and precisely diagnose the knowledge status of students. In order to find a suitable algorithm, four adaptive testing…

  3. A blind test of correction algorithms for daily inhomogeneities

    NASA Astrophysics Data System (ADS)

    Stepanek, Petr; Venema, Victor; Guijarro, Jose; Nemec, Johanna; Zahradnicek, Pavel; Hadzimustafic, Jasmina

    2013-04-01

    As part of the COST Action HOME (Advances in homogenisation methods of climate series: an integrated approach), a dataset was generated that serves as a validation tool for correction of daily inhomogeneities. The dataset contains daily air temperature data and was generated based on the temperature series from the Czech Republic. The validation dataset has three different types of series: network, pair and pair-dedicated data. Different types of inhomogeneities have been inserted into the series. Parametric breaks in the first three moments were introduced and the influence of relocation was simulated by exchanging the distribution of two nearby stations. The participants have returned several contributions, including methods that are currently used: HOM, SPLIDHOM (with various modifications like HOMAD and bootstrapped SPLIDHOM), QM (RHtestsV3 software), DAP (ProClimDB), HCL (Climatol), MASH and also simple delta method. The quality of the homogenised data was measured by a large range of metrics, the most important ones are the RMSE and the trends in the moments. Thanks to RHtestsV3 algorithms we could also assess relative and absolute homogenization results. As expected, the simpler methods, correcting only the mean, are best at reducing the RMSE. For more information on the COST Action on homogenisation see: http://www.homogenisation.org/

  4. Sensitivity of SWOT discharge algorithm to measurement errors: Testing on the Sacramento River

    NASA Astrophysics Data System (ADS)

    Durand, Micheal; Andreadis, Konstantinos; Yoon, Yeosang; Rodriguez, Ernesto

    2013-04-01

    Scheduled for launch in 2019, the Surface Water and Ocean Topography (SWOT) satellite mission will utilize a Ka-band radar interferometer to measure river heights, widths, and slopes, globally, as well as characterize storage change in lakes and ocean surface dynamics with a spatial resolution ranging from 10 - 70 m, with temporal revisits on the order of a week. A discharge algorithm has been formulated to solve the inverse problem of characterizing river bathymetry and the roughness coefficient from SWOT observations. The algorithm uses a Bayesian Markov Chain estimation approach, treats rivers as sets of interconnected reaches (typically 5 km - 10 km in length), and produces best estimates of river bathymetry, roughness coefficient, and discharge, given SWOT observables. AirSWOT (the airborne version of SWOT) consists of a radar interferometer similar to SWOT, but mounted aboard an aircraft. AirSWOT spatial resolution will range from 1 - 35 m. In early 2013, AirSWOT will perform several flights over the Sacramento River, capturing river height, width, and slope at several different flow conditions. The Sacramento River presents an excellent target given that the river includes some stretches heavily affected by management (diversions, bypasses, etc.). AirSWOT measurements will be used to validate SWOT observation performance, but are also a unique opportunity for testing and demonstrating the capabilities and limitations of the discharge algorithm. This study uses HEC-RAS simulations of the Sacramento River to first, characterize expected discharge algorithm accuracy on the Sacramento River, and second to explore the required AirSWOT measurements needed to perform a successful inverse with the discharge algorithm. We focus on the sensitivity of the algorithm accuracy to the uncertainty in AirSWOT measurements of height, width, and slope.

  5. A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests

    NASA Astrophysics Data System (ADS)

    Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars

    2015-09-01

    The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is bigger than the viscous relaxation time. The small time step required for stability (< 2 Kyr) makes this type of model computationally intensive, so there remains a need to develop strategies that mitigate the stability problem by making larger (at least ∼10 Kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with both Eulerian or Lagrangian grids. It includes α and β parameters to respectively control both the vertical and the horizontal slope-dependent penalization terms, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate

  6. Fast mode decision algorithm in MPEG-2 to H.264/AVC transcoding including group of picture structure conversion

    NASA Astrophysics Data System (ADS)

    Lee, Kangjun; Jeon, Gwanggil; Jeong, Jechang

    2009-05-01

    The H.264/AVC baseline profile is used in many applications, including digital multimedia broadcasting, Internet protocol television, and storage devices, while the MPEG-2 main profile is widely used in applications, such as high-definition television and digital versatile disks. The MPEG-2 main profile supports B pictures for bidirectional motion prediction. Therefore, transcoding the MPEG-2 main profile to the H.264/AVC baseline is necessary for universal multimedia access. In the cascaded pixel domain transcoder architecture, the calculation of the rate distortion cost as part of the mode decision process in the H.264/AVC encoder requires extremely complex computations. To reduce the complexity inherent in the implementation of a real-time transcoder, we propose a fast mode decision algorithm based on complexity information from the reference region that is used for motion compensation. In this study, an adaptive mode decision process was used based on the modes assigned to the reference regions. Simulation results indicated that a significant reduction in complexity was achieved without significant degradation of video quality.

  7. Testing Algorithmic Skills in Traditional and Non-Traditional Programming Environments

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Máth, János; Abari, Kálmán

    2015-01-01

    The Testing Algorithmic and Application Skills (TAaAS) project was launched in the 2011/2012 academic year to test first year students of Informatics, focusing on their algorithmic skills in traditional and non-traditional programming environments, and on the transference of their knowledge of Informatics from secondary to tertiary education. The…

  8. Development of Online Cognitive and Algorithm Tests as Assessment Tools in Introductory Computer Science Courses

    ERIC Educational Resources Information Center

    Avancena, Aimee Theresa; Nishihara, Akinori; Vergara, John Paul

    2012-01-01

    This paper presents the online cognitive and algorithm tests, which were developed in order to determine if certain cognitive factors and fundamental algorithms correlate with the performance of students in their introductory computer science course. The tests were implemented among Management Information Systems majors from the Philippines and…

  9. LPT. Plot plan and site layout. Includes shield test pool/EBOR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LPT. Plot plan and site layout. Includes shield test pool/EBOR facility (TAN-645 and -646), low power test building (TAN-640 and -641), water storage tanks, guard house (TAN-642), pump house (TAN-644), driveways, well, chlorination building (TAN-643), septic system. Ralph M. Parsons 1229-12 ANP/GE-7-102. November 1956. Approved by INEEL Classification Office for public release. INEEL index code no. 038-0102-00-693-107261 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  10. An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.

    PubMed

    Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D

    2016-05-01

    Over the past decades, many studies have been published on the extraction of non-invasive foetal electrocardiogram (NI-FECG) from abdominal recordings. Most of these contributions claim to obtain excellent results in detecting foetal QRS (FQRS) complexes in terms of location. A small subset of authors have investigated the extraction of morphological features from the NI-FECG. However, due to the shortage of available public databases, the large variety of performance measures employed and the lack of open-source reference algorithms, most contributions cannot be meaningfully assessed. This article attempts to address these issues by presenting a standardised methodology for stress testing NI-FECG algorithms, including absolute data, as well as extraction and evaluation routines. To that end, a large database of realistic artificial signals was created, totaling 145.8 h of multichannel data and over one million FQRS complexes. An important characteristic of this dataset is the inclusion of several non-stationary events (e.g. foetal movements, uterine contractions and heart rate fluctuations) that are critical for evaluating extraction routines. To demonstrate our testing methodology, three classes of NI-FECG extraction algorithms were evaluated: blind source separation (BSS), template subtraction (TS) and adaptive methods (AM). Experiments were conducted to benchmark the performance of eight NI-FECG extraction algorithms on the artificial database focusing on: FQRS detection and morphological analysis (foetal QT and T/QRS ratio). The overall median FQRS detection accuracies (i.e. considering all non-stationary events) for the best performing methods in each group were 99.9% for BSS, 97.9% for AM and 96.0% for TS. Both FQRS detections and morphological parameters were shown to heavily depend on the extraction techniques and signal-to-noise ratio. Particularly, it is shown that their evaluation in the source domain, obtained after using a BSS technique, should be
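
    Location-based FQRS scoring of the kind used in such benchmarks can be sketched as below: detections are matched to reference annotations within a tolerance window and summarized as sensitivity, positive predictive value, and F1. The tolerance value is an assumption, and this is not the article's evaluation code.

```python
import numpy as np

def fqrs_detection_stats(detected_s, reference_s, tolerance_s=0.05):
    """Match detected foetal QRS times against reference annotations within a
    +/- tolerance window and report sensitivity, positive predictive value,
    and F1, the style of location-based scoring used in such benchmarks."""
    detected = sorted(detected_s)
    reference = sorted(reference_s)
    used = [False] * len(detected)
    true_pos = 0
    for r in reference:
        for i, d in enumerate(detected):
            if not used[i] and abs(d - r) <= tolerance_s:
                used[i] = True
                true_pos += 1
                break
    sens = true_pos / len(reference) if reference else 0.0
    ppv = true_pos / len(detected) if detected else 0.0
    f1 = 2.0 * sens * ppv / (sens + ppv) if (sens + ppv) else 0.0
    return sens, ppv, f1

reference = np.arange(0.0, 10.0, 0.43)      # ~140 bpm foetal rhythm (illustrative)
detected = reference + 0.01                 # detections offset by 10 ms
print(fqrs_detection_stats(detected.tolist(), reference.tolist()))
```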

  11. Perceptual Tests of an Algorithm for Musical Key-Finding

    ERIC Educational Resources Information Center

    Schmuckler, Mark A.; Tomovski, Robert

    2005-01-01

    Perceiving the tonality of a musical passage is a fundamental aspect of the experience of hearing music. Models for determining tonality have thus occupied a central place in music cognition research. Three experiments investigated 1 well-known model of tonal determination: the Krumhansl-Schmuckler key-finding algorithm. In Experiment 1,…
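
    The core of the Krumhansl-Schmuckler algorithm is a correlation of the passage's pitch-class duration profile with the 24 rotated major and minor key profiles. The sketch below uses the commonly cited Krumhansl-Kessler probe-tone values, reproduced from memory and therefore to be treated as approximate.

```python
import numpy as np

# Krumhansl-Kessler probe-tone profiles (commonly cited values, quoted from
# memory here; treat them as approximate).
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def key_find(pc_durations):
    """Krumhansl-Schmuckler-style key finding: correlate a 12-element pitch-class
    duration profile with all 24 rotated key profiles and return the best match."""
    pc = np.asarray(pc_durations, dtype=float)
    best = None
    for tonic in range(12):
        for mode, profile in (("major", MAJOR), ("minor", MINOR)):
            r = np.corrcoef(pc, np.roll(profile, tonic))[0, 1]
            if best is None or r > best[0]:
                best = (r, f"{NOTE_NAMES[tonic]} {mode}")
    return best

# Toy input emphasising the C major triad and scale; expected best match: C major.
durations = [4, 0, 1, 0, 3, 1, 0, 3, 0, 1, 0, 1]
print(key_find(durations))
```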

  12. Small sample training and test selection method for optimized anomaly detection algorithms in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.

    2012-01-01

    There are numerous anomaly detection algorithms proposed for hyperspectral imagery. Robust parameter design (RPD) techniques provide an avenue to select robust settings capable of operating consistently across a large variety of image scenes. Many researchers in this area are faced with a paucity of data. Unfortunately, there are no data splitting methods for model validation of datasets with small sample sizes. Typically, training and test sets of hyperspectral images are chosen randomly. Previous research has developed a framework for optimizing anomaly detection in HSI by considering specific image characteristics as noise variables within the context of RPD; these characteristics include the Fisher score, the ratio of target pixels, and the number of clusters. We have developed a method for selecting hyperspectral image training and test subsets that yields consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but still provide improvements over random training and test subset assignments by maximizing the volume and average distance between image noise characteristics. The small sample training and test selection method is contrasted with randomly selected training sets as well as training sets chosen from the CADEX and DUPLEX algorithms for the well-known Reed-Xiaoli anomaly detector.
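
    The Reed-Xiaoli detector named above scores each pixel by its Mahalanobis distance from the background statistics; a minimal global-background sketch follows (the study's RPD framework and subset-selection method are not reproduced here).

```python
import numpy as np

def rx_scores(cube):
    """Global Reed-Xiaoli (RX) anomaly scores for a hyperspectral cube of shape
    (rows, cols, bands): the Mahalanobis distance of each pixel spectrum from
    the scene mean and covariance. Higher scores indicate spectral anomalies."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(bands)   # light regularisation
    cov_inv = np.linalg.inv(cov)
    centred = pixels - mu
    scores = np.einsum("ij,jk,ik->i", centred, cov_inv, centred)
    return scores.reshape(rows, cols)

# Synthetic cube with one injected anomalous pixel
rng = np.random.default_rng(4)
cube = rng.normal(0.0, 1.0, size=(20, 20, 10))
cube[5, 7] += 6.0                                   # spectral anomaly at (5, 7)
scores = rx_scores(cube)
print(np.unravel_index(np.argmax(scores), scores.shape))
```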

  13. Computational Analysis of Arc-Jet Wedge Tests Including Ablation and Shape Change

    NASA Technical Reports Server (NTRS)

    Goekcen, Tahir; Chen, Yih-Kanq; Skokova, Kristina A.; Milos, Frank S.

    2010-01-01

    Coupled fluid-material response analyses of arc-jet wedge ablation tests conducted in a NASA Ames arc-jet facility are considered. These tests were conducted using blunt wedge models placed in a free jet downstream of the 6-inch diameter conical nozzle in the Ames 60-MW Interaction Heating Facility. The fluid analysis includes computational Navier-Stokes simulations of the nonequilibrium flowfield in the facility nozzle and test box as well as the flowfield over the models. The material response analysis includes simulation of two-dimensional surface ablation and internal heat conduction, thermal decomposition, and pyrolysis gas flow. For ablating test articles undergoing shape change, the material response and fluid analyses are coupled in order to calculate the time dependent surface heating and pressure distributions that result from shape change. The ablating material used in these arc-jet tests was Phenolic Impregnated Carbon Ablator. Effects of the test article shape change on fluid and material response simulations are demonstrated, and computational predictions of surface recession, shape change, and in-depth temperatures are compared with the experimental measurements.

  14. Small-scale rotor test rig capabilities for testing vibration alleviation algorithms

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.; Leyland, Jane Anne

    1987-01-01

    A test was conducted to assess the capabilities of a small-scale rotor test rig for implementing higher harmonic control and stability augmentation algorithms. The test rig uses three high-speed actuators to excite the swashplate over a range of frequencies. The actuator position signals were monitored to measure the response amplitudes at several frequencies. The ratio of response amplitude to excitation amplitude was plotted as a function of frequency. In addition to actuator performance, acceleration from six accelerometers placed on the test rig was monitored to determine whether a linear relationship exists between the harmonics of the N/Rev control input and the harmonics of the measured vibratory response. The least square error (LSE) identification technique was used to identify local and global transfer matrices for two rotor speeds at two batch sizes each. It was determined that the multicyclic control computer system interfaced very well with the rotor system and kept track of the input accelerometer signals and their phase angles. However, the current high-speed actuators were found to be incapable of providing sufficient control authority at the higher excitation frequencies.
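
    The least-square-error identification step described above amounts to a linear least-squares fit from harmonic control inputs to measured vibration harmonics across a batch of revolutions; a sketch with illustrative dimensions follows.

```python
import numpy as np

def identify_transfer_matrix(controls, responses):
    """Least-square-error identification of the linear map between harmonic
    control inputs and measured vibration harmonics, responses ~ controls @ T,
    where each row is one batch of data."""
    T, *_ = np.linalg.lstsq(controls, responses, rcond=None)
    return T

# Synthetic check: recover a known 4-input / 6-output transfer matrix from noisy batches.
rng = np.random.default_rng(5)
T_true = rng.normal(size=(4, 6))
controls = rng.normal(size=(40, 4))                 # 40 batches of N/rev control harmonics
responses = controls @ T_true + rng.normal(0.0, 0.01, size=(40, 6))
T_est = identify_transfer_matrix(controls, responses)
print(np.allclose(T_est, T_true, atol=0.05))
```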

  15. Comparison of Marketed Cosmetic Products Constituents with the Antigens Included in Cosmetic-related Patch Test

    PubMed Central

    Cheong, Seung Hyun; Choi, You Won; Myung, Ki Bum

    2010-01-01

    Background Currently, the cosmetic series (Chemotechnique Diagnostics, Sweden) is the most widely used cosmetic-related patch test in Korea. However, no studies have been conducted on how accurately it reflects the constituents of the cosmetics in Korea. Objective We surveyed the constituents of various cosmetics and compared them with the cosmetic series, to investigate whether it accurately detects allergic contact dermatitis caused by cosmetics sold in Korea. Methods Cosmetics were classified into 11 categories and the survey was conducted on the constituents of 55 cosmetics, with 5 cosmetics in each category. The surveyed constituents were classified by chemical function and compared with the antigens of the cosmetic series. Results 155 constituents were found in the 55 cosmetics, and 74 (47.7%) of the constituents were antigens. Among them, only 20 constituents (27.0%) were included in the cosmetic series. A significant number of constituents, such as fragrances, vehicles and surfactants, were not included. Only 41.7% of the antigens in the cosmetic series were found to be in the cosmetics sampled. Conclusion Constituents that are not included in the patch test but possess antigenicity are widely used in cosmetics. Therefore, the patch test should be modified to reflect ingredients in the marketed products that may stimulate allergies. PMID:20711261

  16. Comparison of two extractable nuclear antigen testing algorithms: ALBIA versus ELISA/line immunoassay.

    PubMed

    Chandratilleke, Dinusha; Silvestrini, Roger; Culican, Sue; Campbell, David; Byth-Wilson, Karen; Swaminathan, Sanjay; Lin, Ming-Wei

    2016-08-01

    Extractable nuclear antigen (ENA) antibody testing is often requested in patients with suspected connective tissue diseases. Most laboratories in Australia use a two-step process involving a high-sensitivity screening assay followed by a high-specificity confirmation test. Multiplexing technology with Addressable Laser Bead Immunoassay (e.g., FIDIS) offers simultaneous detection of multiple antibody specificities, allowing single-step screening and confirmation. We compared our current diagnostic laboratory testing algorithm [Organtec ELISA screen / Euroimmun line immunoassay (LIA) confirmation] and the FIDIS Connective Profile. A total of 529 samples (443 consecutive + 86 with known autoantibody positivity) were run through both algorithms, and 479 samples (90.5%) were concordant. The same autoantibody profile was detected in 100 samples (18.9%) and 379 were concordant negative samples (71.6%). The 50 discordant samples (9.5%) were subdivided into 'likely FIDIS or current method correct' or 'unresolved' based on ancillary data. 'Unresolved' samples (n = 25) were subclassified into 'potentially' versus 'potentially not' clinically significant based on the change to clinical interpretation. Only nine samples (1.7%) were deemed to be 'potentially clinically significant'. Overall, we found that the FIDIS Connective Profile ENA kit is non-inferior to the current ELISA screen/LIA characterisation. Reagent and capital costs may be limiting factors in using the FIDIS, but potential benefits include a single-step analysis and simultaneous detection of dsDNA antibodies.

  17. Photo Library of the Nevada Site Office (Includes historical archive of nuclear testing images)

    DOE Data Explorer

    The Nevada Site Office makes available publicly released photos from their archive that includes photos from both current programs and historical activities. The historical collections include atmospheric and underground nuclear testing photos and photos of other events and people related to the Nevada Test Site. Current collections are focused on homeland security, stockpile stewardship, and environmental management and restoration. See also the Historical Film Library at http://www.nv.doe.gov/library/films/testfilms.aspx and the Current Film Library at http://www.nv.doe.gov/library/films/current.aspx. Current films can be viewed online, but only short clips of the historical films are viewable. They can be ordered via an online request form for a very small shipping and handling fee.

  18. Simple and Effective Algorithms: Computer-Adaptive Testing.

    ERIC Educational Resources Information Center

    Linacre, John Michael

    Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…

  19. An Algorithm for Real-Time Optimal Photocurrent Estimation Including Transient Detection for Resource-Constrained Imaging Applications

    NASA Astrophysics Data System (ADS)

    Zemcov, Michael; Crill, Brendan; Ryan, Matthew; Staniszewski, Zak

    2016-06-01

    Mega-pixel charge-integrating detectors are common in near-IR imaging applications. Optimal signal-to-noise ratio estimates of the photocurrents, which are particularly important in the low-signal regime, are produced by fitting linear models to sequential reads of the charge on the detector. Algorithms that solve this problem have a long history, but can be computationally intensive. Furthermore, the cosmic ray background is appreciable for these detectors in Earth orbit, particularly above the Earth’s magnetic poles and the South Atlantic Anomaly, and on-board reduction routines must be capable of flagging affected pixels. In this paper, we present an algorithm that generates optimal photocurrent estimates and flags random transient charge generation from cosmic rays, and is specifically designed to fit on a computationally restricted platform. We take as a case study the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx), a NASA Small Explorer astrophysics experiment concept, and show that the algorithm can easily fit in the resource-constrained environment of such a restricted platform. Detailed simulations of the input astrophysical signals and detector array performance are used to characterize the fitting routines in the presence of complex noise properties and charge transients. We use both Hubble Space Telescope Wide Field Camera-3 and Wide-field Infrared Survey Explorer to develop an empirical understanding of the susceptibility of near-IR detectors in low earth orbit and build a model for realistic cosmic ray energy spectra and rates. We show that our algorithm generates an unbiased estimate of the true photocurrent that is identical to that from a standard line fitting package, and characterize the rate, energy, and timing of both detected and undetected transient events. This algorithm has significant potential for imaging with charge-integrating detectors in astrophysics, earth science, and remote
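
    The basic operations described above, a least-squares slope fit to sequential non-destructive reads plus a residual-based transient flag, can be sketched as follows; this is only the underlying idea, not the SPHEREx flight algorithm.

```python
import numpy as np

def fit_photocurrent(reads, dt=1.0, jump_sigma=5.0):
    """Least-squares slope of sequential charge reads (the photocurrent, in
    counts per read interval) plus a crude transient check: flag the ramp when
    any first difference deviates from the median step by more than jump_sigma
    robust standard deviations."""
    reads = np.asarray(reads, dtype=float)
    t = np.arange(reads.size) * dt
    slope, _intercept = np.polyfit(t, reads, 1)     # simple line fit to the ramp
    steps = np.diff(reads)
    dev = np.abs(steps - np.median(steps))
    mad = np.median(dev) + 1e-12
    transient = bool(np.any(dev > jump_sigma * 1.4826 * mad))
    return slope, transient

# A 20-read ramp at 3 counts/read with a charge jump (cosmic-ray-like) at read 12
rng = np.random.default_rng(6)
ramp = 3.0 * np.arange(20) + rng.normal(0.0, 0.5, 20)
ramp[12:] += 40.0
print(fit_photocurrent(ramp))
```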

  20. Assessing the Reliability of Computer Adaptive Testing Branching Algorithms Using HyperCAT.

    ERIC Educational Resources Information Center

    Shermis, Mark D.; And Others

    The reliability of four branching algorithms commonly used in computer adaptive testing (CAT) was examined. These algorithms were: (1) maximum likelihood (MLE); (2) Bayesian; (3) modal Bayesian; and (4) crossover. Sixty-eight undergraduate college students were randomly assigned to one of the four conditions using the HyperCard-based CAT program,…

  1. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…

  2. Spatially Invariant Vector Quantization: A pattern matching algorithm for multiple classes of image subject matter including pathology

    PubMed Central

    Hipp, Jason D.; Cheng, Jerome Y.; Toner, Mehmet; Tompkins, Ronald G.; Balis, Ulysses J.

    2011-01-01

    Introduction: Historically, effective clinical utilization of image analysis and pattern recognition algorithms in pathology has been hampered by two critical limitations: 1) the availability of digital whole slide imagery data sets and 2) a relative domain knowledge deficit in terms of application of such algorithms, on the part of practicing pathologists. With the advent of the recent and rapid adoption of whole slide imaging solutions, the former limitation has been largely resolved. However, with the expectation that it is unlikely for the general cohort of contemporary pathologists to gain advanced image analysis skills in the short term, the latter problem remains, thus underscoring the need for a class of algorithm that has the concurrent properties of image domain (or organ system) independence and extreme ease of use, without the need for specialized training or expertise. Results: In this report, we present a novel, general case pattern recognition algorithm, Spatially Invariant Vector Quantization (SIVQ), that overcomes the aforementioned knowledge deficit. Fundamentally based on conventional Vector Quantization (VQ) pattern recognition approaches, SIVQ gains its superior performance and essentially zero-training workflow model from its use of ring vectors, which exhibit continuous symmetry, as opposed to square or rectangular vectors, which do not. By use of the stochastic matching properties inherent in continuous symmetry, a single ring vector can exhibit as much as a millionfold improvement in matching possibilities, as opposed to conventional VQ vectors. SIVQ was utilized to demonstrate rapid and highly precise pattern recognition capability in a broad range of gross and microscopic use-case settings. Conclusion: With the performance of SIVQ observed thus far, we find evidence that indeed there exist classes of image analysis/pattern recognition algorithms suitable for deployment in settings where pathologists alone can effectively incorporate their

  3. More Than Just Accuracy: A Novel Method to Incorporate Multiple Test Attributes in Evaluating Diagnostic Tests Including Point of Care Tests

    PubMed Central

    Weigl, Bernhard; Fitzpatrick, Annette; Ide, Nicole

    2016-01-01

    Current frameworks for evaluating diagnostic tests are constrained by a focus on diagnostic accuracy, and assume that all aspects of the testing process and test attributes are discrete and equally important. Determining the balance between the benefits and harms associated with new or existing tests has been overlooked. Yet, this is critically important information for stakeholders involved in developing, testing, and implementing tests. This is particularly important for point of care tests (POCTs) where tradeoffs exist between numerous aspects of the testing process and test attributes. We developed a new model that multiple stakeholders (e.g., clinicians, patients, researchers, test developers, industry, regulators, and health care funders) can use to visualize the multiple attributes of tests, the interactions that occur between these attributes, and their impacts on health outcomes. We use multiple examples to illustrate interactions between test attributes (test availability, test experience, and test results) and outcomes, including several POCTs. The model could be used to prioritize research and development efforts, and inform regulatory submissions for new diagnostics. It could potentially provide a way to incorporate the relative weights that various subgroups or clinical settings might place on different test attributes. Our model provides a novel way that multiple stakeholders can use to visualize test attributes, their interactions, and impacts on individual and population outcomes. We anticipate that this will facilitate more informed decision making around diagnostic tests. PMID:27574576

  4. More Than Just Accuracy: A Novel Method to Incorporate Multiple Test Attributes in Evaluating Diagnostic Tests Including Point of Care Tests.

    PubMed

    Thompson, Matthew; Weigl, Bernhard; Fitzpatrick, Annette; Ide, Nicole

    2016-01-01

    Current frameworks for evaluating diagnostic tests are constrained by a focus on diagnostic accuracy, and assume that all aspects of the testing process and test attributes are discrete and equally important. Determining the balance between the benefits and harms associated with new or existing tests has been overlooked. Yet, this is critically important information for stakeholders involved in developing, testing, and implementing tests. This is particularly important for point of care tests (POCTs) where tradeoffs exist between numerous aspects of the testing process and test attributes. We developed a new model that multiple stakeholders (e.g., clinicians, patients, researchers, test developers, industry, regulators, and health care funders) can use to visualize the multiple attributes of tests, the interactions that occur between these attributes, and their impacts on health outcomes. We use multiple examples to illustrate interactions between test attributes (test availability, test experience, and test results) and outcomes, including several POCTs. The model could be used to prioritize research and development efforts, and inform regulatory submissions for new diagnostics. It could potentially provide a way to incorporate the relative weights that various subgroups or clinical settings might place on different test attributes. Our model provides a novel way that multiple stakeholders can use to visualize test attributes, their interactions, and impacts on individual and population outcomes. We anticipate that this will facilitate more informed decision making around diagnostic tests. PMID:27574576

  5. More Than Just Accuracy: A Novel Method to Incorporate Multiple Test Attributes in Evaluating Diagnostic Tests Including Point of Care Tests.

    PubMed

    Thompson, Matthew; Weigl, Bernhard; Fitzpatrick, Annette; Ide, Nicole

    2016-01-01

    Current frameworks for evaluating diagnostic tests are constrained by a focus on diagnostic accuracy, and assume that all aspects of the testing process and test attributes are discrete and equally important. Determining the balance between the benefits and harms associated with new or existing tests has been overlooked. Yet, this is critically important information for stakeholders involved in developing, testing, and implementing tests. This is particularly important for point of care tests (POCTs) where tradeoffs exist between numerous aspects of the testing process and test attributes. We developed a new model that multiple stakeholders (e.g., clinicians, patients, researchers, test developers, industry, regulators, and health care funders) can use to visualize the multiple attributes of tests, the interactions that occur between these attributes, and their impacts on health outcomes. We use multiple examples to illustrate interactions between test attributes (test availability, test experience, and test results) and outcomes, including several POCTs. The model could be used to prioritize research and development efforts, and inform regulatory submissions for new diagnostics. It could potentially provide a way to incorporate the relative weights that various subgroups or clinical settings might place on different test attributes. Our model provides a novel way that multiple stakeholders can use to visualize test attributes, their interactions, and impacts on individual and population outcomes. We anticipate that this will facilitate more informed decision making around diagnostic tests.

  6. Low voltage 30-cm ion thruster development. [including performance and structural integrity (vibration) tests

    NASA Technical Reports Server (NTRS)

    King, H. J.

    1974-01-01

    The basic goal was to advance the development status of the 30-cm electron bombardment ion thruster from a laboratory model to a flight-type engineering model (EM) thruster. This advancement included the more conventional aspects of mechanical design and testing for launch loads, weight reduction, fabrication process development, reliability and quality assurance, and interface definition, as well as a relatively significant improvement in thruster total efficiency. The achievement of this goal was demonstrated by the successful completion of a series of performance and structural integrity (vibration) tests. In the course of the program, essentially every part and feature of the original 30-cm thruster was critically evaluated. These evaluations led to new or improved designs for the ion optical system, discharge chamber, cathode isolator vaporizer assembly, main isolator vaporizer assembly, neutralizer assembly, packaging for thermal control, electrical terminations and structure.

  7. A method for planar biaxial mechanical testing that includes in-plane shear.

    PubMed

    Sacks, M S

    1999-10-01

    A limitation in virtually all planar biaxial studies of soft tissues has been the inability to include the effects of in-plane shear. This is because current mechanical testing devices do not induce a state of in-plane shear, owing to the added cost and complexity. In the current study, a straightforward method is presented for planar biaxial testing that induces a combined state of in-plane shear and normal strains. The method relies on rotation of the test specimen's material axes with respect to the device axes and on rotating carriages to allow the specimen to undergo in-plane shear freely. To demonstrate the method, five glutaraldehyde-treated bovine pericardium specimens were prepared with their preferred fiber directions (defining the material axes) oriented at 45 deg to the device axes to induce a maximum shear state. The test protocol included a wide range of biaxial strain states, and the resulting biaxial data were re-expressed in the material-axes coordinate system. These data were then fit to the following strain energy function W: [equation: see text] where E'ij is the Green's strain tensor in the material axes coordinate system and c and Ai are constants. While W was able to fit the data very well, the constants A5 and A6 were found not to contribute significantly to the fit and were considered unnecessary to model the shear strain response. In conclusion, while not able to control the amount of shear strain independently or induce a state of pure shear, the method presented readily produces a state of simultaneous in-plane shear and normal strains. Further, the method is very general and can be applied to any anisotropic planar tissue that has identifiable material axes.

  8. A Test Generation Framework for Distributed Fault-Tolerant Algorithms

    NASA Technical Reports Server (NTRS)

    Goodloe, Alwyn; Bushnell, David; Miner, Paul; Pasareanu, Corina S.

    2009-01-01

    Heavyweight formal methods such as theorem proving have been successfully applied to the analysis of safety critical fault-tolerant systems. Typically, the models and proofs performed during such analysis do not inform the testing process of actual implementations. We propose a framework for generating test vectors from specifications written in the Prototype Verification System (PVS). The methodology uses a translator to produce a Java prototype from a PVS specification. Symbolic (Java) PathFinder is then employed to generate a collection of test cases. A small example is employed to illustrate how the framework can be used in practice.

  9. Statistical algorithm to test the presence of correlation between time series with age/dating uncertainties.

    NASA Astrophysics Data System (ADS)

    Haam, E. K.; Huybers, P.

    2008-12-01

    To understand the Earth's climate, we must understand the inter-relations between its specific geographical areas which, in the case of paleoclimatology, can be profitably undertaken from an empirical perspective. However, assessment of the inter-relation between separate paleoclimate records is inevitably hindered by uncertainties in the absolute and relative age/dating of these climate records, because the correlation between two paleoclimate records with age uncertainty can change dramatically when variations of the age are allowed within the uncertainty limit. Through rigorous statistical analysis of the available proxy data, we can hope to gain better insight into the nature and scope of the mechanisms governing their variability. We propose a statistical algorithm to test for the presence of correlation between two paleoclimate time series with age/dating uncertainties. Previous works in this area have focused on searching for the maximum similarity out of all possible realizations of the series, either heuristically (visual wiggle matching) or through more quantitative methods (e.g., cross-correlation maximizer, dynamic programming). In contrast, this algorithm seeks to determine the statistical significance of the maximum covariance. The probability of obtaining a certain maximum covariance from purely random events can provide us with an objective standard for real correlation and it is assessed using the theory of extreme order statistics, as a multivariate normal integral. Since there is no known closed-form solution for a multivariate normal integral, a numerical method is used. We apply this algorithm to test for the correlation of the Dansgaard-Oeschger variability observed during MIS3 in the GISPII ice core and millennial variability recorded at sites including Botuvera Cave in Brazil, Hulu Cave in China, Eastern Indonesia, the Arabian Sea, Villa Cave in Europe, New Zealand and the Santa Barbara basin. Results of the analysis are presented as a map of the
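
    The significance test described above asks how likely a given maximum covariance is when the records are actually unrelated. As a rough illustration of that idea only (not the authors' extreme-order-statistics method, which evaluates a multivariate normal integral numerically), the hedged Python sketch below estimates a null distribution for the maximum covariance over admissible age shifts by Monte Carlo; the shift window, permutation surrogates, and all variable names are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def max_cov(x, y, max_shift):
          # Largest covariance of x against y over integer shifts within +/- max_shift.
          n = len(x)
          covs = []
          for s in range(-max_shift, max_shift + 1):
              xs = x[max(0, s):n + min(0, s)]
              ys = y[max(0, -s):n + min(0, -s)]
              covs.append(np.mean((xs - xs.mean()) * (ys - ys.mean())))
          return max(covs)

      def max_cov_p_value(x, y, max_shift=10, n_sim=2000):
          # Fraction of shuffled surrogates whose maximum covariance reaches the observed
          # one. Permutation surrogates ignore autocorrelation, which a serious
          # paleoclimate application would need to preserve.
          observed = max_cov(x, y, max_shift)
          null = [max_cov(rng.permutation(x), y, max_shift) for _ in range(n_sim)]
          return float(np.mean(np.array(null) >= observed))

      x = rng.standard_normal(200)
      y = np.roll(x, 3) + 0.5 * rng.standard_normal(200)   # toy pair with a real lagged link
      print(max_cov_p_value(x, y))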

  10. Flight test results of failure detection and isolation algorithms for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Motyka, P. R.; Bailey, M. L.

    1990-01-01

    Flight test results for two sensor fault-tolerant algorithms developed for a redundant strapdown inertial measurement unit are presented. The inertial measurement unit (IMU) consists of four two-degrees-of-freedom gyros and accelerometers mounted on the faces of a semi-octahedron. Fault tolerance is provided by edge vector test and generalized likelihood test algorithms, each of which can provide dual fail-operational capability for the IMU. To detect the wide range of failure magnitudes in inertial sensors, which provide flight crucial information for flight control and navigation, failure detection and isolation are developed in terms of a multi level structure. Threshold compensation techniques, developed to enhance the sensitivity of the failure detection process to navigation level failures, are presented. Four flight tests were conducted in a commercial transport-type environment to compare and determine the performance of the failure detection and isolation methods. Dual flight processors enabled concurrent tests for the algorithms. Failure signals such as hard-over, null, or bias shift, were added to the sensor outputs as simple or multiple failures during the flights. Both algorithms provided timely detection and isolation of flight control level failures. The generalized likelihood test algorithm provided more timely detection of low-level sensor failures, but it produced one false isolation. Both algorithms demonstrated the capability to provide dual fail-operational performance for the skewed array of inertial sensors.

  11. Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

    2008-01-01

    In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…
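
    Although the record above is truncated, the optimization it describes, selecting items so that a form's test information function (TIF) matches a target, can be illustrated with a toy genetic algorithm. The 2PL item bank, GA settings, and fitness function below are hypothetical stand-ins, not the authors' implementation.

      import numpy as np

      rng = np.random.default_rng(1)
      N_BANK, FORM_LEN = 200, 30
      a = rng.uniform(0.5, 2.0, N_BANK)            # 2PL discrimination parameters
      b = rng.normal(0.0, 1.0, N_BANK)             # 2PL difficulty parameters
      theta = np.linspace(-3, 3, 13)               # ability grid

      def item_info(th, a_i, b_i):
          p = 1.0 / (1.0 + np.exp(-a_i * (th - b_i)))
          return a_i**2 * p * (1.0 - p)

      INFO = np.array([item_info(theta, a[i], b[i]) for i in range(N_BANK)])
      target_tif = INFO[:FORM_LEN].sum(axis=0)     # pretend a reference form sets the target

      def fitness(form):
          # Negative absolute deviation between the form's TIF and the target TIF.
          return -np.abs(INFO[form].sum(axis=0) - target_tif).sum()

      def evolve(pop_size=60, generations=200):
          pop = [rng.choice(N_BANK, FORM_LEN, replace=False) for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)
              survivors = pop[:pop_size // 2]
              children = []
              while len(survivors) + len(children) < pop_size:
                  p1, p2 = rng.choice(len(survivors), 2, replace=False)
                  pool = np.union1d(survivors[p1], survivors[p2])
                  child = rng.choice(pool, FORM_LEN, replace=False)
                  if rng.random() < 0.2:                       # mutation: swap in a new item
                      new_item = rng.integers(N_BANK)
                      if new_item not in child:
                          child[rng.integers(FORM_LEN)] = new_item
                  children.append(child)
              pop = survivors + children
          return max(pop, key=fitness)

      best_form = evolve()
      print(sorted(best_form.tolist()), fitness(best_form))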

  12. The Evaluation of a Rapid In Situ HIV Confirmation Test in a Programme with a High Failure Rate of the WHO HIV Two-Test Diagnostic Algorithm

    PubMed Central

    Klarkowski, Derryck B.; Wazome, Joseph M.; Lokuge, Kamalini M.; Shanks, Leslie; Mills, Clair F.; O'Brien, Daniel P.

    2009-01-01

    Background Concerns about false-positive HIV results led to a review of testing procedures used in a Médecins Sans Frontières (MSF) HIV programme in Bukavu, eastern Democratic Republic of Congo. In addition to the WHO HIV rapid diagnostic test algorithm (RDT) (two positive RDTs alone for HIV diagnosis) used in voluntary counselling and testing (VCT) sites, we evaluated in situ a practical field-based confirmation test against western blot (WB). In addition, we aimed to determine the false-positive rate of the WHO two-test algorithm compared with our adapted protocol including confirmation testing, and whether weakly reactive compared with strongly reactive rapid test results were more likely to be false positives. Methodology/Principal Findings 2864 clients presenting to MSF VCT centres in Bukavu during January to May 2006 were tested using Determine HIV-1/2® and UniGold HIV® rapid tests in parallel by nurse counsellors. Plasma samples on 229 clients confirmed as double RDT positive by laboratory retesting were further tested using both WB and the Orgenics Immunocomb Combfirm® HIV confirmation test (OIC-HIV). Of these, 24 samples were negative or indeterminate by WB, representing a false-positive rate of the WHO two-test algorithm of 10.5% (95% CI 6.6–15.2%). Seventeen of the 229 samples were weakly positive on rapid testing and all were negative or indeterminate by WB. The false-positive rate fell to 3.3% (95% CI 1.3–6.7%) when only strong-positive rapid test results were considered. Agreement between OIC-HIV and WB was 99.1% (95% CI 96.9–99.9%) with no false OIC-HIV positives if stringent criteria for positive OIC-HIV diagnoses were used. Conclusions The WHO HIV two-test diagnostic algorithm produced an unacceptably high level of false-positive diagnoses in our setting, especially if results were weakly positive. The most probable causes of the false-positive results were serological cross-reactivity or non-specific immune reactivity. Our findings show that the OIC
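
    The headline figure above, a 10.5% false-positive rate with a 95% CI of 6.6–15.2%, follows from 24 of 229 double-RDT-positive samples being WB negative or indeterminate. The sketch below shows one common way such an interval can be obtained (an exact Clopper-Pearson binomial interval); whether the authors used this exact method is an assumption.

      from scipy.stats import beta

      def exact_ci(x, n, level=0.95):
          # Clopper-Pearson exact binomial confidence interval for x successes in n trials.
          alpha = 1.0 - level
          lower = beta.ppf(alpha / 2.0, x, n - x + 1) if x > 0 else 0.0
          upper = beta.ppf(1.0 - alpha / 2.0, x + 1, n - x) if x < n else 1.0
          return lower, upper

      x, n = 24, 229                 # WB-negative/indeterminate out of double-RDT-positive
      lo, hi = exact_ci(x, n)
      print(f"false-positive rate = {x / n:.1%}, 95% CI {lo:.1%} to {hi:.1%}")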

  13. A Review of Scoring Algorithms for Ability and Aptitude Tests.

    ERIC Educational Resources Information Center

    Chevalier, Shirley A.

    In conventional practice, most educators and educational researchers score cognitive tests using a dichotomous right-wrong scoring system. Although simple and straightforward, this method does not take into consideration other factors, such as partial knowledge or guessing tendencies and abilities. This paper discusses alternative scoring models:…

  14. Test Driving ToxCast: Endocrine Profiling for 1858 Chemicals Included in Phase II

    PubMed Central

    Filer, Dayne; Patisaul, Heather B.; Schug, Thaddeus; Reif, David; Thayer, Kristina

    2014-01-01

    Identifying chemicals, beyond those already implicated, to test for potential endocrine disruption is a challenge, and high-throughput approaches have emerged as a potential tool for this type of screening. This review focused on the Environmental Protection Agency's (EPA) ToxCast™ high-throughput in vitro screening (HTS) program. Its utility for identifying compounds was assessed by running the recently expanded chemical library (from 309 to 1858 compounds) through the ToxPi™ prioritization scheme for endocrine disruption. The analysis included metabolic and neuroendocrine targets. This investigative approach simultaneously assessed the utility of ToxCast and helped identify novel chemicals which may have endocrine activity. Results from this exercise suggest that the spectrum of environmental chemicals with potential endocrine activity is much broader than previously indicated, and that some aspects of endocrine disruption are not fully covered in ToxCast. PMID:25460227

  15. Classifying performance impairment in response to sleep loss using pattern recognition algorithms on single session testing.

    PubMed

    St Hilaire, Melissa A; Sullivan, Jason P; Anderson, Clare; Cohen, Daniel A; Barger, Laura K; Lockley, Steven W; Klerman, Elizabeth B

    2013-01-01

    There is currently no "gold standard" marker of cognitive performance impairment resulting from sleep loss. We utilized pattern recognition algorithms to determine which features of data collected under controlled laboratory conditions could most reliably identify cognitive performance impairment in response to sleep loss using data from only one testing session, such as would occur in the "real world" or field conditions. A training set for testing the pattern recognition algorithms was developed using objective Psychomotor Vigilance Task (PVT) and subjective Karolinska Sleepiness Scale (KSS) data collected from laboratory studies during which subjects were sleep deprived for 26-52h. The algorithm was then tested in data from both laboratory and field experiments. The pattern recognition algorithm was able to identify performance impairment with a single testing session in individuals studied under laboratory conditions using PVT, KSS, length of time awake and time of day information with sensitivity and specificity as high as 82%. When this algorithm was tested on data collected under real-world conditions from individuals whose data were not in the training set, accuracy of predictions for individuals categorized with low performance impairment were as high as 98%. Predictions for medium and severe performance impairment were less accurate. We conclude that pattern recognition algorithms may be a promising method for identifying performance impairment in individuals using only current information about the individual's behavior. Single testing features (e.g., number of PVT lapses) with high correlation with performance impairment in the laboratory setting may not be the best indicators of performance impairment under real-world conditions. Pattern recognition algorithms should be further tested for their ability to be used in conjunction with other assessments of sleepiness in real-world conditions to quantify performance impairment in response to sleep loss.
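
    To make the single-session idea concrete, the sketch below trains a generic classifier on the kinds of features named in the abstract (PVT lapses and reaction time, KSS, time awake, time of day). The synthetic data, feature set, and choice of a random forest are illustrative assumptions, not the authors' trained model.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      n = 300
      X = np.column_stack([
          rng.poisson(4, n),              # PVT lapses
          rng.normal(280, 40, n),         # median PVT reaction time (ms)
          rng.integers(1, 10, n),         # KSS rating (1-9)
          rng.uniform(0, 40, n),          # hours awake
          rng.uniform(0, 24, n),          # time of day (h)
      ])
      # Toy label: call a session "impaired" when lapses and time awake are both high.
      y = ((X[:, 0] > 5) & (X[:, 3] > 20)).astype(int)

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())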

  16. Classifying performance impairment in response to sleep loss using pattern recognition algorithms on single session testing

    PubMed Central

    St. Hilaire, Melissa A.; Sullivan, Jason P.; Anderson, Clare; Cohen, Daniel A.; Barger, Laura K.; Lockley, Steven W.; Klerman, Elizabeth B.

    2012-01-01

    There is currently no “gold standard” marker of cognitive performance impairment resulting from sleep loss. We utilized pattern recognition algorithms to determine which features of data collected under controlled laboratory conditions could most reliably identify cognitive performance impairment in response to sleep loss using data from only one testing session, such as would occur in the “real world” or field conditions. A training set for testing the pattern recognition algorithms was developed using objective Psychomotor Vigilance Task (PVT) and subjective Karolinska Sleepiness Scale (KSS) data collected from laboratory studies during which subjects were sleep deprived for 26 – 52 hours. The algorithm was then tested in data from both laboratory and field experiments. The pattern recognition algorithm was able to identify performance impairment with a single testing session in individuals studied under laboratory conditions using PVT, KSS, length of time awake and time of day information with sensitivity and specificity as high as 82%. When this algorithm was tested on data collected under real-world conditions from individuals whose data were not in the training set, accuracy of predictions for individuals categorized with low performance impairment were as high as 98%. Predictions for medium and severe performance impairment were less accurate. We conclude that pattern recognition algorithms may be a promising method for identifying performance impairment in individuals using only current information about the individual’s behavior. Single testing features (e.g., number of PVT lapses) with high correlation with performance impairment in the laboratory setting may not be the best indicators of performance impairment under real-world conditions. Pattern recognition algorithms should be further tested for their ability to be used in conjunction with other assessments of sleepiness in real-world conditions to quantify performance impairment in

  17. Economics of resynchronization strategies including chemical tests to identify nonpregnant cows.

    PubMed

    Giordano, J O; Fricke, P M; Cabrera, V E

    2013-02-01

    Our objectives were to assess (1) the economic value of decreasing the interval between timed artificial insemination (TAI) services when using a pregnancy test that allows earlier identification of nonpregnant cows; and (2) the effect of pregnancy loss and inaccuracy of a chemical test (CT) on the economic value of a pregnancy test for dairy farms. Simulation experiments were performed using a spreadsheet-based decision support tool. In experiment 1, we assessed the effect of changing the interbreeding interval (IBI) for cows receiving TAI on the value of reproductive programs by simulating a 1,000-cow dairy herd using a combination of detection of estrus (30 to 80% of cows detected in estrus) and TAI. The IBI was incremented by 7d from 28 to 56 d to reflect intervals either observed (35 to 56 d) or potentially observed (28 d) in dairy operations. In experiment 2, we evaluated the effect of accuracy of the CT and additional pregnancy loss due to earlier testing on the value of reproductive programs. The first scenario compared the use of a CT 31 ± 3 d after a previous AI with rectal palpation (RP) 39 ± 3 d after AI. The second scenario used a CT 24 ± 3 d after AI or transrectal ultrasound (TU) 32 d after AI. Parameters evaluated included sensitivity (Se), specificity (Sp), questionable diagnosis (Qd), cost of the CT, and expected pregnancy loss. Sensitivity analysis was performed for all possible combinations of parameter values to determine their relative importance on the value of the CT. In experiment 1, programs with a shorter IBI had greater economic net returns at all levels of detection of estrus, and use of chemical tests available on the market today might be beneficial compared with RP. In experiment 2, the economic value of programs using a CT could be either greater or less than that of RP and TU, depending on the value for each of the parameters related to the CT evaluated. The value of the program using the CT was affected (in order) by (1) Se, (2

  18. Considerations When Including Students with Disabilities in Test Security Policies. NCEO Policy Directions. Number 23

    ERIC Educational Resources Information Center

    Lazarus, Sheryl; Thurlow, Martha

    2015-01-01

    Sound test security policies and procedures are needed to ensure test security and confidentiality, and to help prevent cheating. In this era when cheating on tests draws regular media attention, there is a need for thoughtful consideration of the ways in which possible test security measures may affect accessibility for some students with…

  19. A Runs-Test Algorithm: Contingent Reinforcement and Response Run Structures

    ERIC Educational Resources Information Center

    Hachiga, Yosuke; Sakagami, Takayuki

    2010-01-01

    Four rats' choices between two levers were differentially reinforced using a runs-test algorithm. On each trial, a runs-test score was calculated based on the last 20 choices. In Experiment 1, the onset of stimulus lights cued when the runs score was smaller than criterion. Following cuing, the correct choice was occasionally reinforced with food,…
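
    Although the record above is truncated, the runs-test score it refers to can be illustrated with the standard Wald-Wolfowitz statistic computed over a sliding window of binary choices; how the original experiment mapped this score onto a reinforcement criterion is not reproduced here.

      def runs_z(choices):
          # choices: sequence of 0/1 values (e.g., the last 20 lever presses).
          n1 = sum(choices)
          n2 = len(choices) - n1
          if n1 == 0 or n2 == 0:
              return 0.0
          runs = 1 + sum(a != b for a, b in zip(choices, choices[1:]))
          mean = 1 + 2 * n1 * n2 / (n1 + n2)
          var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
          return (runs - mean) / var ** 0.5

      print(runs_z([0, 1] * 10))   # a highly alternating window gives a large positive z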

  20. Experimental testing of integral truncation algorithms for the calculation of beam widths by proposed ISO standard methods

    NASA Astrophysics Data System (ADS)

    Apte, Paul; Gower, Malcolm C.; Ward, Brooke A.

    1995-04-01

    The experimental testing of baseline clipping algorithms was carried out on a purposely constructed test bench. Three different lasers were used for the tests, including a HeNe laser and a collimated laser diode. The beam profile intensity distribution was measured using a CCD camera at various distances from a reference lens. Results were analyzed on a 486 PC running custom-developed software written in Turbo Pascal, which allows very fast evaluation of the algorithms, at rates of several times per second depending upon computational load. Tables of beam width data were created and then analyzed using Mathematica to see if the data confirmed ABCD propagation laws. Values for the beam waist location, size, and propagation constant were calculated.
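
    For readers unfamiliar with the quantities being tested, the sketch below computes a second-moment (D4-sigma style) beam width from a CCD image after a simple baseline subtraction and clip. The clip level and background estimate are placeholders; the ISO truncation rules evaluated in the paper are more elaborate.

      import numpy as np

      def d4sigma_widths(image, clip_fraction=0.01):
          img = image.astype(float)
          img -= np.median(img)                        # crude baseline removal
          img[img < clip_fraction * img.max()] = 0.0   # clip residual background
          y, x = np.indices(img.shape)
          total = img.sum()
          cx, cy = (img * x).sum() / total, (img * y).sum() / total
          var_x = (img * (x - cx) ** 2).sum() / total
          var_y = (img * (y - cy) ** 2).sum() / total
          return 4 * np.sqrt(var_x), 4 * np.sqrt(var_y)    # D4sigma widths in pixels

      # Example: a synthetic Gaussian spot with 1/e^2 radius ~20 px gives widths near 40 px.
      yy, xx = np.indices((256, 256))
      spot = np.exp(-2 * ((xx - 128) ** 2 + (yy - 128) ** 2) / 20**2)
      print(d4sigma_widths(spot + 0.01))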

  1. Base Band Data for Testing Interference Mitigation Algorithms

    NASA Astrophysics Data System (ADS)

    Bell, Jon F.; Hall, Peter J.; Wilson, Warwick E.; Sault, Robert J.; Smegal, Rick J.; Smith, Malcolm R.; van Straten, Willem; Kesteven, Michael J.; Ferris, Richard H.; Briggs, Frank H.; Carrad, Graham J.; Sinclair, Malcom W.; Gough, Russell G.; Sarkissian, John M.; Bunton, John D.; Bailes, Matthew

    Digital signal processing is one of many valuable tools for suppressing unwanted signals or interference. Building hardware processing engines seems to be the way to best implement some classes of interference suppression but is, unfortunately, expensive and time-consuming, especially if several mitigation techniques need to be compared. Simulations can be useful, but are not a substitute for real data. CSIRO's Australia Telescope National Facility has recently commenced a `software radio telescope' project designed to fill the gap between dedicated hardware processors and pure simulation. In this approach, real telescope data are recorded coherently, then processed offline. This paper summarises the current contents of a freely available database of base band recorded data that can be used to experiment with signal processing solutions. It includes data from the following systems: single dish, multi-feed receiver; single dish with reference antenna; and an array of six 22m antennas with and without a reference antenna. Astronomical sources such as OH masers, pulsars and continuum sources subject to interfering signals were recorded. The interfering signals include signals from the US Global Positioning System (GPS) and its Russian equivalent (GLONASS), television, microwave links, a low-Earth-orbit satellite, various other transmitters, and signals leaking from local telescope systems with fast clocks. The data are available on compact disk, allowing use in general purpose computers or as input to laboratory hardware prototypes.

  2. DynaDock: A new molecular dynamics-based algorithm for protein-peptide docking including receptor flexibility.

    PubMed

    Antes, Iris

    2010-04-01

    Molecular docking programs play an important role in drug development and many well-established methods exist. However, there are two situations for which the performance of most approaches is still not satisfactory, namely inclusion of receptor flexibility and docking of large, flexible ligands like peptides. In this publication a new approach is presented for docking peptides into flexible receptors. For this purpose a two step procedure was developed: first, the protein-peptide conformational space is scanned and approximate ligand poses are identified and second, the identified ligand poses are refined by a new molecular dynamics-based method, optimized potential molecular dynamics (OPMD). The OPMD approach uses soft-core potentials for the protein-peptide interactions and applies a new optimization scheme to the soft-core potential. Comparison with refinement results obtained by conventional molecular dynamics and a soft-core scaling approach shows significant improvements in the sampling capability for the OPMD method. Thus, the number of starting poses needed for successful refinement is much lower than for the other methods. The algorithm was evaluated on 15 protein-peptide complexes with 2-16mer peptides. Docking poses with peptide RMSD values <2.10 Å from the equilibrated experimental structures were obtained in all cases. For four systems docking into the unbound receptor structures was performed, leading to peptide RMSD values <2.12 Å. Using a specifically fitted scoring function in 11 of 15 cases the best scoring poses featured a peptide RMSD ≤ 2.10 Å.
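
    The key ingredient named in the abstract, a soft-core potential for protein-peptide interactions, can be illustrated with a generic soft-core Lennard-Jones form in which a softness parameter removes the singularity at r = 0 so overlapping atoms can relax. The functional form and parameter names below are illustrative, not the exact OPMD potential.

      import numpy as np

      def soft_core_lj(r, epsilon=0.5, sigma=3.5, lambda_soft=0.5, alpha=0.5):
          # At lambda_soft = 0 this reduces to the standard Lennard-Jones potential;
          # for lambda_soft > 0 the value stays finite even as r -> 0.
          s6 = sigma**6
          denom = alpha * lambda_soft * s6 + r**6
          return 4 * epsilon * (1 - lambda_soft) * (s6**2 / denom**2 - s6 / denom)

      r = np.linspace(0.1, 10.0, 100)
      print(soft_core_lj(r)[:3])   # finite near r = 0, unlike plain LJ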

  3. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  4. Vertical drop test of a transport fuselage center section including the wheel wells

    NASA Technical Reports Server (NTRS)

    Williams, M. S.; Hayduk, R. J.

    1983-01-01

    A Boeing 707 fuselage section was drop tested to measure structural, seat, and anthropomorphic dummy response to vertical crash loads. The specimen had nominally zero pitch, roll and yaw at impact with a sink speed of 20 ft/sec. Results from this drop test and other drop tests of different transport sections will be used to prepare for a full-scale crash test of a B-720.

  5. A new algorithm for generating highly accurate benchmark solutions to transport test problems

    SciTech Connect

    Azmy, Y.Y.

    1997-06-01

    We present a new algorithm for solving the neutron transport equation in its discrete-variable form. The new algorithm is based on computing the full matrix relating the scalar flux spatial moments in all cells to the fixed neutron source spatial moments, foregoing the need to compute the angular flux spatial moments, and thereby eliminating the need for sweeping the spatial mesh in each discrete-angular direction. The matrix equation is solved exactly in test cases, producing a solution vector that is free from iteration convergence error, and subject only to truncation and roundoff errors. Our algorithm is designed to provide method developers with a quick and simple solution scheme to test their new methods on difficult test problems without the need to develop sophisticated solution techniques, e.g. acceleration, before establishing the worthiness of their innovation. We demonstrate the utility of the new algorithm by applying it to the Arbitrarily High Order Transport Nodal (AHOT-N) method, and using it to solve two of Burre's Suite of Test Problems (BSTP). Our results provide highly accurate benchmark solutions that can be distributed electronically and used to verify the pointwise accuracy of other solution methods and algorithms.
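
    The essential point of the abstract is linear-algebraic: once the full matrix relating source moments to flux moments is assembled, the solution comes from one direct solve and carries no iteration (sweep) convergence error. The sketch below contrasts a direct solve with a fixed-point iteration on a placeholder operator; the matrix is a random stand-in, not a transport operator.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 50
      A = np.eye(n) - 0.3 * rng.random((n, n)) / n   # placeholder "transport" operator
      q = rng.random(n)                              # fixed source moments

      phi_direct = np.linalg.solve(A, q)             # exact up to roundoff

      # Equivalent fixed-point (source-iteration-like) scheme for comparison:
      phi = np.zeros(n)
      for _ in range(200):
          phi = q + (np.eye(n) - A) @ phi
      print(np.max(np.abs(phi - phi_direct)))        # agrees to roundoff after enough iterations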

  6. Interpretation of Colloid-Homologue Tracer Test 10-03, Including Comparisons to Test 10-01

    SciTech Connect

    Reimus, Paul W.

    2012-06-26

    This presentation covers the interpretations of colloid-homologue tracer test 10-03 conducted at the Grimsel Test Site, Switzerland, in 2010. It also provides a comparison of the interpreted test results with those of tracer test 10-01, which was conducted in the same fracture flow system with the same tracers as test 10-03, but at a higher extraction flow rate. A method of correcting for apparent uranine degradation in test 10-03 is presented. Conclusions are: (1) Uranine degradation occurred in test 10-03, but not in 10-01; (2) Uranine correction based on apparent degradation rate in injection loop in test 11-02 seems reasonable when applied to data from test 10-03; (3) Colloid breakthrough curves were quite similar in the two tests with similar recoveries relative to uranine (after correction); and (4) Apparent desorption of homologues was much slower in test 10-03 than in 10-01 (any effect of residual homologues from test 10-01 in test 10-03?).
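
    The degradation correction mentioned above is not spelled out in this summary; a common first-order approach is to scale measured concentrations by exp(k t) using a rate k estimated independently (here, from the injection loop). The sketch below assumes that simple form, with placeholder values rather than numbers from tests 10-03 or 11-02.

      import numpy as np

      def correct_uranine(measured_conc, elapsed_hours, k_per_hour):
          # First-order degradation correction: undo exponential decay over elapsed time.
          return np.asarray(measured_conc, dtype=float) * np.exp(k_per_hour * np.asarray(elapsed_hours, dtype=float))

      print(correct_uranine([1.0, 0.8, 0.5], [0, 24, 72], k_per_hour=0.002))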

  7. Novel designed magnetic leakage testing sensor with GMR for image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Sasamoto, Akira; Suzuki, Takayuki

    2012-04-01

    A few years ago, the authors developed an image reconstruction algorithm that can accurately reconstruct images of flaws from data obtained using conventional ECT sensors. The reconstruction algorithm is designed for data assumed to be obtained with a spatially uniform magnetic field on the target surface. The conventional ECT sensor the authors used, however, is designed so that the strength of the magnetic field imposed on the target surface is maximized. This violation of the assumption undermines the algorithm's simplicity, because it must employ complementary response functions, called "LSF", for long line flaws, which is not part of the original algorithm design. To obtain experimental results that prove the validity of the original algorithm with only one response function, the authors last year developed a prototype sensor for magnetic flux leakage testing that satisfies the requirement of the original algorithm. The sensor comprises a GMR magnetic field sensor to detect a static magnetic field and two magnets adjacent to the GMR sensor to magnetize the target specimen. However, the data obtained had insufficient accuracy owing to the weakness of the magnet, so the authors redesigned the sensor this year with a much stronger magnet. Data obtained with this new sensor show that the algorithm is very likely to work well with only one response function for this type of probe.
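
    The abstract's argument is that a spatially uniform applied field lets the reconstruction use a single response function. One hedged way to picture that is as deconvolution of the measured leakage map with one point response, as in the Wiener-regularized FFT sketch below; the Gaussian response and regularization constant are placeholders, not the authors' algorithm.

      import numpy as np

      def deconvolve(measured, response, reg=1e-2):
          # Wiener-style regularized deconvolution assuming measured = response * flaw + noise.
          M = np.fft.rfft2(measured)
          H = np.fft.rfft2(response, s=measured.shape)
          F = np.conj(H) * M / (np.abs(H) ** 2 + reg)
          return np.fft.irfft2(F, s=measured.shape)

      # Synthetic example: a line flaw blurred by a Gaussian sensor response.
      yy, xx = np.indices((128, 128))
      flaw = ((yy == 64) & (xx > 40) & (xx < 90)).astype(float)
      response = np.exp(-((xx - 8) ** 2 + (yy - 8) ** 2) / (2 * 3.0**2))
      measured = np.fft.irfft2(np.fft.rfft2(flaw) * np.fft.rfft2(response, s=flaw.shape), s=flaw.shape)
      estimate = deconvolve(measured, response)
      print(estimate.max())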

  8. A parameter estimation algorithm for spatial sine testing - Theory and evaluation

    NASA Technical Reports Server (NTRS)

    Rost, R. W.; Deblauwe, F.

    1992-01-01

    This paper presents the theory and an evaluation of a spatial sine testing parameter estimation algorithm that directly uses the measured forced modes of vibration and the measured force vector. The parameter estimation algorithm uses an ARMA model, and a recursive QR algorithm is applied for data reduction. In this first evaluation, the algorithm has been applied to a frequency response matrix (which is a particular set of forced modes of vibration) using a sliding frequency window. The objective of the sliding frequency window is to execute the analysis simultaneously with the data acquisition. Since the pole values and the modal density are obtained from this analysis during the acquisition, the analysis information can be used to help determine the forcing vectors during the experimental data acquisition.

  9. Implementation and testing of a sensor-netting algorithm for early warning and high confidence C/B threat detection

    NASA Astrophysics Data System (ADS)

    Gruber, Thomas; Grim, Larry; Fauth, Ryan; Tercha, Brian; Powell, Chris; Steinhardt, Kristin

    2011-05-01

    Large networks of disparate chemical/biological (C/B) sensors, MET sensors, and intelligence, surveillance, and reconnaissance (ISR) sensors reporting to various command/display locations can lead to conflicting threat information, questions of alarm confidence, and a confused situational awareness. Sensor netting algorithms (SNA) are being developed to resolve these conflicts and to report high confidence consensus threat map data products on a common operating picture (COP) display. A data fusion algorithm design was completed in a Phase I SBIR effort and development continues in the Phase II SBIR effort. The initial implementation and testing of the algorithm has produced some performance results. The algorithm accepts point and/or standoff sensor data, and event detection data (e.g., the location of an explosion) from various ISR sensors (e.g., acoustic, infrared cameras, etc.). These input data are preprocessed to assign estimated uncertainty to each incoming piece of data. The data are then sent to a weighted tomography process to obtain a consensus threat map, including estimated threat concentration level uncertainty. The threat map is then tested for consistency and the overall confidence for the map result is estimated. The map and confidence results are displayed on a COP. The benefits of a modular implementation of the algorithm and comparisons of fused / un-fused data results will be presented. The metrics for judging the sensor-netting algorithm performance are warning time, threat map accuracy (as compared to ground truth), false alarm rate, and false alarm rate v. reported threat confidence level.
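
    A minimal building block of the consensus mapping described above is combining overlapping sensor estimates, each with its own uncertainty, into one value with a fused uncertainty. The inverse-variance sketch below shows only that ingredient; it is not the weighted-tomography algorithm itself, and the example readings are invented.

      import numpy as np

      def fuse(estimates, sigmas):
          # Inverse-variance weighted consensus for one grid cell, with fused uncertainty.
          est = np.asarray(estimates, dtype=float)
          w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
          consensus = np.sum(w * est) / np.sum(w)
          sigma_fused = np.sqrt(1.0 / np.sum(w))
          return consensus, sigma_fused

      # Two point sensors and one standoff sensor reporting the same cell:
      print(fuse([3.2, 2.8, 4.1], [0.5, 0.4, 1.5]))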

  10. The Langley thermal protection system test facility: A description including design operating boundaries

    NASA Technical Reports Server (NTRS)

    Klich, G. F.

    1976-01-01

    A description of the Langley thermal protection system test facility is presented. This facility was designed to provide realistic environments and times for testing thermal protection systems proposed for use on high speed vehicles such as the space shuttle. Products from the combustion of methane-air-oxygen mixtures, having a maximum total enthalpy of 10.3 MJ/kg, are used as a test medium. Test panels with maximum dimensions of 61 cm x 91.4 cm are mounted in the side wall of the test region. Static pressures in the test region can range from .005 to .1 atm and calculated equilibrium temperatures of test panels range from 700 K to 1700 K. Test times can be as long as 1800 sec. Some experimental data obtained while using combustion products of methane-air mixtures are compared with theory, and calibration of the facility is being continued to verify calculated values of parameters which are within the design operating boundaries.

  11. Group Testing with Multiple Inhibitor Sets and Error-Tolerant and Its Decoding Algorithms.

    PubMed

    Zhao, Shufang; He, Yichao; Zhang, Xinlu; Xu, Wen; Wu, Weili; Gao, Suogang

    2016-10-01

    In this article, we advance a new group testing model [Formula: see text] with multiple inhibitor sets and error-tolerant and propose decoding algorithms for it to identify all its positives by using [Formula: see text]-disjunct matrix. The decoding complexity for it is [Formula: see text], where [Formula: see text]. Moreover, we extend this new group testing to threshold group testing and give the threshold group testing model [Formula: see text] with multiple inhibitor sets and error-tolerant. By using [Formula: see text]-disjunct matrix, we propose its decoding algorithms for gap g = 0 and g > 0, respectively. Finally, we point out that the new group testing is the natural generalization for the clone model.

  12. Lyral has been included in the patch test standard series in Germany.

    PubMed

    Geier, Johannes; Brasch, Jochen; Schnuch, Axel; Lessmann, Holger; Pirker, Claudia; Frosch, Peter J

    2002-05-01

    Lyral 5% pet. was tested in 3245 consecutive patch test patients in 20 departments of dermatology in order (i) to check the diagnostic quality of this patch test preparation, (ii) to examine concomitant reactions to Lyral and fragrance mix (FM), and (iii) to assess the frequency of contact allergy to Lyral in an unselected patch test population of German dermatological clinics. 62 patients reacted to Lyral, i.e. 1.9%. One third of the positive reactions were ++ and +++. The reaction index was 0.27. Thus, the test preparation can be regarded a good diagnostic tool. Lyral and fragrance mix (FM) were tested in parallel in 3185 patients. Of these, 300 (9.4%) reacted to FM, and 59 (1.9%) to Lyral. In 40 patients, positive reactions to both occurred, which is 13.3% of those reacting to FM, and 67.8% of those reacting to Lyral. So the concordance of positive test reactions to Lyral and FM was only slight. Based on these results, the German Contact Dermatitis Research Group (DKG) decided to add Lyral 5% pet. to the standard series.

  13. Tests of Large Airfoils in the Propeller Research Tunnel, Including Two with Corrugated Surfaces

    NASA Technical Reports Server (NTRS)

    Wood, Donald H

    1930-01-01

    This report gives the results of the tests of seven 2 by 12 foot airfoils (Clark Y, smooth and corrugated, Gottingen 398, N.A.C.A. M-6, and N.A.C.A. 84). The tests were made in the propeller research tunnel of the National Advisory Committee for Aeronautics at Reynolds numbers up to 2,000,000. The Clark Y airfoil was tested with three degrees of surface smoothness. Corrugating the surface causes a flattening of the lift curve at the burble point and an increase in drag at small flying angles.

  14. Testing sensibility, including touch-pressure, two-point discrimination, point localization, and vibration.

    PubMed

    Bell-Krotoski, J; Weinstein, S; Weinstein, C

    1993-01-01

    Sensibility is much more than protective sensation, and the examiner needs to consider the various degrees of residual sensibility that influence both diagnosis and prognosis. Towards that end, objective tests of the extent and nature of peripheral nerve involvement should be employed. Objective tests reflect the current condition of sensibility and are not affected by cognitive influences, such as re-education. Most current clinical instruments used for measurement of sensibility fail to meet the criteria of an objective test because they: (1) can be shown to lack necessary sensitivity, and (2) are too variable. As a consequence, regardless of whether these instruments have been used in controlled clinical studies or are in common use, their results will not replicate with repeated testing. Unfortunately, therefore, sensibility changes will potentially go unrecognized in a large number of patients and many will be detected only in the later stages of peripheral nerve abnormality, when possibilities of treatment are less effective. This article discusses sensibility testing from the standpoint of what is known regarding strengths and weaknesses of various tests and sensory modalities, and makes an appeal for clinicians to review the instruments they use critically for sensibility measurement with regard to stimulus control. Clinicians must insist on validity and reliability in their instruments before they have confidence in the data obtained.

  15. Evaluation of the cefonicid disk test criteria, including disk quality control guidelines.

    PubMed Central

    Barry, A L; Jones, R N; Thornsberry, C

    1983-01-01

    Cefonicid (SKF 75073) is a second-generation cephalosporin which has a spectrum of antimicrobial activity similar to that of cefamandole, but cefoxitin (a cephamycin) and cephalothin have uniquely different spectra of activity. The second-generation cephalosporins tested displayed comparable susceptibility to beta-lactamases and inhibited type I beta-lactamases. Although cefonicid has a longer serum half-life (3 to 4 h) compared with the currently used drugs, the same minimal inhibitory concentration breakpoints separating susceptible and resistant categories were applied to tests with cefonicid, cefamandole, and cephalothin. Regression analysis of the disk diffusion test results confirmed the use of identical zone size breakpoints for 30-micrograms cefonicid, cefamandole, and cephalothin disks: all three produced similar parabolic regression lines. Further analysis of disk test data confirmed the fact that cefonicid and cefamandole disks might be used interchangeably. But for routine tests, cefonicid disks might be preferred in order to minimize the number of very major (false-susceptible) interpretive errors. Suggested cefonicid 30-micrograms disk interpretive criteria are: susceptible, greater than or equal to 18 mm (less than or equal to 8.0 micrograms/ml), and resistant, less than or equal to 14 mm (greater than 16 micrograms/ml). Quality control zone diameter limits were calculated from data obtained in a multilaboratory collaborative study. PMID:6601113

  16. Development, analysis, and testing of robust nonlinear guidance algorithms for space applications

    NASA Astrophysics Data System (ADS)

    Wibben, Daniel R.

    not identical. Finally, this work has a large focus on the application of these various algorithms to a large number of space based applications. These include applications to powered-terminal descent for landing on planetary bodies such as the moon and Mars and to proximity operations (landing, hovering, or maneuvering) about small bodies such as an asteroid or a comet. Further extensions of these algorithms have allowed for adaptation of a hybrid control strategy for planetary landing, and the combined modeling and simultaneous control of both the vehicle's position and orientation implemented within a full six degree-of-freedom spacecraft simulation.

  17. Nuclear Rocket Test Facility Decommissioning Including Controlled Explosive Demolition of a Neutron-Activated Shield Wall

    SciTech Connect

    Michael Kruzic

    2007-09-01

    Located in Area 25 of the Nevada Test Site, the Test Cell A Facility was used in the 1960s for the testing of nuclear rocket engines, as part of the Nuclear Rocket Development Program. The facility was decontaminated and decommissioned (D&D) in 2005 using the Streamlined Approach For Environmental Restoration (SAFER) process, under the Federal Facilities Agreement and Consent Order (FFACO). Utilities and process piping were verified void of contents, hazardous materials were removed, concrete with removable contamination decontaminated, large sections mechanically demolished, and the remaining five-foot, five-inch thick radiologically-activated reinforced concrete shield wall demolished using open-air controlled explosive demolition (CED). CED of the shield wall was closely monitored and resulted in no radiological exposure or atmospheric release.

  18. Manufacture of fiber-epoxy test specimens: Including associated jigs and instrumentation

    NASA Technical Reports Server (NTRS)

    Mathur, S. B.; Felbeck, D. K.

    1980-01-01

    Experimental work on the manufacture and strength of graphite-epoxy composites is considered. The correct data and thus a true assessment of the strength properties based on a proper and scientifically modeled test specimen with engineered design, construction, and manufacture has led to claims of a very broad spread in optimized values. Such behavior is in the main due to inadequate control during manufacture of test specimen, improper curing, and uneven scatter in the fiber orientation. The graphite fibers are strong but brittle. Even with various epoxy matrices and volume fraction, the fracture toughness is still relatively low. Graphite-epoxy prepreg tape was investigated as a sandwich construction with intermittent interlaminar bonding between the laminates in order to produce high strength, high fracture toughness composites. The quality and control of manufacture of the multilaminate test specimen blanks was emphasized. The dimensions, orientation and cure must be meticulous in order to produce the desired mix.

  19. Public interest in predictive genetic testing, including direct-to-consumer testing, for susceptibility to major depression: preliminary findings

    PubMed Central

    Wilde, Alex; Meiser, Bettina; Mitchell, Philip B; Schofield, Peter R

    2010-01-01

    The past decade has seen rapid advances in the identification of associations between candidate genes and a range of common multifactorial disorders. This paper evaluates public attitudes towards the complexity of genetic risk prediction in psychiatry involving susceptibility genes, uncertain penetrance and gene–environment interactions on which successful molecular-based mental health interventions will depend. A qualitative approach was taken to enable the exploration of the views of the public. Four structured focus groups were conducted with a total of 36 participants. The majority of participants indicated interest in having a genetic test for susceptibility to major depression, if it was available. Having a family history of mental illness was cited as a major reason. After discussion of perceived positive and negative implications of predictive genetic testing, nine of 24 participants initially interested in having such a test changed their mind. Fear of genetic discrimination and privacy issues predominantly influenced change of attitude. All participants still interested in having a predictive genetic test for risk for depression reported they would only do so through trusted medical professionals. Participants were unanimously against direct-to-consumer genetic testing marketed through the Internet, although some would consider it if there was suitable protection against discrimination. The study highlights the importance of general practitioner and public education about psychiatric genetics, and the availability of appropriate treatment and support services prior to implementation of future predictive genetic testing services. PMID:19690586

  20. Public interest in predictive genetic testing, including direct-to-consumer testing, for susceptibility to major depression: preliminary findings.

    PubMed

    Wilde, Alex; Meiser, Bettina; Mitchell, Philip B; Schofield, Peter R

    2010-01-01

    The past decade has seen rapid advances in the identification of associations between candidate genes and a range of common multifactorial disorders. This paper evaluates public attitudes towards the complexity of genetic risk prediction in psychiatry involving susceptibility genes, uncertain penetrance and gene-environment interactions on which successful molecular-based mental health interventions will depend. A qualitative approach was taken to enable the exploration of the views of the public. Four structured focus groups were conducted with a total of 36 participants. The majority of participants indicated interest in having a genetic test for susceptibility to major depression, if it was available. Having a family history of mental illness was cited as a major reason. After discussion of perceived positive and negative implications of predictive genetic testing, nine of 24 participants initially interested in having such a test changed their mind. Fear of genetic discrimination and privacy issues predominantly influenced change of attitude. All participants still interested in having a predictive genetic test for risk for depression reported they would only do so through trusted medical professionals. Participants were unanimously against direct-to-consumer genetic testing marketed through the Internet, although some would consider it if there was suitable protection against discrimination. The study highlights the importance of general practitioner and public education about psychiatric genetics, and the availability of appropriate treatment and support services prior to implementation of future predictive genetic testing services.

  1. Evaluation of five simple rapid HIV assays for potential use in the Brazilian national HIV testing algorithm.

    PubMed

    da Motta, Leonardo Rapone; Vanni, Andréa Cristina; Kato, Sérgio Kakuta; Borges, Luiz Gustavo dos Anjos; Sperhacke, Rosa Dea; Ribeiro, Rosangela Maria M; Inocêncio, Lilian Amaral

    2013-12-01

    Since 2005, the Department of Sexually Transmitted Diseases (STDs), Acquired Immunodeficiency Syndrome (AIDS) and Viral Hepatitis under the Health Surveillance Secretariat in Brazil's Ministry of Health has approved a testing algorithm for using rapid human immunodeficiency virus (HIV) tests in the country. Given the constant emergence of new rapid HIV tests in the market, it is necessary to maintain an evaluation program for them. Conscious of this need, this multicenter study was conducted to evaluate five commercially available rapid HIV tests used to detect anti-HIV antibodies in Brazil. The five commercial rapid tests under assessment were the VIKIA HIV-1/2 (bioMérieux, Rio de Janeiro, Brazil), the Rapid Check HIV 1 & 2 (Center of Infectious Diseases, Federal University of Espírito Santo, Vitória, Brazil), the HIV-1/2 3.0 Strip Test Bioeasy (S.D., Kyonggi-do, South Korea), the Labtest HIV (Labtest Diagnóstica, Lagoa Santa, Brazil) and the HIV-1/2 Rapid Test Bio-Manguinhos (Oswaldo Cruz Foundation, Rio de Janeiro, Brazil). A total of 972 whole-blood samples were collected from HIV-infected patients, pregnant women and individuals seeking voluntary counselling and testing who were recruited from five centers in different regions of the country. Informed consent was obtained from the study participants. The results were compared with those obtained using the HIV algorithm used currently in Brazil, which includes two enzyme immunoassays and one Western blot test. The operational performance of each assay was also compared to the defined criteria. A total of 972 samples were tested using reference assays, and the results indicated 143 (14.7%) reactive samples and 829 (85.3%) nonreactive samples. Sensitivity values ranged from 99.3 to 100%, and specificity was 100% for all five rapid tests. All of the rapid tests performed well, were easy to perform and yielded high scores in the operational performance analysis. Three tests, however, fulfilled all of the

  2. Evaluation of five simple rapid HIV assays for potential use in the Brazilian national HIV testing algorithm.

    PubMed

    da Motta, Leonardo Rapone; Vanni, Andréa Cristina; Kato, Sérgio Kakuta; Borges, Luiz Gustavo dos Anjos; Sperhacke, Rosa Dea; Ribeiro, Rosangela Maria M; Inocêncio, Lilian Amaral

    2013-12-01

    Since 2005, the Department of Sexually Transmitted Diseases (STDs), Acquired Immunodeficiency Syndrome (AIDS) and Viral Hepatitis under the Health Surveillance Secretariat in Brazil's Ministry of Health has approved a testing algorithm for using rapid human immunodeficiency virus (HIV) tests in the country. Given the constant emergence of new rapid HIV tests in the market, it is necessary to maintain an evaluation program for them. Conscious of this need, this multicenter study was conducted to evaluate five commercially available rapid HIV tests used to detect anti-HIV antibodies in Brazil. The five commercial rapid tests under assessment were the VIKIA HIV-1/2 (bioMérieux, Rio de Janeiro, Brazil), the Rapid Check HIV 1 & 2 (Center of Infectious Diseases, Federal University of Espírito Santo, Vitória, Brazil), the HIV-1/2 3.0 Strip Test Bioeasy (S.D., Kyonggi-do, South Korea), the Labtest HIV (Labtest Diagnóstica, Lagoa Santa, Brazil) and the HIV-1/2 Rapid Test Bio-Manguinhos (Oswaldo Cruz Foundation, Rio de Janeiro, Brazil). A total of 972 whole-blood samples were collected from HIV-infected patients, pregnant women and individuals seeking voluntary counselling and testing who were recruited from five centers in different regions of the country. Informed consent was obtained from the study participants. The results were compared with those obtained using the HIV algorithm used currently in Brazil, which includes two enzyme immunoassays and one Western blot test. The operational performance of each assay was also compared to the defined criteria. A total of 972 samples were tested using reference assays, and the results indicated 143 (14.7%) reactive samples and 829 (85.3%) nonreactive samples. Sensitivity values ranged from 99.3 to 100%, and specificity was 100% for all five rapid tests. All of the rapid tests performed well, were easy to perform and yielded high scores in the operational performance analysis. Three tests, however, fulfilled all of the

  3. Test driving ToxCast: endocrine profiling for1858 chemicals included in phase II

    EPA Science Inventory

    Introduction: Identifying chemicals to test for potential endocrine disruption beyond those already implicated in the peer-reviewed literature is a challenge. This review is intended to help by summarizing findings from the Environmental Protection Agency’s (EPA) ToxCast™ high th...

  4. Simulation analysis of the EUSAMA Plus suspension testing method including the impact of the vehicle untested side

    NASA Astrophysics Data System (ADS)

    Dobaj, K.

    2016-09-01

    The work deals with a simulation analysis of the influence of half-car vehicle model parameters on suspension testing results. Matlab simulation software was used. The model parameters considered are the shock absorber damping coefficient, the tire radial stiffness, the car width, and the rocker arm length. Consistent vibration of both test plates was considered: both wheels of the car were subjected to identical vibration, with the frequency varied in a manner similar to the EUSAMA Plus principle. The shock absorber damping coefficient (for several values of the car width and rocker arm length) was changed on one and on both sides of the vehicle. The results obtained are essential for a new suspension testing algorithm (based on the EUSAMA Plus principle), which will be the aim of the author's further work.
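
    For orientation, the EUSAMA-style quantity underlying the abstract is the minimum dynamic wheel (plate) load during a swept-frequency plate excitation, expressed as a percentage of the static load. The sketch below computes it for a quarter-car model with placeholder parameters; the paper itself studies a half-car model with both plates excited, which this simplification does not capture.

      import numpy as np
      from scipy.integrate import solve_ivp

      ms, mu = 400.0, 40.0          # sprung / unsprung mass per corner (kg)
      ks, kt = 20000.0, 180000.0    # suspension / tire radial stiffness (N/m)
      cs = 1500.0                   # shock absorber damping (N s/m)
      g = 9.81

      def plate(t):
          # Plate motion: 3 mm amplitude sine swept from 25 Hz down to 5 Hz over 20 s.
          t = np.asarray(t, dtype=float)
          phase = 2.0 * np.pi * (25.0 * t - 0.5 * t**2)   # instantaneous frequency = 25 - t (Hz)
          return 0.003 * np.sin(phase)

      def rhs(t, y):
          zs, vs, zu, vu = y                              # body and wheel states (about equilibrium)
          f_susp = ks * (zu - zs) + cs * (vu - vs)
          f_tire = kt * (plate(t) - zu)
          return [vs, f_susp / ms, vu, (f_tire - f_susp) / mu]

      sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
      static_load = (ms + mu) * g
      dynamic_load = static_load + kt * (plate(sol.t) - sol.y[2])
      print("EUSAMA coefficient ~", round(100.0 * dynamic_load.min() / static_load, 1), "%")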

  5. Simple but novel test method for quantitatively comparing robot mapping algorithms using SLAM and dead reckoning

    NASA Astrophysics Data System (ADS)

    Davey, Neil S.; Godil, Haris

    2013-05-01

    This article presents a comparative study between a well-known SLAM (Simultaneous Localization and Mapping) algorithm, called Gmapping, and a standard Dead-Reckoning algorithm; the study is based on experimental results of both approaches by using a commercial skid-based turning robot, P3DX. Five main base-case scenarios are conducted to evaluate and test the effectiveness of both algorithms. The results show that SLAM outperformed the Dead Reckoning in terms of map-making accuracy in all scenarios but one, since SLAM did not work well in a rapidly changing environment. Although the main conclusion about the excellence of SLAM is not surprising, the presented test method is valuable to professionals working in this area of mobile robots, as it is highly practical, and provides solid and valuable results. The novelty of this study lies in its simplicity. The simple but novel test method for quantitatively comparing robot mapping algorithms using SLAM and Dead Reckoning and some applications using autonomous robots are being patented by the authors in U.S. Patent Application Nos. 13/400,726 and 13/584,862.
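
    The baseline in the comparison above is dead reckoning from wheel odometry. The sketch below shows the usual differential-drive pose integration for a skid-steer base such as the P3DX; the track width, step values, and midpoint-heading integration are illustrative assumptions, not parameters from the study.

      import math

      def dead_reckon(pose, d_left, d_right, track_width):
          # pose = (x, y, theta); d_left / d_right = wheel travel since the last update (m).
          x, y, theta = pose
          d_center = (d_left + d_right) / 2.0
          d_theta = (d_right - d_left) / track_width
          x += d_center * math.cos(theta + d_theta / 2.0)   # midpoint heading
          y += d_center * math.sin(theta + d_theta / 2.0)
          return x, y, theta + d_theta

      pose = (0.0, 0.0, 0.0)
      for step in [(0.10, 0.10), (0.10, 0.12), (0.10, 0.12)]:   # slight right-wheel surplus
          pose = dead_reckon(pose, *step, track_width=0.38)
      print(pose)   # accumulated drift like this is what SLAM corrects with sensing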

  6. Summary of TFTR (Tokamak Fusion Test Reactor) diagnostics, including JET (Joint European Torus) and JT-60

    SciTech Connect

    Hill, K.W.; Young, K.M.; Johnson, L.C.

    1990-05-01

    The diagnostic instrumentation on TFTR (Tokamak Fusion Test Reactor) and the specific properties of each diagnostic, i.e., number of channels, time resolution, wavelength range, etc., are summarized in tables, grouped according to the plasma parameter measured. For comparison, the equivalent diagnostic capabilities of JET (Joint European Torus) and the Japanese large tokamak, JT-60, as of late 1987 are also listed in the tables. Extensive references are given to publications on each instrument.

  7. Pilot's Guide to an Airline Career, Including Sample Pre-Employment Tests.

    ERIC Educational Resources Information Center

    Traylor, W.L.

    Occupational information for persons considering a career as an airline pilot includes a detailed description of the pilot's duties and material concerning preparation for occupational entry and determining the relative merits of available jobs. The book consists of four parts: Part I, The Job, provides an overview of a pilot's duties in his daily…

  8. Bees Algorithm for Construction of Multiple Test Forms in E-Testing

    ERIC Educational Resources Information Center

    Songmuang, Pokpong; Ueno, Maomi

    2011-01-01

    The purpose of this research is to automatically construct multiple equivalent test forms that have equivalent qualities indicated by test information functions based on item response theory. There has been a trade-off in previous studies between the computational costs and the equivalent qualities of test forms. To alleviate this problem, we…

  9. Field testing of a 3D automatic target recognition and pose estimation algorithm

    NASA Astrophysics Data System (ADS)

    Ruel, Stephane; English, Chad E.; Melo, Len; Berube, Andrew; Aikman, Doug; Deslauriers, Adam M.; Church, Philip M.; Maheux, Jean

    2004-09-01

    Neptec Design Group Ltd. has developed a 3D Automatic Target Recognition (ATR) and pose estimation technology demonstrator in partnership with the Canadian DND. The system prototype was deployed for field testing at Defence Research and Development Canada (DRDC)-Valcartier. This paper discusses the performance of the developed algorithm using 3D scans acquired with an imaging LIDAR. 3D models of civilian and military vehicles were built using scans acquired with a triangulation laser scanner. The models were then used to generate a knowledge base for the recognition algorithm. A commercial imaging LIDAR was used to acquire test scans of the target vehicles with varying range, pose and degree of occlusion. Recognition and pose estimation results are presented for at least 4 different poses of each vehicle at each test range. Results obtained with targets partially occluded by an artificial plane, vegetation and military camouflage netting are also presented. Finally, future operational considerations are discussed.

  10. An evaluation of the NASA Tech House, including live-in test results, volume 1

    NASA Technical Reports Server (NTRS)

    Abbott, I. H. A.; Hopping, K. A.; Hypes, W. D.

    1979-01-01

    The NASA Tech House was designed and constructed at the NASA Langley Research Center, Hampton, Virginia, to demonstrate and evaluate new technology potentially applicable for conservation of energy and resources and for improvements in safety and security in a single-family residence. All technology items, including solar-energy systems and a waste-water-reuse system, were evaluated under actual living conditions for a 1 year period with a family of four living in the house in their normal lifestyle. Results are presented which show overall savings in energy and resources compared with requirements for a defined similar conventional house under the same conditions. General operational experience and performance data are also included for all the various items and systems of technology incorporated into the house design.

  11. Battery algorithm verification and development using hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    He, Yongsheng; Liu, Wei; Koch, Brain J.

    Battery algorithms play a vital role in hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), extended-range electric vehicles (EREVs), and electric vehicles (EVs). The energy management of hybrid and electric propulsion systems needs to rely on accurate information on the state of the battery in order to determine the optimal electric drive without abusing the battery. In this study, a cell-level hardware-in-the-loop (HIL) system is used to verify and develop state of charge (SOC) and power capability predictions of embedded battery algorithms for various vehicle applications. Two different batteries were selected as representative examples to illustrate the battery algorithm verification and development procedure. One is a lithium-ion battery with a conventional metal oxide cathode, which is a power battery for HEV applications. The other is a lithium-ion battery with an iron phosphate (LiFePO 4) cathode, which is an energy battery for applications in PHEVs, EREVs, and EVs. The battery cell HIL testing provided valuable data and critical guidance to evaluate the accuracy of the developed battery algorithms, to accelerate battery algorithm future development and improvement, and to reduce hybrid/electric vehicle system development time and costs.
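
    As a point of reference for the kind of embedded algorithm exercised on such a bench, the sketch below shows the simplest possible state-of-charge estimator, plain coulomb counting. It is only an illustration, not the production algorithms verified in the study, which add model-based corrections (e.g., Kalman filtering) that are not reproduced here.

    # Minimal coulomb-counting SOC estimator; illustrative only, not the
    # embedded algorithms verified on the HIL bench described above.

    def coulomb_count(soc0, currents_a, dt_s, capacity_ah, efficiency=1.0):
        """soc0: initial SOC (0-1); currents_a: current samples, positive = discharge."""
        soc = soc0
        for i in currents_a:
            soc -= efficiency * i * dt_s / (capacity_ah * 3600.0)
            soc = min(max(soc, 0.0), 1.0)   # clamp to physical limits
        return soc

    if __name__ == "__main__":
        # 5 A discharge for one hour on a hypothetical 20 Ah cell starting at 90% SOC -> ~65%.
        print(coulomb_count(0.9, [5.0] * 3600, dt_s=1.0, capacity_ah=20.0))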

  12. Reader reaction: A note on the evaluation of group testing algorithms in the presence of misclassification.

    PubMed

    Malinovsky, Yaakov; Albert, Paul S; Roy, Anindya

    2016-03-01

    In the context of group testing screening, McMahan, Tebbs, and Bilder (2012, Biometrics 68, 287-296) proposed a two-stage procedure in a heterogenous population in the presence of misclassification. In earlier work published in Biometrics, Kim, Hudgens, Dreyfuss, Westreich, and Pilcher (2007, Biometrics 63, 1152-1162) also proposed group testing algorithms in a homogeneous population with misclassification. In both cases, the authors evaluated performance of the algorithms based on the expected number of tests per person, with the optimal design being defined by minimizing this quantity. The purpose of this article is to show that although the expected number of tests per person is an appropriate evaluation criterion for group testing when there is no misclassification, it may be problematic when there is misclassification. Specifically, a valid criterion needs to take into account the amount of correct classification and not just the number of tests. We propose a more suitable objective function that accounts for not only the expected number of tests, but also the expected number of correct classifications. We then show that using this objective function, which accounts for correct classification, is important for design when considering group testing under misclassification. We also present novel analytical results which characterize the optimal Dorfman (1943) design under misclassification.
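
    To make the trade-off concrete, the sketch below computes both quantities for a classical Dorfman two-stage design under misclassification: the expected number of tests per person and the expected number of correct classifications per person. It is a minimal illustration, not the authors' derivation, and it assumes independent test errors and a pooled-test sensitivity that does not depend on how many positives the pool contains.

    # Minimal sketch (not the authors' derivation): expected tests and expected
    # correct classifications per person for a Dorfman two-stage design with
    # misclassification, under independent errors and a pool sensitivity that
    # does not depend on the number of positives in the pool.

    def dorfman_metrics(k, p, se, sp):
        """k: group size, p: prevalence, se/sp: test sensitivity/specificity."""
        prob_pool_has_pos = 1.0 - (1.0 - p) ** k
        prob_pool_test_pos = se * prob_pool_has_pos + (1.0 - sp) * (1.0 - prob_pool_has_pos)

        # One pooled test shared by k people, plus an individual retest when the pool is positive.
        exp_tests_per_person = 1.0 / k + prob_pool_test_pos

        # A true positive is declared positive only if both the pool and the retest are positive.
        p_correct_given_pos = se * se

        # For a true negative, the pool is positive only through the other k-1 members or a false alarm.
        others_have_pos = 1.0 - (1.0 - p) ** (k - 1)
        pool_pos_given_neg = se * others_have_pos + (1.0 - sp) * (1.0 - others_have_pos)
        p_correct_given_neg = (1.0 - pool_pos_given_neg) + pool_pos_given_neg * sp

        exp_correct_per_person = p * p_correct_given_pos + (1.0 - p) * p_correct_given_neg
        return exp_tests_per_person, exp_correct_per_person

    if __name__ == "__main__":
        for k in (2, 3, 5, 8):
            t, c = dorfman_metrics(k, p=0.05, se=0.95, sp=0.98)
            print(f"k={k}: tests/person={t:.3f}, correct/person={c:.4f}")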

  13. Directionally solidified lamellar eutectic superalloys by edge-defined, film-fed growth. [including tensile tests

    NASA Technical Reports Server (NTRS)

    Hurley, G. F.

    1975-01-01

    A program was performed to scale up the edge-defined, film-fed growth (EFG) method for the gamma/gamma prime-beta eutectic alloy of the nominal composition Ni-19.7 Cb - 6 Cr-2.5 Al. Procedures and problem areas are described. Flat bars approximately 12 x 1.7 x 200 mm were grown, mostly at speeds of 38 mm/hr, and tensile tests on these bars at 25 and 1000 C showed lower strength than expected. The feasibility of growing hollow airfoils was also demonstrated by growing bars over 200 mm long with a teardrop shaped cross-section, having a major dimension of 12 mm and a maximum width of 5 mm.

  14. Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm*

    NASA Astrophysics Data System (ADS)

    Xiang, LI

    In order to analyze car crash tests in C-NCAP, this paper presents an improved algorithm based on the Apriori algorithm. The new algorithm is implemented with a vertical data layout, breadth-first searching, and intersection. It takes advantage of the efficiency of the vertical data layout and intersection, and prunes candidate frequent item sets as Apriori does. Finally, the new algorithm is applied in a simulation system for car crash test analysis. The results show that the discovered relations affect the C-NCAP test results and can provide a reference for automotive design.
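
    The sketch below illustrates the general idea of the vertical data layout and support counting by intersection: each item is mapped to the set of transaction IDs (TIDs) that contain it, and the support of a candidate itemset is the size of the intersection of its members' TID sets. It is only an illustration of the technique; the paper's specific pruning and its application to C-NCAP crash data are not reproduced, and the demo transactions are made up.

    # Illustrative sketch only: level-wise frequent-itemset mining with a
    # vertical data layout, where candidate supports come from TID-set
    # intersections rather than rescanning the transactions.

    from itertools import combinations

    def vertical_frequent_itemsets(transactions, min_support):
        # Build the vertical layout: item -> set of TIDs containing it.
        tidsets = {}
        for tid, items in enumerate(transactions):
            for item in items:
                tidsets.setdefault(item, set()).add(tid)

        # Level 1: frequent single items.
        current = {frozenset([i]): t for i, t in tidsets.items() if len(t) >= min_support}
        frequent = dict(current)

        # Breadth-first growth: join itemsets that differ by one item.
        while current:
            next_level = {}
            for (a, ta), (b, tb) in combinations(current.items(), 2):
                candidate = a | b
                if len(candidate) != len(a) + 1:
                    continue
                tids = ta & tb                     # support counting by intersection
                if len(tids) >= min_support and candidate not in next_level:
                    next_level[candidate] = tids
            frequent.update(next_level)
            current = next_level
        return {tuple(sorted(k)): len(v) for k, v in frequent.items()}

    if __name__ == "__main__":
        demo = [{"belt", "airbag", "pass"}, {"belt", "pass"}, {"airbag", "fail"}, {"belt", "airbag", "pass"}]
        print(vertical_frequent_itemsets(demo, min_support=2))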

  15. Quality assurance testing of an explosives trace analysis laboratory--further improvements to include peroxide explosives.

    PubMed

    Crowson, Andrew; Cawthorne, Richard

    2012-12-01

    The Forensic Explosives Laboratory (FEL) operates within the Defence Science and Technology Laboratory (DSTL), which is part of the UK Government Ministry of Defence (MOD). The FEL provides support and advice to the Home Office and UK police forces on matters relating to the criminal misuse of explosives. During 1989 the FEL established a weekly quality assurance testing regime in its explosives trace analysis laboratory. The purpose of the regime is to prevent the accumulation of explosives traces within the laboratory at levels that could, if other precautions failed, result in the contamination of samples and controls. Designated areas within the laboratory are swabbed using cotton wool swabs moistened with an equal-parts ethanol:water mixture. The swabs are then extracted, cleaned up and analysed using Gas Chromatography with Thermal Energy Analyser detectors or Liquid Chromatography with triple quadrupole Mass Spectrometry. This paper follows on from two previously published papers which described the regime and summarised results from approximately 14 years of tests. This paper presents results from the subsequent 7 years, setting them within the context of previous results. It also discusses further improvements made to the systems and procedures and the inclusion of quality assurance sampling for the peroxide explosives TATP and HMTD. Monitoring samples taken from surfaces within the trace laboratories and trace vehicle examination bay have, with few exceptions, revealed only low levels of contamination, predominantly of RDX. Analysis of the control swabs, processed alongside the monitoring swabs, has demonstrated that in this environment the risk of forensic sample contamination, assuming all the relevant anti-contamination procedures have been followed, is so small that it is considered to be negligible. The monitoring regime has also been valuable in assessing the process of continuous improvement, allowing sources of contamination transfer into the trace

  16. Selecting training and test images for optimized anomaly detection algorithms in hyperspectral imagery through robust parameter design

    NASA Astrophysics Data System (ADS)

    Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.

    2011-06-01

    There are numerous anomaly detection algorithms proposed for hyperspectral imagery. Robust parameter design (RPD) techniques have been applied to some of these algorithms in an attempt to choose robust settings capable of operating consistently across a large variety of image scenes. Typically, training and test sets of hyperspectral images are chosen randomly. Previous research developed a framework for optimizing anomaly detection in hyperspectral imagery (HSI) by considering specific image characteristics as noise variables within the context of RPD; these characteristics include the Fisher score, the ratio of target pixels, and the number of clusters. This paper describes a method for selecting hyperspectral image training and test subsets that yield consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but still provide improvements over random training and test subset assignments by maximizing the volume and average distance between image noise characteristics. Several different mathematical models representing the value of a training and test set, based on such measures as the D-optimal score and various distance norms, are tested in a simulation experiment.
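
    As a rough illustration of selecting subsets by spreading out the image noise characteristics, the sketch below greedily picks images that maximize the minimum distance to those already selected. This is only one loose interpretation of the distance-based measures mentioned above; the D-optimal scoring and the authors' exact formulation are not reproduced, and the characteristic values are synthetic.

    # Hedged illustration: greedy maximin selection of a training subset from
    # image "noise characteristic" vectors (e.g., Fisher score, target-pixel
    # ratio, cluster count). Not the authors' method; values are synthetic.

    import numpy as np

    def greedy_maximin_subset(features, subset_size, seed=0):
        """features: (n_images, n_characteristics) array; returns selected row indices."""
        rng = np.random.default_rng(seed)
        n = features.shape[0]
        # Standardize so no single characteristic dominates the distance.
        z = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-12)

        selected = [int(rng.integers(n))]
        while len(selected) < subset_size:
            # Distance of every candidate to its nearest already-selected image.
            d = np.min(np.linalg.norm(z[:, None, :] - z[None, selected, :], axis=2), axis=1)
            d[selected] = -np.inf                # never reselect
            selected.append(int(np.argmax(d)))   # pick the most "spread out" image
        return selected

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        fake_characteristics = rng.random((50, 3))   # 50 hypothetical images, 3 noise variables
        print(greedy_maximin_subset(fake_characteristics, subset_size=8))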

  17. Application of a Smart Parachute Release Algorithm to the CPAS Test Architecture

    NASA Technical Reports Server (NTRS)

    Bledsoe, Kristin

    2013-01-01

    One of the primary test vehicles for the Capsule Parachute Assembly System (CPAS) is the Parachute Test Vehicle (PTV), a capsule shaped structure similar to the Orion design but truncated to fit in the cargo area of a C-17 aircraft. The PTV has a full Orion-like parachute compartment and similar aerodynamics; however, because of the single point attachment of the CPAS parachutes and the lack of Orion-like Reaction Control System (RCS), the PTV has the potential to reach significant body rates. High body rates at the time of the Drogue release may cause the PTV to flip while the parachutes deploy, which may result in the severing of the Pilot or Main risers. In order to prevent high rates at the time of Drogue release, a "smart release" algorithm was implemented in the PTV avionics system. This algorithm, which was developed for the Orion Flight system, triggers the Drogue parachute release when the body rates are near a minimum. This paper discusses the development and testing of the smart release algorithm; its implementation in the PTV avionics and the pretest simulation; and the results of its use on two CPAS tests.
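
    The sketch below illustrates the general idea of a rate-based smart release: trigger when the total body-rate magnitude passes through a local minimum below a threshold, with a hard deadline so that release always occurs. It is a made-up illustration with placeholder thresholds, not the Orion or CPAS flight algorithm.

    # Illustrative sketch only (not the Orion/CPAS flight code): command drogue
    # release near a local minimum of body-rate magnitude, with a deadline.

    import math

    def smart_release(rate_samples, dt, rate_threshold=5.0, max_wait=10.0):
        """rate_samples: iterable of (p, q, r) body rates in deg/s; dt: sample period in s.
        Returns the time (s) at which release would be commanded."""
        prev_mag = None
        falling = False
        for i, (p, q, r) in enumerate(rate_samples):
            t = i * dt
            mag = math.sqrt(p * p + q * q + r * r)
            if t >= max_wait:
                return t                      # deadline reached: release regardless of rates
            if prev_mag is not None:
                if mag < prev_mag:
                    falling = True            # rates are decreasing
                elif falling and mag <= rate_threshold:
                    return t                  # local minimum below threshold: release now
            prev_mag = mag
        return len(rate_samples) * dt         # end-of-data fallback

    if __name__ == "__main__":
        import random
        random.seed(2)
        samples = [(20 * math.cos(0.5 * k * 0.1) + random.uniform(-1, 1), 2.0, 1.0) for k in range(200)]
        print("release at t =", smart_release(samples, dt=0.1), "s")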

  18. Results of a Saxitoxin Proficiency Test Including Characterization of Reference Material and Stability Studies

    PubMed Central

    Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Luginbühl, Werner; Kremp, Anke; Suikkanen, Sanna; Kankaanpää, Harri; Burrell, Stephen; Söderström, Martin; Vanninen, Paula

    2015-01-01

    A saxitoxin (STX) proficiency test (PT) was organized as part of the Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk (EQuATox) project. The aim of this PT was to provide an evaluation of existing methods and the European laboratories’ capabilities for the analysis of STX and some of its analogues in real samples. Homogenized mussel material and algal cell materials containing paralytic shellfish poisoning (PSP) toxins were produced as reference sample matrices. The reference material was characterized using various analytical methods. Acidified algal extract samples at two concentration levels were prepared from a bulk culture of PSP toxins producing dinoflagellate Alexandrium ostenfeldii. The homogeneity and stability of the prepared PT samples were studied and found to be fit-for-purpose. Thereafter, eight STX PT samples were sent to ten participating laboratories from eight countries. The PT offered the participating laboratories the possibility to assess their performance regarding the qualitative and quantitative detection of PSP toxins. Various techniques such as official Association of Official Analytical Chemists (AOAC) methods, immunoassays, and liquid chromatography-mass spectrometry were used for sample analyses. PMID:26602927

  19. Results of a Saxitoxin Proficiency Test Including Characterization of Reference Material and Stability Studies.

    PubMed

    Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Luginbühl, Werner; Kremp, Anke; Suikkanen, Sanna; Kankaanpää, Harri; Burrell, Stephen; Söderström, Martin; Vanninen, Paula

    2015-11-25

    A saxitoxin (STX) proficiency test (PT) was organized as part of the Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk (EQuATox) project. The aim of this PT was to provide an evaluation of existing methods and the European laboratories' capabilities for the analysis of STX and some of its analogues in real samples. Homogenized mussel material and algal cell materials containing paralytic shellfish poisoning (PSP) toxins were produced as reference sample matrices. The reference material was characterized using various analytical methods. Acidified algal extract samples at two concentration levels were prepared from a bulk culture of PSP toxins producing dinoflagellate Alexandrium ostenfeldii. The homogeneity and stability of the prepared PT samples were studied and found to be fit-for-purpose. Thereafter, eight STX PT samples were sent to ten participating laboratories from eight countries. The PT offered the participating laboratories the possibility to assess their performance regarding the qualitative and quantitative detection of PSP toxins. Various techniques such as official Association of Official Analytical Chemists (AOAC) methods, immunoassays, and liquid chromatography-mass spectrometry were used for sample analyses.

  20. The QCRad Value Added Product: Surface Radiation Measurement Quality Control Testing, Including Climatology Configurable Limits

    SciTech Connect

    Long, CN; Shi, Y

    2006-09-01

    This document describes the QCRad methodology, which uses climatological analyses of the surface radiation measurements to define reasonable limits for testing the data for unusual data values. The main assumption is that the majority of the climatological data are “good” data, which for field sites operated with care such as those of the Atmospheric Radiation Measurement (ARM) Program is a reasonable assumption. Data that fall outside the normal range of occurrences are labeled either “indeterminate” (meaning that the measurements are possible, but rarely occurring, and thus the values cannot be identified as good) or “bad” depending on how far outside the normal range the particular data reside. The methodology not only sets fairly standard maximum and minimum value limits, but also compares what we have learned about the behavior of these instruments in the field to other value-added products (VAPs), such as the Diffuse infrared (IR) Loss Correction VAP (Younkin and Long 2004) and the Best Estimate Flux VAP (Shi and Long 2002).
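
    The flagging logic described above can be summarized in a few lines: a value inside the climatological range is labeled good, a value outside it but within a wider physically possible range is labeled indeterminate, and anything beyond that is labeled bad. The sketch below uses placeholder limits, not the actual QCRad configuration.

    # Minimal sketch of the good/indeterminate/bad flagging idea; the numeric
    # limits in the demo are placeholders, not QCRad's configured values.

    def qc_flag(value, climo_min, climo_max, hard_min, hard_max):
        if value < hard_min or value > hard_max:
            return "bad"
        if value < climo_min or value > climo_max:
            return "indeterminate"
        return "good"

    if __name__ == "__main__":
        # Hypothetical downwelling shortwave limits in W/m^2 for one site and month.
        for sw in (-10.0, 5.0, 950.0, 1500.0):
            print(sw, qc_flag(sw, climo_min=0.0, climo_max=1100.0, hard_min=-4.0, hard_max=1360.0))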

  1. Hypercoagulable states: an algorithmic approach to laboratory testing and update on monitoring of direct oral anticoagulants

    PubMed Central

    Nakashima, Megan O.

    2014-01-01

    Hypercoagulability can result from a variety of inherited and, more commonly, acquired conditions. Testing for the underlying cause of thrombosis in a patient is complicated both by the number and variety of clinical conditions that can cause hypercoagulability as well as the many potential assay interferences. Using an algorithmic approach to hypercoagulability testing provides the ability to tailor assay selection to the clinical scenario. It also reduces the number of unnecessary tests performed, saving cost and time, and preventing potential false results. New oral anticoagulants are powerful tools for managing hypercoagulable patients; however, their use introduces new challenges in terms of test interpretation and therapeutic monitoring. The coagulation laboratory plays an essential role in testing for and treating hypercoagulable states. The input of laboratory professionals is necessary to guide appropriate testing and synthesize interpretation of results. PMID:25025009

  2. Hypercoagulable states: an algorithmic approach to laboratory testing and update on monitoring of direct oral anticoagulants.

    PubMed

    Nakashima, Megan O; Rogers, Heesun J

    2014-06-01

    Hypercoagulability can result from a variety of inherited and, more commonly, acquired conditions. Testing for the underlying cause of thrombosis in a patient is complicated both by the number and variety of clinical conditions that can cause hypercoagulability as well as the many potential assay interferences. Using an algorithmic approach to hypercoagulability testing provides the ability to tailor assay selection to the clinical scenario. It also reduces the number of unnecessary tests performed, saving cost and time, and preventing potential false results. New oral anticoagulants are powerful tools for managing hypercoagulable patients; however, their use introduces new challenges in terms of test interpretation and therapeutic monitoring. The coagulation laboratory plays an essential role in testing for and treating hypercoagulable states. The input of laboratory professionals is necessary to guide appropriate testing and synthesize interpretation of results.

  3. Using modified fruit fly optimisation algorithm to perform the function test and case studies

    NASA Astrophysics Data System (ADS)

    Pan, Wen-Tsao

    2013-06-01

    Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes based on Darwinian theory, and it is a common research method. The main contribution of this paper was to strengthen the search for the optimal solution in the fruit fly optimization algorithm (FOA), in order to avoid becoming trapped in local extremum solutions. Evolutionary computation has grown to include concepts from animal foraging behaviour and group behaviour. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimization algorithm (MFOA). It further investigated the algorithms' ability to compute the extreme values of three mathematical functions, their execution speed, and the forecast ability of the forecasting model built using the optimised general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimization and the MFOA with regard to the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performed better than particle swarm optimization with regard to algorithm execution speed, and the forecast ability of the forecasting model built using the MFOA's GRNN parameters was better than that of the other three forecasting models.
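
    For readers unfamiliar with the FOA, the sketch below shows the basic loop of the original algorithm on a toy one-dimensional objective: flies search randomly around the swarm location, the smell concentration judgement value is taken as the reciprocal of the distance to the origin, and the swarm flies toward the fly with the best fitness. The modifications introduced in the MFOA are not reproduced here, and the objective function is made up.

    # Sketch of the basic fruit fly optimization loop (FOA) on a toy objective;
    # illustration only, not the modified MFOA studied in the paper.

    import math
    import random

    def foa_minimize(fitness, n_flies=30, n_iter=200, seed=0):
        random.seed(seed)
        # Swarm starts at a random location in the plane.
        x_axis, y_axis = random.uniform(-1, 1), random.uniform(-1, 1)
        best_value, best_s = float("inf"), None

        for _ in range(n_iter):
            candidates = []
            for _ in range(n_flies):
                # Each fly searches randomly around the swarm location (osphresis phase).
                xi = x_axis + random.uniform(-1, 1)
                yi = y_axis + random.uniform(-1, 1)
                dist = math.hypot(xi, yi) or 1e-12
                s = 1.0 / dist                      # smell concentration judgement value
                candidates.append((fitness(s), xi, yi, s))
            value, xi, yi, s = min(candidates)      # vision phase: fly with the best smell
            if value < best_value:
                best_value, best_s = value, s
                x_axis, y_axis = xi, yi             # swarm flies toward the best fly
        return best_s, best_value

    if __name__ == "__main__":
        # Toy objective with a known minimum at s = 2.
        print(foa_minimize(lambda s: (s - 2.0) ** 2))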

  4. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large, complex systems engineering challenge, addressed in part by focusing on how the specific subsystems handle off-nominal missions and fault tolerance. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA has also formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform external to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S

  5. Activity of faropenem tested against Neisseria gonorrhoeae isolates including fluoroquinolone-resistant strains.

    PubMed

    Jones, Ronald N; Critchley, Ian A; Whittington, William L H; Janjic, Nebojsa; Pottumarthy, Sudha

    2005-12-01

    We evaluated the anti-gonococcal potency of faropenem along with 7 comparator reference antimicrobials against a preselected collection of clinical isolates. The 265 isolates comprised 2 subsets: 1) 76 well-characterized resistant phenotypes of gonococcal strains (53 quinolone-resistant strains--31 with documented quinolone resistance-determining region changes from Japan, 15 strains resistant to penicillin and tetracycline, and 8 strains with intermediate susceptibility to penicillin) and 2) 189 recent isolates from clinical specimens in 2004 from 6 states across the United States where quinolone resistance is prevalent. The activity of faropenem was adversely affected by l-cysteine hydrochloride in IsoVitaleX (4-fold increase in the 50% minimal inhibitory concentration [MIC50]; 0.06 versus 0.25 microg/mL). The rank order of potency of the antimicrobials for the entire collection was ceftriaxone (MIC90, 0.06 microg/mL) > faropenem (0.25 microg/mL) > azithromycin (0.5 microg/mL) > cefuroxime (1 microg/mL) > tetracycline (2 microg/mL) > penicillin = ciprofloxacin = levofloxacin (4 microg/mL). Using MIC90 for comparison, faropenem was 4-fold more potent than cefuroxime (0.25 versus 1 microg/mL), but was 4-fold less active than ceftriaxone (0.25 versus 0.06 microg/mL). Although the activity of faropenem was not affected by either penicillinase production (MIC90, 0.12 microg/mL, penicillinase-positive) or increasing ciprofloxacin MIC (0.25 microg/mL, ciprofloxacin-resistant), increasing penicillin MIC was associated with an increase in MIC90 values (0.016 microg/mL for penicillin-susceptible to 0.25 microg/mL for penicillin-resistant strains). Among the recent (2004) clinical gonococcal isolates tested, reduced susceptibility to penicillins, tetracycline, and fluoroquinolones was high (28.0-94.2%). Geographic distribution of the endemic resistance rates of gonococci varied considerably, with 16.7-66.7% of the gonococcal isolates being ciprofloxacin-resistant in Oregon
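
    The MIC50 and MIC90 values quoted above are simply the 50th and 90th percentiles of the ranked MIC distribution, conventionally reported at the next two-fold dilution actually tested; the sketch below shows the arithmetic on a made-up set of MICs.

    # How MIC50/MIC90 summary values are obtained from isolate MICs. The MIC
    # list and dilution series below are illustrative, not the study's data.

    import math

    def mic_percentile(mics, pct, dilutions):
        ranked = sorted(mics)
        # Smallest rank covering at least pct percent of the isolates.
        idx = max(0, math.ceil(pct / 100.0 * len(ranked)) - 1)
        value = ranked[idx]
        # Report as the next two-fold dilution actually tested.
        return min(d for d in dilutions if d >= value)

    if __name__ == "__main__":
        dilutions = [0.016, 0.03, 0.06, 0.12, 0.25, 0.5, 1, 2, 4]
        fake_mics = [0.03, 0.06, 0.06, 0.12, 0.12, 0.12, 0.25, 0.25, 0.25, 0.5]
        print("MIC50 =", mic_percentile(fake_mics, 50, dilutions),
              "MIC90 =", mic_percentile(fake_mics, 90, dilutions))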

  6. 78 FR 20345 - Modification and Expansion of CBP Centers of Excellence and Expertise Test To Include Six...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-04

    ... Protection's (CBP's) plan to modify and expand its test for the Centers of Excellence and Expertise (CEEs... expands the regulations that will be included in the test for the six new CEEs as well as the four CEEs... Automotive & Aerospace CEE; and the Petroleum, Natural Gas & Minerals CEE. To the extent not modified by...

  7. Scoring Divergent Thinking Tests by Computer With a Semantics-Based Algorithm.

    PubMed

    Beketayev, Kenes; Runco, Mark A

    2016-05-01

    Divergent thinking (DT) tests are useful for the assessment of creative potentials. This article reports the semantics-based algorithmic (SBA) method for assessing DT. This algorithm is fully automated: Examinees receive DT questions on a computer or mobile device and their ideas are immediately compared with norms and semantic networks. This investigation compared the scores generated by the SBA method with the traditional methods of scoring DT (i.e., fluency, originality, and flexibility). Data were collected from 250 examinees using the "Many Uses Test" of DT. The most important finding involved the flexibility scores from both scoring methods. This was critical because semantic networks are based on conceptual structures, and thus a high SBA score should be highly correlated with the traditional flexibility score from DT tests. Results confirmed this correlation (r = .74). This supports the use of algorithmic scoring of DT. The nearly immediate computation time required by the SBA method may make it the method of choice, especially for moderate- and large-scale DT assessment investigations. Correlations between SBA scores and GPA were not significant, providing evidence of the discriminant and construct validity of SBA scores. Limitations of the present study and directions for future research are offered. PMID:27298632

  8. Classification of audiograms by sequential testing: reliability and validity of an automated behavioral hearing screening algorithm.

    PubMed

    Eilers, R E; Ozdamar, O; Steffens, M L

    1993-05-01

    In 1990, CAST (classification of audiograms by sequential testing) was proposed and developed as an automated, innovative approach to screening infant hearing using a modified Bayesian method. The method generated a four-frequency audiogram in a minimal number of test trials using VRA (visual reinforcement audiometry) techniques. Computer simulations were used to explore the properties (efficiency and accuracy) of the paradigm. The current work is designed to further test the utility of the paradigm with human infants and young children. Accordingly, infants and children between 6 months and 2 years of age were screened for hearing loss. The algorithm's efficacy was studied with respect to validity and reliability. Validity was evaluated by comparing CAST results with tympanometric data and outcomes of staircase-based testing. Test-retest reliability was also assessed. Results indicate that CAST is a valid, efficient, reliable, and potentially cost-effective screening method. PMID:8318708

  9. Developments of aerosol retrieval algorithm for Geostationary Environmental Monitoring Spectrometer (GEMS) and the retrieval accuracy test

    NASA Astrophysics Data System (ADS)

    KIM, M.; Kim, J.; Jeong, U.; Ahn, C.; Bhartia, P. K.; Torres, O.

    2013-12-01

    A scanning UV-Visible spectrometer, the GEMS (Geostationary Environment Monitoring Spectrometer) onboard the GEO-KOMPSAT2B (Geostationary Korea Multi-Purpose Satellite), is planned to be launched into geostationary orbit in 2018. The GEMS employs hyper-spectral imaging with 0.6 nm resolution to observe solar backscatter radiation in the UV and visible range. In the UV range, the low surface contribution to the backscattered radiation and the strong interaction between aerosol absorption and molecular scattering are advantageous for retrieving aerosol optical properties such as aerosol optical depth (AOD) and single scattering albedo (SSA). Taking advantage of this, the OMI UV aerosol algorithm has provided information on absorbing aerosols (Torres et al., 2007; Ahn et al., 2008). This study presents a UV-VIS algorithm to retrieve AOD and SSA from GEMS. The algorithm is based on the general inversion method, which uses a pre-calculated look-up table with assumed aerosol properties and measurement conditions. To assess the retrieval accuracy, the error of the look-up table method caused by interpolation of the pre-calculated radiances is estimated using a reference dataset, and the uncertainties associated with aerosol type and height are evaluated. The GEMS aerosol algorithm is also tested with measured normalized radiances from OMI, a provisional data set for GEMS measurements, and the results are compared with values from AERONET measurements over Asia. Additionally, a method for simultaneous retrieval of AOD and aerosol height is discussed.
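
    The sketch below shows the bare bones of a look-up-table inversion: radiances are pre-calculated on a grid of AOD values for one assumed aerosol model and observation geometry, and the retrieval interpolates the measured radiance back onto that grid. It is purely illustrative; the GEMS look-up table has many more dimensions, and the numbers below are hypothetical.

    # Generic look-up-table (LUT) inversion sketch, not the GEMS production
    # algorithm; the LUT nodes below are hypothetical.

    import numpy as np

    def retrieve_aod(measured_radiance, lut_aod, lut_radiance):
        """lut_aod: 1-D increasing grid of AOD nodes; lut_radiance: radiance at each node."""
        if not np.all(np.diff(lut_radiance) > 0):
            raise ValueError("sketch assumes radiance increases monotonically with AOD")
        # Invert the monotonic LUT by interpolating AOD as a function of radiance.
        return float(np.interp(measured_radiance, lut_radiance, lut_aod))

    if __name__ == "__main__":
        aod_nodes = np.array([0.0, 0.2, 0.5, 1.0, 2.0])
        radiance_nodes = np.array([0.05, 0.08, 0.12, 0.18, 0.27])   # hypothetical normalized radiances
        print(retrieve_aod(0.10, aod_nodes, radiance_nodes))        # ~0.35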

  10. Improving the quantitative testing of fast aspherics surfaces with null screen using Dijkstra algorithm

    NASA Astrophysics Data System (ADS)

    Moreno Oliva, Víctor Iván; Castañeda Mendoza, Álvaro; Campos García, Manuel; Díaz Uribe, Rufino

    2011-09-01

    The null screen is a geometric method for testing fast aspherical surfaces: it measures the local slopes of the surface, and the shape of the surface is then obtained by numerical integration. The usual technique for the numerical evaluation of the surface is the trapezoidal rule, whose truncation error is well known to increase with the second power of the spacing between spots along the integration path. These paths are constructed by following spots reflected on the surface, starting from an initially selected spot. To reduce these numerical errors, in this work we propose the use of the Dijkstra algorithm. This algorithm can find the shortest path from one spot (or vertex) to another in a weighted connected graph. Using a modification of the algorithm, it is possible to find the minimal path from one selected spot to all other spots. This automates and simplifies the integration process in testing with null screens. The efficiency of the proposed approach is shown by evaluating a surface previously measured with the traditional process.
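
    For reference, the sketch below is the standard single-source Dijkstra step used in this kind of approach: from one selected spot, it returns the minimal-weight path to every other spot in a weighted graph whose edges connect neighbouring reflected spots (the weights could be, for example, the spot spacings). The toy graph is made up for illustration.

    # Standard single-source Dijkstra; the toy spot graph is made up.

    import heapq

    def dijkstra(graph, source):
        """graph: {node: [(neighbor, weight), ...]}; returns (distance, predecessor) maps."""
        dist = {node: float("inf") for node in graph}
        prev = {node: None for node in graph}
        dist[source] = 0.0
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue                      # stale heap entry
            for v, w in graph[u]:
                nd = d + w
                if nd < dist[v]:              # relaxation
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        return dist, prev

    if __name__ == "__main__":
        spots = {
            "A": [("B", 1.0), ("C", 2.5)],
            "B": [("A", 1.0), ("C", 1.2), ("D", 3.0)],
            "C": [("A", 2.5), ("B", 1.2), ("D", 1.1)],
            "D": [("B", 3.0), ("C", 1.1)],
        }
        dist, prev = dijkstra(spots, "A")
        print(dist)   # minimal integration-path lengths from spot A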

  11. Scoring Divergent Thinking Tests by Computer With a Semantics-Based Algorithm

    PubMed Central

    Beketayev, Kenes; Runco, Mark A.

    2016-01-01

    Divergent thinking (DT) tests are useful for the assessment of creative potentials. This article reports the semantics-based algorithmic (SBA) method for assessing DT. This algorithm is fully automated: Examinees receive DT questions on a computer or mobile device and their ideas are immediately compared with norms and semantic networks. This investigation compared the scores generated by the SBA method with the traditional methods of scoring DT (i.e., fluency, originality, and flexibility). Data were collected from 250 examinees using the “Many Uses Test” of DT. The most important finding involved the flexibility scores from both scoring methods. This was critical because semantic networks are based on conceptual structures, and thus a high SBA score should be highly correlated with the traditional flexibility score from DT tests. Results confirmed this correlation (r = .74). This supports the use of algorithmic scoring of DT. The nearly immediate computation time required by the SBA method may make it the method of choice, especially for moderate- and large-scale DT assessment investigations. Correlations between SBA scores and GPA were not significant, providing evidence of the discriminant and construct validity of SBA scores. Limitations of the present study and directions for future research are offered. PMID:27298632

  12. A New Lidar Data Processing Algorithm Including Full Uncertainty Budget and Standardized Vertical Resolution for use Within the NDACC and GRUAN Networks

    NASA Astrophysics Data System (ADS)

    Leblanc, T.; Haefele, A.; Sica, R. J.; van Gijsel, A.

    2014-12-01

    A new lidar data processing algorithm for the retrieval of ozone, temperature and water vapor has been developed for centralized use within the Network for the Detection of Atmospheric Composition Change (NDACC) and the GCOS Reference Upper Air Network (GRUAN). The program is written with the objective that raw data from a large number of lidar instruments can be analyzed consistently. The uncertainty budget includes 13 sources of uncertainty that are explicitly propagated taking into account vertical and inter-channel dependencies. Several standardized definitions of vertical resolution can be used, leading to maximum flexibility and to the production of tropospheric ozone, stratospheric ozone, middle atmospheric temperature, and tropospheric water vapor profiles optimized for multiple user needs such as long-term monitoring, process studies, and model and satellite validation. A review of the program's functionalities as well as the first retrieved products will be presented.
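
    As a toy illustration of an uncertainty budget, the sketch below combines independent uncertainty components in quadrature at each altitude bin. The actual algorithm propagates 13 sources while accounting for vertical and inter-channel dependencies, which a simple quadrature sum deliberately ignores; the component names and values below are made up.

    # Toy uncertainty budget: quadrature sum of independent components per bin.
    # The real algorithm handles correlated terms, which this ignores.

    import numpy as np

    def combine_independent(components):
        """components: dict name -> 1-D array of 1-sigma uncertainties per altitude bin."""
        stacked = np.vstack(list(components.values()))
        return np.sqrt(np.sum(stacked ** 2, axis=0))

    if __name__ == "__main__":
        bins = 5
        budget = {
            "detection_noise": np.array([0.5, 0.8, 1.2, 2.0, 3.5]),
            "saturation_correction": np.full(bins, 0.3),
            "rayleigh_cross_section": np.full(bins, 0.4),
        }
        print(combine_independent(budget))   # total 1-sigma uncertainty per bin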

  13. The Research on Web-Based Testing Environment Using Simulated Annealing Algorithm

    PubMed Central

    2014-01-01

    Computerized evaluation is now one of the most important methods for diagnosing learning; with the application of artificial intelligence techniques in the field of evaluation, computerized adaptive testing has gradually become one of the most important evaluation methods. In this kind of test, the computer dynamically updates the learner's ability level and selects tailored items from the item pool. Meeting the needs of the test requires an implementation with relatively high efficiency. To solve this problem, we propose a novel web-based testing environment based on the simulated annealing algorithm. In the development of the system, through a series of experiments, we compared the efficiency and efficacy of the simulated annealing method with those of other methods. The experimental results show that this method ensures that nearly optimal items are chosen from the item bank for learners, meets a variety of assessment needs, is reliable, and judges learners' ability validly. In addition, using the simulated annealing algorithm to manage the computational complexity of the system greatly improves the efficiency of item selection and yields near-optimal solutions. PMID:24959600
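
    The sketch below shows, in hedged form, what a simulated-annealing item selector can look like: it picks a fixed-size set of items whose total Fisher information at a target ability level is close to a desired value, swapping one item at a time and accepting worse solutions with a temperature-dependent probability. The 2PL information function, item parameters, and cooling schedule are illustrative choices, not those of the cited system.

    # Hedged sketch of simulated annealing for test-item selection; parameters
    # and the cooling schedule are illustrative, not the cited system's.

    import math
    import random

    def item_information(a, b, theta):
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))   # 2PL response probability
        return a * a * p * (1.0 - p)

    def anneal_select(items, n_select, theta, target_info, n_steps=5000, t0=1.0, seed=0):
        random.seed(seed)
        current = random.sample(range(len(items)), n_select)

        def cost(sel):
            info = sum(item_information(*items[i], theta) for i in sel)
            return abs(info - target_info)

        best, best_cost = list(current), cost(current)
        for step in range(n_steps):
            temp = t0 * (1.0 - step / n_steps) + 1e-6
            # Neighbour: swap one selected item for one unselected item.
            candidate = list(current)
            candidate[random.randrange(n_select)] = random.choice(
                [i for i in range(len(items)) if i not in current])
            delta = cost(candidate) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                current = candidate
                if cost(current) < best_cost:
                    best, best_cost = list(current), cost(current)
        return best, best_cost

    if __name__ == "__main__":
        random.seed(1)
        pool = [(random.uniform(0.5, 2.0), random.uniform(-2, 2)) for _ in range(200)]  # (a, b)
        sel, err = anneal_select(pool, n_select=20, theta=0.0, target_info=8.0)
        print(sorted(sel), round(err, 4))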

  14. Development and Implementation of a Hardware In-the-Loop Test Bed for Unmanned Aerial Vehicle Control Algorithms

    NASA Technical Reports Server (NTRS)

    Nyangweso, Emmanuel; Bole, Brian

    2014-01-01

    Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.

  15. Development of region processing algorithm for HSTAMIDS: status and field test results

    NASA Astrophysics Data System (ADS)

    Ngan, Peter; Burke, Sean; Cresci, Roger; Wilson, Joseph N.; Gader, Paul; Ho, K. C.; Bartosz, Elizabeth; Duvoisin, Herbert

    2007-04-01

    The Region Processing Algorithm (RPA) has been developed by the Office of the Army Humanitarian Demining Research and Development (HD R&D) Program as part of improvements for the AN/PSS-14. The effort was a collaboration between the HD R&D Program, L-3 Communication CyTerra Corporation, University of Florida, Duke University and University of Missouri. RPA has been integrated into and implemented in a real-time AN/PSS-14. The subject unit was used to collect data and tested for its performance at three Army test sites within the United States of America. This paper describes the status of the technology and its recent test results.

  16. Brief Communication: A new testing field for debris flow warning systems and algorithms

    NASA Astrophysics Data System (ADS)

    Arattano, M.; Coviello, V.; Cavalli, M.; Comiti, F.; Macconi, P.; Marchi, L.; Theule, J.; Crema, S.

    2015-03-01

    Early warning systems (EWSs) are among the measures adopted for the mitigation of debris flow hazards. EWSs often employ algorithms that require careful and lengthy testing to ensure their effectiveness. A permanent installation has therefore been equipped in the Gadria basin (Eastern Italian Alps) for the systematic testing of event-EWSs. The installation is also conceived to produce didactic videos and host informative visits. Public involvement and education are in fact an essential step in any hazard mitigation activity and should be envisaged when planning any research activity. The occurrence of a debris flow in the Gadria creek in the summer of 2014 allowed a first test of the installation and the recording of an informative video on EWSs.

  17. GUEST EDITORS' INTRODUCTION: Testing inversion algorithms against experimental data: inhomogeneous targets

    NASA Astrophysics Data System (ADS)

    Belkebir, Kamal; Saillard, Marc

    2005-12-01

    This special section deals with the reconstruction of scattering objects from experimental data. A few years ago, inspired by the Ipswich database [1-4], we started to build an experimental database in order to validate and test inversion algorithms against experimental data. In the special section entitled 'Testing inversion algorithms against experimental data' [5], preliminary results were reported through 11 contributions from several research teams. (The experimental data are free for scientific use and can be downloaded from the web site.) The success of this previous section has encouraged us to go further and to design new challenges for the inverse scattering community. Taking into account the remarks formulated by several colleagues, the new data sets deal with inhomogeneous cylindrical targets and transverse electric (TE) polarized incident fields have also been used. Among the four inhomogeneous targets, three are purely dielectric, while the last one is a 'hybrid' target mixing dielectric and metallic cylinders. Data have been collected in the anechoic chamber of the Centre Commun de Ressources Micro-ondes in Marseille. The experimental setup as well as the layout of the files containing the measurements are presented in the contribution by J-M Geffrin, P Sabouroux and C Eyraud. The antennas did not change from the ones used previously [5], namely wide-band horn antennas. However, improvements have been achieved by refining the mechanical positioning devices. In order to enlarge the scope of applications, both TE and transverse magnetic (TM) polarizations have been carried out for all targets. Special care has been taken not to move the target under test when switching from TE to TM measurements, ensuring that TE and TM data are available for the same configuration. All data correspond to electric field measurements. In TE polarization the measured component is orthogonal to the axis of invariance. Contributions A Abubakar, P M van den Berg and T M

  18. Real-time test of MOCS algorithm during Superflux 1980. [ocean color algorithm for remotely detecting suspended solids

    NASA Technical Reports Server (NTRS)

    Grew, G. W.

    1981-01-01

    A remote sensing experiment was conducted in which success depended upon the real-time use of an algorithm, generated from MOCS (multichannel ocean color sensor) data onboard the NASA P-3 aircraft, to direct the NOAA ship Kelez to oceanic stations where vitally needed sea truth could be collected. Remote data sets collected on two consecutive days of the mission were consistent with the sea truth for low concentrations of chlorophyll a. Two oceanic regions of special interest were located. The algorithm and the collected data are described.

  19. LOTOS code for local earthquake tomographic inversion: benchmarks for testing tomographic algorithms

    NASA Astrophysics Data System (ADS)

    Koulakov, I. Yu.

    2009-04-01

    We present the LOTOS-07 code for performing local earthquake tomographic (LET) inversion, which is freely available at www.ivan-art.com/science/LOTOS_07. The initial data for the code are the arrival times from local seismicity and the coordinates of the stations. It does not require any information about the sources. The calculations start from absolute location of the sources and estimation of an optimal 1D velocity model. Then the sources are relocated simultaneously with the 3D velocity distribution during iterative coupled tomographic inversions. The code allows results to be compared based on node or cell parameterizations. Both Vp-Vs and Vp - Vp/Vs inversion schemes can be performed by the LOTOS code. The working ability of the LOTOS code is illustrated with different real and synthetic datasets. Some of the tests are used to disprove existing stereotypes of LET schemes, such as the use of trade-off curves for evaluating damping parameters and the GAP criterion for selecting events. We also present a series of synthetic datasets with unknown sources and velocity models (www.ivan-art.com/science/benchmark) that can be used as blind benchmarks for testing different tomographic algorithms. We encourage other users of tomography algorithms to join the program of creating benchmarks that can be used to check existing codes. The program codes and testing datasets will be freely distributed during the poster presentation.

  20. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    SciTech Connect

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea; Koehler, Katrina Elizabeth; Henzl, Vladimir; Henzlova, Daniela; Parker, Robert Francis; Croft, Stephen

    2015-12-01

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed in order to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.

  1. Hypersensitivity reactions to metallic implants - diagnostic algorithm and suggested patch test series for clinical use.

    PubMed

    Schalock, Peter C; Menné, Torkil; Johansen, Jeanne D; Taylor, James S; Maibach, Howard I; Lidén, Carola; Bruze, Magnus; Thyssen, Jacob P

    2012-01-01

    Cutaneous and systemic hypersensitivity reactions to implanted metals are challenging to evaluate and treat. Although they are uncommon, they do exist, and require appropriate and complete evaluation. This review summarizes the evidence regarding evaluation tools, especially patch and lymphocyte transformation tests, for hypersensitivity reactions to implanted metal devices. Patch test evaluation is the gold standard for metal hypersensitivity, although the results may be subjective. Regarding pre-implant testing, those patients with a reported history of metal dermatitis should be evaluated by patch testing. Those without a history of dermatitis should not be tested unless considerable concern exists. Regarding post-implant testing, a subset of patients with metal hypersensitivity may develop cutaneous or systemic reactions to implanted metals following implant. For symptomatic patients, a diagnostic algorithm to guide the selection of screening allergen series for patch testing is provided. At a minimum, an extended baseline screening series and metal screening is necessary. Static and dynamic orthopaedic implants, intravascular stent devices, implanted defibrillators and dental and gynaecological devices are considered. Basic management suggestions are provided. Our goal is to provide a comprehensive reference for use by those evaluating suspected cutaneous and systemic metal hypersensitivity reactions.

  2. Performance of rapid tests and algorithms for HIV screening in Abidjan, Ivory Coast.

    PubMed

    Loukou, Y G; Cabran, M A; Yessé, Zinzendorf Nanga; Adouko, B M O; Lathro, S J; Agbessi-Kouassi, K B T

    2014-01-01

    Seven rapid diagnostic tests (RDTs) for HIV were evaluated using a panel of serum samples collected from patients in Abidjan (HIV-1 = 203, HIV-2 = 25, HIV-dual = 25, HIV = 305). Kit performance was assessed against the reference technique (enzyme-linked immunosorbent assay). The following RDTs showed a sensitivity of 100% and a specificity higher than 99%: Determine, Oraquick, SD Bioline, BCP, and Stat-Pak. These kits were used to establish infection screening strategies. Combining 2 or 3 of these tests in series or parallel algorithms showed that the series combinations with 2 tests (Oraquick and Bioline) and with 3 tests (Determine, BCP, and Stat-Pak) gave the best performance (sensitivity, specificity, positive predictive value, and negative predictive value of 100%). However, the combination with 2 tests appeared to be more onerous than the combination with 3 tests. The combination of the Determine, BCP, and Stat-Pak tests, with the third serving as a tiebreaker, could be an alternative for HIV/AIDS serological screening in Abidjan.
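
    Assuming conditional independence of the two tests (which real assays may violate), the combined sensitivity and specificity of serial and parallel algorithms follow from simple products, as sketched below; predictive values then follow from the prevalence. The cited study measured its combinations empirically rather than computing them this way, and the kit figures in the demo are placeholders.

    # Combined accuracy of two tests in series or in parallel, assuming
    # conditional independence; the input figures are placeholders.

    def serial(se1, sp1, se2, sp2):
        # Classified positive only if both tests are positive.
        return se1 * se2, 1.0 - (1.0 - sp1) * (1.0 - sp2)

    def parallel(se1, sp1, se2, sp2):
        # Classified positive if either test is positive.
        return 1.0 - (1.0 - se1) * (1.0 - se2), sp1 * sp2

    def predictive_values(se, sp, prevalence):
        tp, fn = prevalence * se, prevalence * (1.0 - se)
        tn, fp = (1.0 - prevalence) * sp, (1.0 - prevalence) * (1.0 - sp)
        return tp / (tp + fp), tn / (tn + fn)   # PPV, NPV

    if __name__ == "__main__":
        se, sp = serial(1.00, 0.992, 1.00, 0.995)       # hypothetical kit performances
        print("serial:", se, sp, predictive_values(se, sp, prevalence=0.03))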

  3. Preliminary test results of a flight management algorithm for fuel conservative descents in a time based metered traffic environment. [flight tests of an algorithm to minimize fuel consumption of aircraft based on flight time

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1979-01-01

    A flight management algorithm designed to improve the accuracy of delivering the airplane fuel-efficiently to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B-737 airplane to make an idle-thrust, clean-configured descent and arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with consideration given to gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms and the results of the flight tests are discussed.
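
    A back-of-the-envelope version of the 4-D planning idea is sketched below: for an idle descent at a constant rate and airspeed, estimate the time and ground distance needed to reach the metering-fix altitude, including a constant wind. The real algorithm uses airplane performance approximations and weight, pressure, and temperature corrections that this toy calculation ignores, and all numbers in the demo are hypothetical.

    # Toy descent-planning arithmetic; not the flight-tested algorithm.

    def descent_plan(alt_cruise_ft, alt_fix_ft, descent_rate_fpm, tas_kt, wind_kt):
        """wind_kt: positive = tailwind. Returns (minutes, ground_nm) to the fix."""
        minutes = (alt_cruise_ft - alt_fix_ft) / descent_rate_fpm
        ground_speed_kt = tas_kt + wind_kt
        ground_nm = ground_speed_kt * minutes / 60.0
        return minutes, ground_nm

    if __name__ == "__main__":
        t, d = descent_plan(35000.0, 10000.0, descent_rate_fpm=2500.0, tas_kt=350.0, wind_kt=-30.0)
        print(f"start descent {d:.1f} nm before the fix, about {t:.1f} min out")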

  4. Application of the HWVP measurement error model and feed test algorithms to pilot scale feed testing

    SciTech Connect

    Adams, T.L.

    1996-03-01

    The purpose of the feed preparation subsystem in the Hanford Waste Vitrification Plant (HWVP) is to provide for control of the properties of the slurry that is sent to the melter. The slurry properties are adjusted so that two classes of constraints are satisfied. Processability constraints guarantee that the process conditions required by the melter can be obtained; for example, there are processability constraints associated with electrical conductivity and viscosity. Acceptability constraints guarantee that the processed glass can be safely stored in a repository; an example of an acceptability constraint is the durability of the product glass. The primary control focus for satisfying both processability and acceptability constraints is the composition of the slurry. The primary mechanism for adjusting the composition of the slurry is mixing the waste slurry with frit of known composition. Spent frit from canister decontamination is also recycled by adding it to the melter feed. A number of processes in addition to mixing are used to condition the waste slurry prior to melting, including evaporation and the addition of formic acid. These processes also have an effect on the feed composition.

  5. A super-resolution algorithm for enhancement of flash lidar data: flight test results

    NASA Astrophysics Data System (ADS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2013-03-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: Moon, Mars, asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high-resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed using independent measurements for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information; namely, the 6-degree-of-freedom state vector of the instrument as a function of time was restored from the super-resolution data. The results of the comparisons show that the super-resolution method can construct high-quality DEMs and allows hazards such as rocks and craters to be identified in accordance with ALHAT requirements.

  6. A Super-Resolution Algorithm for Enhancement of FLASH LIDAR Data: Flight Test Results

    NASA Technical Reports Server (NTRS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2014-01-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: Moon, Mars, asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high-resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed using independent measurements for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information; namely, the 6-degree-of-freedom state vector of the instrument as a function of time was restored from the super-resolution data. The results of the comparisons show that the super-resolution method can construct high-quality DEMs and allows hazards such as rocks and craters to be identified in accordance with ALHAT requirements.
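
    As a highly simplified illustration of the multi-frame idea behind super-resolution, the sketch below registers several low-resolution frames with known integer sub-pixel offsets onto a finer grid and averages them (shift-and-add). The ALHAT 3D super-resolution algorithm is considerably more sophisticated and also recovers vehicle pose; none of its details are reproduced here.

    # Toy shift-and-add super-resolution; not the ALHAT algorithm.

    import numpy as np

    def shift_and_add(frames, offsets, scale):
        """frames: list of (h, w) arrays; offsets: list of (dy, dx) in fine-grid pixels;
        scale: upsampling factor. Returns the averaged high-resolution grid."""
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        cnt = np.zeros_like(acc)
        for frame, (dy, dx) in zip(frames, offsets):
            ys = np.arange(h) * scale + dy      # fine-grid rows hit by this frame
            xs = np.arange(w) * scale + dx
            acc[np.ix_(ys, xs)] += frame
            cnt[np.ix_(ys, xs)] += 1.0
        return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        truth = rng.random((8, 8))
        # Four synthetic low-res frames sampled at different sub-pixel offsets.
        frames = [truth[dy::2, dx::2] for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]]
        hi = shift_and_add(frames, offsets=[(0, 0), (0, 1), (1, 0), (1, 1)], scale=2)
        print(np.allclose(hi, truth))   # exact recovery in this idealized, noise-free case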

  7. Cost-effectiveness of collaborative care including PST and an antidepressant treatment algorithm for the treatment of major depressive disorder in primary care; a randomised clinical trial

    PubMed Central

    IJff, Marjoliek A; Huijbregts, Klaas ML; van Marwijk, Harm WJ; Beekman, Aartjan TF; Hakkaart-van Roijen, Leona; Rutten, Frans F; Unützer, Jürgen; van der Feltz-Cornelis, Christina M

    2007-01-01

    Background Depressive disorder is currently one of the most burdensome disorders worldwide. Evidence-based treatments for depressive disorder are already available, but these are used insufficiently and with less positive results than possible. Earlier research in the USA has shown good results in the treatment of depressive disorder based on a collaborative care approach with Problem Solving Treatment and an antidepressant treatment algorithm, and research in the UK has also shown good results with Problem Solving Treatment. These treatment strategies may also work very well in the Netherlands, even though health care systems differ between countries. Methods/design This study is a two-armed randomised clinical trial with randomisation at the patient level. The aim of the trial is to evaluate the treatment of depressive disorder in primary care in the Netherlands by means of an adapted collaborative care framework, including contracting and adherence-improving strategies, combined with Problem Solving Treatment and antidepressant medication according to a treatment algorithm. Forty general practices will be randomised to either the intervention group or the control group. Patients diagnosed with moderate to severe depression, based on DSM-IV criteria, will be included and stratified according to comorbid chronic physical illness. Patients in the intervention group will receive treatment based on the collaborative care approach, and patients in the control group will receive care as usual. Baseline measurements and follow-up measures (3, 6, 9 and 12 months) are assessed using questionnaires and an interview. The primary outcome measure is severity of depressive symptoms, according to the PHQ9. Secondary outcome measures are remission as measured with the PHQ9 and the IDS-SR, and cost-effectiveness measured with the TiC-P, the EQ-5D and the SF-36. Discussion In this study, an American model to enhance care for patients with a depressive disorder, the

  8. Evaluation and Comparison of Multiple Test Methods, Including Real-time PCR, for Legionella Detection in Clinical Specimens.

    PubMed

    Peci, Adriana; Winter, Anne-Luise; Gubbay, Jonathan B

    2016-01-01

    Legionella is a Gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires' disease, a more severe illness. We aimed to compare the performance of urine antigen, culture, and polymerase chain reaction (PCR) test methods and to determine whether sputum is an acceptable alternative to the more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at Public Health Ontario Laboratories from 1st January, 2010 to 30th April, 2014, as part of routine clinical testing. We found the sensitivity of the urinary antigen test (UAT) compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8%, and negative predictive value (NPV) 98.5%. Sensitivity of the UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7%, and NPV 98.1%. Of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Both sputum and BAL yielded similar results regardless of testing method (Fisher's exact p-value = 1.0 for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given the ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine when BAL submission is not practical for patients being tested for Legionella. PMID:27630979

  11. The Cyborg Astrobiologist: testing a novelty detection algorithm on two mobile exploration systems at Rivas Vaciamadrid in Spain and at the Mars Desert Research Station in Utah

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Gross, C.; Wendt, L.; Bonnici, A.; Souza-Egipsy, V.; Ormö, J.; Díaz-Martínez, E.; Foing, B. H.; Bose, R.; Walter, S.; Oesker, M.; Ontrup, J.; Haschke, R.; Ritter, H.

    2010-01-01

    In previous work, a platform was developed for testing computer-vision algorithms for robotic planetary exploration. This platform consisted of a digital video camera connected to a wearable computer for real-time processing of images at geological and astrobiological field sites. The real-time processing included image segmentation and the generation of interest points based upon uncommonness in the segmentation maps. Also in previous work, this platform for testing computer-vision algorithms has been ported to a more ergonomic alternative platform, consisting of a phone camera connected via the Global System for Mobile Communications (GSM) network to a remote-server computer. The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone camera has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon colour, (ii) integrate a field-capable digital microscope on the wearable computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform, in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. We acquired sequences of images of geologic outcrops in Utah and Spain consisting of various rock types and colours to test this algorithm. The algorithm robustly recognized previously observed units by their colour, while requiring only a single image or a few images to
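
    The abstract describes colour-based novelty detection with a Hopfield neural network; the sketch below is not that network but a much simpler stand-in (a colour-histogram memory with a nearest-neighbour distance test), included only to illustrate the idea of flagging images whose colours have not been seen before. All names and the threshold are illustrative assumptions.

```python
import numpy as np

def colour_histogram(img, bins=8):
    """Normalised joint RGB histogram of an image array of shape (H, W, 3)."""
    hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

class NoveltyDetector:
    """Flag an image as novel if its colour histogram is far from all stored ones."""
    def __init__(self, threshold=0.3):
        self.memory = []              # histograms of previously observed scenes
        self.threshold = threshold

    def is_novel(self, img):
        h = colour_histogram(img)
        # L1 distance to the closest remembered histogram (infinite if memory is empty)
        dist = min((np.abs(h - m).sum() for m in self.memory), default=np.inf)
        self.memory.append(h)         # remember what was just seen
        return dist > self.threshold
```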

  12. Automated ethernet-based test setup for long wave infrared camera analysis and algorithm evaluation

    NASA Astrophysics Data System (ADS)

    Edeler, Torsten; Ohliger, Kevin; Lawrenz, Sönke; Hussmann, Stephan

    2009-06-01

    In this paper we consider a new approach to automated camera calibration and specification. The proposed setup is optimized for working with uncooled long-wave infrared (thermal) cameras, although the concept itself is not restricted to those cameras. Every component of the setup, such as the black body source, climate chamber, remote power switch, and the camera itself, is connected to a network via Ethernet, and a Windows XP workstation controls all components using the Tcl script language. Besides communicating with the components, the script tool is also capable of running MATLAB code via the MATLAB kernel. Data exchange during the measurement is possible and offers a variety of advantages, from a drastic reduction in the amount of data to an enormous speedup of the measuring procedure due to data analysis during the measurement. A parameter-based software framework is presented to create generic test cases, where modification of the test scenario does not require any programming skills. In the second part of the paper the measurement results of a self-developed GigE Vision thermal camera are presented and correction algorithms providing high-quality image output are shown. These algorithms are fully implemented in the FPGA of the camera to provide real-time processing while maintaining GigE Vision as the standard transmission protocol and interface to arbitrary software tools. Artefacts taken into account are spatial noise, defective pixels and offset drift due to self-heating after power-on.
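
    The parameter-based test framework is described only at a high level; the following Python sketch (the paper's framework is Tcl-based, and the instrument functions here are hypothetical stubs) shows the flavour of a generic, table-driven test case in which only the parameter dictionary changes between scenarios and per-point statistics are reduced during the measurement:

```python
import itertools
import numpy as np

# Hypothetical stubs -- in the described setup these would command the climate
# chamber, black body and camera over Ethernet.
def set_chamber_temperature(t_deg_c): ...
def set_blackbody_temperature(t_deg_c): ...
def grab_frames(n): return np.zeros((n, 480, 640))

test_case = {
    "chamber_temps": [-10, 20, 50],      # deg C
    "blackbody_temps": [20, 35, 50],     # deg C
    "frames_per_point": 100,
}

results = {}
for t_amb, t_bb in itertools.product(test_case["chamber_temps"],
                                     test_case["blackbody_temps"]):
    set_chamber_temperature(t_amb)
    set_blackbody_temperature(t_bb)
    frames = grab_frames(test_case["frames_per_point"])
    # Reduce data during the measurement: keep per-pixel mean response and
    # temporal noise instead of the raw frame stack.
    results[(t_amb, t_bb)] = (frames.mean(axis=0), frames.std(axis=0))
```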

  13. A procedure for testing the quality of LANDSAT atmospheric correction algorithms

    NASA Technical Reports Server (NTRS)

    Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.

    1982-01-01

    There are two basic methods for testing the quality of an algorithm to minimize atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; or (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, the image contrast is first examined for a series of parameter combinations; the contrast improves for better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably in time. A few examples using the proposed procedure are presented.
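
    As a rough sketch of the selection criteria described above (image contrast of each corrected subimage plus the correlation between two dates of the same, unchanged scene), the Python fragment below ranks candidate parameter sets; the correction function itself is a caller-supplied placeholder, not the LANDSAT algorithm:

```python
import numpy as np

def contrast(img):
    """Simple global contrast measure: standard deviation of pixel values."""
    return float(np.std(img))

def correlation(img_a, img_b):
    """Pearson correlation between two co-registered subimages."""
    return float(np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1])

def rank_parameters(subimg_t1, subimg_t2, correct, candidate_params):
    """Rank candidate atmospheric-correction parameter sets.

    `correct(img, params)` stands in for the correction algorithm under test;
    better corrections should raise both the contrast of each corrected
    subimage and the correlation between the two acquisition dates.
    """
    scores = []
    for params in candidate_params:
        c1, c2 = correct(subimg_t1, params), correct(subimg_t2, params)
        scores.append((contrast(c1) + contrast(c2), correlation(c1, c2), params))
    # Sort by contrast first, correlation second (one possible ranking choice).
    return sorted(scores, key=lambda s: (s[0], s[1]), reverse=True)
```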

  14. Antiphospholipid antibody testing for the antiphospholipid syndrome: a comprehensive practical review including a synopsis of challenges and recent guidelines.

    PubMed

    Favaloro, Emmanuel J; Wong, Richard C W

    2014-10-01

    The antiphospholipid (antibody) syndrome (APS) is an autoimmune condition characterised by a wide range of clinical features, but primarily identified as thrombotic and/or obstetric related adverse events. APS is associated with the presence of antiphospholipid antibodies (aPL), including the so-called lupus anticoagulant (LA). These aPL are heterogeneous in nature, detected with varying sensitivity and specificity by a diverse range of laboratory tests. All these tests are unfortunately imperfect, suffer from poor assay reproducibility (inter-method and inter-laboratory) and a lack of standardisation and harmonisation. Clinicians and laboratory personnel may struggle to keep abreast of these factors, as well as the expanding range of available aPL tests, and consequent result interpretation. Therefore, APS remains a significant diagnostic challenge for many clinicians across a wide range of clinical specialities, due to these issues related to laboratory testing as well as the ever-expanding range of reported clinical manifestations. This review is primarily focussed on issues related to laboratory testing for APS in regards to the currently available assays, and summarises recent international consensus guidelines for aPL testing, both for the liquid phase functional LA assays and the solid phase assays (anticardiolipin and anti-beta-2-Glycoprotein-I).

  15. The COST-HOME monthly benchmark dataset with temperature and precipitation data for testing homogenisation algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.

    2009-04-01

    noise. The idealised dataset is valuable because its statistical characteristics are assumed in most homogenisation algorithms, and Gaussian white noise is the signal most often used for testing the algorithms. The surrogate and synthetic data represent homogeneous climate data. To these data, known inhomogeneities are added: outliers, as well as break inhomogeneities and local trends. Furthermore, missing data are simulated and a global trend is added. Every scientist working on homogenisation is invited to join this intercomparison. For more information on the COST Action on homogenisation see: http://www.homogenisation.org/ For more information on - and for downloading - the benchmark dataset see: http://www.meteo.uni-bonn.de/venema/themes/homogenisation/
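
    A minimal sketch of the kind of perturbations described (a break, a local trend, outliers, missing values and a global trend added to a homogeneous monthly series); the magnitudes and positions below are illustrative, not those of the benchmark:

```python
import numpy as np

rng = np.random.default_rng(0)
n_months = 100 * 12                                  # a century of monthly values
series = rng.normal(0.0, 1.0, n_months)              # stand-in for homogeneous data

series += np.linspace(0.0, 1.0, n_months)            # global trend
series[600:] += 0.8                                  # break inhomogeneity (e.g. relocation)
series[240:300] += np.linspace(0.0, 0.5, 60)         # local trend (e.g. urbanisation)
outliers = rng.choice(n_months, size=12, replace=False)
series[outliers] += rng.normal(0.0, 5.0, 12)         # isolated outliers
gaps = rng.choice(n_months, size=60, replace=False)
series[gaps] = np.nan                                # simulated missing data
```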

  16. A retrieval algorithm for approaching XCH4 from satellite measurements: Sensitivity study and preliminary test

    NASA Astrophysics Data System (ADS)

    Deng, Jianbo; Liu, Yi; Yang, Dongxu; Cai, Zhaonan

    2014-05-01

    Satellite measurements of column-averaged dry-air mole fractions of CH4 (XCH4) in the shortwave infrared (SWIR), with very high spectral resolution and high sensitivity near the surface, such as those of the Thermal And Near-infrared Sensor for carbon Observation (TANSO) onboard the Greenhouse gases Observing SATellite (GOSAT, launched 2009), are expected to provide extensive spatial and temporal information on the sources and sinks of CH4, which would contribute to the understanding of global CH4 variation and its impact on climate change. One of the important science requirements for monitoring CH4 from hyperspectral measurements is to establish a highly accurate retrieval algorithm. To retrieve XCH4 efficiently, we developed a SWIR two-band (5900-6150 cm-1 and 4800-4900 cm-1) physical retrieval algorithm after a series of sensitivity studies. The forward model in this algorithm was based on a vector linearized discrete ordinate radiative transfer (VLIDORT) model coupled with a line-by-line radiative transfer model (LBLRTM), which was applied to realize online calculation of absorption coefficients and backscattered solar radiance. The information content of CH4, H2O, CO2 and temperature in different retrieval bands and band combinations was investigated in order to improve the algorithm. The selected retrieval bands retain more than 90% of the information content of CH4, CO2, and temperature, and more than 85% of that of H2O. The sensitivity studies demonstrate that the uncertainties of H2O, temperature and CO2 will cause unacceptable errors if they are ignored; for example, a 10% bias on the H2O profile will lead to a 50 ppb retrieval error, and a 5 K shift in the temperature profile will cause a 20 ppb error in the result, while CO2 has little influence. The simulated retrieval test shows it is more efficient to correct the influence of temperature and H2O with a profile model than with a temperature offset and an H2O scale factor model. A preliminary retrieval test using GOSAT Level 1B

  17. Flight test results of a vector-based failure detection and isolation algorithm for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Bailey, M. L.; Motyka, P. R.

    1988-01-01

    Flight test results of a vector-based fault-tolerant algorithm for a redundant strapdown inertial measurement unit are presented. Because the inertial sensors provide flight-critical information for flight control and navigation, failure detection and isolation is developed in terms of a multi-level structure. Threshold compensation techniques for gyros and accelerometers, developed to enhance the sensitivity of the failure detection process to low-level failures, are presented. Four flight tests, conducted in a commercial transport type environment, were used to determine the ability of the failure detection and isolation algorithm to detect failure signals such as hard-over, null, or bias-shift failures. The algorithm provided timely detection and correct isolation of flight-control-level and low-level failures. The flight tests of the vector-based algorithm demonstrated its capability to provide false-alarm-free, dual fail-operational performance for the skewed array of inertial sensors.
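
    The abstract does not spell out the detection equations, so the sketch below shows the generic parity-space idea often used for skewed redundant inertial sensor arrays, not necessarily the flight algorithm itself: with an n x 3 geometry matrix H, a parity matrix V spanning the left null space of H yields a parity vector that is insensitive to the true vehicle state and responds to sensor failures.

```python
import numpy as np

def parity_matrix(H):
    """Rows spanning the left null space of the n x 3 geometry matrix H (V @ H = 0)."""
    u, _, _ = np.linalg.svd(H)
    return u[:, H.shape[1]:].T            # shape (n-3, n)

def failure_detected(H, measurements, threshold):
    """Declare a failure when the parity-vector norm exceeds a noise-based threshold."""
    p = parity_matrix(H) @ measurements   # ~0 (up to noise) when all sensors agree
    return float(np.linalg.norm(p)) > threshold
```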

  18. DATA SUMMARY REPORT SMALL SCALE MELTER TESTING OF HLW ALGORITHM GLASSES MATRIX1 TESTS VSL-07S1220-1 REV 0 7/25/07

    SciTech Connect

    KRUGER AA; MATLACK KS; PEGG IL

    2011-12-29

    Eight tests using different HLW feeds were conducted on the DM100-BL to determine the effect of variations in glass properties and feed composition on processing rates and melter conditions (off-gas characteristics, glass processing, foaming, cold cap, etc.) at constant bubbling rate. In over seven hundred hours of testing, the property extremes of glass viscosity, electrical conductivity, and T1%, as well as minimum and maximum concentrations of several major and minor glass components were evaluated using glass compositions that have been tested previously at the crucible scale. Other parameters evaluated with respect to glass processing properties were +/-15% batching errors in the addition of glass forming chemicals (GFCs) to the feed, and variation in the sources of boron and sodium used in the GFCs. Tests evaluating batching errors and GFC source employed variations on the HLW98-86 formulation (a glass composition formulated for HLW C-106/AY-102 waste and processed in several previous melter tests) in order to best isolate the effect of each test variable. These tests are outlined in a Test Plan that was prepared in response to the Test Specification for this work. The present report provides summary level data for all of the tests in the first test matrix (Matrix 1) in the Test Plan. Summary results from the remaining tests, investigating minimum and maximum concentrations of major and minor glass components employing variations on the HLW98-86 formulation and glasses generated by the HLW glass formulation algorithm, will be reported separately after those tests are completed. The test data summarized herein include glass production rates, the type and amount of feed used, a variety of measured melter parameters including temperatures and electrode power, feed sample analysis, measured glass properties, and gaseous emissions rates. More detailed information and analysis from the melter tests with complete emission chemistry, glass durability, and

  19. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASAs Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM

  20. An E-M algorithm and testing strategy for multiple-locus haplotypes

    SciTech Connect

    Long, J.C.; Williams, R.C.; Urbanek, M.

    1995-03-01

    This paper gives an expectation maximization (EM) algorithm to obtain allele frequencies, haplotype frequencies, and gametic disequilibrium coefficients for multiple-locus systems. It permits high polymorphism and null alleles at all loci. This approach effectively deals with the primary estimation problems associated with such systems; that is, there is not a one-to-one correspondence between phenotypic and genotypic categories, and sample sizes tend to be much smaller than the number of phenotypic categories. The EM method provides maximum-likelihood estimates and therefore allows hypothesis tests using likelihood ratio statistics that have chi-square distributions with large sample sizes. We also suggest a data resampling approach to estimate test statistic sampling distributions. The resampling approach is more computer intensive, but it is applicable to all sample sizes. A strategy to test hypotheses about aggregate groups of gametic disequilibrium coefficients is recommended. This strategy minimizes the number of necessary hypothesis tests while at the same time describing the structure of equilibrium. These methods are applied to three unlinked dinucleotide repeat loci in Navajo Indians and to three linked HLA loci in Gila River (Pima) Indians. The likelihood functions of both data sets are shown to be maximized by the EM estimates, and the testing strategy provides a useful description of the structure of gametic disequilibrium. Following these applications, a number of simulation experiments are performed to test how well the likelihood-ratio statistic distributions are approximated by chi-square distributions. In most circumstances the chi-square approximation grossly underestimated the probability of type I errors. However, at times it also overestimated the type I error probability. Accordingly, we recommend hypothesis tests that use the resampling method. 41 refs., 3 figs., 6 tabs.
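
    As a compact illustration of the gene-counting EM idea (much simpler than the multi-allele, null-allele estimator of the paper), the sketch below handles two biallelic loci, where only the double heterozygote has ambiguous phase and the E-step splits it between the two possible gamete pairs:

```python
import numpy as np

def em_two_locus(counts, n_iter=200):
    """EM haplotype-frequency estimates for two biallelic loci.

    counts : 3x3 array; counts[i, j] = number of individuals carrying i copies
    of allele 'a' (locus 1) and j copies of allele 'b' (locus 2).
    Haplotype order: AB, Ab, aB, ab.
    """
    # Haplotype contributions of the eight phase-unambiguous genotype classes.
    unambiguous = {
        (0, 0): (2, 0, 0, 0), (0, 1): (1, 1, 0, 0), (0, 2): (0, 2, 0, 0),
        (1, 0): (1, 0, 1, 0), (1, 2): (0, 1, 0, 1),
        (2, 0): (0, 0, 2, 0), (2, 1): (0, 0, 1, 1), (2, 2): (0, 0, 0, 2),
    }
    base = np.zeros(4)
    for (i, j), contrib in unambiguous.items():
        base += counts[i, j] * np.array(contrib, dtype=float)

    f = np.full(4, 0.25)                  # start at linkage equilibrium
    n_dh = counts[1, 1]                   # double heterozygotes (ambiguous phase)
    for _ in range(n_iter):
        # E-step: expected fraction of double heterozygotes with AB/ab phase.
        denom = f[0] * f[3] + f[1] * f[2]
        p_ab_ab = f[0] * f[3] / denom if denom > 0 else 0.5
        hap = base + n_dh * np.array([p_ab_ab, 1 - p_ab_ab, 1 - p_ab_ab, p_ab_ab])
        # M-step: renormalise expected haplotype counts to frequencies.
        f = hap / hap.sum()
    return f
```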

  1. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    The terrestrial laser scanning (TLS) technique is becoming a common tool in Geosciences, with clear applications ranging from the generation of high-resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, different critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied in point cloud data treatment, from alignment to monitoring. To this end, we built, in the MATLAB environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the shifting and angular error effects. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and point of view on the Iterative Closest Point (ICP) alignment, and also on some deformation-tracking algorithms with the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes on different environments
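
    A minimal sketch of the per-point error model described (range noise plus angular jitter applied in scanner-centred spherical coordinates); the noise magnitudes are illustrative assumptions, and this is far from the full MATLAB simulator:

```python
import numpy as np

def add_scanner_noise(points, sigma_range=0.005, sigma_angle=np.radians(0.01),
                      seed=0):
    """Perturb an (N, 3) point cloud with range and angular errors."""
    rng = np.random.default_rng(seed)
    x, y, z = points.T
    r = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)
    el = np.arcsin(z / r)

    r += rng.normal(0.0, sigma_range, r.shape)       # single-point range accuracy
    az += rng.normal(0.0, sigma_angle, az.shape)     # angular errors of the
    el += rng.normal(0.0, sigma_angle, el.shape)     # scanning mechanism

    return np.column_stack((r * np.cos(el) * np.cos(az),
                            r * np.cos(el) * np.sin(az),
                            r * np.sin(el)))
```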

  2. Subsonic flight test evaluation of a performance seeking control algorithm on an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B.; Orme, John S.

    1992-01-01

    The subsonic flight test evaluation phase of the NASA F-15 (powered by F100 engines) performance seeking control program was completed for single-engine operation at part- and military-power settings. The subsonic performance seeking control algorithm optimizes the quasi-steady-state performance of the propulsion system for three modes of operation. The minimum fuel flow mode minimizes fuel consumption. The minimum temperature mode reduces fan turbine inlet temperature. The maximum thrust mode maximizes thrust at military power. Decreases in thrust-specific fuel consumption of 1 to 2 percent were measured in the minimum fuel flow mode; these fuel savings are significant, especially for supersonic cruise aircraft. Decreases of up to approximately 100 degrees R in fan turbine inlet temperature were measured in the minimum temperature mode. Temperature reductions of this magnitude would more than double turbine life if inlet temperature were the only life factor. Measured thrust increases of up to approximately 15 percent in the maximum thrust mode cause substantial increases in aircraft acceleration. The system dynamics of the closed-loop algorithm operation were good. The subsonic flight phase has validated the performance seeking control technology, which can significantly benefit the next generation of fighter and transport aircraft.

  3. Acceleration of degradation by highly accelerated stress test and air-included highly accelerated stress test in crystalline silicon photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Suzuki, Soh; Tanahashi, Tadanori; Doi, Takuya; Masuda, Atsushi

    2016-02-01

    We examined the effects of hyper-hygrothermal stresses with or without air on the degradation of crystalline silicon (c-Si) photovoltaic (PV) modules, to shorten the required duration of a conventional hygrothermal-stress test [i.e., the “damp heat (DH) stress test”, which is conducted at 85 °C/85% relative humidity for 1,000 h]. Interestingly, the encapsulant within a PV module becomes discolored under the air-included hygrothermal conditions achieved using DH stress test equipment and an air-included highly accelerated stress test (air-HAST) apparatus, but not under the air-excluded hygrothermal conditions realized using a highly accelerated stress test (HAST) machine. In contrast, the reduction in the output power of the PV module is accelerated irrespective of air inclusion in hyper-hygrothermal test atmosphere. From these findings, we conclude that the required duration of the DH stress test will at least be significantly shortened using air-HAST, but not HAST.

  4. Rainfall estimation from soil moisture data: crash test for SM2RAIN algorithm

    NASA Astrophysics Data System (ADS)

    Brocca, Luca; Albergel, Clement; Massari, Christian; Ciabatta, Luca; Moramarco, Tommaso; de Rosnay, Patricia

    2015-04-01

    Soil moisture governs the partitioning of mass and energy fluxes between the land surface and the atmosphere and, hence, it represents a key variable for many applications in hydrology and earth science. In recent years, it was demonstrated that soil moisture observations from ground and satellite sensors contain important information useful for improving rainfall estimation. Indeed, soil moisture data have been used for correcting rainfall estimates from state-of-the-art satellite sensors (e.g. Crow et al., 2011), and also for improving flood prediction through a dual data assimilation approach (e.g. Massari et al., 2014; Chen et al., 2014). Brocca et al. (2013; 2014) developed a simple algorithm, called SM2RAIN, which allows estimating rainfall directly from soil moisture data. SM2RAIN has been applied successfully to in situ and satellite observations. Specifically, by using three satellite soil moisture products from ASCAT (Advanced SCATterometer), AMSR-E (Advanced Microwave Scanning Radiometer for Earth Observation) and SMOS (Soil Moisture and Ocean Salinity), it was found that the SM2RAIN-derived rainfall products are as accurate as state-of-the-art products, e.g., the real-time version of the TRMM (Tropical Rainfall Measuring Mission) product. Notwithstanding these promising results, a detailed study investigating the physical basis of the SM2RAIN algorithm, its range of applicability and its limitations on a global scale has still to be carried out. In this study, we carried out a crash test for the SM2RAIN algorithm on a global scale by performing a synthetic experiment. Specifically, modelled soil moisture data are obtained from the HTESSEL model (Hydrology Tiled ECMWF Scheme for Surface Exchanges over Land) forced by ERA-Interim near-surface meteorology. Afterwards, the modelled soil moisture data are used as input into the SM2RAIN algorithm to test whether or not the resulting rainfall estimates are able to reproduce the ERA-Interim rainfall data. Correlation, root
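
    For orientation, the core of SM2RAIN is an inverted soil-water balance; a heavily simplified Python sketch is given below (Z, a and b are illustrative values that would normally be calibrated, and runoff and evapotranspiration during rainfall are neglected, so this is not the operational algorithm):

```python
import numpy as np

def sm2rain_simplified(sat, dt, Z=80.0, a=15.0, b=2.0):
    """Rainfall estimate from a relative-saturation time series (values in 0-1).

    p(t) ~ Z * dS/dt + a * S(t)**b : storage change plus a drainage term,
    with negative estimates clipped to zero.  dt is the time step; Z is an
    effective soil water capacity in mm.
    """
    ds_dt = np.gradient(sat, dt)
    rain = Z * ds_dt + a * sat**b
    return np.clip(rain, 0.0, None)
```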

  5. Development and Implementation of Image-based Algorithms for Measurement of Deformations in Material Testing

    PubMed Central

    Barazzetti, Luigi; Scaioni, Marco

    2010-01-01

    This paper presents the development and implementation of three image-based methods used to detect and measure the displacements of a vast number of points in the case of laboratory testing on construction materials. Starting from the needs of structural engineers, three ad hoc tools for crack measurement in fibre-reinforced specimens and 2D or 3D deformation analysis through digital images were implemented and tested. These tools make use of advanced image processing algorithms and can integrate or even substitute some traditional sensors employed today in most laboratories. In addition, the automation provided by the implemented software, the limited cost of the instruments and the possibility to operate with an indefinite number of points offer new and more extensive analysis in the field of material testing. Several comparisons with other traditional sensors widely adopted inside most laboratories were carried out in order to demonstrate the accuracy of the implemented software. Implementation details, simulations and real applications are reported and discussed in this paper. PMID:22163612
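
    The implemented tools are more advanced than this, but the basic building block of image-based point tracking can be sketched as normalised cross-correlation of a small template around each target point between a reference and a deformed image (pure NumPy, integer-pixel search only, all parameters illustrative):

```python
import numpy as np

def track_point(ref, cur, pt, half=10, search=20):
    """Integer-pixel displacement of a target point between two grayscale images.

    A (2*half+1)^2 template around `pt` in `ref` is matched against `cur`
    within a +/- `search` pixel window using normalised cross-correlation.
    """
    r, c = pt
    tpl = ref[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-12)

    best, best_dr, best_dc = -np.inf, 0, 0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            win = cur[r + dr - half:r + dr + half + 1,
                      c + dc - half:c + dc + half + 1].astype(float)
            if win.shape != tpl.shape:
                continue                          # window fell outside the image
            win = (win - win.mean()) / (win.std() + 1e-12)
            score = float((tpl * win).mean())
            if score > best:
                best, best_dr, best_dc = score, dr, dc
    return best_dr, best_dc                       # displacement in pixels
```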

  6. A comparison of retesting rates using alternative testing algorithms in the pilot implementation of critical congenital heart disease screening in Minnesota.

    PubMed

    Kochilas, Lazaros K; Menk, Jeremiah S; Saarinen, Annamarie; Gaviglio, Amy; Lohr, Jamie L

    2015-03-01

    Prior to state-wide implementation of newborn screening for critical congenital heart disease (CCHD) in Minnesota, a pilot program was completed using the protocol recommended by the Secretary's Advisory Committee on Heritable Disorders in Newborns and Children (SACHDNC). This report compares the retesting rates for newborn screening for CCHDs using the SACHDNC protocol and four alternative algorithms used in large published CCHD screening studies. Data from the original Minnesota study were reanalyzed using the passing values from these four alternative protocols. The retesting rate for the first pulse oximeter measurement ranged from 1.1 % in the SACHDNC protocol to 9.6 % in the Ewer protocol. The SACHDNC protocol generated the lowest rate of retesting among all tested algorithms. Our data suggest that even minor modifications of CCHD screening protocol would significantly impact screening retesting rate. In addition, we provide support for including lower extremity oxygen saturations in the screening algorithm.

  7. Diagnostic Algorithm for Glycogenoses and Myoadenylate Deaminase Deficiency Based on Exercise Testing Parameters: A Prospective Study

    PubMed Central

    Rannou, Fabrice; Uguen, Arnaud; Scotet, Virginie; Le Maréchal, Cédric; Rigal, Odile; Marcorelles, Pascale; Gobin, Eric; Carré, Jean-Luc; Zagnoli, Fabien; Giroux-Metges, Marie-Agnès

    2015-01-01

    Aim Our aim was to evaluate the accuracy of aerobic exercise testing to diagnose metabolic myopathies. Methods From December 2008 to September 2012, all consecutive patients who underwent both metabolic exercise testing and a muscle biopsy were prospectively enrolled. Subjects performed incremental and maximal exercise testing on a cycle ergometer. Lactate, pyruvate, and ammonia concentrations were determined from venous blood samples drawn at rest, during exercise (50% predicted maximal power, peak exercise), and recovery (2, 5, 10, and 15 min). Biopsies from vastus lateralis or deltoid muscles were analysed using standard techniques (reference test). Myoadenylate deaminase (MAD) activity was determined using p-nitro blue tetrazolium staining in muscle cryostat sections. Glycogen storage was assessed using periodic acid-Schiff staining. The diagnostic accuracy of plasma metabolite levels to identify absent and decreased MAD activity was assessed using Receiver Operating Characteristic (ROC) curve analysis. Results The study involved 51 patients. Omitting patients with glycogenoses (n = 3), MAD staining was absent in 5, decreased in 6, and normal in 37 subjects. Lactate/pyruvate at the 10th minute of recovery provided the greatest area under the ROC curves (AUC, 0.893 ± 0.067) to differentiate Abnormal from Normal MAD activity. The lactate/rest ratio at the 10th minute of recovery from exercise displayed the best AUC (1.0) for discriminating between Decreased and Absent MAD activities. The resulting decision tree achieved a diagnostic accuracy of 86.3%. Conclusion The present algorithm provides a non-invasive test to accurately predict absent and decreased MAD activity, facilitating the selection of patients for muscle biopsy and targeting the appropriate histochemical analysis. PMID:26207760
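
    The ROC analysis used above can be reproduced in a few lines; the sketch below uses scikit-learn on made-up numbers (not the study data) to compute an AUC and a Youden-index cut-off for a recovery-phase marker:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: 1 = abnormal MAD activity, 0 = normal; 'marker' stands in
# for a recovery-phase value such as lactate/pyruvate at the 10th minute.
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0])
marker = np.array([38, 45, 52, 20, 18, 25, 22, 41, 19, 27, 48, 21])

auc = roc_auc_score(y_true, marker)
fpr, tpr, thresholds = roc_curve(y_true, marker)
best = np.argmax(tpr - fpr)                 # Youden's J as one cut-off criterion
print(f"AUC = {auc:.3f}, suggested cut-off = {thresholds[best]:.1f}")
```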

  8. Tests of a Semi-Analytical Case 1 and Gelbstoff Case 2 SeaWiFS Algorithm with a Global Data Set

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.; Hawes, Steve K.; Lee, Zhongping

    1997-01-01

    A semi-analytical algorithm was tested with a total of 733 points of either unpackaged or packaged-pigment data, with corresponding algorithm parameters for each data type. The 'unpackaged' type consisted of data sets that were generally consistent with the Case 1 CZCS algorithm and other well-calibrated data sets. The 'packaged' type consisted of data sets apparently containing somewhat more packaged pigments, requiring modification of the absorption parameters of the model consistent with the CalCOFI study area. This resulted in two equally divided data sets. A more thorough scrutiny of these and other data sets using a semi-analytical model requires improved knowledge of the phytoplankton and gelbstoff of the specific environment studied. Since the semi-analytical algorithm is dependent upon four spectral channels, including the 412 nm channel, while most other algorithms are not, a means of testing data sets for consistency was sought. A numerical filter was developed to classify data sets into the above classes. The filter uses reflectance ratios, which can be determined from space. The sensitivity of such numerical filters to measurement errors resulting from atmospheric correction and sensor noise requires further study. The semi-analytical algorithm performed superbly on each of the data sets after classification, resulting in RMS1 errors of 0.107 and 0.121, respectively, for the unpackaged and packaged data-set classes, with little bias and slopes near 1.0. In combination, the RMS1 performance was 0.114. While these numbers appear rather sterling, one must bear in mind what misclassification does to the results. Using an average or compromise parameterization on the modified global data set yielded an RMS1 error of 0.171, while using the unpackaged parameterization on the global evaluation data set yielded an RMS1 error of 0.284. So, without classification, the algorithm performs better globally using the average parameters than it does using the unpackaged
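
    The abstract does not give the ratios or thresholds of the numerical filter, so the fragment below only shows its general shape: a reflectance-ratio test, computable from space, that routes a spectrum to the 'unpackaged' or 'packaged' parameterization. The band choice, ratio direction and threshold are placeholders, not the paper's values.

```python
def classify_pigment_packaging(rrs, threshold):
    """Toy reflectance-ratio filter assigning a spectrum to a parameter class.

    rrs : dict of remote-sensing reflectances keyed by wavelength in nm
    (e.g. 412, 443, 490, 555).  Ratio and threshold are placeholders only.
    """
    ratio = rrs[412] / rrs[443]
    return "unpackaged" if ratio > threshold else "packaged"
```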

  9. Testing the Landscape Reconstruction Algorithm for spatially explicit reconstruction of vegetation in northern Michigan and Wisconsin

    NASA Astrophysics Data System (ADS)

    Sugita, Shinya; Parshall, Tim; Calcote, Randy; Walker, Karen

    2010-09-01

    The Landscape Reconstruction Algorithm (LRA) overcomes some of the fundamental problems in pollen analysis for quantitative reconstruction of vegetation. LRA first uses the REVEALS model to estimate regional vegetation using pollen data from large sites and then the LOVE model to estimate vegetation composition within the relevant source area of pollen (RSAP) at small sites by subtracting the background pollen estimated from the regional vegetation composition. This study tests LRA using training data from forest hollows in northern Michigan (35 sites) and northwestern Wisconsin (43 sites). In northern Michigan, surface pollen from 152-ha and 332-ha lakes is used for REVEALS. Because of the lack of pollen data from large lakes in northwestern Wisconsin, we use pollen from 21 hollows randomly selected from the 43 sites for REVEALS. RSAP indirectly estimated by LRA is comparable to the expected value in each region. A regression analysis and permutation test validate that the LRA-based vegetation reconstruction is significantly more accurate than pollen percentages alone in both regions. Even though the site selection in northwestern Wisconsin is not ideal, the results are robust. The LRA is a significant step forward in quantitative reconstruction of vegetation.

  10. A simplified flight-test method for determining aircraft takeoff performance that includes effects of pilot technique

    NASA Technical Reports Server (NTRS)

    Larson, T. J.; Schweikhard, W. G.

    1974-01-01

    A method for evaluating aircraft takeoff performance from brake release to air-phase height that requires fewer tests than conventionally required is evaluated with data for the XB-70 airplane. The method defines the effects of pilot technique on takeoff performance quantitatively, including the decrease in acceleration from drag due to lift. For a given takeoff weight and throttle setting, a single takeoff provides enough data to establish a standardizing relationship for the distance from brake release to any point where velocity is appropriate to rotation. The lower rotation rates penalized takeoff performance in terms of ground roll distance; the lowest observed rotation rate required a ground roll distance that was 19 percent longer than the highest. Rotations at the minimum rate also resulted in lift-off velocities that were approximately 5 knots lower than the highest rotation rate at any given lift-off distance.

  11. Flight Testing of the Space Launch System (SLS) Adaptive Augmenting Control (AAC) Algorithm on an F/A-18

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.; VanZwieten, Tannen S.; Hanson, Curtis E.; Wall, John H.; Miller, Chris J.; Gilligan, Eric T.; Orr, Jeb S.

    2014-01-01

    The Marshall Space Flight Center (MSFC) Flight Mechanics and Analysis Division developed an adaptive augmenting control (AAC) algorithm for launch vehicles that improves robustness and performance on an as-needed basis by adapting a classical control algorithm to unexpected environments or variations in vehicle dynamics. This was baselined as part of the Space Launch System (SLS) flight control system. The NASA Engineering and Safety Center (NESC) was asked to partner with the SLS Program and the Space Technology Mission Directorate (STMD) Game Changing Development Program (GCDP) to flight test the AAC algorithm on a manned aircraft that can achieve a high level of dynamic similarity to a launch vehicle and raise the technology readiness of the algorithm early in the program. This document reports the outcome of the NESC assessment.

  12. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
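
    For readers unfamiliar with the method, a single-channel Sudakov veto algorithm (without the competition and second variable analysed in the paper) can be sketched as follows; the kernels f and g here are illustrative choices, with g an analytically invertible overestimate of f:

```python
import math
import random

def veto_algorithm(t_start, t_min, f, g, G, G_inv, rng=random.random):
    """Draw the next emission scale t < t_start with density
    f(t) * exp(-integral_t^t_start f(s) ds), using an overestimate g >= f
    whose primitive G (and its inverse G_inv) are known analytically.
    Returns None if the evolution falls below the cutoff t_min."""
    t = t_start
    while True:
        t = G_inv(G(t) + math.log(rng()))   # trial scale from the overestimate
        if t < t_min:
            return None                     # no emission above the cutoff
        if rng() < f(t) / g(t):
            return t                        # accept; otherwise veto and continue

# Illustrative kernels: f(t) = 1/t overestimated by g(t) = 2/t,
# so G(t) = 2*ln(t) and G_inv(y) = exp(y/2).
emission = veto_algorithm(100.0, 1.0,
                          f=lambda t: 1.0 / t, g=lambda t: 2.0 / t,
                          G=lambda t: 2.0 * math.log(t),
                          G_inv=lambda y: math.exp(y / 2.0))
```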

  13. Reliability-based design optimization of reinforced concrete structures including soil-structure interaction using a discrete gravitational search algorithm and a proposed metamodel

    NASA Astrophysics Data System (ADS)

    Khatibinia, M.; Salajegheh, E.; Salajegheh, J.; Fadaee, M. J.

    2013-10-01

    A new discrete gravitational search algorithm (DGSA) and a metamodelling framework are introduced for reliability-based design optimization (RBDO) of reinforced concrete structures. The RBDO of structures with soil-structure interaction (SSI) effects is investigated in accordance with performance-based design. The proposed DGSA is based on the standard gravitational search algorithm (GSA) and optimizes the structural cost under deterministic and probabilistic constraints. Monte Carlo simulation (MCS) is considered the most reliable method for estimating failure probabilities. In order to reduce the computational time of MCS, the proposed metamodelling framework is employed to predict the responses of the SSI system in the RBDO procedure. The metamodel consists of a weighted least squares support vector machine (WLS-SVM) with a wavelet kernel function, which is called WWLS-SVM. Numerical results demonstrate the efficiency and computational advantages of DGSA and the proposed metamodel for RBDO of reinforced concrete structures.
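
    To make the role of the metamodel concrete: crude Monte Carlo estimation of a failure probability requires very many limit-state evaluations, which is why the expensive SSI analysis is replaced by a surrogate inside the RBDO loop. A generic sketch (both callables are user-supplied placeholders, not the paper's models):

```python
import numpy as np

def mc_failure_probability(limit_state, sample_inputs, n_samples=100_000, seed=0):
    """Crude Monte Carlo estimate of P[g(X) <= 0] for a limit-state function g.

    `limit_state(x)` maps an (n_samples, n_vars) array of random inputs to
    limit-state values; in practice it would be a cheap surrogate of the SSI
    response rather than the full structural analysis.
    """
    rng = np.random.default_rng(seed)
    x = sample_inputs(n_samples, rng)
    failures = np.count_nonzero(limit_state(x) <= 0.0)
    return failures / n_samples
```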

  14. Exploring New Ways to Deliver Value to Healthcare Organizations: Algorithmic Testing, Data Integration, and Diagnostic E-consult Service.

    PubMed

    Risin, Semyon A; Chang, Brian N; Welsh, Kerry J; Kidd, Laura R; Moreno, Vanessa; Chen, Lei; Tholpady, Ashok; Wahed, Amer; Nguyen, Nghia; Kott, Marylee; Hunter, Robert L

    2015-01-01

    As the US health care system undergoes transformation and transitions to value-based models, it is critical for laboratory medicine/clinical pathology physicians to explore opportunities and find new ways to deliver value and become an integral part of the healthcare team. This is also essential for ensuring the financial health and stability of the profession when the payment paradigm changes from fee-for-service to fee-for-performance. About 5 years ago we started searching for ways to achieve this goal. Among other approaches, the search included addressing the laboratory work-ups for specialists' referrals in the Harris Health System, a major safety net health care organization serving a mostly indigent and underserved population of Harris County, TX. We present here our experience in improving the efficiency of laboratory testing for the referral process and in building a prototype of a diagnostic e-consult service using rheumatologic diseases as a starting point. The service incorporates algorithmic testing; integration of clinical, laboratory and imaging data; issuing structured comprehensive consultation reports incorporating all the relevant information; and maintaining personal contacts and an e-line of communication with the primary providers and referral center personnel. An ongoing survey of providers affords testimony of the service's value in terms of facilitating their work and increasing productivity. Analysis of the cost effectiveness and of other value indicators is currently underway. We also discuss our pioneering experience in building pathology residents' and fellows' training in an integrated diagnostic consulting service. PMID:26116586

  16. A sequential nonparametric pattern classification algorithm based on the Wald SPRT. [Sequential Probability Ratio Test

    NASA Technical Reports Server (NTRS)

    Poage, J. L.

    1975-01-01

    A sequential nonparametric pattern classification procedure is presented. The method presented is an estimated version of the Wald sequential probability ratio test (SPRT). This method utilizes density function estimates, and the density estimate used is discussed, including a proof of convergence in probability of the estimate to the true density function. The classification procedure proposed makes use of the theory of order statistics, and estimates of the probabilities of misclassification are given. The procedure was tested on discriminating between two classes of Gaussian samples and on discriminating between two kinds of electroencephalogram (EEG) responses.
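
    For context, the underlying parametric Wald SPRT (with known densities, which the paper replaces by nonparametric density estimates) can be sketched as a running log-likelihood-ratio test against the two Wald thresholds:

```python
import math

def sprt(samples, pdf0, pdf1, alpha=0.05, beta=0.05):
    """Wald sequential probability ratio test between two simple hypotheses.

    Accumulates the log-likelihood ratio sample by sample and stops when it
    crosses ln((1-beta)/alpha) (decide H1) or ln(beta/(1-alpha)) (decide H0).
    """
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(pdf1(x) / pdf0(x))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)
```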

  17. Numerical tests for effects of various parameters in niching genetic algorithm applied to regional waveform inversion

    NASA Astrophysics Data System (ADS)

    Li, Cong; Lei, Jianshe

    2014-10-01

    In this paper, we focus on the influences of various parameters in the niching genetic algorithm inversion procedure on the results, such as the choice of objective function, the number of models in each subpopulation, and the critical separation radius. The frequency-wavenumber (F-K) integration method is applied to synthesize three-component waveform data with noise at various epicentral distances and azimuths. Our results show that if we use a zeroth-lag cross-correlation objective function, we obtain a model with faster convergence and higher precision than with other objective functions. The number of models in each subpopulation has a great influence on the rate of convergence and the computation time, suggesting that it should be determined through tests in practical problems. The critical separation radius should be chosen carefully because it directly affects the multiple extrema in the inversion. We also compare the results inverted from full-band waveform data and from surface-wave frequency-band (0.02-0.1 Hz) data, and find that the latter is relatively poorer but still has a high precision, suggesting that surface-wave frequency-band data can also be used to invert for the crustal structure.
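
    As a point of reference, one plausible form of the zero-lag cross-correlation objective referred to above is the normalised dot product between observed and synthetic waveforms (an assumption about the exact normalisation used by the authors):

```python
import numpy as np

def zero_lag_cc(observed, synthetic):
    """Normalised zero-lag cross-correlation between two waveform arrays.

    The arrays may stack several components/stations; the value is 1 for a
    perfectly matching (scaled) waveform and is maximised by the inversion.
    """
    o, s = observed.ravel(), synthetic.ravel()
    return float(np.dot(o, s) / (np.linalg.norm(o) * np.linalg.norm(s) + 1e-20))
```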

  19. Subsonic flight test evaluation of a performance seeking control algorithm on an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B.; Orme, John S.

    1992-01-01

    The subsonic flight test evaluation phase of the NASA F-15 (powered by F100 engines) performance-seeking control program was completed for single-engine operation at part- and military-power settings. The subsonic performance-seeking control algorithm optimizes the quasi-steady-state performance of the propulsion system for three modes of operation: the minimum-fuel-flow mode, the minimum-temperature mode, and the maximum-thrust mode. Decreases in thrust-specific fuel consumption of 1 to 2 percent were measured in the minimum-fuel-flow mode; these fuel savings are significant, especially for supersonic cruise aircraft. Decreases of up to approximately 100 degrees R in fan turbine inlet temperature were measured in the minimum-temperature mode. Temperature reductions of this magnitude would more than double turbine life if inlet temperature were the only life factor. Measured thrust increases of up to approximately 15 percent in the maximum-thrust mode cause substantial increases in aircraft acceleration. The subsonic flight phase has validated the performance-seeking control technology, which can significantly benefit the next generation of fighter and transport aircraft.

  20. End-to-End Design, Development and Testing of GOES-R Level 1 and 2 Algorithms

    NASA Astrophysics Data System (ADS)

    Zaccheo, T.; Copeland, A.; Steinfelt, E.; Van Rompay, P.; Werbos, A.

    2012-12-01

    GOES-R is the next generation of the National Oceanic and Atmospheric Administration's (NOAA) Geostationary Operational Environmental Satellite (GOES) System, and it represents a new technological era in operational geostationary environmental satellite systems. GOES-R will provide advanced products, based on government-supplied algorithms, which describe the state of the atmosphere, land, and oceans over the Western Hemisphere. The Harris GOES-R Core Ground Segment (GS) Team will provide the ground processing software and infrastructure needed to produce and distribute these data products. As part of this effort, new or updated Level 1b and Level 2+ algorithms will be deployed in the GOES-R Product Generation (PG) Element. In this work, we describe the general approach currently being employed to migrate these Level 1b (L1b) and Level 2+ (L2+) GOES-R PG algorithms from government-provided scientific descriptions to their implementation as integrated software, and provide an overview of how Product Generation software works with the other elements of the Ground Segment to produce Level 1/Level 2+ end-products. In general, GOES-R L1b algorithms ingest reformatted raw sensor data and ancillary information to produce geo-located GOES-R L1b data, and GOES-R L2+ algorithms ingest L1b data and other ancillary/auxiliary/intermediate information to produce L2+ products such as aerosol optical depth, rainfall rate, derived motion winds, and snow cover. In this presentation we provide an overview of the algorithm development life cycle, the common Product Generation software architecture, and the common test strategies used to verify/validate the scientific implementation. This work will highlight the Software Integration and Test phase of the software life-cycle and the suite of automated test/analysis tools developed to ensure that the implemented algorithms meet the desired reproducibility. As part of this discussion we will summarize the results of our algorithm testing to date

  1. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    ERIC Educational Resources Information Center

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a convenient…

  2. Design considerations for flight test of a fault inferring nonlinear detection system algorithm for avionics sensors

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.; Morrell, F. R.

    1986-01-01

    The modifications to the design of a fault inferring nonlinear detection system (FINDS) algorithm to accommodate flight computer constraints and the resulting impact on the algorithm performance are summarized. An overview of the flight data-driven FINDS algorithm is presented. This is followed by a brief analysis of the effects of modifications to the algorithm on program size and execution speed. Significant improvements in estimation performance for the aircraft states and normal operating sensor biases, which have resulted from improved noise design parameters and a new steady-state wind model, are documented. The aircraft state and sensor bias estimation performances of the algorithm's extended Kalman filter are presented as a function of update frequency of the piecewise constant filter gains. The results of a new detection system strategy and failure detection performance, as a function of gain update frequency, are also presented.

  3. Presentation of a general algorithm to include effect assessment on secondary poisoning in the derivation of environmental quality criteria. Part 1. Aquatic food chains

    SciTech Connect

    Romijn, C.A.; Luttik, R.; van de Meent, D.; Slooff, W.; Canton, J.H.

    1993-08-01

    Effect assessment on secondary poisoning can be an asset to effect assessments on direct poisoning in setting quality criteria for the environment. This study presents an algorithm for effect assessment on secondary poisoning. The water-fish-fish-eating bird or mammal pathway was analyzed as an example of a secondary poisoning pathway. Parameters used in this algorithm are the bioconcentration factor for fish (BCF) and the no-observed-effect concentration for the group of fish-eating birds and mammals (NOECfish-eater). For the derivation of reliable BCFs, preference is given to experimentally derived BCFs over QSAR estimates. NOECs for fish eaters are derived by extrapolating toxicity data on single species. Because data on fish-eating species are seldom available, toxicity data on all bird and mammalian species were used. The proposed algorithm (MAR = NOECfish-eater/BCF) was used to calculate MARs (maximum acceptable risk levels) for the compounds lindane, dieldrin, cadmium, mercury, PCB153, and PCB118. By subsequently comparing these MARs to MARs derived by effect assessment for aquatic organisms, it was concluded that for methyl mercury and PCB153 secondary poisoning of fish-eating birds and mammals could be a critical pathway. For these compounds, effects on populations of fish-eating birds and mammals can occur at levels in surface water below the MAR calculated for aquatic ecosystems. Secondary poisoning of fish-eating birds and mammals is not likely to occur for cadmium at levels in water below the MAR calculated for aquatic ecosystems.
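
    The proposed algorithm is a single division, so a worked example is straightforward; the numbers below are purely illustrative and not taken from the paper:

```python
def maximum_acceptable_risk_level(noec_fish_eater, bcf):
    """MAR for secondary poisoning: NOEC for fish-eating birds/mammals divided
    by the bioconcentration factor for fish, giving a water concentration."""
    return noec_fish_eater / bcf

# Hypothetical values for illustration only (e.g. NOEC in mg/kg food,
# BCF in L/kg, yielding a MAR in mg/L).
mar = maximum_acceptable_risk_level(noec_fish_eater=0.5, bcf=10_000)
```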

  4. Parallel training and testing methods for complex image processing algorithms on distributed, heterogeneous, unreliable, and non-dedicated resources

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; García, Daniel F.; Molleda, Julio; Sainz, Ignacio; Bulnes, Francisco G.

    2011-01-01

    Advances in the image processing field have brought new methods which are able to perform complex tasks robustly. However, in order to meet constraints on functionality and reliability, imaging application developers often design complex algorithms with many parameters which must be finely tuned for each particular environment. The best approach for tuning these algorithms is to use an automatic training method, but the computational cost of this kind of training is prohibitive, making it infeasible even on powerful machines. The same problem arises when designing testing procedures. This work presents methods to train and test complex image processing algorithms in parallel execution environments. The approach proposed in this work is to use existing resources in offices or laboratories, rather than expensive clusters. These resources are typically non-dedicated, heterogeneous, and unreliable, and the proposed methods have been designed to deal with all of these issues. Two methods are proposed: intelligent training based on genetic algorithms and PVM, and a full factorial design based on grid computing, which can be used for training or testing. These methods are capable of harnessing the available computational resources, giving more work to more powerful machines while taking their unreliable nature into account. Both methods have been tested using real applications.
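
    Of the two proposed methods, the full factorial design is the easier one to sketch. The fragment below is a minimal, single-machine stand-in for the idea: it enumerates every parameter combination and evaluates them in parallel worker processes. It uses Python's standard multiprocessing module instead of PVM or a grid middleware, and the parameter grid and evaluate() objective are hypothetical placeholders rather than a real image-processing pipeline.

    ```python
    # Minimal sketch of a full factorial design evaluated in parallel.
    # Python's multiprocessing on one machine stands in for the PVM/grid
    # infrastructure described above; evaluate() and the parameter grid are
    # placeholders for a real image-processing algorithm and its tuning knobs.
    import itertools
    from multiprocessing import Pool

    PARAM_GRID = {
        "threshold": [0.2, 0.4, 0.6],
        "kernel_size": [3, 5, 7],
        "iterations": [1, 2, 4],
    }

    def evaluate(params):
        """Run the algorithm with one parameter combination and return a score."""
        threshold, kernel_size, iterations = params
        # Placeholder objective; a real system would run the imaging pipeline
        # on a validation set and return an accuracy or error measure.
        return -(threshold - 0.4) ** 2 - 0.01 * kernel_size + 0.02 * iterations

    if __name__ == "__main__":
        combos = list(itertools.product(*PARAM_GRID.values()))
        with Pool() as pool:                      # one worker per CPU core
            scores = pool.map(evaluate, combos)
        best = max(zip(scores, combos))
        print("best score", best[0], "for", dict(zip(PARAM_GRID, best[1])))
    ```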

  5. Universal test fixture for monolithic mm-wave integrated circuits calibrated with an augmented TRD algorithm

    NASA Technical Reports Server (NTRS)

    Romanofsky, Robert R.; Shalkhauser, Kurt A.

    1989-01-01

    The design and evaluation of a novel fixturing technique for characterizing millimeter wave solid state devices is presented. The technique utilizes a cosine-tapered ridge guide fixture and a one-tier de-embedding procedure to produce accurate and repeatable device level data. Advanced features of this technique include nondestructive testing, full waveguide bandwidth operation, universality of application, and rapid, yet repeatable, chip-level characterization. In addition, only one set of calibration standards is required regardless of the device geometry.

  6. An efficient algorithm for finding optimal gain-ratio multiple-split tests on hierarchical attributes in decision tree learning

    SciTech Connect

    Almuallim, H.; Akiba, Yasuhiro; Kaneda, Shigeo

    1996-12-31

    Given a set of training examples S and a tree-structured attribute x, the goal in this work is to find a multiple-split test defined on x that maximizes Quinlan's gain-ratio measure. The number of possible such multiple-split tests grows exponentially in the size of the hierarchy associated with the attribute. It is, therefore, impractical to enumerate and evaluate all these tests in order to choose the best one. We introduce an efficient algorithm for solving this problem that guarantees maximizing the gain-ratio over all possible tests. For a training set of m examples and an attribute hierarchy of height d, our algorithm runs in time proportional to dm, which makes it efficient enough for practical use.
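
    The quantity being maximized is Quinlan's gain ratio, which can be stated compactly in code. The sketch below evaluates the gain ratio of one candidate multiple-split from the class labels falling into each branch; it does not reproduce the paper's O(dm) search over the attribute hierarchy, and the toy labels are made up for illustration.

    ```python
    # Minimal sketch of Quinlan's gain-ratio measure for a single candidate
    # multiple-split: information gain divided by split information. The search
    # over a tree-structured attribute described above is not implemented here.
    from collections import Counter
    from math import log2

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

    def gain_ratio(partition):
        """partition: list of label lists, one per branch of the split."""
        all_labels = [y for part in partition for y in part]
        n = len(all_labels)
        info_gain = entropy(all_labels) - sum(
            len(part) / n * entropy(part) for part in partition)
        split_info = -sum(
            len(part) / n * log2(len(part) / n) for part in partition)
        return info_gain / split_info if split_info > 0 else 0.0

    # Toy example: a 3-way split of 8 examples with binary class labels.
    print(gain_ratio([["+", "+", "-"], ["+", "-"], ["-", "-", "-"]]))
    ```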

  7. Evaluation of a New Method of Fossil Retrodeformation by Algorithmic Symmetrization: Crania of Papionins (Primates, Cercopithecidae) as a Test Case

    PubMed Central

    Tallman, Melissa; Amenta, Nina; Delson, Eric; Frost, Stephen R.; Ghosh, Deboshmita; Klukkert, Zachary S.; Morrow, Andrea; Sawyer, Gary J.

    2014-01-01

    Diagenetic distortion can be a major obstacle to collecting quantitative shape data on paleontological specimens, especially for three-dimensional geometric morphometric analysis. Here we utilize the recently published algorithmic symmetrization method of fossil reconstruction and compare it to the more traditional reflection & averaging approach. In order to have an objective test of this method, five casts of a female cranium of Papio hamadryas kindae were manually deformed while the plaster hardened. These were subsequently “retrodeformed” using both algorithmic symmetrization and reflection & averaging and then compared to the original, undeformed specimen. We found that in all cases, algorithmic retrodeformation improved the shape of the deformed cranium and in four out of five cases, the algorithmically symmetrized crania were more similar in shape to the original crania than the reflected & averaged reconstructions. In three out of five cases, the difference between the algorithmically symmetrized crania and the original cranium could be contained within the magnitude of variation among individuals in a single subspecies of Papio. Instances of asymmetric distortion, such as breakage on one side, or bending in the axis of symmetry, were well handled, whereas symmetrical distortion remained uncorrected. This technique was further tested on a naturally deformed and fossilized cranium of Paradolichopithecus arvernensis. Results, based on a principal components analysis and Procrustes distances, showed that the algorithmically symmetrized Paradolichopithecus cranium was more similar to other, less-deformed crania from the same species than was the original. These results illustrate the efficacy of this method of retrodeformation by algorithmic symmetrization for the correction of asymmetrical distortion in fossils. Symmetrical distortion remains a problem for all currently developed methods of retrodeformation. PMID:24992483

  8. Objective markers for sleep propensity: comparison between the Multiple Sleep Latency Test and the Vigilance Algorithm Leipzig.

    PubMed

    Olbrich, Sebastian; Fischer, Marie M; Sander, Christian; Hegerl, Ulrich; Wirtz, Hubert; Bosse-Henck, Andrea

    2015-08-01

    The regulation of wakefulness is important for higher-order organisms. Its dysregulation is involved in the pathomechanism of several psychiatric disorders. Thus, a tool for its objective but minimally time-consuming assessment would be of importance. The Vigilance Algorithm Leipzig allows the objective measurement of sleep propensity based on a single resting state electroencephalogram. To compare the Vigilance Algorithm Leipzig with the standard for objective assessment of excessive daytime sleepiness, a four-trial Multiple Sleep Latency Test was conducted in 25 healthy subjects. Between the first two trials, a 15-min, 25-channel resting electroencephalogram was recorded, and the Vigilance Algorithm Leipzig was used to classify the sleep propensity (i.e., type of vigilance regulation) of each subject. The results of both methods showed significant correlations with the Epworth Sleepiness Scale (ρ = -0.70; ρ = 0.45, respectively) and correlated with each other (ρ = -0.54). Subjects with a stable electroencephalogram-vigilance regulation yielded significantly increased sleep latencies compared with those with an unstable regulation (multiple sleep latency 898.5 s versus 549.9 s; P = 0.03). Further, Vigilance Algorithm Leipzig classifications allowed the identification of subjects with average sleep latencies <6 min with a sensitivity of 100% and a specificity of 77%. Thus, the Vigilance Algorithm Leipzig provides similar information on wakefulness regulation to the much more cost- and time-consuming Multiple Sleep Latency Test. Due to its high sensitivity and specificity for large sleep propensity, the Vigilance Algorithm Leipzig could be an effective and reliable alternative to the Multiple Sleep Latency Test, for example for screening purposes in large cohorts where objective information about wakefulness regulation is needed.

  9. Ground Testing of Prototype Hardware and Processing Algorithms for a Wide Area Space Surveillance System (WASSS)

    NASA Astrophysics Data System (ADS)

    Goldstein, N.; Dressler, R. A.; Richtsmeier, S. S.; McLean, J.; Dao, P. D.; Murray-Krezan, J.; Fulcoly, D. O.

    2013-09-01

    Recent ground testing of a wide area camera system and automated star removal algorithms has demonstrated the potential to detect, quantify, and track deep space objects using small aperture cameras and on-board processors. The camera system, which was originally developed for a space-based Wide Area Space Surveillance System (WASSS), operates in a fixed-stare mode, continuously monitoring a wide swath of space and differentiating celestial objects from satellites based on differential motion across the field of view. It would have greatest utility in a LEO orbit to provide automated and continuous monitoring of deep space with high refresh rates, and with particular emphasis on the GEO belt and GEO transfer space. Continuous monitoring allows a concept of change detection and custody maintenance not possible with existing sensors. The detection approach is equally applicable to Earth-based sensor systems. A distributed system of such sensors, either Earth-based or space-based, could provide automated, persistent night-time monitoring of all of deep space. The continuous monitoring provides a daily record of the light curves of all GEO objects above a certain brightness within the field of view. The daily updates of satellite light curves offer a means to identify specific satellites, to note changes in orientation and operational mode, and to cue other SSA assets for higher resolution queries. The data processing approach may also be applied to larger-aperture, higher resolution camera systems to extend the sensitivity towards dimmer objects. In order to demonstrate the utility of the WASSS system and data processing, a ground-based field test was conducted in October 2012. We report here the results of the observations made at Magdalena Ridge Observatory using the prototype WASSS camera, which has a 4×60° field of view, <0.05° resolution, a 2.8 cm² aperture, and the ability to view within 4° of the sun. A single camera pointed at the GEO belt provided a

  10. Temperature rise tests on a forced-oil-air cooled (FOA) (OFAF) core-form transformer, including loading beyond nameplate

    SciTech Connect

    Thaden, M.V.; Mehta, S.P.; Tuli, S.C.; Grubb, R.L.

    1995-04-01

    Results of temperature rise tests performed in accordance with PC57.119/Draft 12, Recommended Procedures for Performing Temperature Rise Tests on Oil-Immersed Power Transformers at Loads Beyond Nameplate Ratings, are presented. Test data are compared with values calculated using IEEE and IEC loading guide equations, and exponential power constants are determined and compared with those given in the loading guide. Discussion is offered that may be useful in future drafts of the procedure and to the users of the proposed test procedure.

  11. Industrial Sites Work Plan for Leachfield Corrective Action Units: Nevada Test Site and Tonopah Test Range, Nevada (including Record of Technical Change Nos. 1, 2, 3, and 4)

    SciTech Connect

    DOE /NV

    1998-12-18

    This Leachfield Corrective Action Units (CAUs) Work Plan has been developed in accordance with the Federal Facility Agreement and Consent Order (FFACO) that was agreed to by the U.S. Department of Energy, Nevada Operations Office (DOE/NV); the State of Nevada Division of Environmental Protection (NDEP); and the U.S. Department of Defense (FFACO, 1996). Under the FFACO, a work plan is an optional planning document that provides information for a CAU or group of CAUs where significant commonality exists. A work plan may be developed that can be referenced by leachfield Corrective Action Investigation Plans (CAIPs) to eliminate redundant CAU documentation. This Work Plan includes FFACO-required management, technical, quality assurance (QA), health and safety, public involvement, field sampling, and waste management documentation common to several CAUs with similar site histories and characteristics, namely the leachfield systems at the Nevada Test Site (NTS) and the Tonopah Test Range (TTR). For each CAU, a CAIP will be prepared to present detailed, site-specific information regarding contaminants of potential concern (COPCs), sampling locations, and investigation methods.

  12. Multispectral imaging burn wound tissue classification system: a comparison of test accuracies between several common machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2016-03-01

    The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care
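
    For readers unfamiliar with the evaluation protocol, the sketch below shows what a 10-fold cross-validated comparison of several of the listed classifiers looks like in practice. It uses scikit-learn on synthetic stand-in data; the feature dimensions, class counts and hyperparameters are illustrative and do not correspond to the multispectral burn database described above.

    ```python
    # Minimal sketch of comparing several of the classifiers listed above with
    # 10-fold cross-validation, using scikit-learn on synthetic stand-in data
    # rather than the multispectral burn-wound database.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                                QuadraticDiscriminantAnalysis)
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                               n_classes=3, random_state=0)

    models = {
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "DT": DecisionTreeClassifier(random_state=0),
        "LDA": LinearDiscriminantAnalysis(),
        "QDA": QuadraticDiscriminantAnalysis(),
    }

    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=10)   # 10-fold CV accuracy
        print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
    ```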

  13. An Exact Algorithm using Edges and Routes Pegging Test for the Input-Output Scheduling Problem in Automated Warehouses

    NASA Astrophysics Data System (ADS)

    Kubota, Yoshitsune; Numata, Kazumiti

    In this paper we propose and evaluate some ideas to improve an existing exact algorithm for the Input-Output Scheduling Problem (IOSP) in automated warehouses. The existing algorithm is based on an LP relaxation of IOSP, which is solved by the column generation method allowing relaxed columns (routes). Our idea is to implement the column generation using only exact routes, in the expectation of a stronger LP solution, and to offset the resulting increase in computation cost by dropping (pegging) unusable edges. The pegging test is done in the preprocessing phase by solving a Lagrangian relaxation of IOSP formulated with node-cover decision variables. The results of computational experiments show that the proposed algorithm can solve slightly larger instances in less execution time than the existing one.

  14. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

    1985-01-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with consideration given to gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  15. Planning fuel-conservative descents in an airline environment using a small programmable calculator: algorithm development and flight test results

    SciTech Connect

    Knox, C.E.; Vicroy, D.D.; Simmon, D.A.

    1985-05-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with consideration given to gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  16. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex system engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that

  17. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing

    PubMed Central

    Cai, Li

    2014-01-01

    Lord and Wingersky’s (1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications. PMID:25233839
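
    The underlying recursion is compact enough to show directly. The sketch below implements the original unidimensional Lord-Wingersky recursion for dichotomous items at a single quadrature point; the Version 2.0 dimension reduction for hierarchical factor models described above is not reproduced, and the item probabilities are illustrative.

    ```python
    # Minimal sketch of the original (unidimensional) Lord-Wingersky recursion
    # at a single quadrature point theta: it builds P(summed score = s | theta)
    # one dichotomous item at a time.
    import numpy as np

    def summed_score_distribution(p_correct):
        """p_correct: iterable of P(item i correct | theta) for dichotomous items."""
        dist = np.array([1.0])                     # P(score = 0) before any item
        for p in p_correct:
            new = np.zeros(len(dist) + 1)
            new[:-1] += dist * (1.0 - p)           # item answered incorrectly
            new[1:] += dist * p                    # item answered correctly
            dist = new
        return dist                                # index s holds P(score = s | theta)

    # Toy example with three items; the probabilities are illustrative only.
    print(summed_score_distribution([0.8, 0.6, 0.3]))
    ```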

  18. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing.

    PubMed

    Cai, Li

    2015-06-01

    Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.

  19. A grand canonical genetic algorithm for the prediction of multi-component phase diagrams and testing of empirical potentials

    NASA Astrophysics Data System (ADS)

    Tipton, William W.; Hennig, Richard G.

    2013-12-01

    We present an evolutionary algorithm which predicts stable atomic structures and phase diagrams by searching the energy landscape of empirical and ab initio Hamiltonians. Composition and geometrical degrees of freedom may be varied simultaneously. We show that this method utilizes information from favorable local structure at one composition to predict that at others, achieving far greater efficiency of phase diagram prediction than a method which relies on sampling compositions individually. We detail this and a number of other efficiency-improving techniques implemented in the genetic algorithm for structure prediction code that is now publicly available. We test the efficiency of the software by searching the ternary Zr-Cu-Al system using an empirical embedded-atom model potential. In addition to testing the algorithm, we also evaluate the accuracy of the potential itself. We find that the potential stabilizes several correct ternary phases, while a few of the predicted ground states are unphysical. Our results suggest that genetic algorithm searches can be used to improve the methodology of empirical potential design.

  20. A grand canonical genetic algorithm for the prediction of multi-component phase diagrams and testing of empirical potentials.

    PubMed

    Tipton, William W; Hennig, Richard G

    2013-12-11

    We present an evolutionary algorithm which predicts stable atomic structures and phase diagrams by searching the energy landscape of empirical and ab initio Hamiltonians. Composition and geometrical degrees of freedom may be varied simultaneously. We show that this method utilizes information from favorable local structure at one composition to predict that at others, achieving far greater efficiency of phase diagram prediction than a method which relies on sampling compositions individually. We detail this and a number of other efficiency-improving techniques implemented in the genetic algorithm for structure prediction code that is now publicly available. We test the efficiency of the software by searching the ternary Zr-Cu-Al system using an empirical embedded-atom model potential. In addition to testing the algorithm, we also evaluate the accuracy of the potential itself. We find that the potential stabilizes several correct ternary phases, while a few of the predicted ground states are unphysical. Our results suggest that genetic algorithm searches can be used to improve the methodology of empirical potential design. PMID:24184679

  1. Compilation, design tests: Energetic particles Satellite S-3 including design tests for S-3A, S-3B and S-3C

    NASA Technical Reports Server (NTRS)

    Ledoux, F. N.

    1973-01-01

    A compilation of engineering design tests conducted in support of the Energetic Particles Satellite S-3, S-3A, and S-3B programs is presented. The purpose of the tests was to determine the adequacy and reliability of the Energetic Particles series of satellite designs. The various tests consisted of: (1) moments of inertia, (2) functional reliability, (3) component and structural integrity, (4) initiators and explosives tests, and (5) acceptance tests.

  2. Generalization of the Lord-Wingersky Algorithm to Computing the Distribution of Summed Test Scores Based on Real-Number Item Scores

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2013-01-01

    With known item response theory (IRT) item parameters, Lord and Wingersky provided a recursive algorithm for computing the conditional frequency distribution of number-correct test scores, given proficiency. This article presents a generalized algorithm for computing the conditional distribution of summed test scores involving real-number item…

  3. Should We Stop Looking for a Better Scoring Algorithm for Handling Implicit Association Test Data? Test of the Role of Errors, Extreme Latencies Treatment, Scoring Formula, and Practice Trials on Reliability and Validity

    PubMed Central

    Perugini, Marco; Schönbrodt, Felix

    2015-01-01

    Since the development of D scores for the Implicit Association Test (IAT), few studies have examined whether there is a better scoring method. In this contribution, we tested the effect of four relevant parameters for IAT data: the treatment of extreme latencies, the treatment of errors, the method for computing the IAT difference, and the distinction between practice and test critical trials. For some options of these parameters, we included robust statistical methods that can provide viable alternative metrics to existing scoring algorithms, especially given the specific nature of reaction time data. We thus elaborated 420 algorithms resulting from the combination of all the different options and tested the main effects of the four parameters, as well as their interaction with the type of IAT (i.e., with or without a built-in penalty included in the IAT procedure), using robust statistical analyses. From the results, we can derive some recommendations. A treatment of extreme latencies is preferable, but only if it consists of replacing rather than eliminating them. Errors contain important information and should not be discarded. The D score still seems to be a good way to compute the difference, although the G score could be a good alternative, and finally it seems better not to compute the IAT difference separately for practice and test critical trials. From these recommendations, we propose to improve the traditional D score with small yet effective modifications. PMID:26107176
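
    As context for the scoring parameters discussed above, the sketch below computes a D-type score: the difference between mean latencies in the incompatible and compatible blocks divided by their pooled standard deviation, with extreme latencies replaced (clipped) rather than eliminated, in line with the recommendation. The 300/3000 ms bounds and the toy latencies are illustrative choices, not the exact options evaluated in the paper.

    ```python
    # Minimal sketch of a D-type IAT score. Extreme latencies are clipped into
    # a fixed window (replacement rather than elimination); the bounds are
    # illustrative, not the parameters tested in the study.
    import numpy as np

    def d_score(compatible_ms, incompatible_ms, lo=300, hi=3000):
        comp = np.clip(np.asarray(compatible_ms, dtype=float), lo, hi)
        incomp = np.clip(np.asarray(incompatible_ms, dtype=float), lo, hi)
        pooled_sd = np.std(np.concatenate([comp, incomp]), ddof=1)
        return (incomp.mean() - comp.mean()) / pooled_sd

    # Toy latencies (ms); a positive D indicates slower incompatible responses.
    print(d_score([650, 720, 800, 690], [900, 1100, 980, 1050]))
    ```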

  4. Genomic selection in a pig population including information from slaughtered full sibs of boars within a sib-testing program.

    PubMed

    Samorè, A B; Buttazzoni, L; Gallo, M; Russo, V; Fontanesi, L

    2015-05-01

    Genomic selection is becoming a common practice in dairy cattle, but only a few studies have examined its introduction into pig selection programs. Results described for this species are highly dependent on the traits considered and the specific population structure. This paper aims to simulate the impact of genomic selection in a pig population with a training cohort of performance-tested and slaughtered full sibs. This population is selected for performance, carcass and meat quality traits by full-sib testing of boars. Data were simulated using a forward-in-time simulation process that modeled around 60K single nucleotide polymorphisms and several quantitative trait loci distributed across the 18 porcine autosomes. Data were edited to obtain, for each cycle, 200 sires mated with 800 dams to produce 800 litters of 4 piglets each, two males and two females (needed for the sib test), for a total of 3200 newborns. At each cycle, a subset of 200 litters was sib tested, and 60 boars and 160 sows were selected to replace the same number of culled male and female parents. Simulated selection of boars based on performance test data of their full sibs (one castrated brother and two sisters per boar in 200 litters) lasted for 15 cycles. Genotyping and phenotyping of the three tested sibs (training population) and genotyping of the candidate boars (prediction population) were assumed. Breeding values were calculated for traits with two heritability levels (h² = 0.40, carcass traits, and h² = 0.10, meat quality parameters) on simulated pedigrees, phenotypes and genotypes. Genomic breeding values, estimated by various models (GBLUP from raw phenotypes or using breeding values and single-step models), were compared with the classical BLUP Animal Model predictions in terms of predictive ability. Results obtained for traits with moderate heritability (h² = 0.40), similar to the heritability of traits commonly measured within a sib-testing program, did not show any benefit from the

  5. Evaluation of a wind-tunnel gust response technique including correlations with analytical and flight test results

    NASA Technical Reports Server (NTRS)

    Redd, L. T.; Hanson, P. W.; Wynne, E. C.

    1979-01-01

    A wind tunnel technique for obtaining gust frequency response functions for use in predicting the response of flexible aircraft to atmospheric turbulence is evaluated. The tunnel test results for a dynamically scaled cable supported aeroelastic model are compared with analytical and flight data. The wind tunnel technique, which employs oscillating vanes in the tunnel throat section to generate a sinusoidally varying flow field around the model, was evaluated by use of a 1/30 scale model of the B-52E airplane. Correlation between the wind tunnel results, flight test results, and analytical predictions for response in the short period and wing first elastic modes of motion are presented.

  6. A Historical Perspective of Testing and Assessment Including the Impact of Summative and Formative Assessment on Student Achievement

    ERIC Educational Resources Information Center

    Brink, Carole Sanger

    2011-01-01

    In 2007, Georgia developed a comprehensive framework to define what students need to know. One component of this framework emphasizes the use of both formative and summative assessments as part of an integral and specific component of the teachers' performance evaluation. Georgia administers the Criterion-Referenced Competency Test (CRCT) to every…

  7. Multi-color space threshold segmentation and self-learning k-NN algorithm for surge test EUT status identification

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Liu, Gui-xiong

    2016-09-01

    The identification of targets varies across different surge tests. A multi-color space threshold segmentation and self-learning k-nearest neighbor (k-NN) algorithm for equipment-under-test status identification was proposed, because the previous feature-matching approach required training new patterns before every test. First, the color space in which to segment (L*a*b*, hue saturation lightness (HSL), or hue saturation value (HSV)) was selected according to the image's ratios of high-luminance and white-luminance points. Second, an unknown-class sample S_r was classified by the k-NN algorithm with training set T_z according to a feature vector formed from the number of pixels, eccentricity ratio, compactness ratio, and Euler number. Finally, when the classification confidence coefficient equaled k, S_r was added as a sample of the pre-training set T_z'. The training set T_z was expanded to T_z+1 with T_z' once T_z' was saturated. On nine series of illuminant, indicator-light, screen, and disturbance samples (a total of 21,600 frames), the algorithm achieved a 98.65% identification accuracy and autonomously selected five groups of samples to enlarge the training set from T_0 to T_5.
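
    The self-learning loop described above can be sketched independently of the image segmentation step. In the fragment below, a sample is classified by k-NN and is queued into a pre-training pool only when all k neighbours agree (classification confidence equal to k); the pool is merged into the training set once it is saturated. The feature extraction is omitted, and k, the pool size and the toy data are illustrative assumptions rather than the paper's settings.

    ```python
    # Minimal sketch of a confidence-gated self-learning k-NN loop: unanimous
    # neighbour votes promote a sample into a pre-training pool, and the pool
    # is merged into the training set once it fills up.
    import numpy as np
    from collections import Counter

    def knn_predict(train_X, train_y, x, k=5):
        order = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
        votes = Counter(train_y[i] for i in order)
        label, count = votes.most_common(1)[0]
        return label, count                     # count == k means full agreement

    def self_learning_knn(train_X, train_y, stream, k=5, pool_size=20):
        train_X, train_y = list(train_X), list(train_y)
        pool_X, pool_y, predictions = [], [], []
        for x in stream:
            label, confidence = knn_predict(np.array(train_X),
                                            np.array(train_y), x, k)
            predictions.append(label)
            if confidence == k:                 # unanimous neighbours only
                pool_X.append(x)
                pool_y.append(label)
            if len(pool_X) >= pool_size:        # pool saturated: grow training set
                train_X.extend(pool_X)
                train_y.extend(pool_y)
                pool_X, pool_y = [], []
        return predictions

    # Toy usage with 2-D feature vectors and three status classes.
    rng = np.random.default_rng(0)
    train_X = rng.normal(size=(30, 2)) + np.repeat([[0, 0], [4, 4], [0, 4]], 10, axis=0)
    train_y = ["off"] * 10 + ["on"] * 10 + ["fault"] * 10
    stream = [rng.normal(size=2) + [4, 4] for _ in range(50)]
    print(Counter(self_learning_knn(train_X, train_y, stream)))
    ```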

  8. Statistical Analysis of a Large Sample Size Pyroshock Test Data Set Including Post Flight Data Assessment. Revision 1

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; McNelis, Anne M.

    2010-01-01

    The Earth Observing System (EOS) Terra spacecraft was launched on an Atlas IIAS launch vehicle on its mission to observe planet Earth in late 1999. Prior to launch, the new design of the spacecraft's pyroshock separation system was characterized by a series of 13 separation ground tests. The analysis methods used to evaluate this unusually large amount of shock data will be discussed in this paper, with particular emphasis on population distributions and finding statistically significant families of data, leading to an overall shock separation interface level. The wealth of ground test data also allowed a derivation of a Mission Assurance level for the flight. All of the flight shock measurements were below the EOS Terra Mission Assurance level thus contributing to the overall success of the EOS Terra mission. The effectiveness of the statistical methodology for characterizing the shock interface level and for developing a flight Mission Assurance level from a large sample size of shock data is demonstrated in this paper.

  9. FG syndrome, an X-linked multiple congenital anomaly syndrome: The clinical phenotype and an algorithm for diagnostic testing

    PubMed Central

    Clark, Robin Dawn; Graham, John M.; Friez, Michael J.; Hoo, Joe J.; Jones, Kenneth Lyons; McKeown, Carole; Moeschler, John B.; Raymond, F. Lucy; Rogers, R. Curtis; Schwartz, Charles E.; Battaglia, Agatino; Lyons, Michael J.; Stevenson, Roger E.

    2014-01-01

    FG syndrome is a rare X-linked multiple congenital anomaly-cognitive impairment disorder caused by the p.R961W mutation in the MED12 gene. We identified all known patients with this mutation to delineate their clinical phenotype and devise a clinical algorithm to facilitate molecular diagnosis. We ascertained 23 males with the p.R961W mutation in MED12 from 9 previously reported FG syndrome families and 1 new family. Six patients are reviewed in detail. These 23 patients were compared with 48 MED12 mutation-negative patients, who had the clinical diagnosis of FG syndrome. Traits that best discriminated between these two groups were chosen to develop an algorithm with high sensitivity and specificity for the p.R961W MED12 mutation. FG syndrome has a recognizable dysmorphic phenotype with a high incidence of congenital anomalies. A family history of X-linked mental retardation, deceased male infants, and/or multiple fetal losses was documented in all families. The algorithm identifies the p.R961W MED12 mutation-positive group with 100% sensitivity and 90% specificity. The clinical phenotype of FG syndrome defines a recognizable pattern of X-linked multiple congenital anomalies and cognitive impairment. This algorithm can assist the clinician in selecting the patients for testing who are most likely to have the recurrent p.R961W MED12 mutation. PMID:19938245

  10. 78 FR 28633 - Prometric, Inc., a Subsidiary of Educational Testing Service, Including On-Site Leased Workers...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-15

    ... Department's notice of determination was published in the Federal Register on October 19, 2012 (77 FR 64357..., Including On-Site Leased Workers From Office Team St. Paul, Minnesota; Amended Certification Regarding... workers of the subject firm. The company reports that workers leased from Office Team were employed...

  11. Including Students with Disabilities in Large-Scale Testing: Emerging Practices. ERIC/OSEP Digest E564.

    ERIC Educational Resources Information Center

    Fitzsimmons, Mary K.

    This brief identifies practices that include students with disabilities in large-scale assessments as required by the reauthorized and amended 1997 Individuals with Disabilities Education Act. It notes relevant research by the National Center on Educational Outcomes and summarizes major findings of studies funded by the U.S. Office of Special…

  12. Status Report of the Frankfurt H⁻ Test LEBT Including a Non-destructive Emittance Measurement Device

    NASA Astrophysics Data System (ADS)

    Gabor, C.; Jakob, A.; Meusel, O.; Schäfer, J.; Klomp, A.; Santić, F.; Pozimski, J.; Klein, H.; Ratzinger, U.

    2002-11-01

    For high power proton accelerators like SNS, ESS or the planned neutrino factory (CERN), negative ions are preferred because they offer charge exchange injection into the accumulation rings (non-Liouvillian stacking). The low energy beam emittance is a key parameter in order to avoid emittance growth and particle losses in the high-energy sections. Conventional destructive emittance measurement methods like slit-harp systems are restricted for high power ion beams by the interaction of the ion beam with, e.g., the slit or harp. Therefore a non-destructive emittance measurement has several technical and physical advantages. To study the transport of high perveance beams of negative ions, a Low Energy Beam Transport (LEBT) section is under construction. The study of non-destructive emittance measurement devices is one major subject of the test bench. For negative ions, especially H⁻ ions, photodetachment can be applied in a non-destructive emittance measurement instrument (PD-EMI). The paper will present the status of that emittance diagnostic and of the test bench.

  13. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to... emissions and cycle statistics the same as for transient testing as specified in 40 CFR part 1065, subpart G... 6 Intermediate test 10 0.10 7 Warm idle 0 0.15 1 Speed terms are defined in 40 CFR part 1065. 2...

  14. NWRA AVOSS Wake Vortex Prediction Algorithm. 3.1.1

    NASA Technical Reports Server (NTRS)

    Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)

    2002-01-01

    This report provides a detailed description of the wake vortex prediction algorithm used in the Demonstration Version of NASA's Aircraft Vortex Spacing System (AVOSS). The report includes all equations used in the algorithm, an explanation of how to run the algorithm, and a discussion of how the source code for the algorithm is organized. Several appendices contain important supplementary information, including suggestions for enhancing the algorithm and results from test cases.

  15. Random sampler M-estimator algorithm with sequential probability ratio test for robust function approximation via feed-forward neural networks.

    PubMed

    El-Melegy, Moumen T

    2013-07-01

    This paper addresses the problem of fitting a functional model to data corrupted with outliers using a multilayered feed-forward neural network. Although it is of high importance in practical applications, this problem has not received careful attention from the neural network research community. One recent approach to solving this problem is to use a neural network training algorithm based on the random sample consensus (RANSAC) framework. This paper proposes a new algorithm that offers two enhancements over the original RANSAC algorithm. The first one improves the algorithm accuracy and robustness by employing an M-estimator cost function to decide on the best estimated model from the randomly selected samples. The other one improves the time performance of the algorithm by utilizing a statistical pretest based on Wald's sequential probability ratio test. The proposed algorithm is successfully evaluated on synthetic and real data, contaminated with varying degrees of outliers, and compared with existing neural network training algorithms.
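
    To make the first enhancement concrete, the sketch below applies the RANSAC-with-M-estimator idea to a simple robust line fit: random minimal samples propose candidate models, and the winner is the model with the lowest Huber cost over all residuals rather than the largest inlier count. The neural-network training and Wald's sequential probability ratio pretest from the paper are not reproduced, and the data and thresholds are synthetic.

    ```python
    # Minimal sketch of RANSAC with an M-estimator (Huber) cost on a line-fitting
    # problem: each random minimal sample proposes a model, and the model with
    # the lowest robust cost over all residuals is retained.
    import numpy as np

    def huber_cost(residuals, delta=1.0):
        a = np.abs(residuals)
        quad = 0.5 * a ** 2
        lin = delta * (a - 0.5 * delta)
        return np.sum(np.where(a <= delta, quad, lin))

    def ransac_line(x, y, n_iter=200, delta=1.0, seed=0):
        rng = np.random.default_rng(seed)
        best_params, best_cost = None, np.inf
        for _ in range(n_iter):
            i, j = rng.choice(len(x), size=2, replace=False)
            if x[i] == x[j]:
                continue                        # degenerate sample, skip
            slope = (y[j] - y[i]) / (x[j] - x[i])
            intercept = y[i] - slope * x[i]
            cost = huber_cost(y - (slope * x + intercept), delta)
            if cost < best_cost:
                best_params, best_cost = (slope, intercept), cost
        return best_params

    x = np.linspace(0, 10, 50)
    y = 2.0 * x + 1.0 + np.random.default_rng(1).normal(0, 0.2, 50)
    y[::7] += 15.0                              # inject outliers
    print(ransac_line(x, y))                    # close to (2.0, 1.0) despite outliers
    ```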

  16. Testing the Generalization Efficiency of Oil Slick Classification Algorithm Using Multiple SAR Data for Deepwater Horizon Oil Spill

    NASA Astrophysics Data System (ADS)

    Ozkan, C.; Osmanoglu, B.; Sunar, F.; Staples, G.; Kalkan, K.; Balık Sanlı, F.

    2012-07-01

    Marine oil spills due to releases of crude oil from tankers, offshore platforms, drilling rigs and wells, etc. are seriously affecting the fragile marine and coastal ecosystem and cause political and environmental concern. A catastrophic explosion and subsequent fire on the Deepwater Horizon oil platform caused the platform to burn and sink, and oil leaked continuously between April 20th and July 15th of 2010, releasing about 780,000 m³ of crude oil into the Gulf of Mexico. Today, space-borne SAR sensors are extensively used for the detection of oil spills in the marine environment, as they are independent of sunlight, not affected by cloudiness, and more cost-effective than air patrolling due to covering large areas. In this study, the generalization extent of an object-based classification algorithm was tested for oil spill detection using multiple SAR images. Among many geometrical, physical and textural features, the more distinctive ones were selected to distinguish oil slicks from look-alike objects. The tested classifier was constructed from a Multilayer Perceptron Artificial Neural Network trained by the ABC, LM and BP optimization algorithms. The training data for the classifier were derived from SAR imagery of an oil spill that originated off Lebanon in 2007. The classifier was then applied to the Deepwater Horizon oil spill data in the Gulf of Mexico, on RADARSAT-2 and ALOS PALSAR images, to demonstrate the generalization efficiency of the oil slick classification algorithm.
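
    A minimal stand-in for the classification stage is sketched below: a small multilayer perceptron separating "oil" and "look-alike" objects from a handful of per-object features. scikit-learn's Adam optimizer is used in place of the ABC, LM and BP training schemes compared in the paper, and the feature values are synthetic placeholders, not real SAR-derived measurements.

    ```python
    # Minimal sketch of an MLP classifier separating oil from look-alike objects
    # using a few geometric/backscatter features. The synthetic feature values
    # below stand in for real SAR-derived object statistics.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 400
    # Hypothetical features: area, shape complexity, mean backscatter, contrast.
    oil = rng.normal([3.0, 1.8, -22.0, 0.6], 0.5, size=(n, 4))
    lookalike = rng.normal([2.5, 1.2, -18.0, 0.4], 0.5, size=(n, 4))
    X = np.vstack([oil, lookalike])
    y = np.array([1] * n + [0] * n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                      random_state=0))
    clf.fit(X_tr, y_tr)
    print("hold-out accuracy:", clf.score(X_te, y_te))
    ```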

  17. Testing a polarimetric cloud imager aboard research vessel Polarstern: comparison of color-based and polarimetric cloud detection algorithms.

    PubMed

    Barta, András; Horváth, Gábor; Horváth, Ákos; Egri, Ádám; Blahó, Miklós; Barta, Pál; Bumke, Karl; Macke, Andreas

    2015-02-10

    Cloud cover estimation is an important part of routine meteorological observations. Cloudiness measurements are used in climate model evaluation, nowcasting solar radiation, parameterizing the fluctuations of sea surface insolation, and building energy transfer models of the atmosphere. Currently, the most widespread ground-based method to measure cloudiness is based on analyzing the unpolarized intensity and color distribution of the sky obtained by digital cameras. As a new approach, we propose that cloud detection can be aided by the additional use of skylight polarization measured by 180° field-of-view imaging polarimetry. In the fall of 2010, we tested such a novel polarimetric cloud detector aboard the research vessel Polarstern during expedition ANT-XXVII/1. One of our goals was to test the durability of the measurement hardware under the extreme conditions of a trans-Atlantic cruise. Here, we describe the instrument and compare the results of several different cloud detection algorithms, some conventional and some newly developed. We also discuss the weaknesses of our design and its possible improvements. The comparison with cloud detection algorithms developed for traditional nonpolarimetric full-sky imagers allowed us to evaluate the added value of polarimetric quantities. We found that (1) neural-network-based algorithms perform the best among the investigated schemes and (2) global information (the mean and variance of intensity), nonoptical information (e.g., sun-view geometry), and polarimetric information (e.g., the degree of polarization) improve the accuracy of cloud detection, albeit slightly.
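
    The added polarimetric quantity mentioned above, the degree of (linear) polarization, is straightforward to compute per pixel once the Stokes parameters are known. The sketch below derives I, Q and U from three polarizer orientations (0°, 60°, 120°) and thresholds the degree of linear polarization; the random images and the 0.2 threshold are placeholders, not the instrument's data or the authors' detection rule.

    ```python
    # Minimal sketch of a per-pixel degree of linear polarization (DoLP) map
    # from three polarizer orientations, as one input a polarimetric cloud
    # detector can combine with intensity and color features.
    import numpy as np

    def stokes_from_three_angles(i0, i60, i120):
        """Stokes I, Q, U from intensities behind polarizers at 0°, 60°, 120°."""
        I = (2.0 / 3.0) * (i0 + i60 + i120)
        Q = (2.0 / 3.0) * (2.0 * i0 - i60 - i120)
        U = (2.0 / np.sqrt(3.0)) * (i60 - i120)
        return I, Q, U

    rng = np.random.default_rng(0)
    i0, i60, i120 = (rng.uniform(0.1, 1.0, (64, 64)) for _ in range(3))  # stand-ins
    I, Q, U = stokes_from_three_angles(i0, i60, i120)
    dolp = np.sqrt(Q ** 2 + U ** 2) / I
    cloud_mask = dolp < 0.2        # clouds tend to depolarize the clear-sky pattern
    print("fraction flagged as cloudy:", cloud_mask.mean())
    ```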

  18. A test on a Neuro-Fuzzy algorithm used to reduce continuous gravity records for the effect of meteorological parameters

    NASA Astrophysics Data System (ADS)

    Andò, Bruno; Carbone, Daniele

    2004-05-01

    Gravity measurements are utilized at active volcanoes to detect mass changes linked to magma transfer processes and thus to recognize forerunners to paroxysmal volcanic events. Continuous gravity measurements are now increasingly performed at sites very close to active craters, where there is the greatest chance of detecting meaningful gravity changes. Unfortunately, especially when operated under the adverse environmental conditions usually encountered at such places, gravimeters have proved to be affected by meteorological parameters, mainly by changes in the atmospheric temperature. The pseudo-signal generated by these perturbations is often stronger than the signal generated by actual changes in the gravity field. Thus, the implementation of well-performing algorithms for reducing the gravity signal for the effect of meteorological parameters is vital to obtain sequences useful from the volcano surveillance standpoint. In the present paper, a Neuro-Fuzzy algorithm, which was already proved to accomplish the required task satisfactorily, is tested over a data set from three gravimeters which worked continuously for about 50 days at a site far away from active zones, where changes due to actual fluctuations of the gravity field are expected to be within a few microgal. After the reduction of the gravity series, residuals are within about 15 μGal peak-to-peak, thus confirming the capability of the Neuro-Fuzzy algorithm under test to perform the required task satisfactorily.

  19. Combined radiation pressure and thermal modelling of complex satellites: algorithms and on-orbit tests

    NASA Astrophysics Data System (ADS)

    Ziebart, M.; Adhya, S.; Sibthorpe, A.; Edwards, S.; Cross, P.

    In an era of high resolution gravity field modelling, the dominant error sources in spacecraft orbit determination are non-conservative spacecraft surface forces. These forces include: solar radiation pressure, thermal re-radiation forces, the forces due to radiation both reflected and emitted by the Earth, and atmospheric drag effects. All of these forces can be difficult to characterise a priori because they require detailed modelling of the spacecraft geometry and surface properties, its attitude behaviour, the incident flux spatial and temporal variations, and the interaction of these fluxes with the surface. The conventional approach to overcoming these problems is to build simplified box-and-wing models of the satellites and to estimate empirically factors that account for the inevitable mis-modelling. Over the last five years the authors have developed a suite of software utilities that model analytically the first three effects in the list above: solar radiation pressure, thermal forces and the albedo/earthshine force. The techniques are designed specifically to deal with complex spacecraft structures, no structural simplifications are made, and the method can be applied to any spacecraft. Substantial quality control measures are used during computation to both avoid and trap errors. The paper presents the broad basis of the modelling techniques for each of the effects. Two operational tests of the output models, using the medium Earth orbit satellite GPS Block IIR and the low Earth orbit Jason-1, are presented. Model tests for GPS IIR are based on predicting the satellite orbit using the dynamic models alone (with no empirical scaling or augmentation) and comparing the integrated trajectory with precise, post-processed orbits. Using one month's worth of precise orbits, and all available Block IIR satellites, the RMS differences between the predicted orbits and the precise orbits over 12 hours are: 0.14 m (height), 0.07 m (across track) and 0.51 m (along track). The
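
    As an indication of the kind of surface force being modelled analytically, the sketch below evaluates the textbook solar radiation pressure force on a single flat plate with absorption and specular reflection only; diffuse reflection, thermal re-radiation and earthshine, which the authors also model, are omitted, and the plate area, reflectivity and geometry are illustrative values.

    ```python
    # Minimal sketch of solar radiation pressure on one flat plate (absorption
    # plus specular reflection only); not the authors' full analytical model.
    import numpy as np

    SOLAR_FLUX = 1361.0        # W/m^2 at 1 AU (approximate)
    C = 299_792_458.0          # speed of light, m/s

    def srp_force(area_m2, specular_reflectivity, normal, sun_dir):
        """Force (N) on a flat plate; normal and sun_dir are unit vectors,
        sun_dir pointing from the plate toward the Sun."""
        cos_theta = float(np.dot(normal, sun_dir))
        if cos_theta <= 0.0:                       # plate faces away from the Sun
            return np.zeros(3)
        pressure = SOLAR_FLUX / C
        return -pressure * area_m2 * cos_theta * (
            (1.0 - specular_reflectivity) * sun_dir
            + 2.0 * specular_reflectivity * cos_theta * normal)

    # Example: a 2 m^2 panel tilted 30 degrees away from the Sun line.
    normal = np.array([np.cos(np.radians(30)), np.sin(np.radians(30)), 0.0])
    force = srp_force(2.0, 0.3, normal, np.array([1.0, 0.0, 0.0]))
    print(force, "N")   # on the order of 1e-5 N, typical for a small surface
    ```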

  20. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

    NASA Astrophysics Data System (ADS)

    Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

    2016-07-01

    A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.
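
    A minimal sketch of the first-order-neighbourhood idea is given below: it assembles a sparse Gaussian Markov random field precision matrix Q on a small grid and scores a model-minus-observation discrepancy field with the quadratic form dᵀQd, so that spatially coherent errors are weighted differently from independent ones. The precision parameters and the discrepancy field are placeholders; this is not the paper's exact test statistic.

    ```python
    # Minimal sketch of a first-order-neighbourhood GMRF precision matrix on a
    # grid and a quadratic-form score of a discrepancy field.
    import numpy as np
    import scipy.sparse as sp

    def gmrf_precision(nrow, ncol, kappa=1.0, tau=1.0):
        """Q = tau * (kappa*I + L), with L the graph Laplacian of the rook
        (first-order) neighbourhood on an nrow x ncol grid."""
        n = nrow * ncol
        idx = np.arange(n).reshape(nrow, ncol)
        rows, cols = [], []
        for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
            rows.extend(a.ravel()); cols.extend(b.ravel())   # edge a -> b
            rows.extend(b.ravel()); cols.extend(a.ravel())   # edge b -> a
        adjacency = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
        degree = sp.diags(np.asarray(adjacency.sum(axis=1)).ravel())
        return tau * (kappa * sp.identity(n) + degree - adjacency)

    nrow, ncol = 10, 12
    Q = gmrf_precision(nrow, ncol)
    d = np.random.default_rng(0).normal(size=nrow * ncol)   # stand-in discrepancy
    statistic = float(d @ (Q @ d))                           # quadratic-form score
    print("quadratic-form statistic:", statistic)
    ```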

  1. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

    DOE PAGES

    Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

    2016-07-20

    A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.

  2. Inventory of forest and rangeland resources, including forest stress. [Atlanta, Georgia, Black Hills, and Manitou, Colorado test sites

    NASA Technical Reports Server (NTRS)

    Heller, R. C.; Aldrich, R. C.; Weber, F. P.; Driscoll, R. S. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Some current beetle-killed ponderosa pine can be detected on S190-B photography imaged over the Bear Lodge Mountains in the Black Hills National Forest. Detections were made on SL-3 imagery (September 13, 1973) using a zoom lens microscope to view the photography. At this time correlations have not been made to all of the known infestation spots in the Bear Lodge Mountains; rather, known infestations have been located on the SL-3 imagery. It was determined that the beetle-killed trees were current kills by stereo viewing of SL-3 imagery on one side and SL-2 on the other. A successful technique was developed for mapping current beetle-killed pine using MSS imagery from mission 247 flown by the C-130 over the Black Hills test site in September 1973. Color enhancement processing on the NASA/JSC DAS system using three MSS channels produced an excellent-quality detection map for current-kill pine. More importantly, it provides a way to inventory the dead trees by relating PCM counts to actual numbers of dead trees.

  3. Optimizing tuning masses for helicopter rotor blade vibration reduction including computed airloads and comparison with test data

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.; Walsh, Joanne L.; Wilbur, Matthew L.

    1992-01-01

    The development and validation of an optimization procedure to systematically place tuning masses along a rotor blade span to minimize vibratory loads are described. The masses and their corresponding locations are the design variables that are manipulated to reduce the harmonics of hub shear for a four-bladed rotor system without adding a large mass penalty. The procedure incorporates a comprehensive helicopter analysis to calculate the airloads. Predicting changes in airloads due to changes in design variables is an important feature of this research. The procedure was applied to a one-sixth, Mach-scaled rotor blade model to place three masses and then again to place six masses. In both cases the added mass was able to achieve significant reductions in the hub shear. In addition, the procedure was applied to place a single mass of fixed value on a blade model to reduce the hub shear for three flight conditions. The analytical results were compared to experimental data from a wind tunnel test performed in the Langley Transonic Dynamics Tunnel. The correlation of the mass location was good and the trend of the mass location with respect to flight speed was predicted fairly well. However, it was noted that the analysis was not entirely successful at predicting the absolute magnitudes of the fixed system loads.

  4. Corrective Action Decision Document for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada: Revision 0, Including Errata Sheet

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2004-04-01

    This Corrective Action Decision Document identifies the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's corrective action alternative recommendation for each of the corrective action sites (CASs) within Corrective Action Unit (CAU) 204: Storage Bunkers, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. An evaluation of analytical data from the corrective action investigation, review of current and future operations at each CAS, and a detailed comparative analysis of potential corrective action alternatives were used to determine the appropriate corrective action for each CAS. There are six CASs in CAU 204, which are all located between Areas 1, 2, 3, and 5 on the NTS. The No Further Action alternative was recommended for CASs 01-34-01, 02-34-01, 03-34-01, and 05-99-02; and a Closure in Place with Administrative Controls recommendation was the preferred corrective action for CASs 05-18-02 and 05-33-01. These alternatives were judged to meet all requirements for the technical components evaluated as well as applicable state and federal regulations for closure of the sites and will eliminate potential future exposure pathways to the contaminated media at CAU 204.

  5. Comparison of experimental vision performance testing techniques, including the implementation of an active matrix electrophoretic ink display

    NASA Astrophysics Data System (ADS)

    Swinney, Mathew W.; Marasco, Peter L.; Heft, Eric L.

    2007-04-01

    Standard black and white printed targets have been used for numerous vision-related experiments, and are ideal with respect to contrast and spectral uniformity in the visible and near-infrared (NIR) regions of the electromagnetic (EM) spectrum. However, these targets lack the ability to refresh, update, or perform as a real-time, dynamic stimulus. This limits their use in various standard vision performance measurement techniques. Emissive displays, such as LCDs, possess some of the attributes printed targets lack, but come with a disadvantage of their own: LCDs lack the spectral uniformity of printed targets, making them of debatable value for presenting test targets in the near and short-wave infrared regions of the spectrum. Yet a new option has recently become viable that may retain favorable attributes of both of the previously mentioned alternatives. The electrophoretic ink display is a dynamic, refreshable, and easily manipulated display that performs much like printed targets with respect to spectral uniformity. This paper will compare and contrast the various techniques that can be used to measure observer visual performance through night vision devices and imagers - focusing on the visible to infrared region of the EM spectrum. Furthermore, it will quantify the electrophoretic ink display option, determining its advantages and the situations for which it is best suited.

  6. An algorithm for circular test and improved optical configuration by two-dimensional (2D) laser heterodyne interferometer

    NASA Astrophysics Data System (ADS)

    Tang, Shanzhi; Yu, Shengrui; Han, Qingfu; Li, Ming; Wang, Zhao

    2016-09-01

    The circular test is an important tactic for assessing motion accuracy in many fields, especially machine tools and coordinate measuring machines. Both the contact double ball bar and existing non-contact methods suffer from setup errors due to direct centring of the measuring instrument. To solve this problem, an algorithm for the circular test using function construction based on matrix operations is proposed; it is used not only to solve for the radial deviation (F) but also to obtain two other evaluation parameters, notably circular hysteresis (H). Furthermore, an improved optical configuration with a single laser is presented based on a 2D laser heterodyne interferometer. Compared with the existing non-contact method, it provides better homogeneity of the laser sources for 2D displacement sensing in advanced metrology. The algorithm and modeling are both illustrated, and an error budget is also given. Finally, to validate them, test experiments on motion paths were carried out on a gantry machining center. Contrast test results support the proposal.
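
    The record above does not reproduce the matrix formulation itself; as a hedged illustration of the underlying evaluation, the sketch below fits a reference circle to measured path points by linear least squares (the Kasa method) and reports the radial deviation band F as the spread of radial residuals. Circular hysteresis H would additionally compare clockwise and counter-clockwise passes, which is omitted here, and the simulated path data are made up.

        # Minimal sketch: least-squares circle fit (Kasa method) and radial deviation F.
        # This illustrates circular-test evaluation generally, not the matrix-operation
        # algorithm proposed in the paper.
        import numpy as np

        def fit_circle(x, y):
            """Algebraic least-squares circle fit; returns centre (a, b) and radius r."""
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            b = x**2 + y**2
            (a, b0, c), *_ = np.linalg.lstsq(A, b, rcond=None)
            r = np.sqrt(c + a**2 + b0**2)
            return a, b0, r

        # Hypothetical measured circular path (mm) with a small error motion added.
        t = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
        x = 50.0 * np.cos(t) + 0.003 * np.cos(5 * t)
        y = 50.0 * np.sin(t) + 0.003 * np.sin(3 * t)

        a, b, r = fit_circle(x, y)
        radial = np.hypot(x - a, y - b) - r        # radial deviations from fitted circle
        F = radial.max() - radial.min()            # radial deviation band
        print(f"fitted radius = {r:.4f} mm, radial deviation F = {F * 1000:.2f} um")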

  7. Testing a discrete choice experiment including duration to value health states for large descriptive systems: addressing design and sampling issues.

    PubMed

    Bansback, Nick; Hole, Arne Risa; Mulhern, Brendan; Tsuchiya, Aki

    2014-08-01

    There is interest in the use of discrete choice experiments that include a duration attribute (DCETTO) to generate health utility values, but questions remain on its feasibility in large health state descriptive systems. This study examines the stability of DCETTO to estimate health utility values from the five-level EQ-5D, an instrument that describes 3125 different health states. Between January and March 2011, we administered 120 DCETTO tasks based on the five-level EQ-5D to a total of 1799 respondents in the UK (each completed 15 DCETTO tasks on-line). We compared models across different sample sizes and different total numbers of observations. We found the DCETTO coefficients were generally consistent, with high agreement between individual ordinal preferences and aggregate cardinal values. Keeping the DCE design and the total number of observations fixed, subsamples consisting of 10 tasks per respondent with an intermediate-sized sample, and 15 tasks with a smaller sample, provide similar results in comparison to the whole-sample model. In conclusion, we find that the DCETTO is a feasible method for developing values for larger descriptive systems such as EQ-5D-5L, and find evidence supporting important design features for future valuation studies that use the DCETTO.

  8. Homogenisation algorithm skill testing with synthetic global benchmarks for the International Surface Temperature Initiative

    NASA Astrophysics Data System (ADS)

    Willett, Katharine; Venema, Victor; Williams, Claude; Aguilar, Enric; Lopardo, Giuseppina; Jolliffe, Ian; Alexander, Lisa; Vincent, Lucie; Lund, Robert; Menne, Matt; Thorne, Peter; Auchmann, Renate; Warren, Rachel; Bronnimann, Stefan; Thorarinsdottir, Thordis; Easterbrook, Steve; Gallagher, Colin

    2014-05-01

    Our surface temperature data are good enough to give us confidence that the world has warmed since 1880. However, they are not perfect - we cannot be precise about the amount of warming for the globe and especially for small regions or specific locations. Inhomogeneity (non-climate changes to the station record) is a major problem. While progress in detection of, and adjustment for, inhomogeneities is continually advancing, monitoring effectiveness on large networks and gauging respective improvements in climate data quality is non-trivial. There is currently no internationally recognised means of robustly assessing the effectiveness of homogenisation methods on real data - and thus, the inhomogeneity uncertainty in those data. Here I present the work of the International Surface Temperature Initiative (ISTI; www.surfacetemperatures.org) Benchmarking working group. The aim is to quantify homogenisation algorithm skill on the global scale against realistic benchmarks. This involves the creation of synthetic worlds of surface temperature data, deliberate contamination of these with known errors and then assessment of the ability of homogenisation algorithms to detect and remove these errors. The ultimate aim is threefold: quantifying uncertainties in surface temperature data; enabling more meaningful product intercomparison; and improving homogenisation methods. There are five components of work: 1) Create ~30000 synthetic benchmark stations that look and feel like the real global temperature network, but do not contain any inhomogeneities - analog-clean-worlds 2) Design a set of error models which mimic the main types of inhomogeneities found in practice, and combine them with the analog-clean-worlds to give analog-error-worlds 3) Engage with dataset creators to run their homogenisation algorithms blind on the analog-error-world stations as they have done with the real data 4) Design an assessment framework to gauge the degree to which analog-error-worlds are returned to
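
    By way of illustration of step 2) above - and not the ISTI error models themselves - the sketch below contaminates a clean synthetic monthly temperature series with step-change inhomogeneities at random breakpoints, the kind of analog-error-world signal a homogenisation algorithm would then be asked to detect and remove. All sizes and frequencies here are assumptions.

        # Hedged sketch: add step-change inhomogeneities to a clean synthetic series.
        # The real ISTI benchmarks use far richer error models (trends, outliers,
        # seasonally varying breaks); this only illustrates the basic idea.
        import numpy as np

        rng = np.random.default_rng(42)
        n_months = 12 * 100                      # a century of monthly values

        # Clean "analog world": seasonal cycle plus weather noise.
        months = np.arange(n_months)
        clean = 10.0 + 8.0 * np.sin(2 * np.pi * months / 12.0) + rng.normal(0, 1.0, n_months)

        # Insert a handful of break inhomogeneities with random sizes and dates.
        n_breaks = 5
        break_points = np.sort(rng.choice(np.arange(60, n_months - 60), n_breaks, replace=False))
        break_sizes = rng.normal(0.0, 0.8, n_breaks)    # shifts of a few tenths of a degree

        contaminated = clean.copy()
        for bp, size in zip(break_points, break_sizes):
            contaminated[bp:] += size                   # step change persists after the break

        print("break points (month index):", break_points)
        print("break sizes (K):", np.round(break_sizes, 2))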

  9. Homogenisation algorithm skill testing with synthetic global benchmarks for the International Surface Temperature Initiative

    NASA Astrophysics Data System (ADS)

    Willett, Katharine; Venema, Victor; Williams, Claude; Aguilar, Enric; Jolliffe, Ian; Alexander, Lisa; Vincent, Lucie; Lund, Robert; Menne, Matt; Thorne, Peter; Auchmann, Renate; Warren, Rachel; Bronnimann, Stefan; Thorarinsdottir, Thordis; Easterbrook, Steve; Gallagher, Colin; Lopardo, Giuseppina; Hausfather, Zeke; Berry, David

    2015-04-01

    Our surface temperature data are good enough to give us confidence that the world has warmed since 1880. However, they are not perfect - we cannot be precise about the amount of warming for the globe and especially for small regions or specific locations. Inhomogeneity (non-climate changes to the station record) is a major problem. While progress in detection of, and adjustment for, inhomogeneities is continually advancing, monitoring effectiveness on large networks and gauging respective improvements in climate data quality is non-trivial. There is currently no internationally recognised means of robustly assessing the effectiveness of homogenisation methods on real data - and thus, the inhomogeneity uncertainty in those data. Here I present the work of the International Surface Temperature Initiative (ISTI; www.surfacetemperatures.org) Benchmarking working group. The aim is to quantify homogenisation algorithm skill on the global scale against realistic benchmarks. This involves the creation of synthetic worlds of surface temperature data, deliberate contamination of these with known errors and then assessment of the ability of homogenisation algorithms to detect and remove these errors. The ultimate aim is threefold: quantifying uncertainties in surface temperature data; enabling more meaningful product intercomparison; and improving homogenisation methods. There are five components of work: 1. Create 30000 synthetic benchmark stations that look and feel like the real global temperature network, but do not contain any inhomogeneities: analog clean-worlds. 2. Design a set of error models which mimic the main types of inhomogeneities found in practice, and combine them with the analog clean-worlds to give analog error-worlds. 3. Engage with dataset creators to run their homogenisation algorithms blind on the analog error-world stations as they have done with the real data. 4. Design an assessment framework to gauge the degree to which analog error-worlds are returned to

  10. A Parametric Testing Environment for Finding the Operational Envelopes of Simulated Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Barrett, Anthony

    2011-01-01

    The Problem: As NASA missions become ever more complex and subsystems become ever more complicated, testing for correctness becomes progressively more difficult. Exhaustive testing is usually impractical, so how does one select a smaller set of test cases that is effective at finding and analyzing bugs? Solution: (1) Let an analyst pose test-space coverage requirements and then refine these requirements to focus on regions of interest in response to visualized test results. (2) Instead of validating correctness around set points (with Monte Carlo analysis), find and characterize the margins of the performance envelope where the system starts to fail.
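
    A minimal sketch of the idea is given below, with an entirely hypothetical guidance-simulation stub standing in for the system under test: sweep a two-parameter test space on a grid, record pass/fail, and report the margin where the system starts to fail, rather than Monte Carlo sampling around a set point.

        # Hedged sketch of a parametric envelope search; `simulate_guidance` is a
        # hypothetical stand-in for the simulated guidance algorithm under test.
        import numpy as np

        def simulate_guidance(entry_angle_deg, wind_gust_mps):
            """Toy pass/fail model: returns True if the (simulated) approach succeeds."""
            miss_distance = abs(entry_angle_deg - 6.0) * 40.0 + wind_gust_mps * 12.0
            return miss_distance < 400.0          # hypothetical success criterion

        angles = np.linspace(2.0, 10.0, 33)       # test-space coverage requirements
        gusts = np.linspace(0.0, 30.0, 31)

        passed = np.array([[simulate_guidance(a, g) for g in gusts] for a in angles])

        # Characterise the envelope margin: for each entry angle, the largest gust
        # that still passes (the boundary where the system starts to fail).
        for a, row in zip(angles, passed):
            ok = gusts[row]
            margin = ok.max() if ok.size else None
            print(f"entry angle {a:4.1f} deg -> max tolerated gust: {margin}")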

  11. Space shuttle orbiter avionics software: Post review report for the entry FACI (First Article Configuration Inspection). [including orbital flight tests integrated system

    NASA Technical Reports Server (NTRS)

    Markos, H.

    1978-01-01

    Status of the computer programs dealing with space shuttle orbiter avionics is reported. Specific topics covered include: delivery status; SSW software; SM software; DL software; GNC software; level 3/4 testing; level 5 testing; performance analysis, SDL readiness for entry first article configuration inspection; and verification assessment.

  12. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... speed, use the 5-mode duty cycle or the corresponding ramped-modal cycle described in 40 CFR part 1039.... Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to... emissions and cycle statistics the same as for transient testing as specified in 40 CFR part 1065, subpart...

  13. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... speed, use the 5-mode duty cycle or the corresponding ramped-modal cycle described in 40 CFR part 1039.... Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to... emissions and cycle statistics the same as for transient testing as specified in 40 CFR part 1065, subpart...

  14. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-modal cycles described in 40 CFR Part 1065. (b) Measure emissions by testing the engine on a dynamometer... are defined in 40 CFR part 1065. 2 The percent torque is relative to the maximum torque at the given... Steady-state 124 Warm idle 0 1 Speed terms are defined in 40 CFR part 1065. 2 Advance from one mode...

  15. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... speed, use the 5-mode duty cycle or the corresponding ramped-modal cycle described in 40 CFR part 1039.... Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to... emissions and cycle statistics the same as for transient testing as specified in 40 CFR part 1065, subpart...

  16. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... in 40 CFR 1065.514 to confirm that the test is valid. Operate the engine and sampling system as... idle mode, operate the engine at its warm idle speed as described in 40 CFR part 1065. (d) For constant... under 40 CFR 1065.10(c) to replace full-load operation with the maximum load for which the engine...

  17. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... in 40 CFR 1065.514 to confirm that the test is valid. Operate the engine and sampling system as... idle mode, operate the engine at its warm idle speed as described in 40 CFR part 1065. (d) For constant... under 40 CFR 1065.10(c) to replace full-load operation with the maximum load for which the engine...

  18. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-mode or ramped-modal cycles as described in 40 CFR part 1065. (b) Measure emissions by testing the... approval under 40 CFR 1065.10(c) to replace full-load operation with the maximum load for which the engine... Appendix II of this part for variable-speed engines below 19 kW. You may instead use the 8-mode duty...

  19. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... in 40 CFR 1065.514 to confirm that the test is valid. Operate the engine and sampling system as... idle mode, operate the engine at its warm idle speed as described in 40 CFR part 1065. (d) For constant... under 40 CFR 1065.10(c) to replace full-load operation with the maximum load for which the engine...

  20. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... in 40 CFR 1065.514 to confirm that the test is valid. Operate the engine and sampling system as... idle mode, operate the engine at its warm idle speed as described in 40 CFR part 1065. (d) For constant... under 40 CFR 1065.10(c) to replace full-load operation with the maximum load for which the engine...

  1. Acute testing of the rate-smoothed pacing algorithm for ventricular rate stabilization.

    PubMed

    Lee, J K; Yee, R; Braney, M; Stoop, G; Begemann, M; Dunne, C; Klein, G J; Krahn, A D; Van Hemel, N M

    1999-04-01

    We evaluated the capability of a new pacemaker-based rate-smoothing algorithm (RSA) to reduce the irregular ventricular response of AF. RSA prevents sudden decreases in rate using a modified physiological band and flywheel feature. Twelve patients (51 +/- 21 years) with hemodynamically tolerated AF of 4 months to 20 years duration were studied. Atrial and ventricular leads were connected to the external pacemaker device in the electrophysiology laboratory. Consecutive RR intervals during AF were measured at baseline and after ventricular pacing with RSA ON. Ventricular pacing with the rate smoothing algorithm reduced maximum RR intervals (1,207 +/- 299 vs 855 +/- 148 ms, P = 0.0005), with no significant change in the minimum RR interval (401 +/- 55 vs 393 +/- 74 ms, P = 0.292). A small shortening of the mean RR interval (634 +/- 153 vs 594 +/- 135 ms, P = 0.007) was seen with no change in the median RR interval (609 +/- 153 vs 595 +/- 143 ms, P = 0.388). There was a 43% reduction in RR standard deviation (145 +/- 52 vs 82 +/- 28, P = 0.0005), 49% reduction in mean absolute RR interval difference (MAD) (152 +/- 64 vs 77 +/- 34, P = 0.0005) and MAD/mean RR ratio (0.23 +/- 0.05 vs 0.13 +/- 0.04, P = 0.0005). We conclude that rate-smoothed pacing effectively reduces RR variability of AF in the acute setting.
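
    The variability statistics quoted above are straightforward to reproduce; the sketch below computes the same metrics (maximum, minimum, mean, median, standard deviation, mean absolute successive difference, and its ratio to the mean) for a made-up series of RR intervals. Interpreting MAD as the mean absolute difference between successive intervals is our assumption.

        # Sketch: RR-interval variability metrics like those reported in the study
        # (values below are made-up illustrative data, not patient measurements).
        import numpy as np

        rr_ms = np.array([620, 480, 910, 530, 770, 640, 1010, 560, 690, 720], dtype=float)

        diffs = np.abs(np.diff(rr_ms))                  # successive RR differences
        stats = {
            "max RR (ms)": rr_ms.max(),
            "min RR (ms)": rr_ms.min(),
            "mean RR (ms)": rr_ms.mean(),
            "median RR (ms)": np.median(rr_ms),
            "SD of RR (ms)": rr_ms.std(ddof=1),
            "mean abs successive diff, MAD (ms)": diffs.mean(),
            "MAD / mean RR": diffs.mean() / rr_ms.mean(),
        }
        for name, value in stats.items():
            print(f"{name}: {value:.2f}")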

  2. Parkinson’s Disease and the Stroop Color Word Test: Processing Speed and Interference Algorithms

    PubMed Central

    Sisco, S.; Slonena, E.; Okun, M.S.; Bowers, D.; Price, C.C.

    2016-01-01

    OBJECTIVE Processing speed alters the traditional Stroop calculations of interference. Consequently, alternative algorithms for calculating Stroop interference have been introduced to control for processing speed, and have done so in a multiple sclerosis sample. This study examined how these processing speed correction algorithms change interference scores for individuals with idiopathic Parkinson’s Disease (PD, n = 58) and non-PD peers (n = 68). METHOD Linear regressions controlling for demographics predicted group (PD vs. non-PD) differences for Jensen’s, Golden’s, relative, ratio, and residualized interference scores. To examine convergent and divergent validity, interference scores were correlated to standardized measures of processing speed and executive function. RESULTS PD vs. non-PD differences were found for Jensen’s interference score, but not Golden’s score, or the relative, ratio, and residualized interference scores. Jensen’s score correlated significantly with standardized processing speed but not executive function measures. Relative, ratio and residualized scores correlated with executive function but not processing speed measures. Golden’s score did not correlate with any other standardized measures. CONCLUSIONS The relative, ratio, and residualized scores were comparable for measuring Stroop interference in processing speed-impaired populations. Overall, the ratio interference score may be the most useful calculation method to control for processing speed in this population. PMID:27264121
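
    Formulations of these interference scores vary across the literature; the sketch below computes one commonly cited set of them (assumed here purely for illustration and not necessarily identical to those used in the study) from word-reading (W), color-naming (C), and color-word (CW) raw scores.

        # Hedged sketch: processing-speed-aware Stroop interference scores.
        # The exact formulas used in the study may differ; the ones below are common
        # textbook formulations, assumed here purely for illustration.  W, C, CW are
        # items completed in the word, color, and color-word conditions.
        import numpy as np

        def interference_scores(W, C, CW):
            golden_predicted = (W * C) / (W + C)
            return {
                "jensen": C - CW,                      # raw difference, speed-confounded
                "golden": CW - golden_predicted,       # Golden's predicted-CW correction
                "relative": (C - CW) / C,              # scales the difference by speed
                "ratio": CW / C,                       # pure ratio of the two conditions
            }

        # Residualized score: residual of CW after regressing it on C across a sample.
        def residualized(C_arr, CW_arr):
            slope, intercept = np.polyfit(C_arr, CW_arr, 1)
            return CW_arr - (slope * C_arr + intercept)

        print(interference_scores(W=100.0, C=75.0, CW=45.0))
        C_arr = np.array([70.0, 75.0, 80.0, 65.0, 72.0])
        CW_arr = np.array([40.0, 45.0, 50.0, 38.0, 44.0])
        print(np.round(residualized(C_arr, CW_arr), 2))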

  3. Characterizing and hindcasting ripple bedform dynamics: Field test of non-equilibrium models utilizing a fingerprint algorithm

    NASA Astrophysics Data System (ADS)

    DuVal, Carter B.; Trembanis, Arthur C.; Skarke, Adam

    2016-03-01

    Ripple bedform response to near-bed forcing has been found to be asynchronous with rapidly changing hydrodynamic conditions. Recent models have attempted, with varying success, to account for this time variance through the introduction of a time offset between hydrodynamic forcing and seabed response. While focusing on temporal ripple evolution, spatial ripple variation has been partly neglected. With the fingerprint algorithm ripple bedform parameterization technique, spatial variation can be quickly and precisely characterized, and as such, this method is particularly useful for evaluating the spatio-temporal validity of ripple models. Using time-series hydrodynamic data and synoptic acoustic imagery collected at an inner continental shelf site, this study compares an adapted time-varying ripple geometric model to field observations in light of the fingerprint algorithm results. Multiple equilibrium ripple predictors are tested within the time-varying model, with the algorithm results serving as the baseline geometric values. Results indicate that ripple bedforms, in the presence of rapidly changing high-energy conditions, reorganize at a slower rate than predicted by the models. Relict ripples were found to be near peak-forcing wavelengths after rapidly decaying storm events, and still present after months of sub-critical flow conditions.

  4. Test of multiscaling in a diffusion-limited-aggregation model using an off-lattice killing-free algorithm

    NASA Astrophysics Data System (ADS)

    Menshutin, Anton Yu.; Shchur, Lev N.

    2006-01-01

    We test the multiscaling issue of diffusion-limited-aggregation (DLA) clusters using a modified algorithm. This algorithm eliminates killing the particles at the death circle. Instead, we return them to the birth circle at a random relative angle taken from the evaluated distribution. In addition, we use a two-level hierarchical memory model that allows using large steps in conjunction with an off-lattice realization of the model. Our algorithm still seems to stay in the framework of the original DLA model. We present an accurate estimate of the fractal dimensions based on the data for a hundred clusters with 50 million particles each. We find that multiscaling cannot be ruled out. We also find that the fractal dimension is a weak self-averaging quantity. In addition, the fractal dimension, if calculated using the harmonic measure, is a nonmonotonic function of the cluster radius. We argue that the controversies in the data interpretation can be due to the weak self-averaging and the influence of intrinsic noise.
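
    A minimal off-lattice DLA sketch illustrating the killing-free idea is shown below: a walker that strays past the outer circle is returned to the birth circle at a random angle instead of being discarded. The paper draws that angle from an evaluated distribution and uses a two-level hierarchical memory for speed; here a uniform angle and a brute-force neighbour search keep the sketch short, so only small clusters are practical.

        # Hedged sketch of off-lattice, killing-free DLA.  Differences from the paper:
        # the return angle is uniform (not drawn from the evaluated distribution) and
        # neighbour search is brute force, so only small clusters are practical.
        import numpy as np

        rng = np.random.default_rng(0)
        PARTICLE_R = 1.0
        N_PARTICLES = 200

        cluster = np.zeros((1, 2))   # seed particle at the origin

        for _ in range(N_PARTICLES):
            birth_r = np.hypot(*cluster.T).max() + 5.0 * PARTICLE_R
            kill_r = 3.0 * birth_r                    # the original model's death circle
            theta = rng.uniform(0, 2 * np.pi)
            pos = birth_r * np.array([np.cos(theta), np.sin(theta)])
            while True:
                pos = pos + rng.normal(0.0, PARTICLE_R, 2)     # off-lattice random step
                if np.hypot(*pos) > kill_r:
                    # Killing-free rule: return the walker to the birth circle at a
                    # random angle instead of discarding it.
                    theta = rng.uniform(0, 2 * np.pi)
                    pos = birth_r * np.array([np.cos(theta), np.sin(theta)])
                    continue
                if np.hypot(*(cluster - pos).T).min() <= 2.0 * PARTICLE_R:
                    cluster = np.vstack([cluster, pos])        # stick to the aggregate
                    break

        print("grew", len(cluster), "particles; cluster radius =",
              round(np.hypot(*cluster.T).max(), 1))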

  5. Using Lagrangian-based process studies to test satellite algorithms of vertical carbon flux in the eastern North Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Stukel, M. R.; Kahru, M.; Benitez-Nelson, C. R.; Décima, M.; Goericke, R.; Landry, M. R.; Ohman, M. D.

    2015-11-01

    The biological carbon pump is responsible for the transport of ˜5-20 Pg C yr-1 from the surface into the deep ocean but its variability is poorly understood due to an incomplete mechanistic understanding of the complex underlying planktonic processes. In fact, algorithms designed to estimate carbon export from satellite products incorporate fundamentally different assumptions about the relationships between plankton biomass, productivity, and export efficiency. To test the alternate formulations of export efficiency in remote-sensing algorithms formulated by Dunne et al. (2005), Laws et al. (2011), Henson et al. (2011), and Siegel et al. (2014), we have compiled in situ measurements (temperature, chlorophyll, primary production, phytoplankton biomass and size structure, grazing rates, net chlorophyll change, and carbon export) made during Lagrangian process studies on seven cruises in the California Current Ecosystem and Costa Rica Dome. A food-web based approach formulated by Siegel et al. (2014) performs as well or better than other empirical formulations, while simultaneously providing reasonable estimates of protozoan and mesozooplankton grazing rates. By tuning the Siegel et al. (2014) algorithm to match in situ grazing rates more accurately, we also obtain better in situ carbon export measurements. Adequate representations of food-web relationships and grazing dynamics are therefore crucial to improving the accuracy of export predictions made from satellite-derived products. Nevertheless, considerable unexplained variance in export remains and must be explored before we can reliably use remote sensing products to assess the impact of climate change on biologically mediated carbon sequestration.

  6. Model-based testing with UML applied to a roaming algorithm for bluetooth devices.

    PubMed

    Dai, Zhen Ru; Grabowski, Jens; Neukirchen, Helmut; Pals, Holger

    2004-11-01

    In late 2001, the Object Management Group issued a Request for Proposal to develop a testing profile for UML 2.0. In June 2003, the work on the UML 2.0 Testing Profile was finally adopted by the OMG. Since March 2004, it has become an official standard of the OMG. The UML 2.0 Testing Profile provides support for UML based model-driven testing. This paper introduces a methodology on how to use the testing profile in order to modify and extend an existing UML design model for test issues. The application of the methodology will be explained by applying it to an existing UML Model for a Bluetooth device.

  7. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc correctly identified the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them were exclusively normal or reverse faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier.

  8. Testing the robustness of the genetic algorithm on the floating building block representation

    SciTech Connect

    Lindsay, R.K.; Wu, A.S.

    1996-12-31

    Recent studies on a floating building block representation for the genetic algorithm (GA) suggest that there are many advantages to using the floating representation. This paper investigates the behavior of the GA on floating representation problems in response to three different types of pressures: (1) a reduction in the amount of genetic material available to the GA during the problem solving process, (2) functions which have negative-valued building blocks, and (3) randomizing non-coding segments. Results indicate that the GA's performance on floating representation problems is very robust. Significant reductions in genetic material (genome length) may be made with a relatively small decrease in performance. The GA can effectively solve problems with negative building blocks. Randomizing non-coding segments appears to improve rather than harm GA performance.

  9. Surface evaluation with Ronchi test by using Malacara formula, genetic algorithms, and cubic splines

    NASA Astrophysics Data System (ADS)

    Cordero-Dávila, Alberto; González-García, Jorge

    2010-08-01

    In the manufacturing process of an optical surface with rotational symmetry, the ideal ronchigram is simulated and compared with the experimental ronchigram. From this comparison the technician, based on their experience, estimates the error on the surface. Quantitatively, the error on the surface can be described by a polynomial e(ρ2), whose coefficients can be estimated from the data of the ronchigrams (real and ideal) by solving a system of nonlinear differential equations related to the Malacara formula for the transversal aberration. To avoid the problems inherent in the use of polynomials, it is proposed to describe the errors on the surface by means of cubic splines. The coefficients of each spline are estimated from a discrete set of errors (ρi, ei), and these are evaluated by means of genetic algorithms so as to reproduce the experimental ronchigram starting from the ideal one.

  10. Hardware in the Loop Testing of Continuous Control Algorithms for a Precision Formation Flying Demonstration Mission

    NASA Astrophysics Data System (ADS)

    Naasz, B. J.; Burns, R. D.; Gaylor, D.; Higinbotham, J.

    A sample mission sequence is defined for a low earth orbit demonstration of Precision Formation Flying (PFF). Various guidance navigation and control strategies are discussed for use in the PFF experiment phases. A sample PFF experiment is implemented and tested in a realistic Hardware-in-the-Loop (HWIL) simulation using the Formation Flying Test Bed (FFTB) at NASA's Goddard Space Flight Center.

  11. An algorithm to diagnose influenza infection: evaluating the clinical importance and impact on hospital costs of screening with rapid antigen detection tests.

    PubMed

    González-Del Vecchio, M; Catalán, P; de Egea, V; Rodríguez-Borlado, A; Martos, C; Padilla, B; Rodríguez-Sanchez, B; Bouza, E

    2015-06-01

    Rapid antigen detection tests (RADTs) are immunoassays that produce results in 15 min or less, have low sensitivity (50 %) but high specificity (95 %). We studied the clinical impact and laboratory savings of a diagnostic algorithm for influenza infection using RADTs as a first-step technique during the influenza season. From January 15th to March 31st 2014, we performed a diagnostic algorithm for influenza infection consisting of an RADT for all respiratory samples received in the laboratory. We studied all the patients with positive results for influenza infection, dividing them into two groups: Group A with a negative RADT but positive reference tests [reverse transcription polymerase chain reaction (RT-PCR) and/or culture] and Group B with an initial positive RADT. During the study period, we had a total of 1,156 patients with suspicion of influenza infection. Of them, 217 (19 %) had a positive result for influenza: 132 (11 %) had an initial negative RADT (Group A) and 85 (7 %) had a positive RADT (Group B). When comparing patients in Group A and Group B, we found significant differences, as follows: prescribed oseltamivir (67 % vs. 82 %; p = 0.02), initiation of oseltamivir before 24 h (89 % vs. 97 %; p = 0.03), antibiotics prescribed (89 % vs. 67 %; p < 0.01), intensive care unit (ICU) admissions after diagnosis (23 % vs. 14 %; p = 0.05), and need for supplementary oxygen (61 % vs. 47 %; p = 0.01). An influenza algorithm including RADTs as the first step improves the time of administration of proper antiviral therapy, reduces the use of antibiotics and ICU admissions, and decreases hospital costs.
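
    The result-handling logic of the algorithm is simple enough to write down directly; the sketch below is an illustrative rendering with names of our own choosing, not the hospital's protocol: a positive RADT is reported immediately (Group B), while a negative RADT reflexes to RT-PCR and/or culture (Group A if those are positive).

        # Illustrative sketch of the two-arm result handling described in the study:
        # a positive rapid antigen test (RADT) is reported at once; a negative RADT is
        # confirmed by the reference tests (RT-PCR and/or culture).  Names are ours.
        def influenza_result(radt_positive, rtpcr_positive=None, culture_positive=None):
            if radt_positive:
                return "influenza positive (RADT, Group B)"
            if rtpcr_positive or culture_positive:
                return "influenza positive (reference test after negative RADT, Group A)"
            if rtpcr_positive is None and culture_positive is None:
                return "reference tests pending"
            return "influenza negative"

        print(influenza_result(radt_positive=True))
        print(influenza_result(radt_positive=False, rtpcr_positive=True))
        print(influenza_result(radt_positive=False, rtpcr_positive=False, culture_positive=False))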

  12. Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz

    2009-01-01

    This slide presentation reviews the approach to the development of the Global Precipitation Measurement algorithm. This presentation includes information about the responsibilities for the development of the algorithm, and the calibration. Also included is information about the orbit, and the sun angle. The test of the algorithm code will be done with synthetic data generated from the Precipitation Processing System (PPS).

  13. SU-E-T-347: Validation of the Condensed History Algorithm of Geant4 Using the Fano Test

    SciTech Connect

    Lee, H; Mathis, M; Sawakuchi, G

    2014-06-01

    Purpose: To validate the condensed history algorithm and physics of the Geant4 Monte Carlo toolkit for simulations of ionization chambers (ICs). This study is the first step to validate Geant4 for calculations of photon beam quality correction factors under the presence of a strong magnetic field for magnetic resonance guided linac system applications. Methods: The electron transport and boundary crossing algorithms of Geant4 version 9.6.p02 were tested under Fano conditions using the Geant4 example/application FanoCavity. User-defined parameters of the condensed history and multiple scattering algorithms were investigated under Fano test conditions for three scattering models (physics lists): G4UrbanMscModel95 (PhysListEmStandard-option3), G4GoudsmitSaundersonMsc (PhysListEmStandard-GS), and G4WentzelVIModel/G4CoulombScattering (PhysListEmStandard-WVI). Simulations were conducted using monoenergetic photon beams, ranging from 0.5 to 7 MeV and emphasizing energies from 0.8 to 3 MeV. Results: The GS and WVI physics lists provided consistent Fano test results (within ±0.5%) for maximum step sizes under 0.01 mm at 1.25 MeV, with improved performance at 3 MeV (within ±0.25%). The option3 physics list provided consistent Fano test results (within ±0.5%) for maximum step sizes above 1 mm. Optimal parameters for the option3 physics list were 10 km maximum step size with default values for other user-defined parameters: 0.2 dRoverRange, 0.01 mm final range, 0.04 range factor, 2.5 geometrical factor, and 1 skin. Simulations using the option3 physics list were ∼70 – 100 times faster compared to GS and WVI under optimal parameters. Conclusion: This work indicated that the option3 physics list passes the Fano test within ±0.5% when using a maximum step size of 10 km for energies suitable for IC calculations in a 6 MV spectrum without extensive computational times. Optimal user-defined parameters using the option3 physics list will be used in future IC simulations to

  14. The feasibility test of state-of-the-art face detection algorithms for vehicle occupant detection

    NASA Astrophysics Data System (ADS)

    Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian

    2010-01-01

    Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost imposed by the replacement of airbags. Occupancy detection can also improve passenger comfort, e.g. by activating air-conditioning systems. The most promising development perspectives are seen in optical sensing systems which have become cheaper and smaller in recent years. The most plausible way to check the seat occupancy by occupants is the detection of presence and location of heads, or more precisely, faces. This paper compares the detection performances of the three most commonly used and widely available face detection algorithms: Viola-Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames containing proprietary data collected from 39 persons of both sexes and different ages and body height as well as different objects such as bags and rearward/forward facing child restraint systems.
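
    For reference, the first of the detectors compared (Viola-Jones) is available off the shelf in OpenCV; the short sketch below runs it over a single frame and counts detections. The image path and detectMultiScale tuning values are placeholders, and the paper's evaluation additionally covers the Kienzle et al. and Nilsson et al. detectors on a 53,928-frame in-vehicle dataset.

        # Sketch: Viola-Jones face detection with OpenCV's stock Haar cascade.
        # The frame path and parameters are placeholders, not the paper's setup.
        import cv2

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        frame = cv2.imread("cabin_frame.png")              # hypothetical video frame
        if frame is None:
            raise SystemExit("put a test frame at cabin_frame.png first")

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)                      # helps under uneven lighting

        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                         minSize=(40, 40))
        print(f"detected {len(faces)} face(s)")
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite("cabin_frame_detections.png", frame)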

  15. Investigation of the release test method for the topical application of pharmaceutical preparations: release test of cataplasm including nonsteroidal anti-inflammatory drugs using artificial sweat.

    PubMed

    Shimamura, Takeshi; Tairabune, Tomohiko; Kogo, Tetsuo; Ueda, Hideo; Numajiri, Sachihiko; Kobayashi, Daisuke; Morimoto, Yasunori

    2004-02-01

    A simple procedure for determining the in vitro release profile of a cataplasm for use in a quality control procedure has been developed. Since the disk assembly in the USP for patch dosage forms was unsuited for use in a release test, because the dissolution medium penetrated the cataplasm from the screw part of the device and the cataplasm swelled, new holders were designed. In the new holder, a cataplasm is held in position by sandwiching it between a stainless-steel O-ring and a silicon O-ring on a stainless-steel board; 2 acrylic boards hold the O-rings and the stainless-steel board, and the entire assembly is placed at the bottom of the dissolution vessel. The release profile was determined using the "Paddle over Disk" method in USP26. Furthermore, in order to prevent the swelling of the cataplasm, artificial sweat was used as the dissolution medium. The release profiles of the nine marketed brands of cataplasm containing indomethacin, ketoprofen, and flurbiprofen, respectively, were determined over a 12-h period. By adjusting the ion concentration and volume of the media, and the release surface-area of the cataplasm exposed to each medium, the procedure was found to be reproducible for in vitro release characterization of nine marketed brands. This shows that this technique can be used as a quality control tool for assuring product uniformity. PMID:14757999

  16. Test and evaluation of the HIDEC engine uptrim algorithm. [Highly Integrated Digital Electronic Control for aircraft

    NASA Technical Reports Server (NTRS)

    Ray, R. J.; Myers, L. P.

    1986-01-01

    The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrated engine-airframe control systems. Performance improvements will result from an adaptive engine stall margin mode, a highly integrated mode that uses the airplane flight conditions and the resulting inlet distortion to continuously compute engine stall margin. When there is excessive stall margin, the engine is uptrimmed for more thrust by increasing engine pressure ratio (EPR). The EPR uptrim logic has been evaluated and implemented in computer simulations. Thrust improvements of over 10 percent are predicted for subsonic flight conditions. The EPR uptrim was successfully demonstrated during engine ground tests. Test results verify model predictions at the conditions tested.
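
    In outline, the uptrim logic amounts to: compute available stall margin from the flight condition and inlet distortion, hold a required margin in reserve, and convert any excess into an EPR (thrust) increase. The sketch below is a purely illustrative rendering of that logic; the stall-margin model, gains, and limits are made-up stand-ins, not the HIDEC control laws.

        # Illustrative sketch of an EPR uptrim rule; all functions, limits, and numbers
        # are hypothetical stand-ins, not the HIDEC control laws.
        def estimate_stall_margin(mach, altitude_ft, inlet_distortion):
            """Hypothetical stall-margin model (percent) from flight conditions."""
            base = 25.0 - 6.0 * mach + altitude_ft / 20000.0
            return base - 40.0 * inlet_distortion

        def epr_uptrim(epr_nominal, mach, altitude_ft, inlet_distortion,
                       required_margin=12.0, gain=0.01, epr_max=1.35):
            margin = estimate_stall_margin(mach, altitude_ft, inlet_distortion)
            excess = max(0.0, margin - required_margin)
            # Convert excess stall margin into a higher engine pressure ratio,
            # bounded by a hypothetical structural/temperature limit.
            return min(epr_max, epr_nominal * (1.0 + gain * excess))

        print(epr_uptrim(epr_nominal=1.20, mach=0.8, altitude_ft=30000.0,
                         inlet_distortion=0.05))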

  17. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  18. Genetic Algorithm Based Multi-Agent System Applied to Test Generation

    ERIC Educational Resources Information Center

    Meng, Anbo; Ye, Luqing; Roy, Daniel; Padilla, Pierre

    2007-01-01

    Automatic test generating system in distributed computing context is one of the most important links in on-line evaluation system. Although the issue has been argued long since, there is not a perfect solution to it so far. This paper proposed an innovative approach to successfully addressing such issue by the seamless integration of genetic…

  19. Development and large scale benchmark testing of the PROSPECTOR_3 threading algorithm.

    PubMed

    Skolnick, Jeffrey; Kihara, Daisuke; Zhang, Yang

    2004-08-15

    This article describes the PROSPECTOR_3 threading algorithm, which combines various scoring functions designed to match structurally related target/template pairs. Each variant described was found to have a Z-score above which most identified templates have good structural (threading) alignments, Z(struct) (Z(good)). 'Easy' targets with accurate threading alignments are identified as single templates with Z > Z(good) or two templates, each with Z > Z(struct), having a good consensus structure in mutually aligned regions. 'Medium' targets have a pair of templates lacking a consensus structure, or a single template for which Z(struct) < Z < Z(good). PROSPECTOR_3 was applied to a comprehensive Protein Data Bank (PDB) benchmark composed of 1491 single domain proteins, 41-200 residues long and no more than 30% identical to any threading template. Of the proteins, 878 were found to be easy targets, with 761 having a root mean square deviation (RMSD) from native of less than 6.5 Å. The average contact prediction accuracy was 46%, and on average 17.6-residue continuous fragments were predicted with RMSD values of 2.0 Å. There were 606 medium targets identified, 87% (31%) of which had good structural (threading) alignments. On average, 9.1-residue continuous fragments with an RMSD of 2.5 Å were predicted. Combining easy and medium sets, 63% (91%) of the targets had good threading (structural) alignments compared to native; the average target/template sequence identity was 22%. Only nine targets lacked matched templates. Moreover, PROSPECTOR_3 consistently outperforms PSIBLAST. Similar results were predicted for open reading frames (ORFs) of ≤200 residues in the M. genitalium, E. coli and S. cerevisiae genomes. Thus, progress has been made in identification of weakly homologous/analogous proteins, with very high alignment coverage, both in a comprehensive PDB benchmark as well as in genomes.
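
    The easy/medium target assignment described above reduces to a small decision rule over template Z-scores; the sketch below writes that rule out, with the consensus-structure check stubbed as a boolean input and the thresholds Z(struct) and Z(good) taken as given.

        # Sketch of the easy/medium target classification described in the abstract.
        # `has_consensus` is a stub for the mutual-alignment consensus check; z_struct
        # and z_good are the per-scoring-function thresholds, taken as given.
        def classify_target(template_hits, z_struct, z_good, has_consensus):
            """template_hits: list of (template_id, z_score), sorted by z_score desc."""
            if not template_hits:
                return "no template"
            top = [z for _, z in template_hits[:2]]
            if top[0] > z_good:
                return "easy"                  # single confident template
            if len(top) == 2 and all(z > z_struct for z in top) and has_consensus:
                return "easy"                  # two templates with consensus structure
            if len(top) == 2 and all(z > z_struct for z in top):
                return "medium"                # pair of templates lacking a consensus
            if z_struct < top[0] < z_good:
                return "medium"                # single template in the intermediate band
            return "no confident template"

        hits = [("1abcA", 14.2), ("2xyzB", 9.7)]
        print(classify_target(hits, z_struct=8.0, z_good=12.0, has_consensus=False))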

  20. First results from the COST-HOME monthly benchmark dataset with temperature and precipitation data for testing homogenisation algorithms

    NASA Astrophysics Data System (ADS)

    Venema, Victor; Mestre, Olivier

    2010-05-01

    between the stations has been modelled as uncorrelated Gaussian white noise. The idealised dataset is valuable because its statistical characteristics are assumed in most homogenisation algorithms and Gaussian white noise is the signal most used for testing the algorithms. The surrogate and synthetic data represent homogeneous climate data. To this data known inhomogeneities are added: outliers, as well as break inhomogeneities and local trends. Furthermore, missing data is simulated and a global trend is added. The participants have returned around 25 contributions. Some fully automatic algorithms were applied, but most homogenisation methods need human input. For well-known algorithms, MASH, PRODIGE, SNHT, multiple contributions were returned. This allowed us to study the importance of the implementation and the operator for homogenisation, which was found to be an important factor. For more information on the COST Action on homogenisation see: http://www.homogenisation.org/ For more information on - and for downloading of - the benchmark dataset and the returned data see: http://www.meteo.uni-bonn.de/venema/themes/homogenisation/

  1. An Italian multicenter study for application of a diagnostic algorithm in autoantibody testing.

    PubMed

    Bonaguri, Chiara; Melegari, Alessandra; Dall'Aglio, PierPaolo; Ballabio, Andrea; Terenziani, Paolo; Russo, Annalisa; Battistelli, Luisita; Aloe, Rosalia; Camisa, Roberta; Campaniello, Giovanna; Sartori, Elisabetta; Monica, Cesare

    2009-09-01

    The presence in the serum of specific autoantibodies, such as antinuclear antibodies (ANA), anti-double-stranded DNA (anti-dsDNA), and antiextractable nuclear antigens (anti-ENA), is one of the diagnostic criteria for autoimmune rheumatic disease, and the requests for these tests in the last few years have grown remarkably. A guideline for reducing clinically inappropriate requests in autoantibody testing (ANA, anti-dsDNA, anti-ENA) has been applied in the Parma Hospital since 2007. The results for the period January-December 2007 were compared to those of the previous period January-December 2006, and a significant reduction in the number of anti-dsDNA (23.9%) and anti-ENA (20.7%) was found. The aim of this study was to assess the applicability of a similar guideline in a wide area (Parma, Modena, Piacenza, Reggio-Emilia) with reference to the diagnosis of autoimmune rheumatic disease. This project, supported by a regional grant for innovative research projects, was started in January 2008 and consists of three different steps: (1) a study group of clinicians and laboratory physicians to evaluate the diagnostic criteria, the analytical procedures, and the number of tests performed in different hospitals; (2) developing common guidelines for autoantibody testing that takes into account the different clinical needs with the aim of improving efficiency and clinical effectiveness of diagnosis and monitoring; and (3) assessing compliance with the guidelines in the different hospitals that are evaluating the second-level test (anti-dsDNA, anti-ENA) decrease. We think that the validation of guidelines for the laboratory diagnosis of autoimmune rheumatic disease can represent a tool for improving patients' outcomes and economic efficiency.

  2. Testing multistage gain and offset trimming in a single photon counting IC with a charge sharing elimination algorithm

    NASA Astrophysics Data System (ADS)

    Krzyżanowska, A.; Gryboś, P.; Szczygieł, R.; Maj, P.

    2015-12-01

    Designing hybrid pixel detector readout electronics operating in single photon counting mode is a very challenging process in which many key parameters are optimized in parallel (e.g. gain, noise, and threshold dispersion). Additional requirements for a smaller pixel size with extended functionality push designers to use new deep sub-micron technologies. Minimizing the channel size is possible; however, with a decreased pixel size, the charge sharing effect becomes a more important issue. To overcome this problem, we designed an integrated circuit prototype produced in CMOS 40 nm technology, which has an extended functionality of a single pixel. A C8P1 algorithm for charge sharing effect compensation was implemented. In the algorithm's first stage the charge is rebuilt in a signal rebuilt hub fed by the CSA (charge sensitive amplifier) outputs from four neighbouring pixels. Then, the pixel with the biggest amount of charge is chosen, after a comparison with all the adjacent ones. In order to process the data in such a complicated way, a certain architecture of a single channel was proposed, which allows for: (i) processing the signal with the possibility of total charge reconstruction (by connecting with the adjacent pixels); (ii) comparison of a given pixel amplitude to its 8 neighbours; (iii) extended testability of each block inside the channel to measure CSA gain dispersion, shaper gain dispersion, and threshold dispersion (including the simultaneous generation of different pulse amplitudes from different pixels); and (iv) trimming of all the necessary blocks for proper operation. We present a solution for multistage gain and offset trimming implemented in the IC prototype. It allows for minimization of total charge extraction errors, minimization of threshold dispersion in the pixel matrix, and minimization of errors in comparing a given pixel's pulse amplitude with all its neighbours. The detailed architecture of a single channel is presented together
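
    A much-simplified software analogue of the C8P1 idea is sketched below: for an isolated photon event whose charge is split across neighbouring pixels, the summed cluster charge is credited to the pixel that collected the largest share, which is roughly what the signal rebuilt hub and the neighbour comparison do per event in hardware. Thresholding, trimming, and timing are all ignored, and the event data are invented.

        # Hedged software sketch in the spirit of C8P1 charge-sharing compensation:
        # credit the summed cluster charge of one isolated event to the pixel that
        # collected the largest share.  The real algorithm works per event in
        # analogue/digital hardware, summing only 2x2 neighbourhoods and applying
        # thresholds and trimming; none of that is modelled here.
        import numpy as np

        def rebuild_charge(event):
            """event: 2-D array of per-pixel charge for one isolated photon hit."""
            rebuilt = np.zeros_like(event, dtype=float)
            winner = np.unravel_index(np.argmax(event), event.shape)  # largest share
            rebuilt[winner] = event.sum()    # total charge rebuilt at the winner pixel
            return rebuilt

        # One photon hitting near a pixel corner: charge shared over four pixels.
        event = np.zeros((6, 6))
        event[2, 2], event[2, 3], event[3, 2], event[3, 3] = 0.45, 0.25, 0.20, 0.10
        print(rebuild_charge(event))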

  3. An in silico algorithm for identifying stabilizing pockets in proteins: test case, the Y220C mutant of the p53 tumor suppressor protein.

    PubMed

    Bromley, Dennis; Bauer, Matthias R; Fersht, Alan R; Daggett, Valerie

    2016-09-01

    The p53 tumor suppressor protein performs a critical role in stimulating apoptosis and cell cycle arrest in response to oncogenic stress. The function of p53 can be compromised by mutation, leading to increased risk of cancer; approximately 50% of cancers are associated with mutations in the p53 gene, the majority of which are in the core DNA-binding domain. The Y220C mutation of p53, for example, destabilizes the core domain by 4 kcal/mol, leading to rapid denaturation and aggregation. The associated loss of tumor suppressor functionality is associated with approximately 75 000 new cancer cases every year. Destabilized p53 mutants can be 'rescued' and their function restored; binding of a small molecule into a pocket on the surface of mutant p53 can stabilize its wild-type structure and restore its function. Here, we describe an in silico algorithm for identifying potential rescue pockets, including the algorithm's integration with the Dynameomics molecular dynamics data warehouse and the DIVE visual analytics engine. We discuss the results of the application of the method to the Y220C p53 mutant, entailing finding a putative rescue pocket through MD simulations followed by an in silico search for stabilizing ligands that dock into the putative rescue pocket. The top three compounds from this search were tested experimentally and one of them bound in the pocket, as shown by nuclear magnetic resonance, and weakly stabilized the mutant. PMID:27503952

  5. Advanced Transport Delay Compensation Algorithms: Results of Delay Measurement and Piloted Performance Tests

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    This report summarizes the results of delay measurement and piloted performance tests that were conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system in the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of two novel compensators in comparison to the McFarland predictor and the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is fairly superior for short delay and significantly superior for long delay than the McFarland compensator.

  6. SIMULATING MAGNETOHYDRODYNAMICAL FLOW WITH CONSTRAINED TRANSPORT AND ADAPTIVE MESH REFINEMENT: ALGORITHMS AND TESTS OF THE AstroBEAR CODE

    SciTech Connect

    Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.

    2009-06-15

    A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code is demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.
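
    The divergence-free property mentioned above can be stated concretely: with face-centred magnetic field components derived from a discrete vector potential (or updated by constrained transport), the discrete divergence of every cell vanishes to round-off. The sketch below only evaluates that diagnostic on a 2D staggered grid; it is not the AstroBEAR CT update itself.

        # Sketch: discrete divergence of a face-centred (staggered) magnetic field.
        # Constrained-transport schemes keep this quantity at machine precision;
        # this diagnostic only checks it, it is not the CT update.
        import numpy as np

        nx, ny = 64, 64
        dx, dy = 1.0 / nx, 1.0 / ny
        xn = np.linspace(0.0, 1.0, nx + 1)[:, None]          # node coordinates
        yn = np.linspace(0.0, 1.0, ny + 1)[None, :]

        # Vector potential Az on cell corners; taking its discrete curl gives
        # face-centred Bx, By whose discrete divergence is identically zero.
        Az = np.cos(2 * np.pi * xn) * np.cos(2 * np.pi * yn)  # shape (nx+1, ny+1)

        Bx = (Az[:, 1:] - Az[:, :-1]) / dy        # on x-faces, shape (nx+1, ny)
        By = -(Az[1:, :] - Az[:-1, :]) / dx       # on y-faces, shape (nx, ny+1)

        div_b = (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
        print("max |div B| =", np.abs(div_b).max())   # round-off level, not truncation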

  7. The remote sensing of ocean primary productivity - Use of a new data compilation to test satellite algorithms

    NASA Technical Reports Server (NTRS)

    Balch, William; Evans, Robert; Brown, Jim; Feldman, Gene; Mcclain, Charles; Esaias, Wayne

    1992-01-01

    Global pigment and primary productivity algorithms based on a new data compilation of over 12,000 stations occupied mostly in the Northern Hemisphere, from the late 1950s to 1988, were tested. The results showed high variability of the fraction of total pigment contributed by chlorophyll, which is required for subsequent predictions of primary productivity. Two models, which predict pigment concentration normalized to an attenuation length of euphotic depth, were checked against 2,800 vertical profiles of pigments. Phaeopigments consistently showed maxima at about one optical depth below the chlorophyll maxima. CZCS data coincident with the sea truth data were also checked. A regression of satellite-derived pigment vs ship-derived pigment had a coefficient of determination. The satellite underestimated the true pigment concentration in mesotrophic and oligotrophic waters and overestimated the pigment concentration in eutrophic waters. The error in the satellite estimate showed no trends with time between 1978 and 1986.

  8. Evaluation of the Repeatability of the Delta Q Duct Leakage Testing Technique Including Investigation of Robust Analysis Techniques and Estimates of Weather Induced Uncertainty

    SciTech Connect

    Dickerhoff, Darryl; Walker, Iain

    2008-08-01

    for the pressure station approach. Walker and Dickerhoff also included estimates of DeltaQ test repeatability based on the results of field tests where two houses were tested multiple times. The two houses were quite leaky (20-25 Air Changes per Hour at 50Pa (0.2 in. water) (ACH50)) and were located in the San Francisco Bay area. One house was tested on a calm day and the other on a very windy day. Results were also presented for two additional houses that were tested by other researchers in Minneapolis, MN and Madison, WI, that had very tight envelopes (1.8 and 2.5 ACH50). These tight houses had internal duct systems and were tested without operating the central blower--sometimes referred to as control tests. The standard deviations between the multiple tests for all four houses were found to be about 1% of the envelope air flow at 50 Pa (0.2 in. water) (Q50), which led to the suggestion of this as a rule of thumb for estimating DeltaQ uncertainty. Because DeltaQ is based on measuring envelope air flows, it makes sense for uncertainty to scale with envelope leakage. However, these tests were on a limited data set, and one of the objectives of the current study is to increase the number of tested houses. This study focuses on answering two questions: (1) What is the uncertainty associated with changes in weather (primarily wind) conditions during DeltaQ testing? (2) How can these uncertainties be reduced? The first question addresses issues of repeatability. To study this, five houses were tested as many times as possible over a day. Weather data were recorded on-site--including the local wind speed. The results from these five houses were combined with the two Bay Area homes from the previous studies. The variability of the tests (represented by the standard deviation) is the repeatability of the test method for that house under the prevailing weather conditions. Because the testing was performed over a day, a wide range of wind speeds was achieved following typical

  9. Testing Nelder-Mead Based Repulsion Algorithms for Multiple Roots of Nonlinear Systems via a Two-Level Factorial Design of Experiments

    PubMed Central

    Fernandes, Edite M. G. P.

    2015-01-01

This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as ‘erf’, is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm. PMID:25875591
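
    A minimal sketch of a repulsion strategy wrapped around a Nelder-Mead local search, assuming a merit function built from the squared residual of the system plus erf-shaped penalty terms centered at roots already found. This is a generic reconstruction for illustration, not the paper's exact merit function; the example system and constants are hypothetical.

    ```python
    # Hedged sketch of a repulsion multistart around Nelder-Mead: the merit function
    # adds erf-shaped penalty "bumps" at previously found roots, pushing new searches
    # toward roots not yet located.  System, rho, and beta are illustrative.
    import math
    import numpy as np
    from scipy.optimize import minimize

    def F(x):
        # Example system with two roots: x0^2 + x1^2 = 1 and x0 = x1
        return np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])

    def merit(x, found, rho=10.0, beta=5.0):
        base = float(np.sum(F(x) ** 2))
        # erf-based repulsion: large near a known root, close to zero far away
        repulsion = sum(rho * (1.0 - math.erf(beta * np.linalg.norm(x - r))) for r in found)
        return base + repulsion

    found_roots = []
    rng = np.random.default_rng(0)
    for _ in range(20):                                   # multistart with repulsion
        x0 = rng.uniform(-2.0, 2.0, size=2)
        res = minimize(merit, x0, args=(found_roots,), method="Nelder-Mead",
                       options={"xatol": 1e-9, "fatol": 1e-12, "maxiter": 5000, "maxfev": 5000})
        if np.linalg.norm(F(res.x)) < 1e-4 and all(
            np.linalg.norm(res.x - r) > 1e-3 for r in found_roots
        ):
            found_roots.append(res.x)

    print("distinct roots found:", [np.round(r, 6) for r in found_roots])
    ```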

  10. Combined radiation pressure and thermal modelling of complex satellites: Algorithms and on-orbit tests

    NASA Astrophysics Data System (ADS)

    Ziebart, M.; Adhya, S.; Sibthorpe, A.; Edwards, S.; Cross, P.

In an era of high-resolution gravity field modelling, the dominant error sources in spacecraft orbit determination are non-conservative spacecraft surface forces. These forces can be difficult to characterise a priori because they require detailed modelling of: spacecraft geometry and surface properties; attitude behaviour; the spatial and temporal variations of the incident radiation and particle fluxes; and the interaction of these fluxes with the surfaces. The conventional approach to these problems is to build simplified box-and-wing models of the satellites and to estimate empirically factors that account for the inevitable mis-modelling. Over the last few years the authors have developed a suite of software utilities that model analytically three of the main effects: solar radiation pressure, thermal forces and the albedo/earthshine effects. The techniques are designed specifically to deal with complex spacecraft structures: no structural simplifications are made, and the method can be applied to any spacecraft. Substantial quality control measures are used during computation to both avoid and trap errors. The paper presents the broad basis of the modelling techniques for each of the effects, and gives the results of recent tests applied to GPS Block IIR satellites and the low Earth orbit satellite altimeter JASON-1.

  11. Three-dimensional graphics simulator for testing mine machine computer-controlled algorithms -- phase 1 development

    SciTech Connect

    Ambrose, D.H. )

    1993-01-01

    Using three-dimensional (3-D) graphics computing to evaluate new technologies for computer-assisted mining systems illustrates how these visual techniques can redefine the way researchers look at raw scientific data. The US Bureau of Mines is using 3-D graphics computing to obtain cheaply, easily, and quickly information about the operation and design of current and proposed mechanical coal and metal-nonmetal mining systems. Bureau engineers developed a graphics simulator for a continuous miner that enables a realistic test for experimental software that controls the functions of a machine. Some of the specific simulated functions of the continuous miner are machine motion, appendage motion, machine position, and machine sensors. The simulator uses data files generated in the laboratory or mine using a computer-assisted mining machine. The data file contains information from a laser-based guidance system and a data acquisition system that records all control commands given to a computer-assisted mining machine. This report documents the first phase in developing the simulator and discusses simulator requirements, features of the initial simulator, and several examples of its application. During this endeavor, Bureau engineers discovered and appreciated the simulator's potential to assist their investigations of machine controls and navigation systems.

  12. Running GCM physics and dynamics on different grids: Algorithm and tests

    NASA Astrophysics Data System (ADS)

    Molod, A.

    2006-12-01

    The major drawback in the use of sigma coordinates in atmospheric GCMs, namely the error in the pressure gradient term near sloping terrain, leaves the use of eta coordinates an important alternative. A central disadvantage of an eta coordinate, the inability to retain fine resolution in the vertical as the surface rises above sea level, is addressed here. An `alternate grid' technique is presented which allows the tendencies of state variables due to the physical parameterizations to be computed on a vertical grid (the `physics grid') which retains fine resolution near the surface, while the remaining terms in the equations of motion are computed using an eta coordinate (the `dynamics grid') with coarser vertical resolution. As a simple test of the technique a set of perpetual equinox experiments using a simplified lower boundary condition with no land and no topography were performed. The results show that for both low and high resolution alternate grid experiments, much of the benefit of increased vertical resolution for the near surface meridional wind (and mass streamfield) can be realized by enhancing the vertical resolution of the `physics grid' in the manner described here. In addition, approximately half of the increase in zonal jet strength seen with increased vertical resolution can be realized using the `alternate grid' technique. A pair of full GCM experiments with realistic lower boundary conditions and topography were also performed. It is concluded that the use of the `alternate grid' approach offers a promising way forward to alleviate a central problem associated with the use of the eta coordinate in atmospheric GCMs.
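
    A minimal sketch of the `alternate grid' idea as described: physics tendencies computed on a finer vertical grid are mapped onto the coarser dynamics grid by thickness-weighted averaging. The layer structure, mapping, and values are illustrative assumptions, not the GCM's actual remapping.

    ```python
    # Hedged sketch: map physics tendencies from a fine vertical "physics grid"
    # to a coarse "dynamics grid" by thickness-weighted averaging of the fine
    # layers contained in each coarse layer.  Layers and values are illustrative.
    import numpy as np

    # Fine physics grid: 8 layers; coarse dynamics grid: 4 layers (2 fine per coarse)
    fine_thickness = np.array([20., 30., 50., 80., 120., 160., 220., 320.])  # hPa
    fine_tendency  = np.array([1.5, 1.2, 0.9, 0.6, 0.4, 0.3, 0.2, 0.1])      # K/day
    fine_to_coarse = np.array([0, 0, 1, 1, 2, 2, 3, 3])                      # layer mapping

    coarse_tendency = np.array([
        np.average(fine_tendency[fine_to_coarse == k],
                   weights=fine_thickness[fine_to_coarse == k])
        for k in range(4)
    ])
    print(np.round(coarse_tendency, 3))
    ```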

  13. Development, refinement, and testing of a short term solar flare prediction algorithm

    NASA Technical Reports Server (NTRS)

    Smith, Jesse B., Jr.

    1993-01-01

Progress toward performance of the tasks and accomplishing the goals set forth in the two-year Research Grant included primarily analysis of digital data sets and determination of methodology associated with the analysis of the very large, unique, and complex collection of digital solar magnetic field data. The treatment of each magnetogram as a unique set of data requiring special treatment was found to be necessary. It was determined that a person familiar with the data, the analysis system, and the logical, coherent outcome of the analysis must conduct each analysis and interact with the analysis program(s) significantly, sometimes through many iterations, for successful calibration and analysis of the data set. With this interaction, the data sets yield valuable, coherent analyses. During this period, it was also decided that only data sets taken inside heliographic longitudes (Central Meridian Distance) East and West 30 degrees (within 30 degrees of the Central Meridian of the Sun) would be analyzed. If the total data set is then found to be numerically inadequate for the final analysis, 30 - 45 degrees Central Meridian Distance data will then be analyzed. The Optical Data storage system (MSFC observatory) was found appropriate for use both in intermediate storage of the data (preliminary to analysis), and for storage of the analyzed data sets for later parametric extraction.

  14. Revised Phase II Plan for the National Education Practice File Development Project Including: Creation; Pilot Testing; and Evaluation of a Test Practice File. Product 1.7/1.8 (Product 1.6 Appended).

    ERIC Educational Resources Information Center

    Benson, Gregory, Jr.; And Others

    A detailed work plan is presented for the conduct of Phase II activities, which are concerned with creating a pilot test file, conducting a test of it, evaluating the process and input of the file, and preparing the file management plan. Due to the outcomes of activities in Phase I, this plan was revised from an earlier outline. Included in the…

  15. Clostridium difficile testing algorithms using glutamate dehydrogenase antigen and C. difficile toxin enzyme immunoassays with C. difficile nucleic acid amplification testing increase diagnostic yield in a tertiary pediatric population.

    PubMed

    Ota, Kaede V; McGowan, Karin L

    2012-04-01

    We evaluated the performance of the rapid C. diff Quik Chek Complete's glutamate dehydrogenase antigen (GDH) and toxin A/B (CDT) tests in two algorithmic approaches for a tertiary pediatric population: algorithm 1 entailed initial testing with GDH/CDT followed by loop-mediated isothermal amplification (LAMP), and algorithm 2 entailed GDH/CDT followed by cytotoxicity neutralization assay (CCNA) for adjudication of discrepant GDH-positive/CDT-negative results. A true positive (TP) was defined as positivity by CCNA or positivity by LAMP plus another test (GDH, CDT, or the Premier C. difficile toxin A and B enzyme immunoassay [P-EIA]). A total of 141 specimens from 141 patients yielded 27 TPs and 19% prevalence. Sensitivity, specificity, positive predictive value, and negative predictive value were 56%, 100%, 100%, and 90% for P-EIA and 81%, 100%, 100%, and 96% for both algorithm 1 and algorithm 2. In summary, GDH-based algorithms detected C. difficile infections with superior sensitivity compared to P-EIA. The algorithms allowed immediate reporting of half of all TPs, but LAMP or CCNA was required to confirm the presence or absence of toxigenic C. difficile in GDH-positive/CDT-negative specimens.
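
    A minimal sketch of the two algorithmic branches as decision logic, assuming a GDH/CDT screen followed by a reflex test (LAMP for algorithm 1, CCNA for algorithm 2) only for GDH-positive/CDT-negative discrepants. The function and result strings are hypothetical simplifications of a real laboratory workflow.

    ```python
    # Hedged sketch of the two testing algorithms as simple decision logic.
    # The reflex-test callable and result strings are hypothetical stand-ins;
    # real workflows include more states (e.g., indeterminate results).
    def c_difficile_algorithm(gdh_positive: bool, cdt_positive: bool, reflex_test) -> str:
        """Return a reportable interpretation for one stool specimen.

        reflex_test: callable returning True/False; LAMP for algorithm 1,
        CCNA for algorithm 2 (used only for GDH-positive/CDT-negative discrepants).
        """
        if not gdh_positive:
            return "negative: no toxigenic C. difficile detected"
        if cdt_positive:
            return "positive: toxigenic C. difficile detected"
        # GDH-positive / toxin-negative discrepant -> adjudicate with the reflex test
        return ("positive: toxigenic C. difficile detected"
                if reflex_test()
                else "negative: non-toxigenic C. difficile or false-positive screen")

    # Example: a GDH-positive, toxin-negative specimen resolved by LAMP (algorithm 1)
    print(c_difficile_algorithm(True, False, reflex_test=lambda: True))
    ```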

  16. Modeling in the State Flow Environment to Support Launch Vehicle Verification Testing for Mission and Fault Management Algorithms in the NASA Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Berg, Peter; England, Dwight; Johnson, Stephen B.

    2016-01-01

Analysis methods and testing processes are essential activities in the engineering development and verification of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS). Central to mission success is reliable verification of the Mission and Fault Management (M&FM) algorithms for the SLS launch vehicle (LV) flight software. This is particularly difficult because M&FM algorithms integrate and operate LV subsystems, which consist of diverse forms of hardware and software themselves, with equally diverse integration from the engineering disciplines of LV subsystems. M&FM operation of SLS requires a changing mix of LV automation. During pre-launch the LV is primarily operated by the Kennedy Space Center (KSC) Ground Systems Development and Operations (GSDO) organization with some LV automation of time-critical functions, and much more autonomous LV operations during ascent that have crucial interactions with the Orion crew capsule, its astronauts, and with mission controllers at the Johnson Space Center. M&FM algorithms must perform all nominal mission commanding via the flight computer to control LV states from pre-launch through disposal and also address failure conditions by initiating autonomous or commanded aborts (crew capsule escape from the failing LV), redundancy management of failing subsystems and components, and safing actions to reduce or prevent threats to ground systems and crew. To address the criticality of the verification testing of these algorithms, the NASA M&FM team has utilized the State Flow environment (SFE) with its existing Vehicle Management End-to-End Testbed (VMET) platform which also hosts vendor-supplied physics-based LV subsystem models. The human-derived M&FM algorithms are designed and vetted in Integrated Development Teams composed of design and development disciplines such as Systems Engineering, Flight Software (FSW), Safety and Mission Assurance (S&MA) and major subsystems and vehicle elements

  17. Clinical evaluation of a frozen commercially prepared microdilution panel for antifungal susceptibility testing of seven antifungal agents, including the new triazoles posaconazole, ravuconazole, and voriconazole.

    PubMed

    Pfaller, M A; Diekema, D J; Messer, S A; Boyken, L; Huynh, H; Hollis, R J

    2002-05-01

    A commercially prepared frozen broth microdilution panel (Trek Diagnostic Systems, Westlake, Ohio) was compared with a reference microdilution panel for antifungal susceptibility testing of two quality control (QC) strains and 99 clinical isolates of Candida spp. The antifungal agents tested included amphotericin B, flucytosine, fluconazole, itraconazole, posaconazole, ravuconazole, and voriconazole. Microdilution testing was performed according to NCCLS recommendations. MIC endpoints were read visually after 48 h of incubation and were assessed independently for each microdilution panel. The MICs for the QC strains were within published limits for both the reference and Trek microdilution panels. Discrepancies among MIC endpoints of no more than 2 dilutions were used to calculate the percent agreement. Acceptable levels of agreement between the Trek and reference panels were observed for all antifungal agents tested against the 99 clinical isolates. The overall agreement for each antifungal agent ranged from 96% for ravuconazole to 100% for amphotericin B. The Trek microdilution panel appears to be a viable alternative to frozen microdilution panels prepared in-house. PMID:11980944
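
    A minimal sketch of the agreement calculation described: MICs from the two panels are compared on a two-fold dilution (log2) scale and counted as agreeing when within two dilutions. The MIC values below are hypothetical.

    ```python
    # Hedged sketch: percent agreement between two MIC panels, counting results
    # within +/- 2 two-fold dilutions as agreement.  MIC values (ug/mL) are hypothetical.
    import math

    reference_mics = [0.25, 0.5, 1.0, 2.0, 8.0, 0.125]
    trek_mics      = [0.25, 1.0, 0.5, 2.0, 64.0, 0.25]

    def within_two_dilutions(mic_a: float, mic_b: float) -> bool:
        # One dilution step is a factor of 2, so compare log2-transformed MICs.
        return abs(math.log2(mic_a) - math.log2(mic_b)) <= 2.0

    agree = sum(within_two_dilutions(a, b) for a, b in zip(reference_mics, trek_mics))
    print(f"essential agreement: {100.0 * agree / len(reference_mics):.0f}%")
    ```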

  18. Traces of dissolved particles, including coccoliths, in the tests of agglutinated foraminifera from the Challenger Deep (10,897 m water depth, western equatorial Pacific)

    NASA Astrophysics Data System (ADS)

    Gooday, A. J.; Uematsu, K.; Kitazato, H.; Toyofuku, T.; Young, J. R.

    2010-02-01

    We examined four multilocular agglutinated foraminiferan tests from the Challenger Deep, the deepest point in the world's oceans and well below the depth at which biogenic and most detrital minerals disappear from the sediment. The specimens represent undescribed species. Three are trochamminaceans in which imprints and other traces of dissolved agglutinated particles are visible in the orange or yellowish organic test lining. In Trochamminacean sp. A, a delicate meshwork of organic cement forms ridges between the grain impressions. The remnants of test particles include organic structures identifiable as moulds of coccoliths produced by the genus Helicosphaera. Their random alignment suggests that they were agglutinated individually rather than as fragments of a coccosphere. Trochamminacean sp. C incorporates discoidal structures with a central hole; these probably represent the proximal sides of isolated distal shields of another coccolith species, possibly Hayaster perplexus. Imprints of planktonic foraminiferan test fragments are also present in both these trochamminaceans. In Trochamminacean sp. B, the test surface is densely pitted with deep, often angular imprints ranging from roughly equidimensional to rod-shaped. The surfaces are either smooth, or have prominent longitudinal striations, probably made by cleavage traces. We presume these imprints represent mineral grains of various types that subsequently dissolved. X-ray microanalyses reveal strong peaks for Ca associated with grain impressions and coccolith remains in Trochamminacean sp. C. Minor peaks for this element are associated with coccolith remains and planktonic foraminiferan imprints in Trochamminacean sp. A. These Ca peaks possibly originate from traces of calcite remaining on the test surfaces. Agglutinated particles, presumably clay minerals, survive only in the fourth specimen (' Textularia' sp.). Here, the final 4-5 chambers comprise a pavement of small, irregularly shaped grains with flat

  19. Observables of a test mass along an inclined orbit in a post-Newtonian-approximated Kerr spacetime including the linear and quadratic spin terms.

    PubMed

    Hergt, Steven; Shah, Abhay; Schäfer, Gerhard

    2013-07-12

The orbital motion is derived for a nonspinning test mass in the relativistic gravitational field of a rotationally deformed body, without restriction to the equatorial plane or to spherical orbits. The gravitational field of the central body is represented by the Kerr metric, expanded to second post-Newtonian order including the linear and quadratic spin terms. The orbital period, the intrinsic periastron advance, and the precession of the orbital plane are derived with the aid of novel canonical variables and action-based methods.

  20. Estimating Implementation and Operational Costs of an Integrated Tiered CD4 Service including Laboratory and Point of Care Testing in a Remote Health District in South Africa

    PubMed Central

    Cassim, Naseem; Coetzee, Lindi M.; Schnippel, Kathryn; Glencross, Deborah K.

    2014-01-01

Background An integrated tiered service delivery model (ITSDM) has been proposed to provide ‘full-coverage’ of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and number of referring health-facilities. These include: (1) Tier-1/decentralized point-of-care service (POC) in a single site; Tier-2/POC-hub servicing 8–10 health-clinics, processing <30–40 samples; Tier-3/Community laboratories servicing ∼50 health-clinics, processing <150 samples/day; high-volume centralized laboratories (Tier-4 and Tier-5) processing <300 or >600 samples/day and serving >100 or >200 health-clinics, respectively. The objective of this study was to establish costs of existing and ITSDM-tiers 1, 2 and 3 in a remote, under-serviced district in South Africa. Methods Historical health-facility workload volumes from the Pixley-ka-Seme district, and the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, linked to locations of all referring clinics and related laboratory-to-result turn-around time (LTR-TAT) data, were extracted from the NHLS Corporate-Data-Warehouse for the period April-2012 to March-2013. Tiers were costed separately (as a cost-per-result) including equipment, staffing, reagents and test consumable costs. A one-way sensitivity analysis provided for changes in reagent price, test volumes and personnel time. Results The lowest cost-per-result was noted for the existing laboratory-based Tiers 4 and 5 ($6.24 and $5.37 respectively), but with related increased LTR-TAT of >24–48 hours. Full service coverage with TAT <6-hours could be achieved with placement of twenty-seven Tier-1/POC or eight Tier-2/POC-hubs, at a cost-per-result of $32.32 and $15.88 respectively. A single district Tier-3 laboratory also ensured ‘full service coverage’ and <24 hour LTR-TAT for the district at $7.42 per-test. Conclusion Implementing a single Tier-3/community laboratory to extend and improve delivery

  1. High-Speed Wind-Tunnel Tests of a Model of the Lockheed YP-80A Airplane Including Correlation with Flight Tests and Tests of Dive-Recovery Flaps

    NASA Technical Reports Server (NTRS)

    Cleary, Joseph W.; Gray, Lyle J.

    1947-01-01

This report contains the results of tests of a 1/3-scale model of the Lockheed YP-80A "Shooting Star" airplane and a comparison of drag, maximum lift coefficient, and elevator angle required for level flight as measured in the wind tunnel and in flight. Included in the report are the general aerodynamic characteristics of the model and of two types of dive-recovery flaps, one at several positions along the chord on the lower surface of the wing and the other on the lower surface of the fuselage. The results show good agreement between the flight and wind-tunnel measurements at all Mach numbers. The results indicate that the YP-80A is controllable in pitch by the elevators to a Mach number of at least 0.85. The fuselage dive-recovery flaps are effective for producing a climbing moment and increasing the drag at Mach numbers up to at least 0.8. The wing dive-recovery flaps are most effective for producing a climbing moment at 0.75 Mach number. At 0.85 Mach number, their effectiveness is approximately 50 percent of the maximum. The optimum position for the wing dive-recovery flaps to produce a climbing moment is at approximately 35 percent of the chord.

  2. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1983-01-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.
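
    A minimal sketch, under stated assumptions, of a simplified top-of-descent distance calculation in the same spirit: altitude to lose, a descent rate approximated linearly in altitude, and a ground-speed correction for wind. The coefficients and function below are illustrative placeholders rather than the flight-tested performance model.

    ```python
    # Hedged sketch of a simplified constant-speed descent-planning calculation:
    # distance needed before the metering fix from altitude to lose, a linearly
    # approximated descent rate, and ground speed including wind.  Coefficients
    # are illustrative, not the report's airplane performance model.
    def descent_distance_nm(cruise_alt_ft: float, fix_alt_ft: float,
                            true_airspeed_kt: float, tailwind_kt: float,
                            a_fpm: float = 1800.0, b_fpm_per_ft: float = 0.02) -> float:
        """Distance needed to descend from cruise altitude to the metering-fix altitude."""
        ground_speed_kt = true_airspeed_kt + tailwind_kt
        mid_alt = 0.5 * (cruise_alt_ft + fix_alt_ft)
        descent_rate_fpm = a_fpm + b_fpm_per_ft * mid_alt     # linear approximation
        time_min = (cruise_alt_ft - fix_alt_ft) / descent_rate_fpm
        return ground_speed_kt * time_min / 60.0

    print(f"{descent_distance_nm(35000.0, 10000.0, 440.0, 20.0):.1f} nm before the fix")
    ```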

  3. Development and test results of a flight management algorithm for fuel conservative descents in a time-based metered traffic environment

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1980-01-01

    A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three dimensional path with terminal area time constraints (four dimensional) for an airplane to make an idle thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.

  4. Cost effectiveness of non-invasive tests including duplex scanning for diagnosis of deep venous thrombosis. A prospective study carried out on 511 patients.

    PubMed

    Bendayan, P; Boccalon, H

    1991-01-01

Recent studies have elucidated the cost-effectiveness of various diagnostic methods used to detect deep venous thrombosis (DVT) of the lower limbs. These methods include Doppler, plethysmography and labelled fibrinogen tests. However, duplex scanning has recently proven to be a more reliable examination. With a view to establishing a realistic appraisal of matters as they stand, the authors have carried out a prospective study to compare the relative cost-effectiveness of purely physical examination, duplex scanning associated with strain-gauge plethysmography, contrast venography indicated for each proximal DVT, and contrast venography as a first-choice examination. 511 consecutive patients suspected of DVT of the lower limbs were examined using the various non-invasive methods cited above. 185 of the patients underwent contrast venography. When compared with those of the non-invasive tests, the results of the latter examination provided for extrapolation to the total population of 511 patients so as to better evaluate costs. We are able to conclude that physical examination alone is neither cost-effective nor risk free. Non-invasive tests, which are more reliable, provide annual savings greater than 1,500,000 FF ($ 240,000) with respect to venography. Performing venography for each proximal DVT increases spending by little: savings are again greater than 1,200,000 FF ($ 192,000).

  5. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing. CRESST Report 830

    ERIC Educational Resources Information Center

    Cai, Li

    2013-01-01

    Lord and Wingersky's (1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined…
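
    A minimal sketch of the classic unidimensional Lord-Wingersky recursion for dichotomous items at a single quadrature point; the multidimensional extension discussed in the report is not shown, and the 2PL item parameters below are illustrative.

    ```python
    # Hedged sketch of the Lord-Wingersky recursion: given item response
    # probabilities P_k(theta) at one quadrature point, build the likelihood of
    # each summed score recursively over items.  Unidimensional textbook form;
    # item parameters are illustrative.
    import math

    def lord_wingersky(p_correct):
        """Return a list L where L[s] = P(summed score = s | theta)."""
        likelihood = [1.0]                     # zero items -> score 0 with probability 1
        for p in p_correct:
            new = [0.0] * (len(likelihood) + 1)
            for s, val in enumerate(likelihood):
                new[s] += val * (1.0 - p)      # item answered incorrectly
                new[s + 1] += val * p          # item answered correctly
            likelihood = new
        return likelihood

    def p_2pl(theta, a, b):
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7)]        # (a, b) pairs, illustrative
    probs = [p_2pl(0.0, a, b) for a, b in items]         # at quadrature point theta = 0
    score_likelihoods = lord_wingersky(probs)
    print([round(v, 4) for v in score_likelihoods], "sum =", round(sum(score_likelihoods), 6))
    ```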

  6. Development and Field-Testing of a Study Protocol, including a Web-Based Occupant Survey Tool, for Use in Intervention Studies of Indoor Environmental Quality

    SciTech Connect

    Mendell, Mark; Eliseeva, Ekaterina; Spears, Michael; Fisk, William J.

    2009-06-01

We developed and pilot-tested an overall protocol for intervention studies to evaluate the effects of indoor environmental changes in office buildings on the health symptoms and comfort of occupants. The protocol includes a web-based survey to assess the occupant's responses, as well as specific features of study design and analysis. The pilot study, carried out on two similar floors in a single building, compared two types of ventilation system filter media. With support from the building's Facilities staff, the implementation of the filter change intervention went well. While the web-based survey tool also worked well, low overall response rates (21-34 percent among the three work groups included) limited our ability to evaluate the filter intervention. The total number of questionnaires returned was low even though we extended the study from eight to ten weeks. Because another simultaneous study we conducted elsewhere using the same survey had a high response rate (>70 percent), we conclude that the low response here resulted from issues specific to this pilot, including unexpected restrictions by some employing agencies on communication with occupants.

  7. Safety and anti-HIV assessments of natural vaginal cleansing products in an established topical microbicides in vitro testing algorithm

    PubMed Central

    2010-01-01

    Background At present, there is no effective vaccine or other approved product for the prevention of sexually transmitted human immunodeficiency virus type 1 (HIV-1) infection. It has been reported that women in resource-poor communities use vaginally applied citrus juices as topical microbicides. These easily accessible food products have historically been applied to prevent pregnancy and sexually transmitted diseases. The aim of this study was to evaluate the efficacy and cytotoxicity of these substances using an established topical microbicide testing algorithm. Freshly squeezed lemon and lime juice and household vinegar were tested in their original state or in pH neutralized form for efficacy and cytotoxicity in the CCR5-tropic cell-free entry and cell-associated transmission assays, CXCR4-tropic entry and fusion assays, and in a human PBMC-based anti-HIV-1 assay. These products were also tested for their effect on viability of cervico-vaginal cell lines, human cervical explant tissues, and beneficial Lactobacillus species. Results Natural lime and lemon juice and household vinegar demonstrated anti-HIV-1 activity and cytotoxicity in transformed cell lines. Neutralization of the products reduced both anti-HIV-1 activity and cytotoxicity, resulting in a low therapeutic window for both acidic and neutralized formulations. For the natural juices and vinegar, the IC50 was ≤ 3.5 (0.8-3.5)% and the TC50 ≤ 6.3 (1.0-6.3)%. All three liquid products inhibited viability of beneficial Lactobacillus species associated with vaginal health. Comparison of three different toxicity endpoints in the cervical HeLa cell line revealed that all three products affected membrane integrity, cytosolic enzyme release, and dehydrogenase enzyme activity in living cells. The juices and vinegar also exerted strong cytotoxicity in cervico-vaginal cell lines, mainly due to their acidic pH. In human cervical explant tissues, treatment with 5% lemon or lime juice or 6% vinegar induced

  8. A new single nucleotide polymorphism in CAPN1 extends the current tenderness marker test to include cattle of Bos indicus, Bos taurus, and crossbred descent.

    PubMed

    White, S N; Casas, E; Wheeler, T L; Shackelford, S D; Koohmaraie, M; Riley, D G; Chase, C C; Johnson, D D; Keele, J W; Smith, T P L

    2005-09-01

    The three objectives of this study were to 1) test for the existence of beef tenderness markers in the CAPN1 gene segregating in Brahman cattle; 2) test existing CAPN1 tenderness markers in indicus-influenced crossbred cattle; and 3) produce a revised marker system for use in cattle of all subspecies backgrounds. Previously, two SNP in the CAPN1 gene have been described that could be used to guide selection in Bos taurus cattle (designated Markers 316 and 530), but neither marker segregates at high frequency in Brahman cattle. In this study, we examined three additional SNP in CAPN1 to determine whether variation in this gene could be associated with tenderness in a large, multisire American Brahman population. One marker (termed 4751) was associated with shear force on postmortem d 7 (P < 0.01), 14 (P = 0.015), and 21 (P < 0.001) in this population, demonstrating that genetic variation important for tenderness segregates in Bos indicus cattle at or near CAPN1. Marker 4751 also was associated with shear force (P < 0.01) in the same large, multisire population of cattle of strictly Bos taurus descent that was used to develop the previously reported SNP (referred to as the Germplasm Evaluation [GPE] Cycle 7 population), indicating the possibility that one marker could have wide applicability in cattle of all subspecies backgrounds. To test this hypothesis, Marker 4751 was tested in a third large, multisire cattle population of crossbred subspecies descent (including sire breeds of Brangus, Beefmaster, Bonsmara, Romosinuano, Hereford, and Angus referred to as the GPE Cycle 8 population). The highly significant association of Marker 4751 with shear force in this population (P < 0.001) confirms the usefulness of Marker 4751 in cattle of all subspecies backgrounds, including Bos taurus, Bos indicus, and crossbred descent. This wide applicability adds substantial value over previously released Markers 316 and 530. However, Marker 316, which had previously been shown to be

  9. Testing the GLAaS algorithm for dose measurements on low- and high-energy photon beams using an amorphous silicon portal imager

    SciTech Connect

    Nicolini, Giorgia; Fogliata, Antonella; Vanetti, Eugenio; Clivio, Alessandro; Vetterli, Daniel; Cozzi, Luca

    2008-02-15

The GLAaS algorithm for pretreatment intensity modulation radiation therapy absolute dose verification based on the use of amorphous silicon detectors, as described in Nicolini et al. [G. Nicolini, A. Fogliata, E. Vanetti, A. Clivio, and L. Cozzi, Med. Phys. 33, 2839-2851 (2006)], was tested under a variety of experimental conditions to investigate its robustness, the possibility of using it in different clinics and its performance. GLAaS was therefore tested on a low-energy Varian Clinac (6 MV) equipped with an amorphous silicon Portal Vision PV-aS500 with electronic readout IAS2 and on a high-energy Clinac (6 and 15 MV) equipped with a PV-aS1000 and IAS3 electronics. Tests were performed for three calibration conditions: A: adding buildup on the top of the cassette such that SDD-SSD = d_max and comparing measurements with corresponding doses computed at d_max, B: without adding any buildup on the top of the cassette and considering only the intrinsic water-equivalent thickness of the electronic portal imaging device (0.8 cm), and C: without adding any buildup on the top of the cassette but comparing measurements against doses computed at d_max. This procedure is similar to that usually applied when in vivo dosimetry is performed with solid state diodes without sufficient buildup material. Quantitatively, the gamma index (γ), as described by Low et al. [D. A. Low, W. B. Harms, S. Mutic, and J. A. Purdy, Med. Phys. 25, 656-660 (1998)], was assessed. The γ index was computed for a distance to agreement (DTA) of 3 mm. The dose difference ΔD was considered as 2%, 3%, and 4%. As a measure of the quality of results, the fraction of field area with gamma larger than 1 (%FA) was scored. Results over a set of 50 test samples (including fields from head and neck, breast, prostate, anal canal, and brain cases) and from the long-term routine usage, demonstrated the robustness and stability of GLAaS. In general, the mean values of %FA
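
    A minimal sketch of the gamma-index scoring used to summarize the comparisons (Low et al., 1998): for each reference point, γ is the minimum over the evaluated dose grid of the combined distance-to-agreement and dose-difference metric, and %FA is the fraction of points with γ > 1. The brute-force 2D implementation and synthetic dose grids below are illustrative, not the GLAaS calibration procedure itself.

    ```python
    # Hedged sketch of a brute-force 2D gamma-index evaluation with a 3 mm DTA and
    # a global 3% dose-difference criterion, scoring %FA = fraction of points with
    # gamma > 1.  Dose grids are synthetic placeholders.
    import numpy as np

    def gamma_map(ref, ev, pixel_mm=1.0, dta_mm=3.0, dd_frac=0.03):
        ny, nx = ref.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        dd_abs = dd_frac * ref.max()                       # global dose-difference criterion
        out = np.empty_like(ref, dtype=float)
        for iy in range(ny):
            for ix in range(nx):
                dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * pixel_mm ** 2
                dose2 = (ev - ref[iy, ix]) ** 2
                out[iy, ix] = np.sqrt(dist2 / dta_mm ** 2 + dose2 / dd_abs ** 2).min()
        return out

    rng = np.random.default_rng(1)
    reference = np.ones((40, 40)) * 2.0                    # Gy, synthetic
    evaluated = reference * (1.0 + rng.normal(0.0, 0.01, reference.shape))
    g = gamma_map(reference, evaluated)
    print(f"%FA (gamma > 1): {100.0 * np.mean(g > 1.0):.1f}%")
    ```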

  10. HPTN 071 (PopART): A Cluster-Randomized Trial of the Population Impact of an HIV Combination Prevention Intervention Including Universal Testing and Treatment: Mathematical Model

    PubMed Central

    Cori, Anne; Ayles, Helen; Beyers, Nulda; Schaap, Ab; Floyd, Sian; Sabapathy, Kalpana; Eaton, Jeffrey W.; Hauck, Katharina; Smith, Peter; Griffith, Sam; Moore, Ayana; Donnell, Deborah; Vermund, Sten H.; Fidler, Sarah; Hayes, Richard; Fraser, Christophe

    2014-01-01

    Background The HPTN 052 trial confirmed that antiretroviral therapy (ART) can nearly eliminate HIV transmission from successfully treated HIV-infected individuals within couples. Here, we present the mathematical modeling used to inform the design and monitoring of a new trial aiming to test whether widespread provision of ART is feasible and can substantially reduce population-level HIV incidence. Methods and Findings The HPTN 071 (PopART) trial is a three-arm cluster-randomized trial of 21 large population clusters in Zambia and South Africa, starting in 2013. A combination prevention package including home-based voluntary testing and counseling, and ART for HIV positive individuals, will be delivered in arms A and B, with ART offered universally in arm A and according to national guidelines in arm B. Arm C will be the control arm. The primary endpoint is the cumulative three-year HIV incidence. We developed a mathematical model of heterosexual HIV transmission, informed by recent data on HIV-1 natural history. We focused on realistically modeling the intervention package. Parameters were calibrated to data previously collected in these communities and national surveillance data. We predict that, if targets are reached, HIV incidence over three years will drop by >60% in arm A and >25% in arm B, relative to arm C. The considerable uncertainty in the predicted reduction in incidence justifies the need for a trial. The main drivers of this uncertainty are possible community-level behavioral changes associated with the intervention, uptake of testing and treatment, as well as ART retention and adherence. Conclusions The HPTN 071 (PopART) trial intervention could reduce HIV population-level incidence by >60% over three years. This intervention could serve as a paradigm for national or supra-national implementation. Our analysis highlights the role mathematical modeling can play in trial development and monitoring, and more widely in evaluating the impact of treatment

  11. Corrective Action Investigation Plan for Corrective Action Unit 410: Waste Disposal Trenches, Tonopah Test Range, Nevada, Revision 0 (includes ROTCs 1, 2, and 3)

    SciTech Connect

    NNSA /NV

    2002-07-16

This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 410 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 410 is located on the Tonopah Test Range (TTR), which is included in the Nevada Test and Training Range (formerly the Nellis Air Force Range) approximately 140 miles northwest of Las Vegas, Nevada. This CAU is comprised of five Corrective Action Sites (CASs): TA-19-002-TAB2, Debris Mound; TA-21-003-TANL, Disposal Trench; TA-21-002-TAAL, Disposal Trench; 09-21-001-TA09, Disposal Trenches; 03-19-001, Waste Disposal Site. This CAU is being investigated because contaminants may be present in concentrations that could potentially pose a threat to human health and/or the environment, and waste may have been disposed of without appropriate controls. Four out of five of these CASs are the result of weapons testing and disposal activities at the TTR, and they are grouped together for site closure based on the similarity of the sites (waste disposal sites and trenches). The fifth CAS, CAS 03-19-001, is a hydrocarbon spill related to activities in the area. This site is grouped with this CAU because of the location (TTR). Based on historical documentation and process knowledge, vertical and lateral migration routes are possible for all CASs. Migration of contaminants may have occurred through transport by infiltration of precipitation through surface soil which serves as a driving force for downward migration of contaminants. Land-use scenarios limit future use of these CASs to industrial activities. The suspected contaminants of potential concern which have been identified are volatile organic compounds; semivolatile organic compounds; high explosives; radiological constituents including depleted uranium

  12. Algorithmic approach for methyl-CpG binding protein 2 (MECP2) gene testing in patients with neurodevelopmental disabilities.

    PubMed

    Sanmann, Jennifer N; Schaefer, G Bradley; Buehler, Bruce A; Sanger, Warren G

    2012-03-01

    Methyl-CpG binding protein 2 gene (MECP2) testing is indicated for patients with numerous clinical presentations, including Rett syndrome (classic and atypical), unexplained neonatal encephalopathy, Angelman syndrome, nonspecific mental retardation, autism (females), and an X-linked family history of developmental delay. Because of this complexity, a gender-specific approach for comprehensive MECP2 gene testing is described. Briefly, sequencing of exons 1 to 4 of MECP2 is recommended for patients with a Rett syndrome phenotype, unexplained neonatal encephalopathy, an Angelman syndrome phenotype (with negative 15q11-13 analysis), nonspecific mental retardation, or autism (females). Additional testing for large-scale MECP2 deletions is recommended for patients with Rett syndrome or Angelman syndrome phenotypes (with negative 15q11-13 analysis) following negative sequencing. Alternatively, testing for large-scale MECP2 duplications is recommended for males presenting with mental retardation, an X-linked family history of developmental delay, and a significant proportion of previously described clinical features (particularly a history of recurrent respiratory infections).

  13. Application of fusion algorithms for computer aided detection and classification of bottom mines to synthetic aperture sonar test data

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William C.

    2006-05-01

    Over the past several years, Raytheon Company has adapted its Computer Aided Detection/Computer-Aided Classification (CAD/CAC) algorithm to process side-scan sonar imagery taken in both the Very Shallow Water (VSW) and Shallow Water (SW) operating environments. This paper describes the further adaptation of this CAD/CAC algorithm to process Synthetic Aperture Sonar (SAS) image data taken by an Autonomous Underwater Vehicle (AUV). The tuning of the CAD/CAC algorithm for the vehicle's sonar is described, the resulting classifier performance is presented, and the fusion of the classifier outputs with those of another CAD/CAC processor is evaluated. The fusion algorithm accepts the classification confidence levels and associated contact locations from the different CAD/CAC algorithms, clusters the contacts based on the distance between their locations, and then declares a valid target when a clustered contact passes a prescribed fusion criterion. Three different fusion criteria are evaluated: the first based on thresholding the sum of the confidence factors for the clustered contacts, the second based on simple binary combinations of the multiple CAD/CAC processor outputs, and the third based on the Fisher Discriminant. The resulting performance of the three fusion algorithms is compared, and the overall performance benefit of a significant reduction of false alarms at high correct classification probabilities is quantified.
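
    A minimal sketch of the first two fusion criteria as described: contacts from different CAD/CAC processors are clustered by distance, and a target is declared when either the summed confidence exceeds a threshold or a simple binary rule (detections from both processors) is met. The Fisher-discriminant criterion is omitted, and all class names, distances, and thresholds are hypothetical.

    ```python
    # Hedged sketch of contact-level fusion: cluster contacts by location, then
    # apply (a) a summed-confidence threshold and (b) a binary both-processors rule.
    # Contacts, radius, and threshold are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Contact:
        x_m: float
        y_m: float
        confidence: float
        processor: str

    def cluster(contacts, radius_m=10.0):
        clusters = []
        for c in contacts:
            for cl in clusters:
                if any(((c.x_m - o.x_m) ** 2 + (c.y_m - o.y_m) ** 2) ** 0.5 <= radius_m for o in cl):
                    cl.append(c)
                    break
            else:
                clusters.append([c])
        return clusters

    def fuse(clusters, conf_threshold=1.2):
        declarations = []
        for cl in clusters:
            sum_conf = sum(c.confidence for c in cl)
            both = {c.processor for c in cl} >= {"A", "B"}
            declarations.append({"n_contacts": len(cl),
                                 "criterion_sum": sum_conf >= conf_threshold,
                                 "criterion_binary": both})
        return declarations

    contacts = [Contact(100.0, 200.0, 0.7, "A"), Contact(104.0, 203.0, 0.6, "B"),
                Contact(500.0, 40.0, 0.5, "A")]
    print(fuse(cluster(contacts)))
    ```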

  14. Germline MLH1 and MSH2 mutational spectrum including frequent large genomic aberrations in Hungarian hereditary non-polyposis colorectal cancer families: Implications for genetic testing

    PubMed Central

    Papp, Janos; Kovacs, Marietta E; Olah, Edith

    2007-01-01

AIM: To analyze the prevalence of germline MLH1 and MSH2 gene mutations and evaluate the clinical characteristics of Hungarian hereditary non-polyposis colorectal cancer (HNPCC) families. METHODS: Thirty-six kindreds were tested for mutations using conformation-sensitive gel electrophoresis, direct sequencing and also screening for genomic rearrangements applying multiplex ligation-dependent probe amplification (MLPA). RESULTS: Eighteen germline mutations (50%) were identified, 9 in MLH1 and 9 in MSH2. Sixteen of these sequence alterations were considered pathogenic; the remaining two were non-conservative missense alterations occurring at highly conserved functional motifs. The majority of the definite pathogenic mutations (81%, 13/16) were found in families fulfilling the stringent Amsterdam I/II criteria, including three rearrangements revealed by MLPA (two in MSH2 and one in MLH1). However, in three out of sixteen HNPCC-suspected families (19%), a disease-causing alteration could be revealed. Furthermore, nine mutations described here are novel, and none of the sequence changes were found in more than one family. CONCLUSION: Our study describes for the first time the prevalence and spectrum of germline mismatch repair gene mutations in Hungarian HNPCC and suspected-HNPCC families. The results presented here suggest that clinical selection criteria should be relaxed and detection of genomic rearrangements should be included in genetic screening in this population. PMID:17569143

  15. Corrective Action Investigation Plan for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada (December 2002, Revision No.: 0), Including Record of Technical Change No. 1

    SciTech Connect

    NNSA /NSO

    2002-12-12

The Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 204 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 204 is located on the Nevada Test Site approximately 65 miles northwest of Las Vegas, Nevada. This CAU is comprised of six Corrective Action Sites (CASs) which include: 01-34-01, Underground Instrument House Bunker; 02-34-01, Instrument Bunker; 03-34-01, Underground Bunker; 05-18-02, Chemical Explosives Storage; 05-33-01, Kay Blockhouse; 05-99-02, Explosive Storage Bunker. Based on site history, process knowledge, and previous field efforts, contaminants of potential concern for Corrective Action Unit 204 collectively include radionuclides, beryllium, high explosives, lead, polychlorinated biphenyls, total petroleum hydrocarbons, silver, warfarin, and zinc phosphide. The primary question for the investigation is: "Are existing data sufficient to evaluate appropriate corrective actions?" To address this question, resolution of two decision statements is required. Decision I is to "Define the nature of contamination" by identifying any contamination above preliminary action levels (PALs); Decision II is to "Determine the extent of contamination identified above PALs." If PALs are not exceeded, the investigation is completed. If PALs are exceeded, then Decision II must be resolved. In addition, data will be obtained to support waste management decisions. Field activities will include radiological land area surveys, geophysical surveys to identify any subsurface metallic and nonmetallic debris, field screening for applicable contaminants of potential concern, collection and analysis of surface and subsurface soil samples from biased locations, and step-out sampling to define the extent of

  16. Improved error estimates of a discharge algorithm for remotely sensed river measurements: Test cases on Sacramento and Garonne Rivers

    NASA Astrophysics Data System (ADS)

    Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward

    2016-01-01

We present an improvement to a previously presented algorithm that used a Bayesian Markov Chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope. We also present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers that have differing hydraulic characteristics. The algorithm is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using expected observation error. The discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors of discharge estimates was shown to improve algorithm performance. From the error budget, the primary source of uncertainty was the a priori uncertainty of bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics. For example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.
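
    A minimal sketch of how discharge and its uncertainty can be tied to the observed quantities through Manning's equation under a wide-channel approximation, with a crude Monte Carlo propagation of observation and prior (bathymetry, roughness) uncertainty. This illustrates the error-budget idea only; it is not the Bayesian MCMC algorithm evaluated in the paper, and all numbers are made up.

    ```python
    # Hedged sketch: wide-channel Manning's-equation discharge from remotely sensed
    # width, water-surface elevation, and slope, with Monte Carlo propagation of
    # observation and prior uncertainty.  All values are illustrative.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 5000
    width_m   = rng.normal(250.0, 10.0, n)       # observed width +/- error
    wse_m     = rng.normal(12.0, 0.10, n)        # water-surface elevation +/- error
    slope     = rng.normal(1.0e-4, 1.0e-5, n)    # water-surface slope +/- error
    bed_m     = rng.normal(8.0, 0.50, n)         # a priori bathymetry (bed elevation)
    manning_n = rng.normal(0.03, 0.005, n)       # a priori roughness

    depth = np.clip(wse_m - bed_m, 0.1, None)    # wide channel: hydraulic radius ~ depth
    q = (1.0 / manning_n) * width_m * depth ** (5.0 / 3.0) * np.sqrt(np.clip(slope, 1e-6, None))

    print(f"median discharge ~ {np.median(q):.0f} m^3/s, "
          f"relative uncertainty ~ {100.0 * np.std(q) / np.mean(q):.0f}%")
    ```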

  17. An interactive ontology-driven information system for simulating background radiation and generating scenarios for testing special nuclear materials detection algorithms

    DOE PAGES

    Sorokine, Alexandre; Schlicher, Bob G.; Ward, Richard C.; Wright, Michael C.; Kruse, Kara L.; Bhaduri, Budhendra; Slepoy, Alexander

    2015-05-22

This paper describes an original approach to generating scenarios for the purpose of testing the algorithms used to detect special nuclear materials (SNM) that incorporates the use of ontologies. Separating the signal of SNM from the background requires sophisticated algorithms. To assist in developing such algorithms, there is a need for scenarios that capture a very wide range of variables affecting the detection process, depending on the type of detector being used. To provide such a capability, we developed an ontology-driven information system (ODIS) for generating scenarios that can be used in testing algorithms for SNM detection. The ontology-driven scenario generator (ODSG) is an ODIS based on information supplied by subject matter experts and other documentation. The details of the creation of the ontology, the development of the ontology-driven information system, and the design of the web user interface (UI) are presented along with specific examples of scenarios generated using the ODSG. We demonstrate that the paradigm behind the ODSG is capable of addressing the problem of semantic complexity at both the user and developer levels. Compared to traditional approaches, an ODIS provides benefits such as faithful representation of the users' domain conceptualization, simplified management of very large and semantically diverse datasets, and the ability to handle frequent changes to the application and the UI. Furthermore, the approach makes possible the generation of a much larger number of specific scenarios based on limited user-supplied information.

  18. An interactive ontology-driven information system for simulating background radiation and generating scenarios for testing special nuclear materials detection algorithms

    SciTech Connect

    Sorokine, Alexandre; Schlicher, Bob G.; Ward, Richard C.; Wright, Michael C.; Kruse, Kara L.; Bhaduri, Budhendra; Slepoy, Alexander

    2015-05-22

This paper describes an original approach to generating scenarios for the purpose of testing the algorithms used to detect special nuclear materials (SNM) that incorporates the use of ontologies. Separating the signal of SNM from the background requires sophisticated algorithms. To assist in developing such algorithms, there is a need for scenarios that capture a very wide range of variables affecting the detection process, depending on the type of detector being used. To provide such a capability, we developed an ontology-driven information system (ODIS) for generating scenarios that can be used in testing algorithms for SNM detection. The ontology-driven scenario generator (ODSG) is an ODIS based on information supplied by subject matter experts and other documentation. The details of the creation of the ontology, the development of the ontology-driven information system, and the design of the web user interface (UI) are presented along with specific examples of scenarios generated using the ODSG. We demonstrate that the paradigm behind the ODSG is capable of addressing the problem of semantic complexity at both the user and developer levels. Compared to traditional approaches, an ODIS provides benefits such as faithful representation of the users' domain conceptualization, simplified management of very large and semantically diverse datasets, and the ability to handle frequent changes to the application and the UI. Furthermore, the approach makes possible the generation of a much larger number of specific scenarios based on limited user-supplied information.

  19. Political violence and child adjustment in Northern Ireland: Testing pathways in a social-ecological model including single-and two-parent families.

    PubMed

    Cummings, E Mark; Schermerhorn, Alice C; Merrilees, Christine E; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed

    2010-07-01

    Moving beyond simply documenting that political violence negatively impacts children, we tested a social-ecological hypothesis for relations between political violence and child outcomes. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including single- and two-parent families. Sectarian community violence was associated with elevated family conflict and children's reduced security about multiple aspects of their social environment (i.e., family, parent-child relations, and community), with links to child adjustment problems and reductions in prosocial behavior. By comparison, and consistent with expectations, links with negative family processes, child regulatory problems, and child outcomes were less consistent for nonsectarian community violence. Support was found for a social-ecological model for relations between political violence and child outcomes among both single- and two-parent families, with evidence that emotional security and adjustment problems were more negatively affected in single-parent families. The implications for understanding social ecologies of political violence and children's functioning are discussed.

  20. Fast training algorithms for multilayer neural nets.

    PubMed

    Brent, R P

    1991-01-01

    An algorithm that is faster than back-propagation and for which it is not necessary to specify the number of hidden units in advance is described. The relationship with other fast pattern-recognition algorithms, such as algorithms based on k-d trees, is discussed. The algorithm has been implemented and tested on artificial problems, such as the parity problem, and on real problems arising in speech recognition. Experimental results, including training times and recognition accuracy, are given. Generally, the algorithm achieves accuracy as good as or better than nets trained using back-propagation. Accuracy is comparable to that for the nearest-neighbor algorithm, which is slower and requires more storage space.

  1. SEBAL-A: A remote sensing ET algorithm that accounts for advection with limited data. Part II: Test for transferability

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Because the Surface Energy Balance Algorithm for Land (SEBAL) tends to underestimate ET under conditions of advection, the model was modified by incorporating an advection component as part of the energy usable for crop evapotranspiration (ET). The modification involved the estimation of advected en...

  2. Flight tests of three-dimensional path-redefinition algorithms for transition from Radio Navigation (RNAV) to Microwave Landing System (MLS) navigation when flying an aircraft on autopilot

    NASA Technical Reports Server (NTRS)

    Hueschen, Richard M.

    1988-01-01

    This report contains results of flight tests for three path update algorithms designed to provide smooth transition for an aircraft guidance system from DME, VORTAC, and barometric navaids to the more precise MLS by modifying the desired 3-D flight path. The first algorithm, called Zero Cross Track, eliminates the discontinuity in cross-track and altitude error at transition by designating the first valid MLS aircraft position as the desired first waypoint, while retaining all subsequent waypoints. The discontinuity in track angle is left unaltered. The second, called Tangent Path, also eliminates the discontinuity in cross-track and altitude errors and chooses a new desired heading to be tangent to the next oncoming circular arc turn. The third, called Continued Track, eliminates the discontinuity in cross-track, altitude, and track angle errors by accepting the current MLS position and track angle as the desired ones and recomputes the location of the next waypoint. The flight tests were conducted on the Transportation Systems Research Vehicle, a small twin-jet transport aircraft modified for research under the Advanced Transport Operating Systems program at Langley Research Center. The flight tests showed that the algorithms provided a smooth transition to MLS.

  3. Blockage and flow studies of a generalized test apparatus including various wing configurations in the Langley 7-inch Mach 7 Pilot Tunnel

    NASA Technical Reports Server (NTRS)

    Albertson, C. W.

    1982-01-01

A 1/12th scale model of the Curved Surface Test Apparatus (CSTA), which will be used to study aerothermal loads and evaluate Thermal Protection Systems (TPS) on a fuselage-type configuration in the Langley 8-Foot High Temperature Structures Tunnel (8 ft HTST), was tested in the Langley 7-Inch Mach 7 Pilot Tunnel. The purpose of the tests was to study the overall flow characteristics and define an envelope for testing the CSTA in the 8 ft HTST. Wings were tested on the scaled CSTA model to select a wing configuration with the most favorable characteristics for conducting TPS evaluations for curved and intersecting surfaces. The results indicate that the CSTA and selected wing configuration can be tested at angles of attack up to 15.5 and 10.5 degrees, respectively. The base pressure for both models was at the expected low level for most test conditions. Results generally indicate that the CSTA and wing configuration will provide a useful test bed for aerothermal loads and thermal structural concept evaluation over a broad range of flow conditions in the 8 ft HTST.

  4. Item Selection in Computerized Adaptive Testing: Improving the a-Stratified Design with the Sympson-Hetter Algorithm

    ERIC Educational Resources Information Center

    Leung, Chi-Keung; Chang, Hua-Hua; Hau, Kit-Tai

    2002-01-01

    Item exposure control, test-overlap minimization, and the efficient use of item pool are some of the important issues in computerized adaptive testing (CAT) designs. The overexposure of some items and high test-overlap rate may cause both item and test security problems. Previously these problems associated with the maximum information (Max-I)…

  5. The evaluation of the OSGLR algorithm for restructurable controls

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.

    1986-01-01

    The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques and the OSGLR algorithm in particular is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.

  6. Rapid Diagnostic Tests for Dengue Virus Infection in Febrile Cambodian Children: Diagnostic Accuracy and Incorporation into Diagnostic Algorithms

    PubMed Central

    Carter, Michael J.; Emary, Kate R.; Moore, Catherine E.; Parry, Christopher M.; Sona, Soeng; Putchhat, Hor; Reaksmey, Sin; Chanpheaktra, Ngoun; Stoesser, Nicole; Dobson, Andrew D. M.; Day, Nicholas P. J.; Kumar, Varun; Blacksell, Stuart D.

    2015-01-01

    Background Dengue virus (DENV) infection is prevalent across tropical regions and may cause severe disease. Early diagnosis may improve supportive care. We prospectively assessed the Standard Diagnostics (Korea) BIOLINE Dengue Duo DENV rapid diagnostic test (RDT) for NS1 antigen and anti-DENV IgM (NS1 and IgM) in children in Cambodia, with the aim of improving the diagnosis of DENV infection. Methodology and principal findings We enrolled children admitted to hospital with non-localised febrile illnesses during the 5-month DENV transmission season. Clinical and laboratory variables, and DENV RDT results were recorded at admission. Children had blood culture and serological and molecular tests for common local pathogens, including reference laboratory DENV NS1 antigen and IgM assays. 337 children were admitted with non-localised febrile illness over 5 months. 71 (21%) had DENV infection (reference assay positive). Sensitivity was 58%, and specificity 85% for RDT NS1 and IgM combined. Conditional inference framework analysis showed the additional value of platelet and white cell counts for diagnosis of DENV infection. Variables associated with diagnosis of DENV infection were not associated with critical care admission (70 children, 21%) or mortality (19 children, 6%). Known causes of mortality were melioidosis (4), other sepsis (5), and malignancy (1). 22 (27%) children with a positive DENV RDT had a treatable other infection. Conclusions The DENV RDT had low sensitivity for the diagnosis of DENV infection. The high co-prevalence of infections in our cohort indicates the need for a broad microbiological assessment of non-localised febrile illness in these children. PMID:25710684

  7. Beyond U(crit): matching swimming performance tests to the physiological ecology of the animal, including a new fish 'drag strip'.

    PubMed

    Nelson, J A; Gotwalt, P S; Reidy, S P; Webber, D M

    2002-10-01

    Locomotor performance of animals is of considerable interest from management, physiological, ecological and evolutionary perspectives. Yet, despite the extensive commercial exploitation of fishes and interest in the health of various fish stocks, the relationships between performance capacity, natural selection, ecology and physiology are poorly known for fishes. One reason may be the technical challenges faced when trying to measure various locomotor capacities in aquatic species, but we will argue that the slow pace of developing new species-appropriate swim tests is also hindering progress. A technique developed for anadromous salmonids (the U(crit) procedure) has dominated the fish exercise physiology field and, while accounting for major advances in the field, has often been used arbitrarily. Here we propose criteria swimming tests should adhere to and report on several attempts to match swimming tests to the physiological ecology of the animal. Sprint performance measured with a laser diode/photocell timed 'drag strip' is a new method employing new technology and is reported on in some detail. A second new test involves accelerating water past the fish at a constant rate in a traditional swim tunnel/respirometer. These two performance tests were designed to better understand the biology of a bentho-pelagic marine fish, the Atlantic cod (Gadus morhua). Finally, we report on a modified incremental velocity test that was developed to better understand the biology of the blacknose dace (Rhinichthys atratulus), a Nearctic, lotic cyprinid.

  8. A General Tank Test of NACA Model 11-C Flying-boat Hull, Including the Effect of Changing the Plan Form of the Step

    NASA Technical Reports Server (NTRS)

    Dawson, John R

    1935-01-01

    The results of a general tank test of model 11-C, a conventional pointed afterbody type of flying-boat hull, are given in tables and curves. These results are compared with the results of tests on model 11-A, from which model 11-C was derived, and it is found that the resistance of model 11-C is somewhat greater. The effect of changing the plan form of the step on model 11-C is shown from the results of tests made with three swallow-tail and three pointed steps formed by altering the original step of the model. These results show only minor differences from the results obtained with the original model.

  9. Comparison of options for reduction of noise in the test section of the NASA Langley 4x7m wind tunnel, including reduction of nozzle area

    NASA Technical Reports Server (NTRS)

    Hayden, R. E.

    1984-01-01

    The acoustically significant features of the NASA 4X7m wind tunnel and the Dutch-German DNW low speed tunnel are compared to illustrate the reasons for large differences in background noise in the open jet test sections of the two tunnels. Also introduced is the concept of reducing test section noise levels through fan and turning vane source reductions which can be brought about by reducing the nozzle cross sectional area, and thus the circuit mass flow for a particular exit velocity. The costs and benefits of treating sources, paths, and changing nozzle geometry are reviewed.

  10. Use of an Aptitude Test in University Entrance--A Validity Study: Updated Analyses of Higher Education Destinations, Including 2007 Entrants

    ERIC Educational Resources Information Center

    Kirkup, Catherine; Wheater, Rebecca; Morrison, Jo; Durbin, Ben

    2010-01-01

    In 2005, the National Foundation for Educational Research (NFER) was commissioned to evaluate the potential value of using an aptitude test as an additional tool in the selection of candidates for admission to higher education (HE). This five-year study is co-funded by the National Foundation for Educational Research (NFER), the Department for…

  11. Political Violence and Child Adjustment in Northern Ireland: Testing Pathways in a Social-Ecological Model Including Single- and Two-Parent Families

    ERIC Educational Resources Information Center

    Cummings, E. Mark; Schermerhorn, Alice C.; Merrilees, Christine E.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed

    2010-01-01

    Moving beyond simply documenting that political violence negatively impacts children, we tested a social-ecological hypothesis for relations between political violence and child outcomes. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including…

  12. Non-invasive tests in animal models and humans: a new paradigm for assessing efficacy of biologics including prebiotics and probiotics.

    PubMed

    Butler, R N

    2008-01-01

    Newer biological agents that are designed to have multiple effects on a host require better ways to determine both their safety and toxicity. Indeed ecologically potent factors such as agents that can alter the gut milieu and change host responses are now being realized as a viable alternative to more focused pharmaceuticals. Even in the pharmaceutical arena there is a growing awareness of the preventative and therapeutic potential of alternative agents. Probiotics and prebiotics amongst other agents fall into this category and can have both direct and indirect effects on the pathogenesis and progress of disease. This review details some of the new approaches that use non-invasive tests, first to enable a better characterization of the gastrointestinal mucosa along the continuum from stressed to damaged. They constitute ways to apply dynamic function testing in animal models and humans to provide reference points to which other measurements can be related, e.g., altered circulating cytokines or altered gene expression. As such this phenotypic scaffold, alone and combined with newer molecular parameters, will improve our understanding of the interaction of luminal factors within the alimentary tract and the impact that these have on physiologically challenged mucosa and in disease both at the gastrointestinal level and also in remote organs. Practically, the dynamic function tests, primarily breath tests, can now be used as diagnostic and prognostic indicators of the efficacy of new biologics such as probiotics and prebiotics that in part elicit their effects by altering the ecology of particular regions of the intestine. PMID:18537657

  13. Design and performance testing of an avalanche photodiode receiver with multiplication gain control algorithm for intersatellite laser communication

    NASA Astrophysics Data System (ADS)

    Yu, Xiaonan; Tong, Shoufeng; Dong, Yan; Song, Yansong; Hao, Shicong; Lu, Jing

    2016-06-01

    An avalanche photodiode (APD) receiver for intersatellite laser communication links is proposed and its performance is experimentally demonstrated. In the proposed system, a series of analog circuits are used not only to adjust the temperature and control the bias voltage but also to monitor the current and recover the clock from the communication data. In addition, the temperature compensation and multiplication gain control algorithm are embedded in the microcontroller to improve the performance of the receiver. As shown in the experiment, with the change of communication rate from 10 to 2000 Mbps, the detection sensitivity of the APD receiver varies from -47 to -34 dBm. Moreover, due to the existence of the multiplication gain control algorithm, the dynamic range of the APD receiver is effectively improved, while the dynamic range at 10, 100, and 1000 Mbps is 38.7, 37.7, and 32.8 dB, respectively. As a result, the experimental results agree well with the theoretical predictions, and the receiver will improve the flexibility of the intersatellite links without increasing the cost.

  14. Implementation and testing of a real-time 3-component phase picking program for Earthworm using the CECM algorithm

    NASA Astrophysics Data System (ADS)

    Baker, B. I.; Friberg, P. A.

    2014-12-01

    Modern seismic networks typically deploy three-component (3C) sensors, but still fail to utilize all of the information available in the seismograms when performing automated phase picking for real-time event location. In most cases a variation on a short term over long term average threshold detector is used for picking and then an association program is used to assign phase types to the picks. However, the 3C waveforms from an earthquake contain an abundance of information related to the P and S phases in both their polarization and energy partitioning. An approach that has been overlooked and has demonstrated encouraging results is the Component Energy Comparison Method (CECM) by Nagano et al. as published in Geophysics 1989. CECM is well suited to being used in real-time because the calculation is not computationally intensive. Furthermore, the CECM method has fewer tuning variables (3) than traditional pickers in Earthworm such as the Rex Allen algorithm (N=18) or even the Anthony Lomax Filter Picker module (N=5). In addition to computing the CECM detector we study the detector sensitivity by rotating the signal into principal components as well as estimating the P phase onset from a curvature function describing the CECM as opposed to the CECM itself. We present our results implementing this algorithm in a real-time module for Earthworm and show the improved phase picks as compared to the traditional single component pickers using Earthworm.
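
    For intuition about the component-energy idea, the rough sketch below compares sliding-window energy on the vertical and horizontal components of a 3C record and flags the first window where vertical energy dominates; it is a crude stand-in, not the CECM detector function of Nagano et al., and the window length and threshold are arbitrary placeholders.

```python
# Crude component-energy comparison on a 3C seismogram (illustrative only; not the CECM of Nagano et al.).
import numpy as np

def component_energy_ratio(z, n, e, win=50):
    """Sliding-window energy of the vertical component divided by horizontal energy."""
    def win_energy(x):
        c = np.cumsum(np.square(x, dtype=float))
        return c[win:] - c[:-win]
    ez, eh = win_energy(z), win_energy(n) + win_energy(e)
    return ez / np.maximum(eh, 1e-12)

def crude_p_onset(z, n, e, win=50, threshold=3.0):
    """Index of the first window where vertical energy dominates (placeholder threshold)."""
    ratio = component_energy_ratio(z, n, e, win)
    hits = np.flatnonzero(ratio > threshold)
    return int(hits[0] + win) if hits.size else None
```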

  16. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
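
    One generic way to recover relative position and attitude from measured 3-D feature locations and their known body-frame coordinates is a least-squares rigid alignment (the SVD/Kabsch solution). The sketch below illustrates that idea only; it is not one of the POSE algorithms evaluated against the Orbital Express data, and the feature coordinates are invented.

```python
# Generic SVD-based rigid alignment (Kabsch) for pose from 3-D point correspondences.
# Illustrative only; not the specific POSE algorithms tested against Orbital Express data.
import numpy as np

def solve_pose(model_pts, measured_pts):
    """Return rotation R and translation t minimizing ||R @ model + t - measured||."""
    mc, sc = model_pts.mean(axis=0), measured_pts.mean(axis=0)
    H = (model_pts - mc).T @ (measured_pts - sc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = sc - R @ mc
    return R, t

# Hypothetical target: four feature points in the target body frame (metres).
model = np.array([[0.2, 0.0, 0.0], [-0.2, 0.0, 0.0], [0.0, 0.2, 0.0], [0.0, 0.0, 0.1]])
measured = model + np.array([0.0, 0.0, 1.22])       # ~1.22 m range, no rotation, for the demo
print(solve_pose(model, measured))
```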

  17. Hardware-In-The-Loop Testing of Continuous Control Algorithms for a Precision Formation Flying Demonstration Mission

    NASA Technical Reports Server (NTRS)

    Naasz, Bo J.; Burns, Richard D.; Gaylor, David; Higinbotham, John

    2004-01-01

    A sample mission sequence is defined for a low earth orbit demonstration of Precision Formation Flying (PFF). Various guidance navigation and control strategies are discussed for use in the PFF experiment phases. A sample PFF experiment is implemented and tested in a realistic Hardware-in-the-Loop (HWIL) simulation using the Formation Flying Test Bed (FFTB) at NASA's Goddard Space Flight Center.

  18. A general tank test of a model of the hull of the Pem-1 flying boat including a special working chart for the determination of hull performance

    NASA Technical Reports Server (NTRS)

    Dawson, John R

    1938-01-01

    The results of a general tank test of a 1/6 full-size model of the hull of the Pem-1 flying boat (N.A.C.A. model 18) are given in non-dimensional form. In addition to the usual curves, the results are presented in a new form that makes it possible to apply them more conveniently than in the forms previously used. The resistance was compared with that of N.A.C.A. models 11-C and 26 (Sikorsky S-40) and was found to be generally less than the resistance of either.

  19. Sourcebook of locations of geophysical surveys in tunnels and horizontal holes including results of seismic-refraction surveys: Rainier Mesa, Aqueduct Mesa, and Area 16, Nevada Test Site

    SciTech Connect

    Carroll, R.D.; Kibler, J.E.

    1983-01-01

    Seismic refraction surveys have been obtained sporadically in tunnels in zeolitized tuff at the Nevada Test Site since the late 1950's. Commencing in 1967 and continuing to date (1982), extensive measurements of shear- and compressional-wave velocities have been made in five tunnel complexes in Rainier and Aqueduct Mesas and in one tunnel complex in Shoshone Mountain. The results of these surveys to 1980 are compiled in this report. In addition, extensive horizontal drilling was initiated in 1967 in connection with geologic exploration in these tunnel complexes for sites for nuclear weapons tests. Seismic and electrical surveys were conducted in the majority of these holes. The type and location of these tunnel and borehole surveys are indexed in this report. Synthesis of the seismic refraction data indicates a mean compressional-wave velocity near the nuclear device point (WP) of 23 tunnel events of 2430 m/s (7970 f/s) with a range of 1846 to 2753 m/s (6060 to 9030 f/s). The mean shear-wave velocity of 17 tunnel events is 1276 m/s (4190 f/s) with a range of 1140 to 1392 m/s (3740 to 4570 f/s). Experience indicates that these velocity variations are due chiefly to the extent of fracturing and (or) the presence of partially saturated rock in the region of the survey.

  20. Sourcebook of locations of geophysical surveys in tunnels and horizontal holes, including results of seismic refraction surveys, Rainier Mesa, Aqueduct Mesa, and Area 16, Nevada Test Site

    USGS Publications Warehouse

    Carroll, R.D.; Kibler, J.E.

    1983-01-01

    Seismic refraction surveys have been obtained sporadically in tunnels in zeolitized tuff at the Nevada Test Site since the late 1950's. Commencing in 1967 and continuing to date (1982), extensive measurements of shear- and compressional-wave velocities have been made in five tunnel complexes in Rainier and Aqueduct Mesas and in one tunnel complex in Shoshone Mountain. The results of these surveys to 1980 are compiled in this report. In addition, extensive horizontal drilling was initiated in 1967 in connection with geologic exploration in these tunnel complexes for sites for nuclear weapons tests. Seismic and electrical surveys were conducted in the majority of these holes. The type and location of these tunnel and borehole surveys are indexed in this report. Synthesis of the seismic refraction data indicates a mean compressional-wave velocity near the nuclear device point (WP) of 23 tunnel events of 2,430 m/s (7,970 f/s) with a range of 1,846-2,753 m/s (6,060-9,030 f/s). The mean shear-wave velocity of 17 tunnel events is 1,276 m/s (4,190 f/s) with a range of 1,140-1,392 m/s (3,740-4,570 f/s). Experience indicates that these velocity variations are due chiefly to the extent of fracturing and (or) the presence of partially saturated rock in the region of the survey.

  1. Corrective Action Investigation Plan for Corrective Action Unit 529: Area 25 Contaminated Materials, Nevada Test Site, Nevada, Rev. 0, Including Record of Technical Change No. 1

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2003-02-26

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 529, Area 25 Contaminated Materials, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. CAU 529 consists of one Corrective Action Site (25-23-17). For the purpose of this investigation, the Corrective Action Site has been divided into nine parcels based on the separate and distinct releases. A conceptual site model was developed for each parcel to address the translocation of contaminants from each release. The results of this investigation will be used to support a defensible evaluation of corrective action alternatives in the corrective action decision document.

  2. An aerial radiological survey of the Tonopah Test Range including Clean Slate 1,2,3, Roller Coaster, decontamination area, Cactus Springs Ranch target areas. Central Nevada

    SciTech Connect

    Proctor, A.E.; Hendricks, T.J.

    1995-08-01

    An aerial radiological survey was conducted of major sections of the Tonopah Test Range (TTR) in central Nevada from August through October 1993. The survey consisted of aerial measurements of both natural and man-made gamma radiation emanating from the terrestrial surface. The initial purpose of the survey was to locate depleted uranium (detecting ²³⁸U) from projectiles which had impacted on the TTR. The examination of areas near Cactus Springs Ranch (located near the western boundary of the TTR) and an animal burial area near the Double Track site were secondary objectives. When more widespread than expected ²⁴¹Am contamination was found around the Clean Slates sites, the survey was expanded to cover the area surrounding the Clean Slates and also the Double Track site. Results are reported as radiation isopleths superimposed on aerial photographs of the area.

  3. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
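
    A minimal, self-contained genetic-algorithm loop makes the basic concepts concrete: a population of bit strings is scored by a fitness function, parents are chosen with probability proportional to fitness, and offspring are produced by one-point crossover and bit-flip mutation. The objective (counting ones) and all parameters below are placeholders chosen only for illustration.

```python
# Minimal genetic algorithm: selection, one-point crossover, bit-flip mutation (illustrative toy).
import random

def fitness(bits):                 # placeholder objective: maximize the number of 1s
    return sum(bits)

def evolve(n_bits=30, pop_size=40, generations=60, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        def pick():                # fitness-proportional (roulette-wheel) selection
            return random.choices(pop, weights=[s + 1e-9 for s in scores], k=1)[0]
        children = []
        while len(children) < pop_size:
            a, b = pick(), pick()
            cut = random.randrange(1, n_bits)                 # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

print(evolve())
```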

  4. A depth-averaged debris-flow model that includes the effects of evolving dilatancy: II. Numerical predictions and experimental tests.

    USGS Publications Warehouse

    George, David L.; Iverson, Richard M.

    2014-01-01

    We evaluate a new depth-averaged mathematical model that is designed to simulate all stages of debris-flow motion, from initiation to deposition. A companion paper shows how the model’s five governing equations describe simultaneous evolution of flow thickness, solid volume fraction, basal pore-fluid pressure, and two components of flow momentum. Each equation contains a source term that represents the influence of state-dependent granular dilatancy. Here we recapitulate the equations and analyze their eigenstructure to show that they form a hyperbolic system with desirable stability properties. To solve the equations we use a shock-capturing numerical scheme with adaptive mesh refinement, implemented in an open-source software package we call D-Claw. As tests of D-Claw, we compare model output with results from two sets of large-scale debris-flow experiments. One set focuses on flow initiation from landslides triggered by rising pore-water pressures, and the other focuses on downstream flow dynamics, runout, and deposition. D-Claw performs well in predicting evolution of flow speeds, thicknesses, and basal pore-fluid pressures measured in each type of experiment. Computational results illustrate the critical role of dilatancy in linking coevolution of the solid volume fraction and pore-fluid pressure, which mediates basal Coulomb friction and thereby regulates debris-flow dynamics.

  5. Corrective Action Investigation Plan for Corrective Action Unit 536: Area 3 Release Site, Nevada Test Site, Nevada (Rev. 0 / June 2003), Including Record of Technical Change No. 1

    SciTech Connect

    2003-06-27

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's approach to collect the data necessary to evaluate corrective action alternatives (CAAs) appropriate for the closure of Corrective Action Unit (CAU) 536: Area 3 Release Site, Nevada Test Site, Nevada, under the Federal Facility Agreement and Consent Order. Corrective Action Unit 536 consists of a single Corrective Action Site (CAS): 03-44-02, Steam Jenny Discharge. The CAU 536 site is being investigated because existing information on the nature and extent of possible contamination is insufficient to evaluate and recommend corrective action alternatives for CAS 03-44-02. The additional information will be obtained by conducting a corrective action investigation (CAI) prior to evaluating CAAs and selecting the appropriate corrective action for this CAS. The results of this field investigation are to be used to support a defensible evaluation of corrective action alternatives in the corrective action decision document. Record of Technical Change No. 1 is dated 3-2004.

  6. Corrective Action Investigation Plan for Corrective Action Unit 516: Septic Systems and Discharge Points, Nevada Test Site, Nevada, Rev. 0, Including Record of Technical Change No. 1

    SciTech Connect

    2003-04-28

    This Corrective Action Investigation Plan (CAIP) contains the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Sites Office's (NNSA/NSO's) approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 516, Septic Systems and Discharge Points, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. CAU 516 consists of six Corrective Action Sites: 03-59-01, Building 3C-36 Septic System; 03-59-02, Building 3C-45 Septic System; 06-51-01, Sump Piping, 06-51-02, Clay Pipe and Debris; 06-51-03, Clean Out Box and Piping; and 22-19-04, Vehicle Decontamination Area. Located in Areas 3, 6, and 22 of the NTS, CAU 516 is being investigated because disposed waste may be present without appropriate controls, and hazardous and/or radioactive constituents may be present or migrating at concentrations and locations that could potentially pose a threat to human health and the environment. Existing information and process knowledge on the expected nature and extent of contamination of CAU 516 are insufficient to select preferred corrective action alternatives; therefore, additional information will be obtained by conducting a corrective action investigation. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document. Record of Technical Change No. 1 is dated 3/2004.

  7. Control Algorithms and Simulated Environment Developed and Tested for Multiagent Robotics for Autonomous Inspection of Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Wong, Edmond

    2005-01-01

    The NASA Glenn Research Center and academic partners are developing advanced multiagent robotic control algorithms that will enable the autonomous inspection and repair of future propulsion systems. In this application, on-wing engine inspections will be performed autonomously by large groups of cooperative miniature robots that will traverse the surfaces of engine components to search for damage. The eventual goal is to replace manual engine inspections that require expensive and time-consuming full engine teardowns and allow the early detection of problems that would otherwise result in catastrophic component failures. As a preliminary step toward the long-term realization of a practical working system, researchers are developing the technology to implement a proof-of-concept testbed demonstration. In a multiagent system, the individual agents are generally programmed with relatively simple controllers that define a limited set of behaviors. However, these behaviors are designed in such a way that, through the localized interaction among individual agents and between the agents and the environment, they result in self-organized, emergent group behavior that can solve a given complex problem, such as cooperative inspection. One advantage to the multiagent approach is that it allows for robustness and fault tolerance through redundancy in task handling. In addition, the relatively simple agent controllers demand minimal computational capability, which in turn allows for greater miniaturization of the robotic agents.

  8. Updated treatment algorithm of pulmonary arterial hypertension.

    PubMed

    Galiè, Nazzareno; Corris, Paul A; Frost, Adaani; Girgis, Reda E; Granton, John; Jing, Zhi Cheng; Klepetko, Walter; McGoon, Michael D; McLaughlin, Vallerie V; Preston, Ioana R; Rubin, Lewis J; Sandoval, Julio; Seeger, Werner; Keogh, Anne

    2013-12-24

    The demands on a pulmonary arterial hypertension (PAH) treatment algorithm are multiple and in some ways conflicting. The treatment algorithm usually includes different types of recommendations with varying degrees of scientific evidence. In addition, the algorithm is required to be comprehensive but not too complex, informative yet simple and straightforward. The types of information in the treatment algorithm are heterogeneous, including clinical, hemodynamic, medical, interventional, pharmacological and regulatory recommendations. Stakeholders (or users) including physicians from various specialties and with variable expertise in PAH, nurses, patients and patients' associations, healthcare providers, regulatory agencies and industry are often interested in the PAH treatment algorithm for different reasons. These are the considerable challenges faced when proposing appropriate updates to the current evidence-based treatment algorithm. The current treatment algorithm may be divided into 3 main areas: 1) general measures, supportive therapy, referral strategy, acute vasoreactivity testing and chronic treatment with calcium channel blockers; 2) initial therapy with approved PAH drugs; and 3) clinical response to the initial therapy, combination therapy, balloon atrial septostomy, and lung transplantation. All three sections will be revisited highlighting information newly available in the past 5 years and proposing updates where appropriate. The European Society of Cardiology grades of recommendation and levels of evidence will be adopted to rank the proposed treatments. PMID:24355643

  9. Mapping of Schistosomiasis and Soil-Transmitted Helminths in Namibia: The First Large-Scale Protocol to Formally Include Rapid Diagnostic Tests

    PubMed Central

    Sousa-Figueiredo, José Carlos; Stanton, Michelle C.; Katokele, Stark; Arinaitwe, Moses; Adriko, Moses; Balfour, Lexi; Reiff, Mark; Lancaster, Warren; Noden, Bruce H.; Bock, Ronnie; Stothard, J. Russell

    2015-01-01

    Background Namibia is now ready to begin mass drug administration of praziquantel and albendazole against schistosomiasis and soil-transmitted helminths, respectively. Although historical data identifies areas of transmission of these neglected tropical diseases (NTDs), there is a need to update epidemiological data. For this reason, Namibia adopted a new protocol for mapping of schistosomiasis and geohelminths, formally integrating rapid diagnostic tests (RDTs) for infections and morbidity. In this article, we explain the protocol in detail, and introduce the concept of ‘mapping resolution’, as well as present results and treatment recommendations for northern Namibia. Methods/Findings/Interpretation This new protocol allowed a large sample to be surveyed (N = 17 896 children from 299 schools) at relatively low cost (7 USD per person mapped) and very quickly (28 working days). All children were analysed by RDTs, but only a sub-sample was also diagnosed by light microscopy. Overall prevalence of schistosomiasis in the surveyed areas was 9.0%, highly associated with poorer access to potable water (OR = 1.5, P<0.001) and defective (OR = 1.2, P<0.001) or absent sanitation infrastructure (OR = 2.0, P<0.001). Overall prevalence of geohelminths, more particularly hookworm infection, was 12.2%, highly associated with presence of faecal occult blood (OR = 1.9, P<0.001). Prevalence maps were produced and hot spots identified to better guide the national programme in drug administration, as well as targeted improvements in water, sanitation and hygiene. The RDTs employed (circulating cathodic antigen and microhaematuria for Schistosoma mansoni and S. haematobium, respectively) performed well, with sensitivities above 80% and specificities above 95%. Conclusion/Significance This protocol is cost-effective and sensitive to budget limitations and the potential economic and logistical strains placed on the national Ministries of Health. Here we present a high resolution map

  10. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
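
    The conventional simulated-annealing loop described above can be written compactly. The sketch below implements only that conventional variant, not the recursive-branching extension, on a placeholder one-dimensional objective, with a geometric cooling schedule and a candidate-selection region that shrinks as the run proceeds.

```python
# Conventional simulated annealing as described above (not the RBSA variant); toy objective.
import math
import random

def objective(x):                                  # placeholder: roughly minimized near x = 2.0
    return (x - 2.0) ** 2 + 0.3 * math.sin(8.0 * x)

def simulated_annealing(lo=-5.0, hi=5.0, t0=1.0, cooling=0.97, steps=2000):
    x = random.uniform(lo, hi)                     # random starting configuration
    best, temp, radius = x, t0, (hi - lo) / 2.0
    for _ in range(steps):
        cand = min(hi, max(lo, x + random.uniform(-radius, radius)))
        delta = objective(cand) - objective(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = cand                               # accept better moves, or worse ones with P = exp(-delta/T)
        if objective(x) < objective(best):
            best = x
        temp *= cooling                            # lower the annealing temperature
        radius *= 0.999                            # shrink the region from which candidates are drawn
    return best

print(simulated_annealing())
```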

  11. Algorithms and analysis for underwater vehicle plume tracing.

    SciTech Connect

    Byrne, Raymond Harry; Savage, Elizabeth L.; Hurtado, John Edward; Eskridge, Steven E.

    2003-07-01

    The goal of this research was to develop and demonstrate cooperative 3-D plume tracing algorithms for miniature autonomous underwater vehicles. Applications for this technology include Lost Asset and Survivor Location Systems (L-SALS) and Ship-in-Port Patrol and Protection (SP3). This research was a joint effort that included Nekton Research, LLC, Sandia National Laboratories, and Texas A&M University. Nekton Research developed the miniature autonomous underwater vehicles while Sandia and Texas A&M developed the 3-D plume tracing algorithms. This report describes the plume tracing algorithm and presents test results from successful underwater testing with pseudo-plume sources.

  12. Design Science Research toward Designing/Prototyping a Repeatable Model for Testing Location Management (LM) Algorithms for Wireless Networking

    ERIC Educational Resources Information Center

    Peacock, Christopher

    2012-01-01

    The purpose of this research effort was to develop a model that provides repeatable Location Management (LM) testing using a network simulation tool, QualNet version 5.1 (2011). The model will provide current and future protocol developers a framework to simulate stable protocol environments for development. This study used the Design Science…

  13. Quadrupole Alignment and Trajectory Correction for Future Linear Colliders: SLC Tests of a Dispersion-Free Steering Algorithm

    SciTech Connect

    Assmann, R

    2004-06-08

    The feasibility of future linear colliders depends on achieving very tight alignment and steering tolerances. All proposals (NLC, JLC, CLIC, TESLA and S-BAND) currently require a total emittance growth in the main linac of less than 30-100% [1]. This should be compared with a 100% emittance growth in the much smaller SLC linac [2]. Major advances in alignment and beam steering techniques beyond those used in the SLC are necessary for the next generation of linear colliders. In this paper, we present an experimental study of quadrupole alignment with a dispersion-free steering algorithm. A closely related method (wakefield-free steering) takes into account wakefield effects [3]. However, this method cannot be studied at the SLC. The requirements for future linear colliders lead to new and unconventional ideas about alignment and beam steering. For example, no dipole correctors are foreseen for the standard trajectory correction in the NLC [4]; beam steering will be done by moving the quadrupole positions with magnet movers. This illustrates the close symbiosis between alignment, beam steering and beam dynamics that will emerge. It is no longer possible to consider the accelerator alignment as static with only a few surveys and realignments per year. The alignment in future linear colliders will be a dynamic process in which the whole linac, with thousands of beam-line elements, is aligned in a few hours or minutes, while the required accuracy of about 5 μm for the NLC quadrupole alignment [4] is a factor of 20 higher than in existing accelerators. The major task in alignment and steering is the accurate determination of the optimum beam-line position. Ideally one would like all elements to be aligned along a straight line. However, this is not practical. Instead a ''smooth curve'' is acceptable as long as its wavelength is much longer than the betatron wavelength of the accelerated beam. Conventional alignment methods are limited in accuracy by errors in the survey

  14. Outband Sensing-Based Dynamic Frequency Selection (DFS) Algorithm without Full DFS Test in IEEE 802.11h Protocol

    NASA Astrophysics Data System (ADS)

    Jeung, Jaemin; Jeong, Seungmyeong; Lim, Jaesung

    We propose an outband sensing-based IEEE 802.11h protocol without a full dynamic frequency selection (DFS) test. This scheme has two features. First, every station performs cooperative outband sensing instead of inband sensing during a quiet period. Second, as soon as the current channel becomes bad, every station immediately hops to a good channel using the result of the outband sensing. Simulations show that the proposed scheme increases network throughput relative to the legacy IEEE 802.11h.
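
    A toy rendering of the decision logic described above: each station keeps the latest cooperative outband-sensing report and, as soon as the current channel's quality falls below a threshold, switches to the best-reported alternative. The quality metric, threshold, and data structures are invented for illustration and are not part of IEEE 802.11h or the authors' protocol.

```python
# Toy channel-switch logic inspired by the outband-sensing idea (invented metrics and thresholds).
QUALITY_THRESHOLD = 0.4   # placeholder: below this, the current channel is considered "bad"

def next_channel(current, quality, sensing_report):
    """Return the channel to use given the latest cooperative outband-sensing report.

    sensing_report maps channel id -> estimated quality in [0, 1].
    """
    if quality >= QUALITY_THRESHOLD:
        return current                                   # stay on the current channel
    candidates = {ch: q for ch, q in sensing_report.items() if ch != current}
    return max(candidates, key=candidates.get, default=current)

print(next_channel(52, 0.2, {36: 0.9, 44: 0.7, 52: 0.2}))  # -> 36
```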

  15. Description of nuclear systems with a self-consistent configuration-mixing approach: Theory, algorithm, and application to the 12C test nucleus

    NASA Astrophysics Data System (ADS)

    Robin, C.; Pillet, N.; Peña Arteaga, D.; Berger, J.-F.

    2016-02-01

    Background: Although self-consistent multiconfiguration methods have been used for decades to address the description of atomic and molecular many-body systems, only a few trials have been made in the context of nuclear structure. Purpose: This work aims at the development of such an approach to describe in a unified way various types of correlations in nuclei in a self-consistent manner where the mean-field is improved as correlations are introduced. The goal is to reconcile the usually set-apart shell-model and self-consistent mean-field methods. Method: This approach is referred to as "variational multiparticle-multihole configuration mixing method." It is based on a double variational principle which yields a set of two coupled equations that determine at the same time the expansion coefficients of the many-body wave function and the single-particle states. The solution of this problem is obtained by building a doubly iterative numerical algorithm. Results: The formalism is derived and discussed in a general context, starting from a three-body Hamiltonian. Links to existing many-body techniques such as the formalism of Green's functions are established. First applications are done using the two-body D1S Gogny effective force. The numerical procedure is tested on the 12C nucleus to study the convergence features of the algorithm in different contexts. Ground-state properties as well as single-particle quantities are analyzed, and the description of the first 2+ state is examined. Conclusions: The self-consistent multiparticle-multihole configuration mixing method is fully applied for the first time to the description of a test nucleus. This study makes it possible to validate our numerical algorithm and leads to encouraging results. To test the method further, we will realize in the second article of this series a systematic description of more nuclei and observables obtained by applying the newly developed numerical procedure with the same Gogny force. As

  16. Atmospheric Correction of Ocean Color Imagery: Test of the Spectral Optimization Algorithm with the Sea-Viewing Wide Field-of-View Sensor.

    PubMed

    Chomko, R M; Gordon, H R

    2001-06-20

    We implemented the spectral optimization algorithm [SOA; Appl. Opt. 37, 5560 (1998)] in an image-processing environment and tested it with Sea-viewing Wide Field-of-View Sensor (SeaWiFS) imagery from the Middle Atlantic Bight and the Sargasso Sea. We compared the SOA and the standard SeaWiFS algorithm on two days that had significantly different atmospheric turbidities but, because of the location and time of the year, nearly the same water properties. The SOA-derived pigment concentration showed excellent continuity over the two days, with the relative difference in pigments exceeding 10% only in regions that are characteristic of high advection. The continuity in the derived water-leaving radiances at 443 and 555 nm was also within ~10%. There was no obvious correlation between the relative differences in pigments and the aerosol concentration. In contrast, standard processing showed poor continuity in derived pigments over the two days, with the relative differences correlating strongly with atmospheric turbidity. SOA-derived atmospheric parameters suggested that the retrieved ocean and atmospheric reflectances were decoupled on the more turbid day. On the clearer day, for which the aerosol concentration was so low that relatively large changes in aerosol properties resulted in only small changes in aerosol reflectance, water patterns were evident in the aerosol properties. This result implies that SOA-derived atmospheric parameters cannot be accurate in extremely clear atmospheres.

  17. Using the TokSys Modeling and Simulation Environment to Design, Test and Implement Plasma Control Algorithms on DIII-D

    NASA Astrophysics Data System (ADS)

    Hyatt, A. W.; Welander, A. S.; Eidietis, N. W.; Lanctot, M. J.; Humphreys, D. A.

    2014-10-01

    The DIII-D tokamak has 18 independent poloidal field (PF) shaping coils and an independent Ohmic transformer coil system. This gives great plasma shaping flexibility and freedom but requires a complex control capability that imposes some form of constraint so that a given plasma shape and specification leads to uniquely determined PF shaping currents. One such constraint used is to connect most PF coils in parallel to a common bus, forcing the sum of those PF currents to be zero. This constraint has many benefits, but also leads to instability in which currents of opposite sign in adjacent PF coils can mutually increase, leading to local shape distortion when using the standard shape control algorithms. We will give examples of improved control algorithms that were extensively tested using the TokSys simulation suite available at DIII-D and then successfully implemented in practice on DIII-D. In one case, using TokSys simulations to develop a control solution for a long-sought plasma equilibrium saved several days of expensive tokamak operation time. Work supported by the US Department of Energy under DE-FC02-04ER54698.

  18. The Soil Moisture Active Passive Mission (SMAP) Science Data Products: Results of Testing with Field Experiment and Algorithm Testbed Simulation Environment Data

    NASA Technical Reports Server (NTRS)

    Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.

    2010-01-01

    Talk outline: (1) derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications; (2) data products and latencies; (3) algorithm highlights; (4) the SMAP Algorithm Testbed; and (5) SMAP Working Groups and community engagement.

  19. Semioptimal practicable algorithmic cooling

    NASA Astrophysics Data System (ADS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon’s entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
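
    For rough intuition about the cooling levels being compared, the sketch below iterates the standard bias-boost formula for ideal 3-bit compression, eps' = (3*eps - eps^3)/2, an elementary step often used in analyses of PAC-style cooling; it is not the SOPAC procedure itself, and the target polarizations are taken from the abstract only as examples.

```python
# Bias growth under repeated ideal 3-bit compression, eps' = (3*eps - eps**3) / 2.
# Illustrative analysis aid only; this is not the SOPAC algorithm itself.
def boost(eps):
    return (3.0 * eps - eps ** 3) / 2.0

def steps_needed(initial_eps, target_eps):
    """Count ideal compression steps needed to reach a target polarization (bias)."""
    eps, steps = initial_eps, 0
    while eps < target_eps and steps < 100:
        eps = boost(eps)
        steps += 1
    return steps, eps

print(steps_needed(0.01, 0.60))   # starting from 1% polarization
print(steps_needed(0.10, 0.60))   # starting from 10% polarization
```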

  20. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  1. Exact Algorithms for Coloring Graphs While Avoiding Monochromatic Cycles

    NASA Astrophysics Data System (ADS)

    Talla Nobibon, Fabrice; Hurkens, Cor; Leus, Roel; Spieksma, Frits C. R.

    We consider the problem of deciding whether a given directed graph can be vertex partitioned into two acyclic subgraphs. Applications of this problem include testing rationality of collective consumption behavior, a subject in micro-economics. We identify classes of directed graphs for which the problem is easy and prove that the existence of a constant factor approximation algorithm is unlikely for an optimization version which maximizes the number of vertices that can be colored using two colors while avoiding monochromatic cycles. We present three exact algorithms, namely an integer-programming algorithm based on cycle identification, a backtracking algorithm, and a branch-and-check algorithm. We compare these three algorithms both on real-life instances and on randomly generated graphs. We find that for the latter set of graphs, every algorithm solves instances of considerable size within a few seconds; however, the CPU time of the integer-programming algorithm increases with the number of vertices in the graph while that of the two other procedures does not. For every algorithm, we also study empirically the transition from a high to a low probability of a YES answer as a function of a parameter of the problem. For real-life instances, the integer-programming algorithm fails to solve the largest instance after one hour while the other two algorithms solve it in about ten minutes.
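
    The decision problem itself (can the vertices of a directed graph be 2-colored so that each color class induces an acyclic subgraph?) admits a very small backtracking sketch, shown below; it is a naive illustration, not the integer-programming, backtracking, or branch-and-check implementations compared in the record, and it will not scale to the instance sizes reported there.

```python
# Naive backtracking test: can the vertices be 2-colored so each color class induces a DAG?
# Illustrative only; far simpler (and slower) than the exact algorithms compared in the record.
def is_acyclic(vertices, edges):
    """Kahn's algorithm restricted to the subgraph induced by `vertices`."""
    vs = set(vertices)
    indeg = {v: 0 for v in vs}
    adj = {v: [] for v in vs}
    for u, w in edges:
        if u in vs and w in vs:
            adj[u].append(w)
            indeg[w] += 1
    queue = [v for v in vs if indeg[v] == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for w in adj[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == len(vs)

def two_color_acyclic(nodes, edges, coloring=None, i=0):
    coloring = {} if coloring is None else coloring
    if i == len(nodes):
        return dict(coloring)                      # every vertex colored: success
    for c in (0, 1):
        coloring[nodes[i]] = c
        same = [v for v in coloring if coloring[v] == c]
        if is_acyclic(same, edges):                # prune as soon as a color class gains a cycle
            result = two_color_acyclic(nodes, edges, coloring, i + 1)
            if result is not None:
                return result
        del coloring[nodes[i]]
    return None

# Tiny example: a directed 3-cycle needs both colors.
print(two_color_acyclic(["a", "b", "c"], [("a", "b"), ("b", "c"), ("c", "a")]))
```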

  2. 34 CFR 303.15 - Include; including.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false Include; including. 303.15 Section 303.15 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF SPECIAL EDUCATION AND REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION EARLY INTERVENTION PROGRAM FOR INFANTS AND TODDLERS...

  3. Soil moisture estimation by airborne active and passive microwave remote sensing: A test-bed for SMAP fusion algorithms

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Bogena, Heye; Jagdhuber, Thomas; Hajnsek, Irena; Horn, Ralf; Reigber, Andreas; Hasan, Sayeh; Rüdiger, Christoph; Jaeger, Marc; Vereecken, Harry

    2014-05-01

    The objective of the NASA Soil Moisture Active & Passive (SMAP) mission is to provide global measurements of soil moisture and its freeze/thaw state. The SMAP launch is currently planned for 2014-2015. The SMAP measurement approach is to integrate L-band radar and L-band radiometer as a single observation system combining the respective strengths of active and passive remote sensing for enhanced soil moisture mapping. The radar and radiometer measurements can be effectively combined to derive soil moisture maps that approach the accuracy of radiometer-only retrievals, but with a higher resolution (being able to approach the radar resolution under some conditions). Aircraft and tower-based instruments will be a key part of the SMAP validation program. Here, we present an airborne campaign in the Rur catchment in Germany, in which the passive L-band system Polarimetric L-band Multi-beam Radiometer (PLMR2) and the active L-band system DLR F-SAR were flown on six dates in 2013. The flights covered the full heterogeneity of the area under investigation, i.e. all types of land cover and experimental monitoring sites. These data are used as a test-bed for the analysis of existing and development of new active-passive fusion techniques. A synergistic use of the two signals can help to decouple soil moisture effects from the effects of vegetation (or roughness) in a better way than in the case of a single instrument. In this study, we present and evaluate three approaches for the fusion of active and passive microwave records for an enhanced representation of the soil moisture status: i) estimation of soil moisture by passive sensor data and subsequent disaggregation by active sensor backscatter data, ii) disaggregation of passive microwave brightness temperature by active microwave backscatter and subsequent inversion to soil moisture, and iii) fusion of two single-source soil moisture products from radar and radiometer.
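
    As a deliberately simplified illustration of approach (i), the sketch below spreads one coarse radiometer-derived soil-moisture value over fine-scale radar pixels in proportion to their backscatter anomaly; the coupling constant and value ranges are invented placeholders, and this is not one of the fusion algorithms evaluated in the study.

```python
# Deliberately simplified active/passive disaggregation sketch (not the algorithms evaluated above):
# distribute one coarse radiometer-derived soil-moisture value over fine radar pixels in
# proportion to their backscatter anomaly, preserving the coarse-pixel mean.
import numpy as np

def disaggregate(coarse_sm, fine_backscatter_db, sensitivity=0.02):
    """coarse_sm: m3/m3 for the coarse cell; fine_backscatter_db: fine-scale sigma0 values (dB).

    `sensitivity` (m3/m3 per dB) is a placeholder coupling constant, not a calibrated value.
    """
    anomaly = fine_backscatter_db - fine_backscatter_db.mean()
    fine_sm = coarse_sm + sensitivity * anomaly
    return np.clip(fine_sm, 0.0, 0.6)              # keep values in a plausible range

sigma0 = np.array([-14.0, -12.5, -11.0, -13.0])    # hypothetical fine-scale backscatter (dB)
print(disaggregate(0.25, sigma0))
```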

  4. Identifying Risk Factors for Recent HIV Infection in Kenya Using a Recent Infection Testing Algorithm: Results from a Nationally Representative Population-Based Survey

    PubMed Central

    Kim, Andrea A.; Parekh, Bharat S.; Umuro, Mamo; Galgalo, Tura; Bunnell, Rebecca; Makokha, Ernest; Dobbs, Trudy; Murithi, Patrick; Muraguri, Nicholas; De Cock, Kevin M.; Mermin, Jonathan

    2016-01-01

    Introduction A recent infection testing algorithm (RITA) that can distinguish recent from long-standing HIV infection can be applied to nationally representative population-based surveys to characterize and identify risk factors for recent infection in a country. Materials and Methods We applied a RITA using the Limiting Antigen Avidity Enzyme Immunoassay (LAg) on stored HIV-positive samples from the 2007 Kenya AIDS Indicator Survey. The case definition for recent infection included testing recent on LAg and having no evidence of antiretroviral therapy use. Multivariate analysis was conducted to determine factors associated with recent and long-standing infection compared to HIV-uninfected persons. All estimates were weighted to adjust for sampling probability and nonresponse. Results Of 1,025 HIV-antibody-positive specimens, 64 (6.2%) met the case definition for recent infection and 961 (93.8%) met the case definition for long-standing infection. Compared to HIV-uninfected individuals, factors associated with higher adjusted odds of recent infection were living in Nairobi (adjusted odds ratio [AOR] 11.37; confidence interval [CI] 2.64–48.87) and Nyanza (AOR 4.55; CI 1.39–14.89) provinces compared to Western province; being widowed (AOR 8.04; CI 1.42–45.50) or currently married (AOR 6.42; CI 1.55–26.58) compared to being never married; having had ≥ 2 sexual partners in the last year (AOR 2.86; CI 1.51–5.41); not using a condom at last sex in the past year (AOR 1.61; CI 1.34–1.93); reporting a sexually transmitted infection (STI) diagnosis or symptoms of STI in the past year (AOR 1.97; CI 1.05–8.37); and being aged <30 years with: 1) HSV-2 infection (AOR 8.84; CI 2.62–29.85), 2) male genital ulcer disease (AOR 8.70; CI 2.36–32.08), or 3) lack of male circumcision (AOR 17.83; CI 2.19–144.90). Compared to HIV-uninfected persons, factors associated with higher adjusted odds of long-standing infection included living in Coast (AOR 1.55; CI 1.04–2

  5. Sampling Within k-Means Algorithm to Cluster Large Datasets

    SciTech Connect

    Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George

    2011-08-01

    Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study on both more varied test datasets and real weather datasets; this is especially important because this preliminary study was performed on rather tame datasets. Future studies should also analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes; we would like to analyze this further to see how accurate the algorithm is for even lower sample sizes, and, by manipulating the width and confidence level, to find the lowest sample sizes for which the algorithm remains acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data become more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while being remarkably more efficient. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
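
    A minimal sketch of the sampling idea described above, assuming scikit-learn's KMeans as the underlying clusterer: fit the centroids on a small random sample, then assign every point to its nearest centroid. The sample fraction and synthetic data are illustrative choices, not the authors' protocol.

```python
import numpy as np
from sklearn.cluster import KMeans

def sampled_kmeans(X, k, sample_frac=0.05, seed=0):
    """Fit k-means on a random sample, then assign all points to the learned
    centroids, trading a little accuracy for a large runtime saving."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=max(k, int(sample_frac * len(X))), replace=False)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X[idx])
    return km.cluster_centers_, km.predict(X)   # cheap assignment for the full set

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 1.0, size=(100_000, 3)) for c in (0, 6, 12)])
centers, labels = sampled_kmeans(X, k=3)
```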

  6. Space-Based Near-Infrared CO2 Measurements: Testing the Orbiting Carbon Observatory Retrieval Algorithm and Validation Concept Using SCIAMACHY Observations over Park Falls, Wisconsin

    NASA Technical Reports Server (NTRS)

    Bosch, H.; Toon, G. C.; Sen, B.; Washenfelder, R. A.; Wennberg, P. O.; Buchwitz, M.; deBeek, R.; Burrows, J. P.; Crisp, D.; Christi, M.; Connor, B. J.; Natraj, V.; Yung, Y. L.

    2006-01-01

    test of the OCO retrieval algorithm and validation concept using NIR spectra measured from space. Finally, we argue that significant improvements in precision and accuracy could be obtained from a dedicated CO2 instrument such as OCO, which has much higher spectral and spatial resolutions than SCIAMACHY. These measurements would then provide critical data for improving our understanding of the carbon cycle and carbon sources and sinks.

  7. A flight management algorithm and guidance for fuel-conservative descents in a time-based metered air traffic environment: Development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1984-01-01

    A simple airborne flight management descent algorithm designed to define a flight profile subject to the constraints of using idle thrust, a clean airplane configuration (landing gear up, flaps zero, and speed brakes retracted), and fixed-time end conditions was developed and flight tested in the NASA TSRV B-737 research airplane. The research test flights, conducted in the Denver ARTCC automated time-based metering LFM/PD ATC environment, demonstrated that time guidance and control in the cockpit was acceptable to the pilots and ATC controllers and resulted in arrival of the airplane over the metering fix with standard deviations in airspeed error of 6.5 knots, in altitude error of 23.7 m (77.8 ft), and in arrival time accuracy of 12 sec. These accuracies indicated a good representation of airplane performance and wind modeling. Fuel savings will be obtained on a fleet-wide basis through a reduction of the time error dispersions at the metering fix and on a single-airplane basis by presenting the pilot with guidance for a fuel-efficient descent.

  8. CYCLOPS: A mobile robotic platform for testing and validating image processing and autonomous navigation algorithms in support of artificial vision prostheses.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2009-12-01

    While artificial vision prostheses are quickly becoming a reality, actual testing time with visual prosthesis carriers is at a premium. Moreover, it is helpful to have a more realistic functional approximation of a blind subject. Instead of a normal subject with a healthy retina looking at a low-resolution (pixelated) image on a computer monitor or head-mounted display, a more realistic approximation is achieved by employing a subject-independent mobile robotic platform that uses a pixelated view as its sole visual input for navigation purposes. We introduce CYCLOPS: an AWD, remote controllable, mobile robotic platform that serves as a testbed for real-time image processing and autonomous navigation systems for the purpose of enhancing the visual experience afforded by visual prosthesis carriers. Complete with wireless Internet connectivity and a fully articulated digital camera with wireless video link, CYCLOPS supports both interactive tele-commanding via joystick, and autonomous self-commanding. Due to its onboard computing capabilities and extended battery life, CYCLOPS can perform complex and numerically intensive calculations, such as image processing and autonomous navigation algorithms, in addition to interfacing to additional sensors. Its Internet connectivity renders CYCLOPS a worldwide accessible testbed for researchers in the field of artificial vision systems. CYCLOPS enables subject-independent evaluation and validation of image processing and autonomous navigation systems with respect to the utility and efficiency of supporting and enhancing visual prostheses, while potentially reducing to a necessary minimum the need for valuable testing time with actual visual prosthesis carriers. PMID:19651459

  10. Experiments with conjugate gradient algorithms for homotopy curve tracking

    NASA Technical Reports Server (NTRS)

    Irani, Kashmira M.; Ribbens, Calvin J.; Watson, Layne T.; Kamat, Manohar P.; Walker, Homer F.

    1991-01-01

    There are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. The HOMPACK algorithms for sparse Jacobian matrices use a preconditioned conjugate gradient algorithm for the computation of the kernel of the homotopy Jacobian matrix, a required linear algebra step for homotopy curve tracking. Here, variants of the conjugate gradient algorithm are implemented in the context of homotopy curve tracking and compared with Craig's preconditioned conjugate gradient method used in HOMPACK. The test problems used include actual large scale, sparse structural mechanics problems.
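
    The sketch below only illustrates the basic homotopy idea: track the zero curve of the convex homotopy H(x, t) = t*F(x) + (1 - t)*(x - x0) from t = 0 to t = 1 with a fixed-step predictor and a Newton corrector. HOMPACK itself uses arclength continuation and, for sparse problems, preconditioned conjugate gradient kernel computations; the example system F, the step count, and the dense finite-difference Jacobian here are assumptions for demonstration.

```python
import numpy as np

def F(x):
    """Example nonlinear system whose zero we want (chosen to be well behaved)."""
    return np.array([x[0] + 0.5 * np.sin(x[1]) - 1.0,
                     x[1] + 0.5 * np.cos(x[0]) - 1.0])

def num_jac(g, x, h=1e-7):
    """Forward-difference Jacobian of g at x."""
    gx, n = g(x), len(x)
    J = np.zeros((n, n))
    for j in range(n):
        xp = x.copy(); xp[j] += h
        J[:, j] = (g(xp) - gx) / h
    return J

def track_homotopy(F, x0, steps=50, newton_iters=5):
    """Follow the zero curve of H(x,t) = t*F(x) + (1-t)*(x - x0) to t = 1."""
    x = x0.copy()
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        H = lambda y: t * F(y) + (1.0 - t) * (y - x0)
        for _ in range(newton_iters):            # Newton corrector at fixed t
            x = x - np.linalg.solve(num_jac(H, x), H(x))
    return x

root = track_homotopy(F, x0=np.zeros(2))
print(root, F(root))    # F(root) should be ~0
```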

  11. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms are playing an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence, including the fish swarm algorithm, artificial bee colony, bacterial foraging algorithm and particle swarm optimization. Benchmark images are then tested to show the differences among these four algorithms in segmentation accuracy, time consumption, convergence and robustness to salt-and-pepper and Gaussian noise. Through these comparisons, this paper gives a qualitative analysis of the performance differences among the four algorithms. The conclusions should provide useful guidance for practical image segmentation.
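
    As one concrete example of a swarm-intelligence thresholding scheme, the sketch below uses a small particle swarm to search for the gray level that maximizes Otsu's between-class variance. The inertia and acceleration constants, swarm size, and synthetic histogram are illustrative assumptions, not the settings evaluated in the paper.

```python
import numpy as np

def between_class_variance(hist, t):
    """The quantity Otsu's method maximizes, for threshold t in [1, 255]."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = (np.arange(t) * p[:t]).sum() / w0
    mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def pso_threshold(hist, n_particles=20, iters=50, seed=0):
    """Particle swarm search over candidate thresholds."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(1, 255, n_particles)     # positions (candidate thresholds)
    v = np.zeros(n_particles)                # velocities
    pbest = x.copy()
    pbest_f = np.array([between_class_variance(hist, int(xi)) for xi in x])
    gbest = pbest[pbest_f.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, 1, 255)
        f = np.array([between_class_variance(hist, int(xi)) for xi in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmax()]
    return int(gbest)

# bimodal synthetic image: the found threshold should fall between the two modes
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)]).clip(0, 255)
hist = np.histogram(pixels, bins=256, range=(0, 256))[0]
print(pso_threshold(hist))
```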

  12. Testing a Variety of Encryption Technologies

    SciTech Connect

    Henson, T J

    2001-04-09

    Review and test speeds of various encryption technologies using Entrust Software. Multiple encryption algorithms are included in the product. Algorithms tested were IDEA, CAST, DES, and RC2. The test consisted of taking a 7.7 MB Word document file, which included complex graphics, and timing encryption, decryption, and signing. Encryption is discussed in the GIAC Kickstart section: Information Security: The Big Picture--Part VI.

  13. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable support software to exploit emerging parallel computing technologies and enable application of scalable HPCs to various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) the HPCMP structure as it relates to HEATR, (2) the overall structure of the HEATR project, (3) preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) project management issues and lessons learned.

  14. Performance of fusion algorithms for computer-aided detection and classification of mines in very shallow water obtained from testing in navy Fleet Battle Exercise-Hotel 2000

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William; Kerfoot, Ian

    2001-10-01

    The performance of Computer Aided Detection/Computer Aided Classification (CAD/CAC) fusion algorithms on side-scan sonar images was evaluated using data taken at the Navy's Fleet Battle Exercise-Hotel held in Panama City, Florida, in August 2000. A 2-of-3 binary fusion algorithm is shown to provide robust performance. The algorithm accepts the classification decisions and associated contact locations from three different CAD/CAC algorithms, clusters the contacts based on Euclidean distance, and then declares a valid target when a clustered contact is declared by at least 2 of the 3 individual algorithms. This simple binary fusion provided a 96 percent probability of correct classification at a false alarm rate of 0.14 false alarms per image per side. The performance represented a 3.8:1 reduction in false alarms over the best performing single CAD/CAC algorithm, with no loss in probability of correct classification.
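
    A minimal sketch of the 2-of-3 rule as described: contacts from the three CAD/CAC algorithms are clustered by Euclidean distance, and a target is declared when at least two distinct algorithms contribute to a cluster. The cluster radius, the greedy clustering strategy, and the contact format are assumptions for illustration.

```python
import numpy as np

def fuse_2_of_3(contacts, cluster_radius=25.0):
    """contacts: list of (algorithm_id, x, y). Returns fused target locations."""
    unused, targets = list(contacts), []
    while unused:
        alg, x, y = unused.pop(0)
        cluster, rest = [(alg, x, y)], []
        for c in unused:
            (cluster if np.hypot(c[1] - x, c[2] - y) <= cluster_radius else rest).append(c)
        unused = rest
        if len({c[0] for c in cluster}) >= 2:          # at least 2 distinct algorithms agree
            targets.append((np.mean([c[1] for c in cluster]),
                            np.mean([c[2] for c in cluster])))
    return targets

contacts = [("A", 100, 200), ("B", 104, 203), ("C", 400, 50), ("A", 402, 52)]
print(fuse_2_of_3(contacts))   # two fused targets: one near (102, 201.5), one near (401, 51)
```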

  15. Evaluation of the expected moments algorithm and a multiple low-outlier test for flood frequency analysis at streamgaging stations in Arizona

    USGS Publications Warehouse

    Paretti, Nicholas V.; Kennedy, Jeffrey R.; Cohn, Timothy A.

    2014-01-01

    Flooding is among the costliest natural disasters in terms of loss of life and property in Arizona, which is why the accurate estimation of flood frequency and magnitude is crucial for proper structural design and accurate floodplain mapping. Current guidelines for flood frequency analysis in the United States are described in Bulletin 17B (B17B), yet since B17B’s publication in 1982 (Interagency Advisory Committee on Water Data, 1982), several improvements have been proposed as updates for future guidelines. Two proposed updates are the Expected Moments Algorithm (EMA) to accommodate historical and censored data, and a generalized multiple Grubbs-Beck (MGB) low-outlier test. The current guidelines use a standard Grubbs-Beck (GB) method to identify low outliers, changing the determination of the moment estimators because B17B uses a conditional probability adjustment to handle low outliers while EMA censors the low outliers. B17B and EMA estimates are identical if no historical information or censored or low outliers are present in the peak-flow data. EMA with MGB (EMA-MGB) test was compared to the standard B17B (B17B-GB) method for flood frequency analysis at 328 streamgaging stations in Arizona. The methods were compared using the relative percent difference (RPD) between annual exceedance probabilities (AEPs), goodness-of-fit assessments, random resampling procedures, and Monte Carlo simulations. The AEPs were calculated and compared using both station skew and weighted skew. Streamgaging stations were classified by U.S. Geological Survey (USGS) National Water Information System (NWIS) qualification codes, used to denote historical and censored peak-flow data, to better understand the effect that nonstandard flood information has on the flood frequency analysis for each method. Streamgaging stations were also grouped according to geographic flood regions and analyzed separately to better understand regional differences caused by physiography and climate. The B

  16. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise and Simplex noise and of the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from such simulations include the geophysical impact of flash floods or drought on a particular region and the regional impact on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (e.g., Google Earth) and to simulate planetary landscapes; hence, they can be used as tools to assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only contribute to generating the terrains themselves, but are also capable of simulating weather patterns.
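
    A compact version of the diamond-square algorithm named above: it fills a (2^n + 1)-square heightmap by repeatedly averaging corners (diamond step) and edge neighbours (square step) while adding noise whose amplitude shrinks by a roughness factor each pass. The roughness value and uniform noise are illustrative choices.

```python
import numpy as np

def diamond_square(n, roughness=0.5, seed=0):
    """Generate a (2**n + 1) x (2**n + 1) fractal heightmap."""
    rng = np.random.default_rng(seed)
    size = 2**n + 1
    h = np.zeros((size, size))
    h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.uniform(-1, 1, 4)   # seed the corners
    step, scale = size - 1, 1.0
    while step > 1:
        half = step // 2
        # diamond step: centre of each square = mean of its 4 corners + noise
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half, x - half] + h[y - half, x + half] +
                       h[y + half, x - half] + h[y + half, x + half]) / 4.0
                h[y, x] = avg + rng.uniform(-scale, scale)
        # square step: each edge midpoint = mean of its available neighbours + noise
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                vals = []
                if y - half >= 0:   vals.append(h[y - half, x])
                if y + half < size: vals.append(h[y + half, x])
                if x - half >= 0:   vals.append(h[y, x - half])
                if x + half < size: vals.append(h[y, x + half])
                h[y, x] = sum(vals) / len(vals) + rng.uniform(-scale, scale)
        step, scale = half, scale * roughness
    return h

terrain = diamond_square(7)    # 129 x 129 heightmap
```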

  17. Operational algorithm development and refinement approaches

    NASA Astrophysics Data System (ADS)

    Ardanuy, Philip E.

    2003-11-01

    Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved-spatial-resolution multispectral and hyperspectral imagers and sounders. To achieve specified performances (e.g., measurement accuracy, precision, uncertainty, and stability), and to best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data [e.g., the Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS) and the Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retrieval Experiment (LORE)]. In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today's generation of operational platforms provide (well-characterized, known sensors and algorithms) with the greatly improved quality, opportunities, and risks that the next generation of operational sensors and algorithms offer. By using the best practices and lessons learned from heritage/groundbreaking activities, it is possible to implement an agile process that enables change while managing change. This approach combines a "known-risk" frozen baseline and preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance. This paper describes an objective, adaptive implementation roadmap that

  18. TVFMCATS. Time Variant Floating Mean Counting Algorithm

    SciTech Connect

    Huffman, R.K.

    1999-05-01

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.

  19. Time Variant Floating Mean Counting Algorithm

    SciTech Connect

    Huffman, Russell Kevin

    1999-06-03

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.

  20. Investigation of registration algorithms for the automatic tile processing system

    NASA Technical Reports Server (NTRS)

    Tamir, Dan E.

    1995-01-01

    The Robotic Tile Inspection System (RTPS), under development at NASA-KSC, is expected to automate the processes of post-flight re-waterproofing and inspection of the Shuttle heat-absorbing tiles. An important task of the robot vision sub-system is to register the 'real-world' coordinates with the coordinates of the robot model of the Shuttle tiles. The model coordinates relate to a tile database and pre-flight tile images. In the registration process, current (post-flight) images are aligned with pre-flight images to detect the rotation and translation displacement required for the coordinate-system rectification. The research activities performed this summer included study and evaluation of the registration algorithm that is currently implemented by the RTPS, as well as investigation of the utility of other registration algorithms. It has been found that the current algorithm is not robust enough: it has a success rate of less than 80% and is therefore not suitable for complying with the requirements of the RTPS. Modifications to the current algorithm have been developed and tested. These modifications can improve the performance of the registration algorithm in a significant way; however, this improvement is not sufficient to satisfy system requirements. A new algorithm for registration has been developed and tested. This algorithm presented a very high degree of robustness, with a success rate of 96%.
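
    The abstract does not specify the new registration algorithm, so the sketch below only shows a generic FFT-based phase-correlation estimate of the translational offset between a pre-flight and a post-flight image, one standard building block for this kind of rigid registration.

```python
import numpy as np

def estimate_translation(ref, cur):
    """Estimate the integer (dy, dx) shift of cur relative to ref via phase
    correlation (peak of the normalized cross-power spectrum)."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(cur)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]     # map wrapped indices
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]     # to signed shifts
    return dy, dx

rng = np.random.default_rng(0)
pre = rng.random((256, 256))
post = np.roll(pre, shift=(7, -12), axis=(0, 1))      # simulated displacement
print(estimate_translation(pre, post))                # expected: (7, -12)
```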

  1. Application of fusion algorithms for computer-aided detection and classification of bottom mines to shallow water test data from the battle space preparation autonomous underwater vehicle (BPAUV)

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William; Dobeck, Gerald J.

    2003-09-01

    Over the past several years, Raytheon Company has adapted its Computer Aided Detection/Computer-Aided Classification (CAD/CAC) algorithm to process side-scan sonar imagery taken in both the Very Shallow Water (VSW) and Shallow Water (SW) operating environments. This paper describes the further adaptation of this CAD/CAC algorithm to process SW side-scan image data taken by the Battle Space Preparation Autonomous Underwater Vehicle (BPAUV), a vehicle made by Bluefin Robotics. The tuning of the CAD/CAC algorithm for the vehicle's sonar is described, the resulting classifier performance is presented, and the fusion of the classifier outputs with those of three other CAD/CAC processors is evaluated. The fusion algorithm accepts the classification confidence levels and associated contact locations from the four different CAD/CAC algorithms, clusters the contacts based on the distance between their locations, and then declares a valid target when a clustered contact passes a prescribed fusion criterion. Four different fusion criteria are evaluated: the first based on thresholding the sum of the confidence factors for the clustered contacts, the second and third based on simple and constrained binary combinations of the multiple CAD/CAC processor outputs, and the fourth based on the Fisher Discriminant. The resulting performance of the four fusion algorithms is compared, and the overall performance benefit of a significant reduction of false alarms at high correct classification probabilities is quantified. The optimal Fisher fusion algorithm yields a 90% probability of correct classification at a false alarm probability of 0.0062 false alarms per image per side, a 34:1 reduction in false alarms relative to the best performing single CAD/CAC algorithm.

  2. LEED I/V determination of the structure of a MoO3 monolayer on Au(111): Testing the performance of the CMA-ES evolutionary strategy algorithm, differential evolution, a genetic algorithm and tensor LEED based structural optimization

    NASA Astrophysics Data System (ADS)

    Primorac, E.; Kuhlenbeck, H.; Freund, H.-J.

    2016-07-01

    The structure of a thin MoO3 layer on Au(111) with a c(4 × 2) superstructure was studied with LEED I/V analysis. As proposed previously (Quek et al., Surf. Sci. 577 (2005) L71), the atomic structure of the layer is similar to that of a MoO3 single layer as found in regular α-MoO3. The layer on Au(111) has a glide plane parallel to the short unit vector of the c(4 × 2) unit cell, and the molybdenum atoms are bridge-bonded to two surface gold atoms, with the structure of the gold surface being slightly distorted. The structural refinement was performed with the CMA-ES evolutionary strategy algorithm, which reached a Pendry R-factor of ∼ 0.044. In the second part, the performance of CMA-ES is compared with that of the differential evolution method, a genetic algorithm and the Powell optimization algorithm, employing I/V curves calculated with tensor LEED.
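
    The sketch below only illustrates the kind of optimizer comparison described, using SciPy's differential evolution and Powell routines on a stand-in multimodal objective playing the role of a Pendry R-factor surface; it is not the tensor-LEED refinement itself, and the objective, bounds, and starting point are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def objective(p):
    """Stand-in for an R-factor: a quadratic bowl with small oscillatory ripples."""
    target = np.array([0.3, -0.7, 1.2])
    return np.sum((p - target) ** 2) + 0.1 * np.sum(np.sin(8 * p) ** 2)

bounds = [(-2.0, 2.0)] * 3
de = differential_evolution(objective, bounds, seed=1, tol=1e-8)
powell = minimize(objective, x0=np.zeros(3), method="Powell")
print("differential evolution:", de.fun, de.x)
print("Powell (local search): ", powell.fun, powell.x)
```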

  3. The clinical algorithm nosology: a method for comparing algorithmic guidelines.

    PubMed

    Pearson, S D; Margolis, C Z; Davis, S; Schreier, L K; Gottlieb, L K

    1992-01-01

    Concern regarding the cost and quality of medical care has led to a proliferation of competing clinical practice guidelines. No technique has been described for determining objectively the degree of similarity between alternative guidelines for the same clinical problem. The authors describe the development of the Clinical Algorithm Nosology (CAN), a new method to compare one form of guideline: the clinical algorithm. The CAN measures overall design complexity independent of algorithm content, qualitatively describes the clinical differences between two alternative algorithms, and then scores the degree of similarity between them. CAN algorithm design-complexity scores correlated highly with clinicians' estimates of complexity on an ordinal scale (r = 0.86). Five pairs of clinical algorithms addressing three topics (gallstone lithotripsy, thyroid nodule, and sinusitis) were selected for interrater reliability testing of the CAN clinical-similarity scoring system. Raters categorized the similarity of algorithm pathways in alternative algorithms as "identical," "similar," or "different." Interrater agreement was achieved on 85/109 scores (80%), weighted kappa statistic, k = 0.73. It is concluded that the CAN is a valid method for determining the structural complexity of clinical algorithms, and a reliable method for describing differences and scoring the similarity between algorithms for the same clinical problem. In the future, the CAN may serve to evaluate the reliability of algorithm development programs, and to support providers and purchasers in choosing among alternative clinical guidelines.

  4. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  5. The challenges of implementing and testing two signal processing algorithms for high rep-rate Coherent Doppler Lidar for wind sensing

    NASA Astrophysics Data System (ADS)

    Abdelazim, S.; Santoro, D.; Arend, M.; Moshary, F.; Ahmed, S.

    2015-05-01

    In this paper, we present two signal processing algorithms implemented on an FPGA. The first algorithm involves explicit time gating of received signals corresponding to a desired spatial resolution, performing a Fast Fourier Transform (FFT) on each individual time gate, taking the square modulus of the FFT to form a power spectrum, and then accumulating these power spectra over 10k return signals. The second algorithm involves calculating the autocorrelation of the backscattered signals and then accumulating the autocorrelation over 10k pulses. Efficient implementation of each of these two signal processing algorithms on an FPGA is challenging because it requires trading off between retaining the full data word width, managing the amount of on-chip memory used, and respecting the constraints imposed by the data width of the FPGA. A description of the approach used to manage these tradeoffs for each of the two signal processing algorithms is presented and explained in this article. Results of atmospheric measurements obtained through these two embedded programming techniques are also presented.
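
    A NumPy sketch of the two accumulation schemes described above: per-range-gate power spectra (square modulus of a gated FFT, summed over pulses) and accumulated autocorrelation. The gate length, lag count, and synthetic pulses are assumptions, and the word-width and on-chip-memory management that dominate the FPGA implementation are ignored here.

```python
import numpy as np

def accumulate_spectra(pulses, gate_len):
    """First algorithm: accumulate per-range-gate power spectra over all pulses."""
    n_gates = pulses.shape[1] // gate_len
    acc = np.zeros((n_gates, gate_len))
    for pulse in pulses:                                # e.g. 10k return signals
        gates = pulse[:n_gates * gate_len].reshape(n_gates, gate_len)
        acc += np.abs(np.fft.fft(gates, axis=1)) ** 2   # square modulus of each gate's FFT
    return acc

def accumulate_autocorr(pulses, max_lag):
    """Second algorithm: accumulate the backscatter autocorrelation over all pulses."""
    acc = np.zeros(max_lag)
    for pulse in pulses:
        for lag in range(max_lag):
            acc[lag] += np.dot(pulse[:len(pulse) - lag], pulse[lag:])
    return acc

rng = np.random.default_rng(0)
pulses = rng.normal(size=(100, 1024))    # stand-in for digitized return signals
spectra = accumulate_spectra(pulses, gate_len=64)
autocorr = accumulate_autocorr(pulses, max_lag=32)
```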

  6. Pump apparatus including deconsolidator

    SciTech Connect

    Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew

    2014-10-07

    A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.

  7. Application of Modified Differential Evolution Algorithm to Magnetotelluric and Vertical Electrical Sounding Data

    NASA Astrophysics Data System (ADS)

    Mingolo, Nusharin; Sarakorn, Weerachai

    2016-04-01

    In this research, a modified Differential Evolution (DE) algorithm is proposed and applied to Magnetotelluric (MT) and Vertical Electrical Sounding (VES) data to recover a reasonable resistivity structure. The common steps of the DE algorithm, including initialization, mutation and crossover, are modified by introducing both new control parameters and some constraints in order to obtain well-fitting, physically reasonable resistivity models. The validity and efficiency of our modified DE algorithm are tested on both synthetic and real observed data. Our DE algorithm is also compared to the well-known OCCAM algorithm for a real MT data case. For the synthetic case, our modified DE algorithm with appropriate control parameters reveals well-fitting models when compared to the original synthetic models. For the real data case, the resistivity structures revealed by our algorithm are close to those obtained by OCCAM's inversion, but our structures reveal the layering more clearly.

  8. Optical modulator including graphene

    DOEpatents

    Liu, Ming; Yin, Xiaobo; Zhang, Xiang

    2016-06-07

    The present invention provides for a one or more layer graphene optical modulator. In a first exemplary embodiment the optical modulator includes an optical waveguide, a nanoscale oxide spacer adjacent to a working region of the waveguide, and a monolayer graphene sheet adjacent to the spacer. In a second exemplary embodiment, the optical modulator includes at least one pair of active media, where the pair includes an oxide spacer, a first monolayer graphene sheet adjacent to a first side of the spacer, and a second monolayer graphene sheet adjacent to a second side of the spacer, and at least one optical waveguide adjacent to the pair.

  9. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  10. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the

  11. Improving DTI tractography by including diagonal tract propagation.

    PubMed

    Taylor, Paul A; Cho, Kuan-Hung; Lin, Ching-Po; Biswal, Bharat B

    2012-01-01

    Tractography algorithms have been developed to reconstruct likely WM pathways in the brain from diffusion tensor imaging (DTI) data. In this study, an elegant and simple means for improving existing tractography algorithms is proposed by allowing tracts to propagate through diagonal trajectories between voxels, instead of only rectilinearly to their facewise neighbors. A series of tests (using both real and simulated data sets) are utilized to show several benefits of this new approach. First, the inclusion of diagonal tract propagation decreases the dependence of an algorithm on the arbitrary orientation of coordinate axes and therefore reduces numerical errors associated with that bias (which are also demonstrated here). Moreover, both quantitatively and qualitatively, including diagonals decreases overall noise sensitivity of results and leads to significantly greater efficiency in scanning protocols; that is, the obtained tracts converge much more quickly (i.e., in a smaller amount of scanning time) to those of data sets with high SNR and spatial resolution. Importantly, the inclusion of diagonal propagation adds essentially no appreciable time of calculation or computational costs to standard methods. This study focuses on the widely-used streamline tracking method, FACT (fiber assessment by continuous tracking), and the modified method is termed "FACTID" (FACT including diagonals). PMID:22970125
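
    FACT propagates tracts continuously rather than hopping voxel to voxel, so the sketch below only illustrates the geometric point made above: with 26 neighbour offsets instead of the 6 facewise ones, a 45-degree fibre direction can be followed directly instead of being forced onto a coordinate axis.

```python
import numpy as np
from itertools import product

FACEWISE = [np.array(o) for o in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]]
WITH_DIAGONALS = [np.array(o) for o in product((-1, 0, 1), repeat=3) if any(o)]  # 26 offsets

def next_voxel(voxel, direction, offsets):
    """Step to the neighbouring voxel whose offset best aligns with the local fibre direction."""
    units = [o / np.linalg.norm(o) for o in offsets]
    best = max(range(len(offsets)), key=lambda i: abs(np.dot(units[i], direction)))
    return voxel + offsets[best]

v = np.array([10, 10, 10])
d = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)        # in-plane fibre at 45 degrees to the axes
print(next_voxel(v, d, FACEWISE))                 # forced onto an axis: [11 10 10]
print(next_voxel(v, d, WITH_DIAGONALS))           # can follow the diagonal: [11 11 10]
```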

  12. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
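
    For a stationary process the maximum-entropy spectrum has an all-pole (autoregressive) form, so a Yule-Walker AR estimate such as the sketch below gives the flavor of the result; it does not reproduce the original FORTRAN 77 code or its exact recursion, and the model order and test signal (two closely spaced sinusoids, a classic resolution test) are assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_psd(x, order, nfreq=512):
    """All-pole spectral estimate from the Yule-Walker equations."""
    x = x - x.mean()
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])  # autocorrelation
    a = solve_toeplitz(r[:-1], r[1:])            # AR coefficients
    sigma2 = r[0] - np.dot(a, r[1:])             # driving-noise variance
    w = np.linspace(0, np.pi, nfreq)
    denom = np.abs(1 - np.exp(-1j * np.outer(w, np.arange(1, order + 1))) @ a) ** 2
    return w, sigma2 / denom                     # PSD = sigma2 / |1 - sum a_k e^{-jwk}|^2

rng = np.random.default_rng(0)
t = np.arange(256)
x = np.sin(2*np.pi*0.20*t) + np.sin(2*np.pi*0.22*t) + 0.5*rng.normal(size=256)
w, psd = ar_psd(x, order=20)    # peaks expected near w = 2*pi*0.20 and 2*pi*0.22 rad/sample
```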

  13. HPTN 071 (PopART): Rationale and design of a cluster-randomised trial of the population impact of an HIV combination prevention intervention including universal testing and treatment – a study protocol for a cluster randomised trial

    PubMed Central

    2014-01-01

    Background Effective interventions to reduce HIV incidence in sub-Saharan Africa are urgently needed. Mathematical modelling and the HIV Prevention Trials Network (HPTN) 052 trial results suggest that universal HIV testing combined with immediate antiretroviral treatment (ART) should substantially reduce incidence and may eliminate HIV as a public health problem. We describe the rationale and design of a trial to evaluate this hypothesis. Methods/Design A rigorously-designed trial of universal testing and treatment (UTT) interventions is needed because: i) it is unknown whether these interventions can be delivered to scale with adequate uptake; ii) there are many uncertainties in the models such that the population-level impact of these interventions is unknown; and iii) there are potential adverse effects including sexual risk disinhibition, HIV-related stigma, over-burdening of health systems, poor adherence, toxicity, and drug resistance. In the HPTN 071 (PopART) trial, 21 communities in Zambia and South Africa (total population 1.2 m) will be randomly allocated to three arms. Arm A will receive the full PopART combination HIV prevention package including annual home-based HIV testing, promotion of medical male circumcision for HIV-negative men, and offer of immediate ART for those testing HIV-positive; Arm B will receive the full package except that ART initiation will follow current national guidelines; Arm C will receive standard of care. A Population Cohort of 2,500 adults will be randomly selected in each community and followed for 3 years to measure the primary outcome of HIV incidence. Based on model projections, the trial will be well-powered to detect predicted effects on HIV incidence and secondary outcomes. Discussion Trial results, combined with modelling and cost data, will provide short-term and long-term estimates of cost-effectiveness of UTT interventions. Importantly, the three-arm design will enable assessment of how much could be achieved by

  14. How Are Mate Preferences Linked with Actual Mate Selection? Tests of Mate Preference Integration Algorithms Using Computer Simulations and Actual Mating Couples

    PubMed Central

    Conroy-Beam, Daniel; Buss, David M.

    2016-01-01

    Prior mate preference research has focused on the content of mate preferences. Yet in real life, people must select mates among potentials who vary along myriad dimensions. How do people incorporate information on many different mate preferences in order to choose which partner to pursue? Here, in Study 1, we compare seven candidate algorithms for integrating multiple mate preferences in a competitive agent-based model of human mate choice evolution. This model shows that a Euclidean algorithm is the most evolvable solution to the problem of selecting fitness-beneficial mates. Next, across three studies of actual couples (Study 2: n = 214; Study 3: n = 259; Study 4: n = 294) we apply the Euclidean algorithm toward predicting mate preference fulfillment overall and preference fulfillment as a function of mate value. Consistent with the hypothesis that mate preferences are integrated according to a Euclidean algorithm, we find that actual mates lie close in multidimensional preference space to the preferences of their partners. Moreover, this Euclidean preference fulfillment is greater for people who are higher in mate value, highlighting theoretically-predictable individual differences in who gets what they want. These new Euclidean tools have important implications for understanding real-world dynamics of mate selection. PMID:27276030

  15. How Are Mate Preferences Linked with Actual Mate Selection? Tests of Mate Preference Integration Algorithms Using Computer Simulations and Actual Mating Couples.

    PubMed

    Conroy-Beam, Daniel; Buss, David M

    2016-01-01

    Prior mate preference research has focused on the content of mate preferences. Yet in real life, people must select mates among potentials who vary along myriad dimensions. How do people incorporate information on many different mate preferences in order to choose which partner to pursue? Here, in Study 1, we compare seven candidate algorithms for integrating multiple mate preferences in a competitive agent-based model of human mate choice evolution. This model shows that a Euclidean algorithm is the most evolvable solution to the problem of selecting fitness-beneficial mates. Next, across three studies of actual couples (Study 2: n = 214; Study 3: n = 259; Study 4: n = 294) we apply the Euclidean algorithm toward predicting mate preference fulfillment overall and preference fulfillment as a function of mate value. Consistent with the hypothesis that mate preferences are integrated according to a Euclidean algorithm, we find that actual mates lie close in multidimensional preference space to the preferences of their partners. Moreover, this Euclidean preference fulfillment is greater for people who are higher in mate value, highlighting theoretically-predictable individual differences in who gets what they want. These new Euclidean tools have important implications for understanding real-world dynamics of mate selection. PMID:27276030
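
    A minimal sketch of the Euclidean idea tested above: preference fulfillment is read off the distance between a person's multidimensional preference vector and the partner's trait vector, with smaller distances meaning a closer fit. The trait dimensions and the 1-7 rating scale below are hypothetical.

```python
import numpy as np

def euclidean_preference_fulfillment(preferences, partner_traits):
    """Negative Euclidean distance in preference space: higher = better fulfilled."""
    return -np.linalg.norm(np.asarray(preferences, float) - np.asarray(partner_traits, float))

prefs   = [6.5, 5.0, 4.0, 6.0]    # hypothetical ideal-partner ratings on four traits
partner = [6.0, 5.5, 3.5, 5.0]    # the actual partner's ratings on the same traits
print(euclidean_preference_fulfillment(prefs, partner))
```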

  16. Test of the Semi-Analytical Case 1 and Gelbstoff Case 2 SeaWiFS Algorithm with a Global Data Set

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1997-01-01

    The algorithm-development activities at USF during the second half of 1997 have concentrated on data collection and theoretical modeling. Six abstracts were submitted for presentation at the AGU conference in San Diego, California during February 9-13, 1998. Four papers were submitted to JGR and Applied Optics for publication.

  18. Lightning detection and exposure algorithms for smartphones

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Shao, Xiaopeng; Wang, Lin; Su, Laili; Huang, Yining

    2015-05-01

    This study focuses on the key theory of lightning detection and exposure, together with the corresponding experiments. First, an algorithm based on the differential operation between two adjacent frames is selected to remove the background information and extract the lightning signal, and a threshold detection algorithm is applied to achieve precise detection of lightning. Second, an algorithm is proposed to obtain the scene exposure value, which can automatically detect the external illumination status. Subsequently, a look-up table can be built on the basis of the relationship between the exposure value and the average image brightness to achieve rapid automatic exposure. Finally, based on a USB 3.0 industrial camera with a CMOS imaging sensor, a hardware test platform is established and experiments are carried out on this platform to verify the performance of the proposed algorithms. The algorithms can effectively and quickly capture clear lightning pictures, including special nighttime scenes, which will provide beneficial support to the smartphone industry, since the current exposure methods in smartphones often miss the capture or produce overexposed or underexposed pictures.
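
    A sketch of the two pieces described above, assuming 8-bit grayscale frames: adjacent-frame differencing plus thresholding to flag a lightning event, and a brightness-indexed look-up table for automatic exposure. The threshold value and the 256-entry exposure table are illustrative assumptions.

```python
import numpy as np

def detect_lightning(prev_frame, frame, threshold=40):
    """Difference two adjacent frames to suppress the static background,
    then threshold the residual to flag candidate lightning pixels."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    return mask, bool(mask.any())

def exposure_value(frame, lut):
    """Look up an exposure setting from the average image brightness."""
    return lut[int(frame.mean())]

rng = np.random.default_rng(0)
f0 = rng.integers(0, 60, size=(480, 640), dtype=np.uint8)   # dark night-time scene
f1 = f0.copy()
f1[100:140, 200:260] = 255                                  # simulated lightning flash
lut = np.linspace(1/30, 1/4000, 256)                        # brighter scene -> shorter exposure
mask, flash_found = detect_lightning(f0, f1)
print(flash_found, exposure_value(f1, lut))
```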

  19. Parallel Clustering Algorithms for Structured AMR

    SciTech Connect

    Gunney, B T; Wissink, A M; Hysom, D A

    2005-10-26

    We compare several different parallel implementation approaches for the clustering operations performed during adaptive gridding operations in patch-based structured adaptive mesh refinement (SAMR) applications. Specifically, we target the clustering algorithm of Berger and Rigoutsos (BR91), which is commonly used in many SAMR applications. The baseline for comparison is a simplistic parallel extension of the original algorithm that works well for up to O(10^2) processors. Our goal is a clustering algorithm for machines of up to O(10^5) processors, such as the 64K-processor IBM BlueGene/Light system. We first present an algorithm that avoids the unneeded communications of the simplistic approach to improve the clustering speed by up to an order of magnitude. We then present a new task-parallel implementation to further reduce communication wait time, adding another order of magnitude of improvement. The new algorithms also exhibit more favorable scaling behavior for our test problems. Performance is evaluated on a number of large scale parallel computer systems, including a 16K-processor BlueGene/Light system.

  20. Corrective Action Investigation Plan for Corrective Action Unit 254: Area 25 R-MAD Decontamination Facility, Nevada Test Site, Nevada (includes ROTC No. 1, date 01/25/1999)

    SciTech Connect

    DOE /NV

    1999-07-29

    This Corrective Action Investigation Plan contains the US Department of Energy, Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 254 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 254 consists of Corrective Action Site (CAS) 25-23-06, Decontamination Facility. Located in Area 25 at the Nevada Test Site (NTS), CAU 254 was used between 1963 through 1973 for the decontamination of test-car hardware and tooling used in the Nuclear Rocket Development Station program. The CAS is composed of a fenced area measuring approximately 119 feet by 158 feet that includes Building 3126, an associated aboveground storage tank, a potential underground storage area, two concrete decontamination pads, a generator, two sumps, and a storage yard. Based on site history, the scope of this plan is to resolve the problem statement identified during the Data Quality Objectives process that decontamination activities at this CAU site may have resulted in the release of contaminants of concern (COCs) onto building surfaces, down building drains to associated leachfields, and to soils associated with two concrete decontamination pads located outside the building. Therefore, the scope of the corrective action field investigation will involve soil sampling at biased and random locations in the yard using a direct-push method, scanning and static radiological surveys, and laboratory analyses of all soil/building samples. Historical information provided by former NTS employees indicates that solvents and degreasers may have been used in the decontamination processes; therefore, potential COCs include volatile/semivolatile organic compounds, Resource Conservation and Recovery Act metals, petroleum hydrocarbons, polychlorinated biphenyls, pesticides, asbestos, gamma-emitting radionuclides, plutonium, uranium, and strontium-90. The results of this

  1. The VITRO Score (Von Willebrand Factor Antigen/Thrombocyte Ratio) as a New Marker for Clinically Significant Portal Hypertension in Comparison to Other Non-Invasive Parameters of Fibrosis Including ELF Test

    PubMed Central

    Hametner, Stephanie; Ferlitsch, Arnulf; Ferlitsch, Monika; Etschmaier, Alexandra; Schöfl, Rainer; Ziachehabi, Alexander; Maieron, Andreas

    2016-01-01

    Background Clinically significant portal hypertension (CSPH), defined as a hepatic venous pressure gradient (HVPG) ≥10 mmHg, causes major complications. HVPG is not always available, so a non-invasive tool to diagnose CSPH would be useful. VWF-Ag can be used to diagnose CSPH. Using the VITRO score (the VWF-Ag/platelet ratio) instead of VWF-Ag itself improves the diagnostic accuracy of detecting cirrhosis/fibrosis in HCV patients. Aim This study tested the diagnostic accuracy of the VITRO score for detecting CSPH against HVPG measurement. Methods All patients underwent HVPG testing and were categorised as CSPH or no CSPH. The following patient data were determined: CPS, D’Amico stage, VITRO score, APRI and transient elastography (TE). Results The analysis included 236 patients; 170 (72%) were male, and the median age was 57.9 (35.2–76.3; 95% CI). Disease aetiology included ALD (39.4%), HCV (23.4%), NASH (12.3%), other (8.1%) and unknown (11.9%). The CPS showed 140 patients (59.3%) with CPS A; 56 (23.7%) with CPS B; and 18 (7.6%) with CPS C. 136 patients (57.6%) had compensated and 100 (42.4%) had decompensated cirrhosis; 83.9% had HVPG ≥10 mmHg. The VWF-Ag and the VITRO score increased significantly with worsening HVPG categories (P<0.0001). ROC analysis was performed for the detection of CSPH and showed AUC values of 0.92 for TE, 0.86 for VITRO score, 0.79 for VWF-Ag, 0.68 for ELF and 0.62 for APRI. Conclusion The VITRO score is an easy way to diagnose CSPH independently of CPS in routine clinical work and may improve the management of patients with cirrhosis. PMID:26895398
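
    The score itself is just a ratio, so a minimal sketch is shown below together with an AUC computation in the spirit of the ROC analysis; the cohort values are hypothetical and scikit-learn's roc_auc_score is used only for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def vitro_score(vwf_ag_percent, platelet_count):
    """VITRO = von Willebrand factor antigen (%) divided by platelet count (G/L)."""
    return np.asarray(vwf_ag_percent, float) / np.asarray(platelet_count, float)

# hypothetical patients: VWF-Ag (%), platelets (G/L), and CSPH status (HVPG >= 10 mmHg)
vwf       = [180, 250, 320, 140, 410, 200]
platelets = [220, 150,  90, 260,  70, 180]
csph      = [  0,   1,   1,   0,   1,   0]
print(roc_auc_score(csph, vitro_score(vwf, platelets)))
```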

  2. Implementing a self-structuring data learning algorithm

    NASA Astrophysics Data System (ADS)

    Graham, James; Carson, Daniel; Ternovskiy, Igor

    2016-05-01

    In this paper, we elaborate on what we did to implement our self-structuring data learning algorithm. To recap, we are working to develop a data learning algorithm that will eventually be capable of goal-driven pattern learning and of extrapolating more complex patterns from less complex ones. At this point we have developed a conceptual framework for the algorithm, but have yet to discuss our actual implementation and the considerations and shortcuts we needed to take to create that implementation. We will elaborate on our initial setup of the algorithm and the scenarios we used to test our early-stage algorithm. While we want this to be a general algorithm, it is necessary to start with a simple scenario or two to provide a viable development and testing environment. To that end, our discussion will be geared toward what we include in our initial implementation and why, as well as what concerns we may have. In the future, we expect to be able to apply our algorithm to a more general approach, but to do so within a reasonable time, we needed to pick a place to start.

  3. Listening to Include

    ERIC Educational Resources Information Center

    Veck, Wayne

    2009-01-01

    This paper attempts to make important connections between listening and inclusive education and the refusal to listen and exclusion. Two lines of argument are advanced. First, if educators and learners are to include each other within their educational institutions as unique individuals, then they will need to listen attentively to each other.…

  4. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
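
    A single-level sketch of wavelet-based image fusion in the spirit described, using PyWavelets: average the approximation band and keep the larger-magnitude detail coefficients from either input. The fusion rule, wavelet choice, and random test images are assumptions, not the report's implementation.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="haar"):
    """Fuse two co-registered images of equal size with a single-level 2-D DWT."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b, wavelet)
    pick = lambda u, v: np.where(np.abs(u) >= np.abs(v), u, v)   # keep stronger detail
    fused = ((cA1 + cA2) / 2.0, (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
    return pywt.idwt2(fused, wavelet)

rng = np.random.default_rng(0)
a = rng.random((128, 128))     # stand-ins for two co-registered bands
b = rng.random((128, 128))     # (e.g. multispectral and panchromatic)
fused = wavelet_fuse(a, b)
```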

  5. On algorithmic rate-coded AER generation.

    PubMed

    Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel; Linares-Barranco, Bernabé; Civit-Balcells, Antón

    2006-05-01

    This paper addresses the problem of converting a conventional video stream based on sequences of frames into the spike event-based representation known as the address-event representation (AER). In this paper we concentrate on rate-coded AER. The problem is addressed as an algorithmic one, in which different methods are proposed, implemented and tested through software algorithms. The proposed algorithms are comparatively evaluated according to different criteria. Emphasis is put on the potential of such algorithms for (a) performing the frame-based to event-based conversion in real time, and (b) producing event streams that resemble as much as possible those generated naturally by rate-coded address-event VLSI chips, such as silicon AER retinae. It is found that simple and straightforward algorithms tend to have high potential for real time operation but produce event distributions that differ considerably from those obtained in AER VLSI chips. On the other hand, sophisticated algorithms that yield better event distributions are not efficient for real time operation. The methods based on linear-feedback-shift-register (LFSR) pseudorandom number generation are a good compromise: they are feasible for real time and yield reasonably well distributed events in time. Our software experiments, on a 1.6-GHz Pentium IV, show that at 50% AER bus load the proposed algorithms require between 0.011 and 1.14 ms per 8-bit pixel per frame. One of the proposed LFSR methods is implemented in real-time hardware using a prototyping board that includes a VirtexE 300 FPGA. The demonstration hardware is capable of transforming frames of 64 x 64 pixels of 8-bit depth at a frame rate of 25 frames per second, producing spike events at a peak rate of 10^7 events per second. PMID:16722179
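
    A hedged sketch of rate-coded AER generation in the spirit of the LFSR-based methods discussed above (not the paper's exact algorithm): a 16-bit Galois LFSR supplies pseudorandom pixel addresses and thresholds, so brighter pixels emit proportionally more address events.

```python
# Illustrative rate-coded AER generation driven by a 16-bit Galois LFSR.
import numpy as np

def lfsr16(state: int) -> int:
    """One step of a maximal-length 16-bit Galois LFSR (taps 16,14,13,11)."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

def frame_to_events(frame: np.ndarray, slots: int = 20000, seed: int = 0xACE1):
    """Return a list of (y, x, slot) address events for an 8-bit frame."""
    h, w = frame.shape
    state, events = seed, []
    for t in range(slots):
        state = lfsr16(state)
        y, x = (state >> 8) % h, (state & 0xFF) % w   # pseudorandom address
        state = lfsr16(state)
        if frame[y, x] > (state & 0xFF):              # fire with p ~ intensity/256
            events.append((y, x, t))
    return events

events = frame_to_events(np.random.randint(0, 256, (64, 64), dtype=np.uint8))
print(len(events))
```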

  6. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  7. Fast voxel and polygon ray-tracing algorithms in intensity modulated radiation therapy treatment planning

    SciTech Connect

    Fox, Christopher; Romeijn, H. Edwin; Dempsey, James F.

    2006-05-15

    We present work on combining three algorithms to improve ray-tracing efficiency in radiation therapy dose computation. The three algorithms are: an improved point-in-polygon algorithm, an incremental voxel ray-tracing algorithm, and stereographic projection of beamlets for voxel truncation. The point-in-polygon and incremental voxel ray-tracing algorithms have been used in computer graphics and nuclear medicine applications, while the stereographic projection algorithm was developed by our group. These algorithms demonstrate significant improvements over the current standard algorithms in the peer-reviewed literature, i.e., the polygon and voxel ray-tracing algorithms of Siddon for voxel classification (point-in-polygon testing) and dose computation, respectively, and radius testing for voxel truncation. The presented polygon ray-tracing technique was tested on 10 intensity modulated radiation therapy (IMRT) treatment planning cases that required the classification of between 0.58 and 2.0 million voxels on a 2.5 mm isotropic dose grid into 1-4 targets and 5-14 structures represented as extruded polygons (a.k.a. Siddon prisms). Incremental voxel ray tracing and voxel truncation employing virtual stereographic projection were tested on the same IMRT treatment planning cases, where voxel dose was required for 230-2400 beamlets using a finite-size pencil-beam algorithm. A 100- to 360-fold CPU time improvement over Siddon's method was observed for the polygon ray-tracing algorithm when classifying voxels for target and structure membership. A 2.6- to 3.1-fold reduction in CPU time over current algorithms was found for the implementation of incremental ray tracing. Additionally, voxel truncation via stereographic projection was observed to be 11-25 times faster than the radial-testing beamlet extent approach and was further improved 1.7-2.0 fold through point classification using the method of translation over the cross-product technique.
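
    For readers unfamiliar with the voxel-classification primitive involved, the sketch below shows a standard ray-casting point-in-polygon test; it is a generic illustration, not the improved algorithm presented in the paper.

```python
# Compact ray-casting (crossing-number) point-in-polygon test.
from typing import Sequence, Tuple

def point_in_polygon(pt: Tuple[float, float],
                     poly: Sequence[Tuple[float, float]]) -> bool:
    """True if pt lies inside the simple polygon given by its vertices."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # does a horizontal ray from pt cross edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon((0.5, 0.5), square))   # True
print(point_in_polygon((1.5, 0.5), square))   # False
```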

  8. An Introduction to the Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Tian, Jian-quan; Miao, Dan-min; Zhu, Xia; Gong, Jing-jing

    2007-01-01

    Computerized adaptive testing (CAT) has unsurpassable advantages over traditional testing. It has become the mainstream in large scale examinations in modern society. This paper gives a brief introduction to CAT including differences between traditional testing and CAT, the principles of CAT, psychometric theory and computer algorithms of CAT, the…

  9. A Frequency-Domain Substructure System Identification Algorithm

    NASA Technical Reports Server (NTRS)

    Blades, Eric L.; Craig, Roy R., Jr.

    1996-01-01

    A new frequency-domain system identification algorithm is presented for system identification of substructures, such as payloads to be flown aboard the Space Shuttle. In the vibration test, all interface degrees of freedom where the substructure is connected to the carrier structure are either subjected to active excitation or supported by a test stand with the reaction forces measured. The measured frequency-response data are used to obtain a linear, viscous-damped model with all interface degree-of-freedom entries included. This model can then be used to validate analytical substructure models. This procedure makes it possible to obtain not only the fixed-interface modal data associated with a Craig-Bampton substructure model, but also the data associated with constraint modes. With this proposed algorithm, multiple-boundary-condition tests are not required, and test-stand dynamics are accounted for without requiring a separate modal test or finite element modeling of the test stand. Numerical simulations are used to examine the algorithm's ability to estimate valid reduced-order structural models. The algorithm's performance is explored when frequency-response data covering narrow and broad frequency bandwidths are used as input, when noise is added to the frequency-response data, and when different least-squares solution techniques are used. The identified reduced-order models are compared for accuracy with other test-analysis models, and a formulation for a Craig-Bampton test-analysis model is presented.

  10. A Rotor Tip Vortex Tracing Algorithm for Image Post-Processing

    NASA Technical Reports Server (NTRS)

    Overmeyer, Austin D.

    2015-01-01

    A neurite tracing algorithm, originally developed for medical image processing, was used to trace the location of the rotor tip vortex in density gradient flow visualization images. The tracing algorithm was applied to several representative test images to form case studies. The accuracy of the tracing algorithm was compared to two current methods including a manual point and click method and a cross-correlation template method. It is shown that the neurite tracing algorithm can reduce the post-processing time to trace the vortex by a factor of 10 to 15 without compromising the accuracy of the tip vortex location compared to other methods presented in literature.

  11. Approximate string matching algorithms for limited-vocabulary OCR output correction

    NASA Astrophysics Data System (ADS)

    Lasko, Thomas A.; Hauser, Susan E.

    2000-12-01

    Five methods for matching words mistranslated by optical character recognition to their most likely match in a reference dictionary were tested on data from the archives of the National Library of Medicine. The methods, including an adaptation of the cross correlation algorithm, the generic edit distance algorithm, the edit distance algorithm with a probabilistic substitution matrix, Bayesian analysis, and Bayesian analysis on an actively thinned reference dictionary were implemented and their accuracy rates compared. Of the five, the Bayesian algorithm produced the most correct matches (87%), and had the advantage of producing scores that have a useful and practical interpretation.
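
    The generic edit distance mentioned above is the standard dynamic-programming Levenshtein distance; a minimal sketch (without the probabilistic substitution matrix or the Bayesian variants) follows.

```python
# Standard Levenshtein edit distance and a simple dictionary lookup for an
# OCR-garbled token; the dictionary and example word are invented.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,         # deletion
                            curr[j - 1] + 1,     # insertion
                            prev[j - 1] + cost)) # substitution
        prev = curr
    return prev[-1]

dictionary = ["medicine", "medical", "mediate"]
word = "rnedicine"   # 'm' misread as 'rn'
print(min(dictionary, key=lambda w: edit_distance(word, w)))  # medicine
```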

  12. An innovative localisation algorithm for railway vehicles

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Malvezzi, M.; Pugi, L.; Ridolfi, A.; Rindi, A.; Vettori, G.

    2014-11-01

    The estimation strategy has good performance also under degraded adhesion conditions and could be installed on board high-speed railway vehicles; it represents an accurate and reliable solution. The IMU board is tested via a dedicated hardware-in-the-loop (HIL) test rig, which includes an industrial robot able to replicate the motion of the railway vehicle. Through the generated experimental outputs the performance of the innovative localisation algorithm has been evaluated: the HIL test rig permitted testing of the proposed algorithm, avoiding expensive (in terms of time and cost) on-track tests, and produced encouraging results. In fact, the preliminary results show a significant improvement of the position and speed estimation performance compared to that obtained with SCMT algorithms, currently in use on the Italian railway network.

  13. The systems biology simulation core algorithm

    PubMed Central

    2013-01-01

    Background With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941
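
    The core idea, interpreting a reaction network as a system of ordinary differential equations and integrating it numerically, can be sketched as follows. The toy reactions and rate constants are invented for illustration, and SciPy is used here rather than the Java-based Systems Biology Simulation Core Library.

```python
# Minimal sketch: a two-reaction mass-action network integrated as ODEs.
import numpy as np
from scipy.integrate import solve_ivp

# Toy network:  S --k1--> P   (conversion),   P --k2--> 0   (degradation)
k1, k2 = 0.8, 0.3

def rhs(t, y):
    s, p = y
    v1 = k1 * s          # rate of S -> P
    v2 = k2 * p          # rate of P -> 0
    return [-v1, v1 - v2]

sol = solve_ivp(rhs, (0.0, 20.0), [10.0, 0.0], dense_output=True)
print(sol.y[:, -1])      # concentrations of S and P at t = 20
```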

  14. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  15. Surveillance test bed for SDIO

    NASA Astrophysics Data System (ADS)

    Wesley, Michael; Osterheld, Robert; Kyser, Jeff; Farr, Michele; Vandergriff, Linda J.

    1991-08-01

    The Surveillance Test Bed (STB) is a program under development for the Strategic Defense Initiative Organization (SDIO). Its most salient features are (1) the integration of high fidelity backgrounds and optical signal processing models with algorithms for sensor tasking, bulk filtering, track/correlation and discrimination and (2) the integration of radar and optical estimates for track and discrimination. Backgrounds include induced environments such as nuclear events, fragments and debris, and natural environments, such as earth limb, zodiacal light, stars, sun and moon. At the highest level of fidelity, optical emulation hardware combines environmental information with threat information to produce detector samples for signal processing algorithms/hardware under test. Simulations of visible sensors and radars model measurement degradation due to the various environmental effects. The modeled threat is composed of multiple object classes. The number of discrimination classes is further increased by inclusion of fragments, debris and stars. High fidelity measurements will be used to drive bulk filtering algorithms that seek to reject fragments and debris and, in the case of optical sensors, stars. The output of the bulk filters will be used to drive track/correlation algorithms. Track algorithm output will include sequences of measurements that have been degraded by backgrounds, closely spaced objects (CSOs), signal processing errors, bulk filtering errors and miscorrelations; these measurements will be presented as input to the discrimination algorithms. The STB will implement baseline IR track file editing and IR and radar feature extraction and classification algorithms. The baseline will also include data fusion algorithms which will allow the combination of discrimination estimates from multiple sensors, including IR and radar; alternative discrimination algorithms may be substituted for the baseline after STB completion.

  16. Object-oriented algorithmic laboratory for ordering sparse matrices

    SciTech Connect

    Kumfert, G K

    2000-05-01

    generate the global ordering. Our software laboratory, "Spindle", implements state-of-the-art ordering algorithms for sparse matrices and graphs. We have used it to examine and augment the behavior of existing algorithms and test new ones. Its 40,000+ lines of C++ code include a base library, test drivers, sample applications, and interfaces to C, C++, Matlab, and PETSc. Spindle is freely available and can be built on a variety of UNIX platforms as well as Windows NT.
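
    For readers who want to experiment with sparse-matrix ordering without building the laboratory itself, the sketch below applies one classical ordering (reverse Cuthill-McKee) via SciPy; it is an independent illustration, not part of the software described above.

```python
# Reorder a small sparse symmetric matrix with reverse Cuthill-McKee.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

A = csr_matrix(np.array([[4, 1, 0, 0, 1],
                         [1, 4, 1, 0, 0],
                         [0, 1, 4, 1, 0],
                         [0, 0, 1, 4, 1],
                         [1, 0, 0, 1, 4]], dtype=float))

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_reordered = A[perm][:, perm]          # permute rows and columns
print(perm)
print(A_reordered.toarray())
```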

  17. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  18. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
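
    As a plain illustration of one of the examples named above, the sketch below is the classic greedy activity-selection algorithm (always keep the compatible activity that finishes earliest); the paper's synthesis and dominance-relation machinery is not reproduced.

```python
# Classic greedy activity selection: sort by finish time, keep compatible items.
def select_activities(intervals):
    """intervals: iterable of (start, finish); returns a maximum-size subset of
    mutually non-overlapping intervals."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:          # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)]
```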

  19. DETECTION OF SUBSURFACE FACILITIES INCLUDING NON-METALLIC PIPE

    SciTech Connect

    Mr. Herb Duvoisin

    2003-05-26

    CyTerra has leveraged its unique shallow-buried plastic target detection technology, developed under US Army contracts, into the detection of deeper buried subsurface facilities, including nonmetallic pipe. This Final Report describes a portable, low-cost, real-time, and user-friendly subsurface plastic pipe detector (LULU - Low Cost Utility Location Unit) that supports the goal of maintaining the integrity and reliability of the nation's natural gas transmission and distribution network by detecting potential infringements and thereby preventing third-party damage. Except for frequency band and antenna size, the LULU unit is almost identical to those developed for the US Army. CyTerra designed, fabricated, and tested two frequency-stepped GPR systems spanning the frequencies of importance (200 to 1600 MHz), one low- and one high-frequency system. Data collection and testing were done at a variety of locations (selected for soil type variations) on both targets of opportunity and selected buried targets. We developed algorithms and signal processing techniques that provide for the automatic detection of buried utility lines. The real-time output produces a sound as the radar passes over the utility line, alerting the operator to the presence of a buried object. Our unique, low-noise/high-performance RF hardware, combined with our field-tested detection algorithms, represents an important advancement toward achieving the DOE potential infringement goal.

  20. Evaluation of the expected moments algorithm and a multiple low-outlier test for flood frequency analysis at streamgaging stations in Arizona

    USGS Publications Warehouse

    Paretti, Nicholas V.; Kennedy, Jeffrey R.; Cohn, Timothy A.

    2014-01-01

    Flooding is among the costliest natural disasters in terms of loss of life and property in Arizona, which is why the accurate estimation of flood frequency and magnitude is crucial for proper structural design and accurate floodplain mapping. Current guidelines for flood frequency analysis in the United States are described in Bulletin 17B (B17B), yet since B17B’s publication in 1982 (Interagency Advisory Committee on Water Data, 1982), several improvements have been proposed as updates for future guidelines. Two proposed updates are the Expected Moments Algorithm (EMA) to accommodate historical and censored data, and a generalized multiple Grubbs-Beck (MGB) low-outlier test. The current guidelines use a standard Grubbs-Beck (GB) method to identify low outliers; the two approaches determine the moment estimators differently because B17B uses a conditional probability adjustment to handle low outliers while EMA censors them. B17B and EMA estimates are identical if no historical information or censored or low outliers are present in the peak-flow data. The EMA with MGB (EMA-MGB) approach was compared to the standard B17B (B17B-GB) method for flood frequency analysis at 328 streamgaging stations in Arizona. The methods were compared using the relative percent difference (RPD) between annual exceedance probabilities (AEPs), goodness-of-fit assessments, random resampling procedures, and Monte Carlo simulations. The AEPs were calculated and compared using both station skew and weighted skew. Streamgaging stations were classified by U.S. Geological Survey (USGS) National Water Information System (NWIS) qualification codes, used to denote historical and censored peak-flow data, to better understand the effect that nonstandard flood information has on the flood frequency analysis for each method. Streamgaging stations were also grouped according to geographic flood regions and analyzed separately to better understand regional differences caused by physiography and climate. The B

  1. An adaptive algorithm for noise rejection.

    PubMed

    Lovelace, D E; Knoebel, S B

    1978-01-01

    An adaptive algorithm for the rejection of noise artifact in 24-hour ambulatory electrocardiographic recordings is described. The algorithm is based on increased amplitude distortion or increased frequency of fluctuations associated with an episode of noise artifact. The results of application of the noise rejection algorithm on a high noise population of test tapes are discussed.

  2. Water flow algorithm decision support tool for travelling salesman problem

    NASA Astrophysics Data System (ADS)

    Kamarudin, Anis Aklima; Othman, Zulaiha Ali; Sarim, Hafiz Mohd

    2016-08-01

    This paper discusses the role of a Decision Support Tool (DST) for the Travelling Salesman Problem (TSP) in helping researchers working in the same area obtain better results from a proposed algorithm. A study has been conducted, and the Rapid Application Development (RAD) model has been used as the methodology, which includes requirement planning, user design, construction and cutover. A Water Flow Algorithm (WFA) with an improved initialization technique is used as the proposed algorithm in this study and evaluated for effectiveness on TSP cases. The DST evaluation consists of usability testing covering system use, quality of information, quality of interface and overall satisfaction. Evaluation is needed to determine whether this tool can assist users in making a decision to solve TSP problems with the proposed algorithm. Statistical results show the ability of this tool to help researchers conduct experiments on the WFA with improved TSP initialization.

  3. Genetic Bee Colony (GBC) algorithm: A new gene selection method for microarray cancer classification.

    PubMed

    Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A

    2015-06-01

    Naturally inspired evolutionary algorithms prove effective when used for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, namely the Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm. The goal is to integrate the advantages of both algorithms. The proposed algorithm is applied to a microarray gene expression profile in order to select the most predictive and informative genes for cancer classification. In order to test the accuracy performance of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are used: colon, leukemia, and lung. In addition, three multi-class microarray datasets are used: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique, mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and Particle Swarm Optimization (mRMR-PSO) algorithms. In addition, we compared the GBC algorithm with other related algorithms that have been recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance, as it achieved the highest classification accuracy along with the lowest average number of selected genes. This proves that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification.

  4. Test generation and fault detection for VLSI PPL circuits

    SciTech Connect

    Amin, A.A.M.

    1987-01-01

    The problem of design for testability of PPL logic circuits is addressed. A test-generation package was developed which utilizes the special features of PPL logic to generate high fault coverage test vectors at a reduced computational cost. The test strategy assumes that one of the scan design techniques is used. A new methodology for test-vectors compaction without compromising the fault coverage is also proposed. A fault-oriented test-generation algorithm combined with a heuristic test-generation algorithm are the essential ingredients of this package. The fault-oriented algorithm uses a modified D-algorithm which includes look-ahead features and a new seven-valued logic to improve the average speed of the test-generation process. Fault coverages in the 90% range were obtained using the test sequences generated by this package.

  5. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
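
    The optimization step can be pictured as a small linear program over trim variables. The sketch below is a hedged, self-contained illustration only: the variable names, sensitivity coefficients, and limits are invented and bear no relation to the actual PSC flight code.

```python
# Hypothetical linearized trim optimization solved with SciPy's linprog.
import numpy as np
from scipy.optimize import linprog

# Decision variables: trims [d_nozzle_area, d_stator_angle]  (invented names)
# Objective: minimize predicted change in fuel flow = c @ x  (linear model)
c = np.array([-0.8, -0.3])          # hypothetical fuel-flow sensitivities

# Linearized constraint: stall-margin change must stay >= -1.0 (%)
#   0.5*d_nozzle_area - 1.2*d_stator_angle >= -1.0
A_ub = np.array([[-0.5, 1.2]])      # rewritten as A_ub @ x <= b_ub
b_ub = np.array([1.0])

bounds = [(-2.0, 2.0), (-1.5, 1.5)] # actuator trim limits (invented)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)               # optimal trims and predicted benefit
```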

  6. Corrective Action Investigation Plan for Corrective Action Unit 5: Landfills, Nevada Test Site, Nevada (Rev. No.: 0) includes Record of Technical Change No. 1 (dated 9/17/2002)

    SciTech Connect

    IT Corporation, Las Vegas, NV

    2002-05-28

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 5 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 5 consists of eight Corrective Action Sites (CASs): 05-15-01, Sanitary Landfill; 05-16-01, Landfill; 06-08-01, Landfill; 06-15-02, Sanitary Landfill; 06-15-03, Sanitary Landfill; 12-15-01, Sanitary Landfill; 20-15-01, Landfill; 23-15-03, Disposal Site. Located between Areas 5, 6, 12, 20, and 23 of the Nevada Test Site (NTS), CAU 5 consists of unlined landfills used in support of disposal operations between 1952 and 1992. Large volumes of solid waste were produced from the projects which used the CAU 5 landfills. Waste disposed in these landfills may be present without appropriate controls (i.e., use restrictions, adequate cover) and hazardous and/or radioactive constituents may be present at concentrations and locations that could potentially pose a threat to human health and/or the environment. During the 1992 to 1995 time frame, the NTS was used for various research and development projects including nuclear weapons testing. Instead of managing solid waste at one or two disposal sites, the practice on the NTS was to dispose of solid waste in the vicinity of the project. A review of historical documentation, process knowledge, personal interviews, and inferred activities associated with this CAU identified the following as potential contaminants of concern: volatile organic compounds, semivolatile organic compounds, polychlorinated biphenyls, pesticides, petroleum hydrocarbons (diesel- and gasoline-range organics), Resource Conservation and Recovery Act Metals, plus nickel and zinc. A two-phase approach has been selected to collect information and generate data to satisfy needed resolution criteria

  7. Corrective Action Investigation Plan for Corrective Action Unit 165: Areas 25 and 26 Dry Well and Washdown Areas, Nevada Test Site, Nevada (including Record of Technical Change Nos. 1, 2, and 3) (January 2002, Rev. 0)

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office

    2002-01-09

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 165 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 165 consists of eight Corrective Action Sites (CASs): CAS 25-20-01, Lab Drain Dry Well; CAS 25-51-02, Dry Well; CAS 25-59-01, Septic System; CAS 26-59-01, Septic System; CAS 25-07-06, Train Decontamination Area; CAS 25-07-07, Vehicle Washdown; CAS 26-07-01, Vehicle Washdown Station; and CAS 25-47-01, Reservoir and French Drain. All eight CASs are located in the Nevada Test Site, Nevada. Six of these CASs are located in Area 25 facilities and two CASs are located in Area 26 facilities. The eight CASs at CAU 165 consist of dry wells, septic systems, decontamination pads, and a reservoir. The six CASs in Area 25 are associated with the Nuclear Rocket Development Station that operated from 1958 to 1973. The two CASs in Area 26 are associated with facilities constructed for Project Pluto, a series of nuclear reactor tests conducted between 1961 and 1964 to develop a nuclear-powered ramjet engine. Based on site history, the scope of this plan will be a two-phased approach to investigate the possible presence of hazardous and/or radioactive constituents at concentrations that could potentially pose a threat to human health and the environment. The Phase I analytical program for most CASs will include volatile organic compounds, semivolatile organic compounds, Resource Conservation and Recovery Act metals, total petroleum hydrocarbons, polychlorinated biphenyls, and radionuclides. If laboratory data obtained from the Phase I investigation indicates the presence of contaminants of concern, the process will continue with a Phase II investigation to define the extent of contamination. Based on the results of

  8. Water quality change detection: multivariate algorithms

    NASA Astrophysics Data System (ADS)

    Klise, Katherine A.; McKenna, Sean A.

    2006-05-01

    In light of growing concern over the safety and security of our nation's drinking water, increased attention has been focused on advanced monitoring of water distribution systems. The key to these advanced monitoring systems lies in the combination of real-time data and robust statistical analysis. Currently available data streams from sensors provide near real-time information on water quality. Combining these data streams with change detection algorithms, this project aims to develop automated monitoring techniques that will classify real-time data and denote anomalous water types. Here, water quality data in 1 hour increments over 3000 hours at 4 locations are used to test multivariate algorithms to detect anomalous water quality events. The algorithms use all available water quality sensors to measure deviation from expected water quality. Simulated anomalous water quality events are added to the measured data to test three approaches to measure this deviation. These approaches include multivariate distance measures to 1) the previous observation, 2) the closest observation in multivariate space, and 3) the closest cluster of previous water quality observations. Clusters are established using k-means classification. Each approach uses a moving window of previous water quality measurements to classify the current measurement as normal or anomalous. Receiver Operating Characteristic (ROC) curves test the ability of each approach to discriminate between normal and anomalous water quality using a variety of thresholds and simulated anomalous events. These analyses result in a better understanding of the deviation from normal water quality that is necessary to sound an alarm.
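
    A compact sketch of the third approach (distance to the closest cluster of recent observations) is shown below; the window length, number of clusters, and alarm threshold are illustrative assumptions, and NumPy plus scikit-learn are assumed available.

```python
# Moving-window multivariate anomaly detection using distance to the nearest
# k-means cluster of recent observations.
import numpy as np
from sklearn.cluster import KMeans

def detect(stream: np.ndarray, window: int = 200, k: int = 3, thresh: float = 3.0):
    """stream: (n_times, n_sensors). Returns boolean alarms per time step."""
    alarms = np.zeros(len(stream), dtype=bool)
    for t in range(window, len(stream)):
        past = stream[t - window:t]
        centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(past).cluster_centers_
        scale = past.std(axis=0) + 1e-9          # scale by recent variability
        d = np.min(np.linalg.norm((stream[t] - centers) / scale, axis=1))
        alarms[t] = d > thresh
    return alarms

data = np.random.randn(500, 4)
data[400] += 8.0                                  # inject an anomalous event
print(np.nonzero(detect(data))[0])                # index 400 should appear
```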

  9. PCA-LBG-based algorithms for VQ codebook generation

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Yang, Po-Yuan

    2015-04-01

    Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector of each group. The LBG algorithm then refines the codebook starting from the representative vectors supplied by the PCA step. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithm is expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results compared to existing methods reported in the literature.
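
    A simplified sketch of the PCA-LBG-Median variant follows; the grouping rule (quantiles of the first principal-component projection) is an assumption made for illustration, and only plain Lloyd/LBG refinement is shown.

```python
# PCA-seeded LBG codebook generation (median seeding, simplified).
import numpy as np

def pca_lbg_median(train: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    # 1) project onto the first principal component
    centered = train - train.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[0]
    # 2) split into k groups along the projection, seed with group medians
    order = np.argsort(proj)
    groups = np.array_split(order, k)
    codebook = np.stack([np.median(train[g], axis=0) for g in groups])
    # 3) LBG refinement: nearest-codeword assignment + centroid update
    for _ in range(iters):
        d = np.linalg.norm(train[:, None, :] - codebook[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                codebook[j] = train[assign == j].mean(axis=0)
    return codebook

codebook = pca_lbg_median(np.random.rand(1000, 16), k=8)
print(codebook.shape)   # (8, 16)
```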

  10. Classification of postoperative cardiac patients: comparative evaluation of four algorithms.

    PubMed

    Artioli, E; Avanzolini, G; Barbini, P; Cevenini, G; Gnudi, G

    1991-12-01

    Four classification algorithms based on Bayes' rule for minimum error are compared by evaluating their ability to recognize high- and normal-risk cardio-surgical patients. These algorithms differ in the modelling of the probability density function (pdf) for each class and include: (a) two parametric algorithms based on the assumption of normal pdf; (b) two non-parametric algorithms using Parzen multidimensional approximation of pdf with normal kernels. In each case, classes with both equal and different covariance matrices were considered. A set of 200 patients in the 6 h immediately following cardiac surgery has been used to test the performance of the algorithms. For each patient the three measured variables most effective in representing the difference between the two classes were considered. We found that the two algorithms which explicitly incorporate the information on the different sample covariance between the physiological variables existing in the two classes generally provide better recognition of high- and normal-risk patients. Of these two algorithms the parametric one appears extremely attractive for practical applications, since it exhibits slightly better performance in spite of its great simplicity.
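
    The two families compared above can be sketched with standard tools: a parametric Gaussian Bayes classifier with class-specific covariances and a non-parametric Parzen-window classifier with normal kernels. The data below are synthetic stand-ins, not the patient dataset, and the kernel bandwidth is an arbitrary choice.

```python
# Parametric (Gaussian, unequal covariances) vs non-parametric (Parzen) Bayes rule.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X0 = rng.normal([0, 0, 0], 1.0, size=(150, 3))     # stand-in "normal-risk" class
X1 = rng.normal([2, 1, -1], 1.5, size=(50, 3))     # stand-in "high-risk" class
X = np.vstack([X0, X1])
y = np.r_[np.zeros(150), np.ones(50)]

# (a) parametric: normal pdf per class with its own covariance matrix
qda = QuadraticDiscriminantAnalysis().fit(X, y)

# (b) non-parametric: Parzen estimate of each class pdf with Gaussian kernels
kde0 = KernelDensity(bandwidth=0.7).fit(X0)
kde1 = KernelDensity(bandwidth=0.7).fit(X1)
prior0, prior1 = 150 / 200, 50 / 200

x_new = np.array([[1.8, 0.9, -0.8]])
log_post0 = kde0.score_samples(x_new)[0] + np.log(prior0)
log_post1 = kde1.score_samples(x_new)[0] + np.log(prior1)
print(qda.predict(x_new)[0], int(log_post1 > log_post0))
```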

  11. Subsurface Residence Times as an Algorithm for Aquifer Sensitivity Mapping: testing the concept with analytic element ground water models in the Contentnea Creek Basin, North Carolina, USA

    NASA Astrophysics Data System (ADS)

    Kraemer, S. R.

    2002-05-01

    The objective of this research is to test the utility of simple functions of spatially integrated and temporally averaged ground water residence times in shallow groundwatersheds with field observations and detailed computer simulations. The residence time of water in the subsurface is arguably a surrogate of aquifer sensitivity to contamination: short contact time in subsurface media may result in reduced contaminant assimilation prior to discharge to a well or stream. Residence time is an established criterion for the delineation of wellhead protection areas. The residence time of water may also have application in assessing the connection between landscape and fair weather loadings of non-point source pollution to streams, such as the drainage of nitrogen-nitrate from agricultural fields as base flow. The field setting of this study includes a hierarchy of catchments in the Contentnea Creek basin (2600 km²) of North Carolina, USA, centered on the intensive coastal plain field study site at Lizzie, NC (1.2+ km²), run by the US Geological Survey and the NC Department of Environment and Natural Resources of Raleigh, NC. Analytic element models are used to define the advective flow field and regional boundary conditions. The issues of conceptual model complexity are explored using the multi-layer object oriented analytic element model Tim, and by embedding the finite difference model MODFLOW within the analytic element model GFLOW©. The models are compared to observations of hydraulic head, base flow separations, and aquifer geochemistry and age dating evidence. The resulting insights are captured and mapped across the basin as zones of average aquifer residence time using ArcView© GIS tools. Preliminary results and conclusions will be presented. Mention of commercial software does not constitute endorsement or recommendation for use.

  12. A universal symmetry detection algorithm.

    PubMed

    Maurer, Peter M

    2015-01-01

    Research on symmetry detection focuses on identifying and detecting new types of symmetry. The paper presents an algorithm that is capable of detecting any type of permutation-based symmetry, including many types for which there are no existing algorithms. General symmetry detection is library-based, but symmetries that can be parameterized (i.e., total, partial, rotational, and dihedral symmetry) can be detected without using libraries. In many cases it is faster than existing techniques. Furthermore, it is simpler than most existing techniques, and can easily be incorporated into existing software. The algorithm can also be used with virtually any type of matrix-based symmetry, including conjugate symmetry.

  13. Corrective Action Investigation Plan for Corrective Action Unit 214: Bunkers and Storage Areas Nevada Test Site, Nevada: Revision 0, Including Record of Technical Change No. 1 and No. 2

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2003-05-16

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 214 under the Federal Facility Agreement and Consent Order. Located in Areas 5, 11, and 25 of the Nevada Test Site, CAU 214 consists of nine Corrective Action Sites (CASs): 05-99-01, Fallout Shelters; 11-22-03, Drum; 25-99-12, Fly Ash Storage; 25-23-01, Contaminated Materials; 25-23-19, Radioactive Material Storage; 25-99-18, Storage Area; 25-34-03, Motor Dr/Gr Assembly (Bunker); 25-34-04, Motor Dr/Gr Assembly (Bunker); and 25-34-05, Motor Dr/Gr Assembly (Bunker). These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). The suspected contaminants and critical analytes for CAU 214 include oil (total petroleum hydrocarbons-diesel-range organics [TPH-DRO], polychlorinated biphenyls [PCBs]), pesticides (chlordane, heptachlor, 4,4-DDT), barium, cadmium, chromium, lubricants (TPH-DRO, TPH-gasoline-range organics [GRO]), and fly ash (arsenic). The land-use zones where CAU 214 CASs are located dictate that future land uses will be limited to nonresidential (i.e., industrial) activities. The results of this field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the corrective action decision document.

  14. Greedy heuristic algorithm for solving series of EEE components classification problems

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, A. L.; Antamoshkin, A. N.; Fedosov, V. V.

    2016-04-01

    Algorithms based on the agglomerative greedy heuristic demonstrate precise and stable results for clustering problems based on k-means and p-median models. Such algorithms are successfully implemented in the production of specialized EEE components for use in space systems, which includes testing each EEE device and detecting homogeneous production batches of the EEE components from the test results using p-median models. In this paper, the authors propose a new version of the genetic algorithm with the greedy agglomerative heuristic which allows solving a series of problems. Such an algorithm is useful for solving the k-means and p-median clustering problems when the number of clusters is unknown. Computational experiments on real data show that the precision of the result decreases only insignificantly in comparison with the initial genetic algorithm for solving a single problem.

  15. Accuracy and efficiency of algorithms for the demarcation of bacterial ecotypes from DNA sequence data.

    PubMed

    Francisco, Juan Carlos; Cohan, Frederick M; Krizanc, Danny

    2014-01-01

    Identification of closely related, ecologically distinct populations of bacteria would benefit microbiologists working in many fields including systematics, epidemiology and biotechnology. Several laboratories have recently developed algorithms aimed at demarcating such 'ecotypes'. We examine the ability of four of these algorithms to correctly identify ecotypes from sequence data. We tested the algorithms on synthetic sequences, with known history and habitat associations, generated under the stable ecotype model and on data from Bacillus strains isolated from Death Valley where previous work has confirmed the existence of multiple ecotypes. We found that one of the algorithms (ecotype simulation) performs significantly better than the others (AdaptML, GMYC, BAPS) in both instances. Unfortunately, it was also shown to be the least efficient of the four. While ecotype simulation is the most accurate, it is by a large margin the slowest of the algorithms tested. Attempts at improving its efficiency are underway.

  16. A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester

    2010-01-01

    A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides Ax = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
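
    A hedged sketch of the reuse idea (not the paper's full enrichment strategy): solutions from earlier right-hand sides span a small subspace, a Galerkin projection onto that subspace supplies the initial guess, and GMRES finishes the solve. The test matrix and right-hand sides are invented.

```python
# Galerkin-projection initial guess reused across a sequence of related solves.
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(1)
n = 200
A = np.eye(n) * 4 + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)

def solve_with_reuse(A, b, basis):
    if basis:                                   # project onto span of stored solutions
        W = np.linalg.qr(np.column_stack(basis))[0]
        y = np.linalg.solve(W.T @ A @ W, W.T @ b)
        x0 = W @ y
    else:
        x0 = np.zeros_like(b)
    x, info = gmres(A, b, x0=x0)
    return x

b0 = rng.standard_normal(n)
basis = []
for i in range(5):                              # sequence of related right-hand sides
    b = b0 + 0.05 * rng.standard_normal(n)
    x = solve_with_reuse(A, b, basis)
    basis.append(x)
print(np.linalg.norm(A @ basis[-1] - b))        # residual of the last solve
```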

  17. Anomalies detection in hyperspectral imagery using projection pursuit algorithm

    NASA Astrophysics Data System (ADS)

    Achard, Veronique; Landrevie, Anthony; Fort, Jean Claude

    2004-11-01

    Hyperspectral imagery provides detailed spectral information on the observed scene which enhances detection possibilities, in particular for subpixel targets. In this context, we have developed and compared several anomaly detection algorithms based on a projection pursuit approach. The projection pursuit is performed either on the PCA or on the MNF (Minimum Noise Fraction) components. Depending on the method, the best axes of the eigenvector basis are directly selected, or a genetic algorithm is used in order to optimize the projections. Two projection indices (PIs) have been tested: the kurtosis and the skewness. These different approaches have been tested on AVIRIS and HyMap hyperspectral images, in which subpixel targets have been included by simulation. The proportion of target in pixels varies from 50% to 10% of the surface. The results are presented and discussed. The performance of our detection algorithm is very satisfactory for target surfaces down to 10% of the pixel.
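
    A simplified sketch of the projection-pursuit approach follows: data are first reduced by PCA, then the direction whose projection has maximal kurtosis is sought (here by random search rather than the genetic algorithm), and the most extreme pixels along it are flagged. The data are synthetic, not the AVIRIS/HyMap scenes.

```python
# Projection pursuit with a kurtosis index on PCA scores (random-search variant).
import numpy as np

def kurtosis(v):
    v = (v - v.mean()) / v.std()
    return np.mean(v ** 4) - 3.0

def pursuit_anomalies(cube, n_components=10, n_trials=2000, top=20, seed=0):
    """cube: (n_pixels, n_bands) hyperspectral matrix."""
    rng = np.random.default_rng(seed)
    X = cube - cube.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ vt[:n_components].T                 # PCA scores
    best_dir, best_k = None, -np.inf
    for _ in range(n_trials):
        w = rng.standard_normal(n_components)
        w /= np.linalg.norm(w)
        k = kurtosis(scores @ w)
        if k > best_k:
            best_k, best_dir = k, w
    proj = scores @ best_dir
    return np.argsort(np.abs(proj - proj.mean()))[-top:]   # most extreme pixels

cube = np.random.randn(5000, 60)
cube[100] += 4.0                                     # simulated subpixel-like target
print(pursuit_anomalies(cube))                       # pixel 100 typically appears here
```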

  18. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  19. Contact solution algorithms

    NASA Technical Reports Server (NTRS)

    Tielking, John T.

    1989-01-01

    Two algorithms for obtaining static contact solutions are described in this presentation. Although they were derived for contact problems involving specific structures (a tire and a solid rubber cylinder), they are sufficiently general to be applied to other shell-of-revolution and solid-body contact problems. The shell-of-revolution contact algorithm is a method of obtaining a point load influence coefficient matrix for the portion of shell surface that is expected to carry a contact load. If the shell is sufficiently linear with respect to contact loading, a single influence coefficient matrix can be used to obtain a good approximation of the contact pressure distribution. Otherwise, the matrix will be updated to reflect nonlinear load-deflection behavior. The solid-body contact algorithm utilizes a Lagrange multiplier to include the contact constraint in a potential energy functional. The solution is found by applying the principle of minimum potential energy. The Lagrange multiplier is identified as the contact load resultant for a specific deflection. At present, only frictionless contact solutions have been obtained with these algorithms. A sliding tread element has been developed to calculate friction shear force in the contact region of the rolling shell-of-revolution tire model.

  20. Understanding Algorithms in Different Presentations

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János

    2015-01-01

    Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…

  1. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit Adaptive Meshing Algorithm Library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL’s triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia’s patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  2. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.

  3. The Economic Benefits of Personnel Selection Using Ability Tests: A State of the Art Review Including a Detailed Analysis of the Dollar Benefit of U.S. Employment Service Placements and a Critique of the Low-Cutoff Method of Test Use. USES Test Research Report No. 47.

    ERIC Educational Resources Information Center

    Hunter, John E.

    The economic impact of optimal selection using ability tests is far higher than is commonly known. For small organizations, dollar savings from higher productivity can run into millions of dollars a year. This report estimates the potential savings to the Federal Government as an employer as being 15.61 billion dollars per year if tests were given…

  4. New knowledge-based genetic algorithm for excavator boom structural optimization

    NASA Astrophysics Data System (ADS)

    Hua, Haiyan; Lin, Shuwen

    2014-03-01

    Due to the insufficiency of using knowledge to guide the complex optimal search, existing genetic algorithms fail to effectively solve the excavator boom structural optimization problem. To improve optimization efficiency and quality, a new knowledge-based real-coded genetic algorithm is proposed. A dual evolution mechanism combining knowledge evolution with a genetic algorithm is established to extract, handle and utilize the shallow and deep implicit constraint knowledge to guide the optimal search of the genetic algorithm cyclically. Based on this dual evolution mechanism, knowledge evolution and population evolution can be connected by knowledge influence operators to improve the configurability of knowledge and genetic operators. New knowledge-based selection, crossover and mutation operators are then proposed to integrate the optimal process knowledge and domain culture to guide the excavator boom structural optimization. Eight test algorithms, which include different genetic operators, are taken as examples to solve the structural optimization of a medium-sized excavator boom. Comparison of the optimization results shows that the algorithm including all the new knowledge-based genetic operators improves the evolutionary rate and search ability more markedly than the other test algorithms, which demonstrates the effectiveness of knowledge for guiding the optimal search. The proposed knowledge-based genetic algorithm, which combines multi-level knowledge evolution with numerical optimization, provides a new effective method for solving complex engineering optimization problems.

  5. A novel pseudoderivative-based mutation operator for real-coded adaptive genetic algorithms

    PubMed Central

    Kanwal, Maxinder S; Ramesh, Avinash S; Huang, Lauren A

    2013-01-01

    Recent development of large databases, especially those in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence methods, including pattern recognition (e.g. neural networks) and optimization techniques (e.g. genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that is derived from comparing the fitness values of successive generations. We propose a novel pseudoderivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and successfully continue to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima, but only one global optimum, as well as on the N-queens problem, an applied problem in which the function that maps the curve is implicit. For all tests, the adaptive mutation rate allowed the genetic algorithm to find the global optimal solution, performing significantly better than other search methods, including genetic algorithms that implement fixed mutation rates. PMID:24627784
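
    A minimal illustration of the adaptive idea (not the exact operator proposed in the paper): the mutation rate is adjusted from the change in best fitness between successive generations, rising when progress stalls so the search can escape local optima. The test function and all tuning constants are illustrative.

```python
# Real-coded GA with a fitness-change-driven adaptive mutation rate.
import numpy as np

def rastrigin(x):                                  # multimodal test surface
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def adaptive_ga(dim=2, pop=60, gens=300, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.uniform(-5.12, 5.12, (pop, dim))
    mut_rate, prev_best = 0.1, np.inf
    for _ in range(gens):
        fit = np.apply_along_axis(rastrigin, 1, P)
        best = fit.min()
        # "pseudoderivative": change in best fitness between generations
        stalled = (prev_best - best) < 1e-6
        mut_rate = min(0.5, mut_rate * 1.2) if stalled else max(0.01, mut_rate * 0.8)
        prev_best = best
        # tournament selection
        parents = np.empty_like(P)
        for i in range(pop):
            cand = rng.integers(0, pop, 3)
            parents[i] = P[cand[np.argmin(fit[cand])]]
        # arithmetic crossover and Gaussian mutation
        mates = parents[rng.permutation(pop)]
        alpha = rng.random((pop, 1))
        children = alpha * parents + (1 - alpha) * mates
        children += (rng.random(children.shape) < mut_rate) * rng.normal(0, 0.3, children.shape)
        children[0] = P[np.argmin(fit)]            # elitism: keep the best so far
        P = np.clip(children, -5.12, 5.12)
    fit = np.apply_along_axis(rastrigin, 1, P)
    return P[np.argmin(fit)], fit.min()

print(adaptive_ga())   # should land near the global optimum at the origin
```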

  6. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  7. LAHS: A novel harmony search algorithm based on learning automata

    NASA Astrophysics Data System (ADS)

    Enayatifar, Rasul; Yousefi, Moslem; Abdullah, Abdul Hanan; Darus, Amer Nordin

    2013-12-01

    This study presents a learning automata-based harmony search (LAHS) for unconstrained optimization of continuous problems. The harmony search (HS) algorithm performance strongly depends on the fine tuning of its parameters, including the harmony memory consideration rate (HMCR), pitch adjustment rate (PAR) and bandwidth (bw). Inspired by the spur-in-time responses in the musical improvisation process, learning capabilities are employed in the HS to select these parameters based on spontaneous reactions. An extensive numerical investigation is conducted on several well-known test functions, and the results are compared with the HS algorithm and its prominent variants, including the improved harmony search (IHS), global-best harmony search (GHS) and self-adaptive global-best harmony search (SGHS). The numerical results indicate that the LAHS is more efficient in finding optimum solutions and outperforms the existing HS algorithm variants.
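
    For readers unfamiliar with the three parameters being tuned, here is a plain harmony search loop for minimization; it shows where HMCR, PAR and bw enter, but it does not reproduce the paper's learning-automata adaptation, and the parameter defaults are arbitrary.

      import random

      def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000):
          """Basic harmony search; bw is taken as a fraction of each variable's range."""
          memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
          scores = [f(h) for h in memory]
          for _ in range(iters):
              new = []
              for j, (lo, hi) in enumerate(bounds):
                  if random.random() < hmcr:                   # harmony memory consideration
                      x = random.choice(memory)[j]
                      if random.random() < par:                # pitch adjustment
                          x += random.uniform(-bw, bw) * (hi - lo)
                  else:                                        # random selection
                      x = random.uniform(lo, hi)
                  new.append(min(hi, max(lo, x)))
              val = f(new)
              worst = max(range(hms), key=lambda i: scores[i])
              if val < scores[worst]:                          # replace the worst harmony
                  memory[worst], scores[worst] = new, val
          best = min(range(hms), key=lambda i: scores[i])
          return memory[best], scores[best]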

  8. SeaWiFS Science Algorithm Flow Chart

    NASA Technical Reports Server (NTRS)

    Darzi, Michael

    1998-01-01

    This flow chart describes the baseline science algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Data Processing System (SDPS). As such, it includes only processing steps used in the generation of the operational products that are archived by NASA's Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC). It is meant to provide the reader with a basic understanding of the scientific algorithm steps applied to SeaWiFS data. It does not include non-science steps, such as format conversions, and places the greatest emphasis on the geophysical calculations of the level-2 processing. Finally, the flow chart reflects the logic sequences and the conditional tests of the software so that it may be used to evaluate the fidelity of the implementation of the scientific algorithm. In many cases, however, the chart may deviate from the details of the software implementation so as to simplify the presentation.

  9. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
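
    Two of the performance metrics named above are simple to state in code; the sketch below computes a centered root-mean-square error and a linear trend error for a single homogenized series against its true counterpart (the benchmark's network-level aggregation is not reproduced here).

      import numpy as np

      def centered_rmse(homogenized, truth):
          """Centered RMSE: compare anomalies so that a constant offset between the
          homogenized and true series is not penalized."""
          h = homogenized - np.mean(homogenized)
          t = truth - np.mean(truth)
          return float(np.sqrt(np.mean((h - t) ** 2)))

      def trend_error(homogenized, truth):
          """Difference in least-squares linear trend (per time step)."""
          x = np.arange(len(truth))
          return float(np.polyfit(x, homogenized, 1)[0] - np.polyfit(x, truth, 1)[0])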

  10. Experimental validation of clock synchronization algorithms

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Graham, R. Lynn

    1992-01-01

    The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
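
    As a schematic of the fault-tolerant midpoint idea referred to above (not the exact algorithm validated in the report), each clock can trim the f most extreme offset readings and step toward the midpoint of what remains; the value of f and the sample offsets below are illustrative.

      def midpoint_correction(readings, f):
          """Discard the f smallest and f largest offset readings (possibly from
          faulty or malicious clocks) and return the midpoint of the survivors."""
          assert len(readings) > 2 * f, "need more than 2f readings"
          trimmed = sorted(readings)[f:len(readings) - f]
          return (trimmed[0] + trimmed[-1]) / 2.0

      # Offsets (in ticks) one clock has measured to its peers, tolerating f = 1 fault.
      offsets = [0.0, 1.2, -0.8, 35.0]           # 35.0 could come from a malicious clock
      print(midpoint_correction(offsets, f=1))   # 0.6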

  11. Short Time Exposure (STE) test in conjunction with Bovine Corneal Opacity and Permeability (BCOP) assay including histopathology to evaluate correspondence with the Globally Harmonized System (GHS) eye irritation classification of textile dyes.

    PubMed

    Oliveira, Gisele Augusto Rodrigues; Ducas, Rafael do Nascimento; Teixeira, Gabriel Campos; Batista, Aline Carvalho; Oliveira, Danielle Palma; Valadares, Marize Campos

    2015-09-01

    Eye irritation evaluation is mandatory for predicting health risks in consumers exposed to textile dyes. The two dyes, Reactive Orange 16 (RO16) and Reactive Green 19 (RG19), are classified as Category 2A (irritating to eyes) based on the UN Globally Harmonized System for classification (UN GHS), according to the Draize test. On the other hand, animal welfare considerations and the enforcement of a new regulation in the EU are drawing much attention to reducing or replacing animal experiments with alternative methods. This study evaluated the eye irritation of the two dyes RO16 and RG19 by combining the Short Time Exposure (STE) and the Bovine Corneal Opacity and Permeability (BCOP) assays and then comparing the results with in vivo data from the GHS classification. The STE test (first-level screening) categorized both dyes as GHS Category 1 (severe irritant). In the BCOP assay, dye RG19 was also classified as GHS Category 1, while for dye RO16 no GHS prediction could be made. Both dyes caused damage to the corneal tissue, as confirmed by histopathological analysis. Our findings demonstrated that the STE test did not contribute to a better conclusion about the eye irritation potential of the dyes when used in conjunction with the BCOP test. Adding histopathology to the BCOP test could be an appropriate tool for a more meaningful prediction of the eye irritation potential of dyes.

  12. Field Testing of LIDAR-Assisted Feedforward Control Algorithms for Improved Speed Control and Fatigue Load Reduction on a 600-kW Wind Turbine: Preprint

    SciTech Connect

    Kumar, Avishek A.; Bossanyi, Ervin A.; Scholbrock, Andrew K.; Fleming, Paul; Boquet, Mathieu; Krishnamurthy, Raghu

    2015-12-14

    A severe challenge in controlling wind turbines is ensuring controller performance in the presence of a stochastic and unknown wind field, relying on the response of the turbine to generate control actions. Recent technologies such as LIDAR allow sensing of the wind field before it reaches the rotor. In this work, a field-testing campaign to test LIDAR Assisted Control (LAC) has been undertaken on a 600-kW turbine using a fixed, five-beam LIDAR system. The campaign compared the performance of a baseline controller to four LACs with progressively lower levels of feedback using 35 hours of collected data.

  13. Algorithms for Contact in a Mulitphysics Environment

    2001-12-19

    Many codes require either a contact capability or a need to determine geometric proximity of non-connected topological entities (which is a subset of what contact requires). ACME is a library to provide services to determine contact forces and/or geometric proximity interactions. This includes generic capabilities such as determining points in Cartesian volumes, finding faces in Cartesian volumes, etc. ACME can be run in single or multi-processor mode (the basic algorithms have been tested up to 4500 processors).

  14. Comparative evaluation of the VITEK 2, disk diffusion, etest, broth microdilution, and agar dilution susceptibility testing methods for colistin in clinical isolates, including heteroresistant Enterobacter cloacae and Acinetobacter baumannii strains.

    PubMed

    Lo-Ten-Foe, Jerome R; de Smet, Anne Marie G A; Diederen, Bram M W; Kluytmans, Jan A J W; van Keulen, Peter H J

    2007-10-01

    Increasing antibiotic resistance in gram-negative bacteria has recently renewed interest in colistin as a therapeutic option. The increasing use of colistin necessitates the availability of rapid and reliable methods for colistin susceptibility testing. We compared seven methods of colistin susceptibility testing (disk diffusion, agar dilution on Mueller-Hinton [MH] and Isosensitest agar, Etest on MH and Isosensitest agar, broth microdilution, and VITEK 2) on 102 clinical isolates collected from patient materials during a selective digestive decontamination or selective oral decontamination trial in an intensive-care unit. Disk diffusion is an unreliable method to measure susceptibility to colistin. High error rates and low levels of reproducibility were observed in the disk diffusion test. The colistin Etest, agar dilution, and the VITEK 2 showed a high level of agreement with the broth microdilution reference method. Heteroresistance for colistin was observed in six Enterobacter cloacae isolates and in one Acinetobacter baumannii isolate. This is the first report of heteroresistance to colistin in E. cloacae isolates. Resistance to colistin in these isolates seemed to be induced upon exposure to colistin rather than being caused by stable mutations. Heteroresistant isolates could be detected in the broth microdilution, agar dilution, Etest, or disk diffusion test. The VITEK 2 displayed low sensitivity in the detection of heteroresistant subpopulations of E. cloacae. The VITEK 2 colistin susceptibility test can therefore be considered to be a reliable tool to determine susceptibility to colistin in isolates of genera that are known not to exhibit resistant subpopulations. In isolates of genera known to (occasionally) exhibit heteroresistance, an alternative susceptibility testing method capable of detecting heteroresistance should be used.

  15. SUBSURFACE RESIDENCE TIMES AS AN ALGORITHM FOR AQUIFER SENSITIVITY MAPPING: TESTING THE CONCEPT WITH ANALYTIC ELEMENT GROUND WATER MODELS IN THE CONTENTNEA CREEK BASIN, NORTH CAROLINA, USA

    EPA Science Inventory

    The objective of this research is to test the utility of simple functions of spatially integrated and temporally averaged ground water residence times in shallow "groundwatersheds" with field observations and more detailed computer simulations. The residence time of water in the...

  16. Enhancing Orthographic Competencies and Reducing Domain-Specific Test Anxiety: The Systematic Use of Algorithmic and Self-Instructional Task Formats in Remedial Spelling Training

    ERIC Educational Resources Information Center

    Faber, Gunter

    2010-01-01

    In this study the effects of a remedial spelling training approach were evaluated, which systematically combines certain visualization and verbalization methods to foster students' spelling knowledge and strategy use. Several achievement and test anxiety data from three measurement times were analyzed. All students displayed severe spelling…

  17. The footprint of old syphilis: using a reverse screening algorithm for syphilis testing in a U.S. Geographic Information Systems-Based Community Outreach Program.

    PubMed

    Goswami, Neela D; Stout, Jason E; Miller, William C; Hecker, Emily J; Cox, Gary M; Norton, Brianna L; Sena, Arlene C

    2013-11-01

    The impact of syphilis reverse sequence screening has not been evaluated in community outreach. Using reverse sequence screening in neighborhoods identified with geographic information systems, we found that among 239 participants, 45 (19%) were seropositive. Of these, 3 (7%) had untreated syphilis, 33 (73%) had previously treated syphilis infection, and 9 (20%) had negative nontreponemal test results.

  18. Hybrid Bearing Prognostic Test Rig

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Certo, Joseph M.; Handschuh, Robert F.; Dimofte, Florin

    2005-01-01

    The NASA Glenn Research Center has developed a new Hybrid Bearing Prognostic Test Rig to evaluate the performance of sensors and algorithms in predicting failures of rolling element bearings for aeronautics and space applications. The failure progression of both conventional and hybrid (ceramic rolling elements, metal races) bearings can be tested from fault initiation to total failure. The effects of different lubricants on bearing life can also be evaluated. Test conditions monitored and recorded during the test include load, oil temperature, vibration, and oil debris. New diagnostic research instrumentation will also be evaluated for hybrid bearing damage detection. This paper summarizes the capabilities of this new test rig.

  19. EDSP Tier 2 test (T2T) guidances and protocols are delivered, including web-based guidance for diagnosing and scoring, and evaluating EDC-induced pathology in fish and amphibian

    EPA Science Inventory

    The Agency’s Endocrine Disruptor Screening Program (EDSP) consists of two tiers. The first tier provides information regarding whether a chemical may have endocrine disruption properties. Tier 2 tests provide confirmation of ED effects and dose-response information to be us...

  20. An Efficient Pattern Matching Algorithm

    NASA Astrophysics Data System (ADS)

    Sleit, Azzam; Almobaideen, Wesam; Baarah, Aladdin H.; Abusitta, Adel H.

    In this study, we present an efficient algorithm for pattern matching based on the combination of hashing and search trees. The proposed solution is classified as an offline algorithm. Although, this study demonstrates the merits of the technique for text matching, it can be utilized for various forms of digital data including images, audio and video. The performance superiority of the proposed solution is validated analytically and experimentally.
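
    The abstract gives no implementation details, so the sketch below is only one plausible reading of "hashing plus a lookup structure" for offline matching: hash every length-k substring during preprocessing, then verify candidates at query time (a plain dict stands in for the paper's search trees, and the index only supports patterns of the indexed length).

      from collections import defaultdict

      def build_index(text, k):
          """Offline preprocessing: store the positions of every length-k substring,
          keyed by its hash."""
          index = defaultdict(list)
          for i in range(len(text) - k + 1):
              index[hash(text[i:i + k])].append(i)
          return index

      def find(text, pattern, index):
          """Look up the pattern's hash, then verify candidates to rule out collisions.
          Works only for patterns whose length matches the k used to build the index."""
          k = len(pattern)
          return [i for i in index.get(hash(pattern), []) if text[i:i + k] == pattern]

      text = "abracadabra"
      idx = build_index(text, k=4)
      print(find(text, "abra", idx))   # [0, 7]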

  1. The Loop Algorithm

    NASA Astrophysics Data System (ADS)

    Evertz, Hans Gerd

    1998-03-01

    Exciting new investigations have recently become possible for strongly correlated systems of spins, bosons, and fermions, through Quantum Monte Carlo simulations with the Loop Algorithm (H.G. Evertz, G. Lana, and M. Marcu, Phys. Rev. Lett. 70, 875 (1993).) (For a recent review see: H.G. Evertz, cond-mat/9707221.) and its generalizations. A review of this new method, its generalizations and its applications is given, including some new results. The Loop Algorithm is based on a formulation of physical models in an extended ensemble of worldlines and graphs, and is related to Swendsen-Wang cluster algorithms. It performs nonlocal changes of worldline configurations, determined by local stochastic decisions. It overcomes many of the difficulties of traditional worldline simulations. Computer time requirements are reduced by orders of magnitude, through a corresponding reduction in autocorrelations. The grand-canonical ensemble (e.g. varying winding numbers) is naturally simulated. The continuous time limit can be taken directly. Improved Estimators exist which further reduce the errors of measured quantities. The algorithm applies unchanged in any dimension and for varying bond-strengths. It becomes less efficient in the presence of strong site disorder or strong magnetic fields. It applies directly to locally XYZ-like spin, fermion, and hard-core boson models. It has been extended to the Hubbard and the tJ model and generalized to higher spin representations. There have already been several large scale applications, especially for Heisenberg-like models, including a high statistics continuous time calculation of quantum critical exponents on a regularly depleted two-dimensional lattice of up to 20000 spatial sites at temperatures down to T=0.01 J.

  2. The development of algorithms for parallel knowledge discovery using graphics accelerators

    NASA Astrophysics Data System (ADS)

    Zieliński, Paweł; Mulawka, Jan

    2011-10-01

    The paper addresses selected knowledge discovery algorithms. Different implementations have been verified on parallel platforms, including graphics accelerators using CUDA technology, multi-core microprocessors using OpenMP, and systems with many graphics accelerators. Results of the investigations have been compared in terms of performance and scalability. Different types of data representation were also tested. The possibilities of both platforms are discussed using three classification algorithms: k-nearest neighbors, support vector machines, and logistic regression.

  3. Fast algorithms for combustion kinetics calculations: A comparison

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

    To identify the fastest algorithm currently available for the numerical integration of chemical kinetic rate equations, several algorithms were examined. Findings to date are summarized. The algorithms examined include two general-purpose codes, EPISODE and LSODE, and three special-purpose (for chemical kinetic calculations) codes, CHEMEQ, CREK1D, and GCKP84. In addition, an explicit Runge-Kutta-Merson differential equation solver (IMSL Routine DASCRU) is used to illustrate the problems associated with integrating chemical kinetic rate equations by a classical method. The algorithms were applied to two test problems drawn from combustion kinetics. These problems included all three combustion regimes: induction, heat release, and equilibration. Variations of the temperature and species mole fractions with time are given for test problems 1 and 2, respectively. Both test problems were integrated over a time interval of 1 ms in order to obtain near-equilibration of all species and temperature. Of the codes examined in this study, only CREK1D and GCKP84 were written explicitly for integrating exothermic, non-isothermal combustion rate equations. These therefore have built-in procedures for calculating the temperature.
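
    None of the codes named above are shown here, but the stiffness issue they address is easy to demonstrate with SciPy on Robertson's classic kinetics problem (an assumed stand-in, not one of the report's test problems): an implicit BDF solver, in the spirit of LSODE, handles it comfortably, whereas an explicit Runge-Kutta method needs impractically small steps.

      import numpy as np
      from scipy.integrate import solve_ivp

      def robertson(t, y):
          """Robertson's stiff reaction kinetics system."""
          y1, y2, y3 = y
          return [-0.04 * y1 + 1.0e4 * y2 * y3,
                   0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2 ** 2,
                   3.0e7 * y2 ** 2]

      # An implicit (BDF) method integrates this stiff system efficiently.
      sol = solve_ivp(robertson, (0.0, 100.0), [1.0, 0.0, 0.0],
                      method="BDF", rtol=1e-6, atol=1e-10)
      print(sol.y[:, -1])   # species fractions approaching equilibration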

  4. Normative data for the "Sniffin' Sticks" including tests of odor identification, odor discrimination, and olfactory thresholds: an upgrade based on a group of more than 3,000 subjects.

    PubMed

    Hummel, T; Kobal, G; Gudziol, H; Mackay-Sim, A

    2007-03-01

    "Sniffin' Sticks" is a test of nasal chemosensory function that is based on pen-like odor dispensing devices, introduced some 10 years ago by Kobal and co-workers. It consists of tests for odor threshold, discrimination, and identification. Previous work established its test-retest reliability and validity. Results of the test are presented as "TDI score", the sum of results obtained for threshold, discrimination, and identification measures. While normative data have been established they are based on a relatively small number of subjects, especially with regard to subjects older than 55 years where data from only 30 healthy subjects have been used. The present study aimed to remedy this situation. Now data are available from 3,282 subjects as compared to data from 738 subjects published previously. Disregarding sex-related differences, the TDI score at the tenth percentile was 24.9 in subjects younger than 15 years, 30.3 for ages from 16 to 35 years, 27.3 for ages from 36 to 55 years, and 19.6 for subjects older than 55 years. Because the tenth percentile has been defined to separate hyposmia from normosmia, these data can be used as a guide to estimate individual olfactory ability in relation to subject's age. Absolute hyposmia was defined as the tenth percentile score of 16-35 year old subjects. Other than previous reports the present norms are also sex-differentiated with women outperforming men in the three olfactory tests. Further, the present data suggest specific changes of individual olfactory functions in relation to age, with odor thresholds declining most dramatically compared to odor discrimination and odor identification. PMID:17021776

  5. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  6. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology, based upon human visual characteristics, for assessing image restoration accuracy, and to compare the subjective results with predictions by several objective evaluation methods. In total, six different super resolution (SR) algorithms - iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency-domain approach - were selected. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method, which involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods are implemented. Consequently, POCS and non-uniform interpolation outperformed the others for an ideal situation, while restoration-based methods appear more accurate to the HR image in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of those methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of those SR algorithms.
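
    Of the six algorithms compared, iterative back-projection is the simplest to sketch; the toy version below uses plain block averaging as the (assumed) degradation model and nearest-neighbour upsampling, so it only illustrates the back-projection loop, not any of the implementations evaluated in the study.

      import numpy as np

      def iterative_back_projection(lr, scale=2, iters=30, step=0.1):
          """Upsample, simulate re-acquisition by block averaging, and back-project
          the low-resolution error onto the high-resolution estimate."""
          hr = np.kron(lr, np.ones((scale, scale)))            # initial HR guess
          for _ in range(iters):
              simulated = hr.reshape(lr.shape[0], scale,
                                     lr.shape[1], scale).mean(axis=(1, 3))
              error = lr - simulated                           # LR-domain residual
              hr += step * np.kron(error, np.ones((scale, scale)))
          return hr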

  7. Intra-and-Inter Species Biomass Prediction in a Plantation Forest: Testing the Utility of High Spatial Resolution Spaceborne Multispectral RapidEye Sensor and Advanced Machine Learning Algorithms

    PubMed Central

    Dube, Timothy; Mutanga, Onisimo; Adam, Elhadi; Ismail, Riyad

    2014-01-01

    The quantification of aboveground biomass using remote sensing is critical for better understanding the role of forests in carbon sequestration and for informed sustainable management. Although remote sensing techniques have been proven useful in assessing forest biomass in general, more is required to investigate their capabilities in predicting intra-and-inter species biomass which are mainly characterised by non-linear relationships. In this study, we tested two machine learning algorithms, Stochastic Gradient Boosting (SGB) and Random Forest (RF) regression trees to predict intra-and-inter species biomass using high resolution RapidEye reflectance bands as well as the derived vegetation indices in a commercial plantation. The results showed that the SGB algorithm yielded the best performance for intra-and-inter species biomass prediction; using all the predictor variables as well as based on the most important selected variables. For example using the most important variables the algorithm produced an R2 of 0.80 and RMSE of 16.93 t·ha−1 for E. grandis; R2 of 0.79, RMSE of 17.27 t·ha−1 for P. taeda and R2 of 0.61, RMSE of 43.39 t·ha−1 for the combined species data sets. Comparatively, RF yielded plausible results only for E. dunii (R2 of 0.79; RMSE of 7.18 t·ha−1). We demonstrated that although the two statistical methods were able to predict biomass accurately, RF produced weaker results as compared to SGB when applied to combined species dataset. The result underscores the relevance of stochastic models in predicting biomass drawn from different species and genera using the new generation high resolution RapidEye sensor with strategically positioned bands. PMID:25140631
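
    A compact way to reproduce this kind of comparison is with scikit-learn's stochastic gradient boosting (GradientBoostingRegressor with subsampling) and random forest on synthetic stand-in data; the predictors, response and hyperparameters below are placeholders, not the RapidEye variables or settings from the study.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.random((120, 8))                                        # fake band/index predictors
      y = 50 + 40 * X[:, 0] - 25 * X[:, 3] + rng.normal(0, 5, 120)    # fake biomass (t/ha)

      sgb = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                      max_depth=3, subsample=0.7, random_state=0)
      rf = RandomForestRegressor(n_estimators=500, random_state=0)

      for name, model in [("SGB", sgb), ("RF", rf)]:
          r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
          print(name, round(float(r2), 2))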

  8. A High Precision Terahertz Wave Image Reconstruction Algorithm

    PubMed Central

    Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang

    2016-01-01

    With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always supposed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as the Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices the quality of images for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, which combines features of both of the classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performance of PMA is studied in detail in contrast to BPA and RMA. It is demonstrated in our simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269

  10. Algorithm to assess causality after individual adverse events following immunizations.

    PubMed

    Halsey, Neal A; Edwards, Kathryn M; Dekker, Cornelia L; Klein, Nicola P; Baxter, Roger; Larussa, Philip; Marchant, Colin; Slade, Barbara; Vellozzi, Claudia

    2012-08-24

    Assessing individual reports of adverse events following immunizations (AEFI) can be challenging. Most published reviews are based on expert opinions, but the methods and logic used to arrive at these opinions are neither well described nor understood by many health care providers and scientists. We developed a standardized algorithm to assist in collecting and interpreting data, and to help assess causality after individual AEFI. Key questions that should be asked during the assessment of AEFI include: Is the diagnosis of the AEFI correct? Does clinical or laboratory evidence exist that supports possible causes for the AEFI other than the vaccine in the affected individual? Is there a known causal association between the AEFI and the vaccine? Is there strong evidence against a causal association? Is there a specific laboratory test implicating the vaccine in the pathogenesis? An algorithm can assist with addressing these questions in a standardized, transparent manner which can be tracked and reassessed if additional information becomes available. Examples in this document illustrate the process of using the algorithm to determine causality. As new epidemiologic and clinical data become available, the algorithm and guidelines will need to be modified. Feedback from users of the algorithm will be invaluable in this process. We hope that this algorithm approach can assist with educational efforts to improve the collection of key information on AEFI and provide a platform for teaching about causality assessment.
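
    The key questions quoted above can be read as an ordered decision sequence; the function below is only a coarse paraphrase of that sequence for illustration, with invented verdict labels, and is not the published algorithm.

      def aefi_causality(diagnosis_correct, evidence_for_other_cause,
                         known_association, strong_evidence_against,
                         lab_test_implicates_vaccine):
          """Walk the abstract's key questions in order and return a coarse verdict."""
          if not diagnosis_correct:
              return "reassess the diagnosis first"
          if evidence_for_other_cause:
              return "alternative cause more likely"
          if strong_evidence_against:
              return "causal association unlikely"
          if lab_test_implicates_vaccine or known_association:
              return "consistent with a causal association"
          return "indeterminate"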

  11. Committee Meeting of Assembly Education Committee "To Receive Testimony from the Commissioner of Education, Mary Lee Fitzgerald, Department Staff, and Others Concerning the Department's Skills Testing Program, Including the Early Warning Test and High School Proficiency Test, Pursuant to Assembly Resolution No. 113."

    ERIC Educational Resources Information Center

    New Jersey State Office of Legislative Services, Trenton. Assembly Education Committee.

    The Assembly Education Committee of the New Jersey Office of Legislative Services held a hearing pursuant to Assembly Resolution 113, a proposal directing the Committee to investigate the skills testing program developed and administered to New Jersey children by the State Department of Education. The Committee was interested in the eighth-grade…

  12. Variational Algorithms for Drift and Collisional Guiding Center Dynamics

    NASA Astrophysics Data System (ADS)

    Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.

    2014-10-01

    The simulation of guiding center test particle dynamics in the upcoming generation of magnetic confinement devices requires novel numerical methods to obtain the necessary long-term numerical fidelity. Geometric algorithms, which retain conserved quantities in the numerical time advances, are well-known to exhibit excellent long simulation time behavior. Due to the non-canonical Hamiltonian structure of the guiding center equations of motion, it is only recently that geometric algorithms have been developed for guiding center dynamics. This poster will discuss and compare several families of variational algorithms for application to 3-D guiding center test particle studies, while benchmarking the methods against standard Runge-Kutta techniques. Time-to-solution improvements using GPGPU hardware will be presented. Additionally, collisional dynamics will be incorporated into the structure-preserving guiding center algorithms for the first time. Non-Hamiltonian effects, such as polarization drag and simplified stochastic operators, can be incorporated using a Lagrange-d'Alembert variational principle. The long-time behavior of variational algorithms which include dissipative dynamics will be compared against standard techniques. This work was supported by DOE Contract DE-AC02-09CH11466.
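
    The general point that structure-preserving integrators keep conserved quantities bounded while non-geometric ones drift can be shown on a much simpler system than the guiding-center equations; the comparison below uses a harmonic oscillator and explicit Euler as the non-geometric baseline (an assumed stand-in, not the Runge-Kutta schemes benchmarked in the poster).

      def symplectic_euler(q, p, h):
          """Structure-preserving step for H = p^2/2 + q^2/2."""
          p = p - h * q
          q = q + h * p
          return q, p

      def explicit_euler(q, p, h):
          """Non-geometric reference step; its energy error grows without bound."""
          return q + h * p, p - h * q

      def energy_drift(stepper, steps=100000, h=0.01):
          q, p = 1.0, 0.0
          for _ in range(steps):
              q, p = stepper(q, p, h)
          return abs(0.5 * (q * q + p * p) - 0.5)   # deviation from initial energy

      print("symplectic Euler drift:", energy_drift(symplectic_euler))
      print("explicit Euler drift:  ", energy_drift(explicit_euler))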

  13. A distributed Canny edge detector: algorithm and FPGA implementation.

    PubMed

    Xu, Qian; Varadarajan, Srenivas; Chakrabarti, Chaitali; Karam, Lina J

    2014-07-01

    The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it computationally more intensive as compared with other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block-level leads to excessive edges in smooth regions and to loss of significant edges in high-detailed regions since the original Canny computes the high and low thresholds based on the frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than the original frame-based algorithm, especially when noise is present in the images. Finally, this algorithm is implemented using a 32 computing engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM READ/WRITE time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100
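
    A naive per-block Canny in OpenCV, sketched below, illustrates why block-level thresholds must be chosen adaptively; the median-based rule here is a common heuristic standing in for the paper's gradient-histogram classification, the file name is hypothetical, and the sketch does not reproduce the paper's block-type analysis or FPGA pipeline.

      import cv2
      import numpy as np

      def blockwise_canny(gray, block=64):
          """Run Canny per block with thresholds derived from each block's own statistics."""
          edges = np.zeros_like(gray)
          for r in range(0, gray.shape[0], block):
              for c in range(0, gray.shape[1], block):
                  tile = gray[r:r + block, c:c + block]
                  m = float(np.median(tile))
                  lo, hi = int(max(0, 0.66 * m)), int(min(255, 1.33 * m))
                  edges[r:r + block, c:c + block] = cv2.Canny(tile, lo, hi)
          return edges

      gray = cv2.imread("test_image.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
      if gray is not None:
          edges = blockwise_canny(gray)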

  15. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  16. Using gaming engines and editors to construct simulations of fusion algorithms for situation management

    NASA Astrophysics Data System (ADS)

    Lewis, Lundy M.; DiStasio, Nolan; Wright, Christopher

    2010-04-01

    In this paper we discuss issues in testing various cognitive fusion algorithms for situation management. We provide a proof-of-principle discussion and demo showing how gaming technologies and platforms could be used to devise and test various fusion algorithms, including input, processing, and output, and we look at how the proof-of-principle could lead to more advanced test beds and methods for high-level fusion in support of situation management. We develop four simple fusion scenarios and one more complex scenario in which a simple rule-based system is scripted to govern the behavior of battlespace entities.

  17. Radar target identification by natural resonances: Evaluation of signal processing algorithms

    NASA Astrophysics Data System (ADS)

    Lazarakos, Gregory A.

    1991-09-01

    When a radar pulse impinges upon a target, the resultant scattering process can be solved as a linear time-invariant (LTI) system problem. The system has a transfer function with poles and zeros. Previous work has shown that the poles are dependent on the target's structure and geometry. This thesis evaluates the resonance estimation performance of two signal processing techniques: the Kumaresan-Tufts algorithm and the Cadzow-Solomon algorithm. Improvements are made to the Cadzow-Solomon algorithm. Both algorithms are programmed using MATLAB. Test data used to evaluate these algorithms include synthetic and integral-equation-generated signals, with and without additive noise, in addition to new experimental scattering data from a thin wire, aluminum spheres, and scale-model aircraft.

  18. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  19. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  20. A new algorithm for constrained nonlinear least-squares problems, part 1

    NASA Technical Reports Server (NTRS)

    Hanson, R. J.; Krogh, F. T.

    1983-01-01

    A Gauss-Newton algorithm is presented for solving nonlinear least squares problems. The problem statement may include simple bounds or more general constraints on the unknowns. The algorithm uses a trust region that allows the objective function to increase with logic for retreating to best values. The computations for the linear problem are done using a least squares system solver that allows for simple bounds and linear constraints. The trust region limits are defined by a box around the current point. In its current form the algorithm is effective only for problems with small residuals, linear constraints and dense Jacobian matrices. Results on a set of test problems are encouraging.
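
    The 1983 code itself is not shown here; the sketch below only demonstrates the same class of problem, a small-residual nonlinear least-squares fit with simple bounds, using SciPy's trust-region least_squares solver as a modern stand-in.

      import numpy as np
      from scipy.optimize import least_squares

      t = np.linspace(0.0, 1.0, 20)
      rng = np.random.default_rng(1)
      y_obs = 2.0 * np.exp(-1.3 * t) + 0.01 * rng.normal(size=t.size)

      def residuals(p):
          a, k = p
          return a * np.exp(-k * t) - y_obs          # small residuals at the solution

      # Trust-region reflective method handles the simple bounds on (a, k).
      fit = least_squares(residuals, x0=[1.0, 1.0],
                          bounds=([0.0, 0.0], [10.0, 5.0]), method="trf")
      print(fit.x)                                   # close to (2.0, 1.3)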

  1. JPSS CGS Tools For Rapid Algorithm Updates

    NASA Astrophysics Data System (ADS)

    Smith, D. C.; Grant, K. D.

    2011-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS will contribute the afternoon orbit component and ground processing system of the restructured National Polar-orbiting Operational Environmental Satellite System (NPOESS). As such, JPSS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the ground processing component of both POES and the Defense Meteorological Satellite Program (DMSP) replacement known as the Defense Weather Satellite System (DWSS), managed by the Department of Defense (DoD). The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS), and consists of a Command, Control, and Communications Segment (C3S) and the Interface Data Processing Segment (IDPS). Both are developed by Raytheon Intelligence and Information Systems (IIS). The Interface Data Processing Segment will process NPOESS Preparatory Project, Joint Polar Satellite System and Defense Weather Satellite System satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. Under NPOESS, Northrop Grumman Aerospace Systems Algorithms and Data Products (A&DP) organization was responsible for the algorithms that produce the EDRs, including their quality aspects. For JPSS, that responsibility has transferred to NOAA's Center for Satellite Applications & Research (STAR). As the Calibration and Validation (Cal/Val) activities move forward following both the NPP launch and subsequent JPSS and DWSS launches, rapid algorithm updates may be required. Raytheon and

  2. Streamlined Approach for Environmental Restoration (SAFER) Plan for Corrective Action Unit 357: Mud Pits and Waste Dump, Nevada Test Site, Nevada: Revision 0, Including Record of Technical Change No. 1

    SciTech Connect

    2003-06-25

    This Streamlined Approach for Environmental Restoration (SAFER) plan was prepared as a characterization and closure report for Corrective Action Unit (CAU) 357, Mud Pits and Waste Dump, in accordance with the Federal Facility Agreement and Consent Order. The CAU consists of 14 Corrective Action Sites (CASs) located in Areas 1, 4, 7, 8, 10, and 25 of the Nevada Test Site (NTS). All of the CASs are found within Yucca Flat except CAS 25-15-01 (Waste Dump). Corrective Action Site 25-15-01 is found in Area 25 in Jackass Flat. Of the 14 CASs in CAU 357, 11 are mud pits, suspected mud pits, or mud processing-related sites, which are by-products of drilling activities in support of the underground nuclear weapons testing done on the NTS. Of the remaining CASs, one CAS is a waste dump, one CAS contains scattered lead bricks, and one CAS has a building associated with Project 31.2. All 14 of the CASs are inactive and abandoned. Clean closure with no further action of CAU 357 will be completed if no contaminants are detected above preliminary action levels. A closure report will be prepared and submitted to the Nevada Division of Environmental Protection for review and approval upon completion of the field activities. Record of Technical Change No. 1 is dated 3/2004.

  3. Thyroid Tests

    MedlinePlus

    ... calories and how fast your heart beats. Thyroid tests check how well your thyroid is working. They ... thyroid diseases such as hyperthyroidism and hypothyroidism. Thyroid tests include blood tests and imaging tests. Blood tests ...

  4. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm, which includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).
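
    The two headline numbers quoted (scene cloud-fraction error and pixel-level agreement with the manual mask) reduce to a few lines of NumPy; the sketch below assumes both masks are boolean arrays of the same shape.

      import numpy as np

      def mask_scores(algorithm_mask, manual_mask):
          """Scene cloud-fraction bias and fraction of pixels where the two masks agree."""
          algo = np.asarray(algorithm_mask, dtype=bool)
          ref = np.asarray(manual_mask, dtype=bool)
          fraction_error = float(algo.mean() - ref.mean())
          agreement = float(np.mean(algo == ref))
          return fraction_error, agreement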

  5. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, net-books and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application for various template matching tasks such as face-recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
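
    The benchmark kernel described, image correlation for template matching, corresponds to a single OpenCV call on the ARM side; the file names below are hypothetical, and the DSP offload discussed in the paper is not shown.

      import cv2

      scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)            # hypothetical inputs
      template = cv2.imread("face_template.png", cv2.IMREAD_GRAYSCALE)

      if scene is not None and template is not None:
          # Normalized cross-correlation; the paper notes the C64x DSP core is
          # better suited to the DFT-based formulation of the same operation.
          result = cv2.matchTemplate(scene, template, cv2.TM_CCORR_NORMED)
          _, max_val, _, max_loc = cv2.minMaxLoc(result)
          print("best match at", max_loc, "score", max_val)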

  6. Parallel algorithms for matrix computations

    SciTech Connect

    Plemmons, R.J.

    1990-01-01

    The present conference on parallel algorithms for matrix computations encompasses both shared-memory systems and distributed-memory systems, as well as combinations of the two, to provide an overall perspective on parallel algorithms for both dense and sparse matrix computations in solving systems of linear equations, dense or structured problems related to least-squares computations, eigenvalue computations, singular-value computations, and rapid elliptic solvers. Specific issues addressed include the influence of parallel and vector architectures on algorithm design, computations for distributed-memory architectures such as hypercubes, solutions for sparse symmetric positive definite linear systems, symbolic and numeric factorizations, and triangular solutions. Also addressed are reference sources for parallel and vector numerical algorithms, sources for machine architectures, and sources for programming languages.

  7. Development, Comparisons and Evaluation of Aerosol Retrieval Algorithms

    NASA Astrophysics Data System (ADS)

    de Leeuw, G.; Holzer-Popp, T.; Aerosol-cci Team

    2011-12-01

    The Climate Change Initiative (cci) of the European Space Agency (ESA) has brought together a team of European Aerosol retrieval groups working on the development and improvement of aerosol retrieval algorithms. The goal of this cooperation is the development of methods to provide the best possible information on climate and climate change based on satellite observations. To achieve this, algorithms are characterized in detail as regards the retrieval approaches, the aerosol models used in each algorithm, cloud detection and surface treatment. A round-robin intercomparison of results from the various participating algorithms serves to identify the best modules or combinations of modules for each sensor. Annual global datasets including their uncertainties will then be produced and validated. The project builds on 9 existing algorithms to produce spectral aerosol optical depth (AOD and Ångström exponent) as well as other aerosol information; two instruments are included to provide the absorbing aerosol index (AAI) and stratospheric aerosol information. The algorithms included are: - 3 for ATSR (ORAC developed by RAL / Oxford University, ADV developed by FMI, and the SU algorithm developed by Swansea University) - 2 for MERIS (BAER by Bremen University and the ESA standard handled by HYGEOS) - 1 for POLDER over ocean (LOA) - 1 for synergetic retrieval (SYNAER by DLR) - 1 for OMI retrieval of the absorbing aerosol index with averaging kernel information (KNMI) - 1 for GOMOS stratospheric extinction profile retrieval (BIRA) The first seven algorithms aim at the retrieval of the AOD. However, each of the algorithms differs in its approach, even for algorithms working with the same instrument such as ATSR or MERIS. To analyse the strengths and weaknesses of each algorithm several tests are made. The starting point for comparison and measurement of improvements is a retrieval run for 1 month, September 2008. The data from the same month are subsequently used for

  8. Performance analysis of freeware filtering algorithms for determining ground surface from airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Julge, Kalev; Ellmann, Artu; Gruno, Anti

    2014-01-01

    Numerous filtering algorithms have been developed in order to distinguish the ground surface from nonground points acquired by airborne laser scanning. These algorithms automatically attempt to determine the ground points using various features such as predefined parameters and statistical analysis. Their efficiency also depends on landscape characteristics. The aim of this contribution is to test the performance of six common filtering algorithms embedded in three freeware programs. The algorithms adaptive TIN, elevation threshold with expand window, maximum local slope, progressive morphology, multiscale curvature, and linear prediction were tested on four relatively large (4 to 8 km2) and diverse landscape areas, which included steep-sloped hills, urban areas, ridge-like eskers, and a river valley. The results show that in the diverse test areas each algorithm yields different commission and omission errors. It appears that adaptive TIN is most suitable in urban areas, while the multiscale curvature algorithm is best suited to wooded areas. The multiscale curvature algorithm yielded the overall best results, with average root-mean-square error values of 0.35 m.
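
    The commission and omission errors used to judge the filters can be computed directly from boolean ground/non-ground labels, as in this short sketch (the Type I/II interpretation of omission and commission follows common ALS-filtering usage and is an assumption here).

      import numpy as np

      def filter_errors(predicted_ground, true_ground):
          """Omission error: true ground points rejected; commission error:
          non-ground points accepted as ground."""
          pred = np.asarray(predicted_ground, dtype=bool)
          true = np.asarray(true_ground, dtype=bool)
          omission = float(np.sum(~pred & true)) / max(1, int(np.sum(true)))
          commission = float(np.sum(pred & ~true)) / max(1, int(np.sum(~true)))
          return omission, commission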

  9. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state-of-the-art expert system environments contain a single, often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment that contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but must instead include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. The approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst-case analysis), optimistic reasoning (i.e., best-case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
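    To make the assumption-dependent nature of such combination rules concrete, here is a minimal sketch using textbook probability and fuzzy-set formulas for the conjunction of two assertions; the function, mode names, and numeric values are illustrative assumptions and are not taken from the paper:

    ```python
    def combine_conjunction(a, b, mode="independent"):
        """Probability of (A and B) from P(A)=a, P(B)=b under different
        assumptions about the dependency between the two assertions."""
        if mode == "independent":          # statistically independent assertions
            return a * b
        if mode == "mutually_exclusive":   # the assertions cannot both hold
            return 0.0
        if mode == "fuzzy":                # maximum-overlap / fuzzy-logic rule
            return min(a, b)
        if mode == "pessimistic":          # worst case (Frechet lower bound)
            return max(0.0, a + b - 1.0)
        raise ValueError(f"unknown mode: {mode}")

    a, b = 0.8, 0.7
    for mode in ("independent", "mutually_exclusive", "fuzzy", "pessimistic"):
        print(mode, combine_conjunction(a, b, mode))
    ```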

  10. A preliminary test of the application of the Lightning Detection and Ranging System (LDAR) as a thunderstorm warning and location device for the FHA including a correlation with updrafts, turbulence, and radar precipitation echoes

    NASA Technical Reports Server (NTRS)

    Poehler, H. A.

    1978-01-01

    Results of a test of the use of a Lightning Detection and Ranging (LDAR) remote display in the Patrick AFB RAPCON facility are presented. Agreement between LDAR and the precipitation echoes of the RAPCON radar was observed, as well as agreement between LDAR and pilots' visual observations of lightning flashes. A more precise comparison between LDAR and KSC-based radars is achieved by superimposing LDAR data on the radar precipitation echoes. Airborne measurements of updrafts and turbulence by an armored T-28 aircraft flying through the thunderclouds are correlated with LDAR activity along the flight path. Calibration and measurements of the accuracy of the LDAR System are discussed, and the extended range of the system is illustrated.

  11. Algorithmic causets

    NASA Astrophysics Data System (ADS)

    Bolognesi, Tommaso

    2011-07-01

    In the context of quantum gravity theories, several researchers have proposed causal sets as appropriate discrete models of spacetime. We investigate families of causal sets obtained from two simple models of computation - 2D Turing machines and network mobile automata - that operate on 'high-dimensional' supports, namely 2D arrays of cells and planar graphs, respectively. We study a number of quantitative and qualitative emergent properties of these causal sets, including dimension, curvature and localized structures, or 'particles'. We show how the possibility of detecting and separating particles from the background space depends on whether a global or a local view of the causal set is adopted. Finally, we spot very rare cases of pseudo-randomness, or deterministic chaos; these exhibit a spontaneous phenomenon of 'causal compartmentation' that appears to be a prerequisite for the occurrence of anything of physical interest in the evolution of spacetime.
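    As a rough illustration of how a causal set (a directed acyclic graph of events) can be read off a machine run, the sketch below treats each head visit of a 1D Turing machine as an event and links it to the most recent earlier event at the same tape cell and to the immediately preceding head event. This is one plausible toy construction chosen for illustration only, not necessarily the construction used in the paper (which works with 2D Turing machines and network mobile automata); the rule table is arbitrary:

    ```python
    from collections import defaultdict

    def causal_set_from_tm(rules, steps=50):
        """Toy causal set (list of edges between event indices) from a 1D
        Turing-machine run; events are head visits, edges are dependencies."""
        tape = defaultdict(int)
        state, pos = 0, 0
        last_event_at_cell = {}
        edges, prev_event = [], None
        for event in range(steps):
            if prev_event is not None:
                edges.append((prev_event, event))              # head/state dependency
            if pos in last_event_at_cell:
                edges.append((last_event_at_cell[pos], event))  # tape-cell dependency
            write, move, state = rules[(state, tape[pos])]
            tape[pos] = write
            last_event_at_cell[pos] = event
            pos += move
            prev_event = event
        return edges

    # A 2-state, 2-symbol machine (arbitrary illustrative rule table)
    rules = {(0, 0): (1, 1, 1), (0, 1): (1, -1, 1),
             (1, 0): (1, -1, 0), (1, 1): (0, 1, 0)}
    print(len(causal_set_from_tm(rules, steps=20)), "causal edges")
    ```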

  12. Improvements of HITS Algorithms for Spam Links

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given a high score by the algorithm were genuinely related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and its variants, including Bharat's improved HITS (abbreviated BHITS) proposed by Bharat and Henzinger, can no longer be used to find related pages on today's Web, due to the increase in spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages that, with high probability, are not spam pages. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS, which uses the trust-score algorithm and the name-server-based method of finding linkfarms, is the most suitable for finding related pages on today's Web. Our algorithms require no more time and memory than the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
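    For reference, a minimal sketch of the original Kleinberg HITS iteration that the paper builds on; the linkfarm detection and trust-score weighting described above are not reproduced here, and the example graph is invented:

    ```python
    import numpy as np

    def hits(adjacency, iterations=50):
        """Basic HITS: iteratively update hub and authority scores.
        adjacency[i][j] = 1 means page i links to page j."""
        A = np.asarray(adjacency, dtype=float)
        hubs = np.ones(A.shape[0])
        auths = np.ones(A.shape[0])
        for _ in range(iterations):
            auths = A.T @ hubs   # good authorities are linked to by good hubs
            hubs = A @ auths     # good hubs link to good authorities
            auths /= np.linalg.norm(auths)
            hubs /= np.linalg.norm(hubs)
        return hubs, auths

    # Tiny 4-page graph: pages 0 and 1 both link to pages 2 and 3
    A = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
    hubs, auths = hits(A)
    print("hubs:", hubs.round(2), "authorities:", auths.round(2))
    ```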

  13. A Short Survey of Document Structure Similarity Algorithms

    SciTech Connect

    Buttler, D

    2004-02-27

    This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
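    A minimal sketch of a shingle-style structural similarity measure (Jaccard similarity over k-length shingles of a page's flattened tag sequence), in the spirit of, though not necessarily identical to, the shingle technique surveyed here; the tag sequences are invented examples:

    ```python
    def tag_shingles(tags, k=4):
        """All k-length contiguous subsequences of a document's tag sequence."""
        return {tuple(tags[i:i + k]) for i in range(max(1, len(tags) - k + 1))}

    def structural_similarity(tags_a, tags_b, k=4):
        """Jaccard similarity of the two documents' tag shingles."""
        sa, sb = tag_shingles(tags_a, k), tag_shingles(tags_b, k)
        return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

    # Flattened tag sequences of two hypothetical HTML pages
    page1 = ["html", "body", "div", "table", "tr", "td", "tr", "td"]
    page2 = ["html", "body", "div", "table", "tr", "td", "tr", "th"]
    print(structural_similarity(page1, page2))  # 4 shared of 6 shingles ~ 0.67
    ```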

  14. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
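    A minimal sketch of the basic idea behind parameter continuation with Newton's method (natural continuation on a scalar test problem); this illustrates the concept only and does not use LOCA's actual C++ interface, and the test equation is an arbitrary example:

    ```python
    import numpy as np

    def natural_continuation(residual, jacobian, x0, lambdas, tol=1e-10, max_newton=20):
        """Trace a solution branch x(lambda) of residual(x, lambda) = 0 by
        stepping the parameter and re-converging with Newton's method from
        the previous solution."""
        x, branch = float(x0), []
        for lam in lambdas:
            for _ in range(max_newton):
                r = residual(x, lam)
                if abs(r) < tol:
                    break
                x -= r / jacobian(x, lam)   # Newton update
            branch.append((lam, x))
        return branch

    # Example problem: x^3 - x - lambda = 0, followed from lambda = 0 to 2
    residual = lambda x, lam: x**3 - x - lam
    jacobian = lambda x, lam: 3 * x**2 - 1
    for lam, x in natural_continuation(residual, jacobian, 1.2, np.linspace(0, 2, 9)):
        print(f"lambda={lam:.2f}  x={x:.4f}")
    ```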

  15. Multi-directional search: A direct search algorithm for parallel machines

    SciTech Connect

    Torczon, V.J.

    1989-01-01

    In recent years there has been a great deal of interest in the development of optimization algorithms which exploit the computational power of parallel computer architectures. The author has developed a new direct search algorithm, which he calls multi-directional search, that is ideally suited to parallel computation. His algorithm belongs to the class of direct search methods, a class of optimization algorithms which neither compute nor approximate any derivatives of the objective function. His work, in fact, was inspired by the simplex method of Spendley, Hext, and Himsworth, and the simplex method of Nelder and Mead. The multi-directional search algorithm is inherently parallel. The basic idea of the algorithm is to perform concurrent searches in multiple directions. These searches are free of any interdependencies, so the information required can be computed in parallel. A central result of his work is the convergence analysis for the algorithm. By requiring only that the function be continuously differentiable over a bounded level set, he can prove that a subsequence of the points generated by the multi-directional search algorithm converges to a stationary point of the objective function. This is of great interest since he knows of few convergence results for practical direct search algorithms. He also presents numerical results indicating that the multi-directional search algorithm is robust, even in the presence of noise. His results include comparisons with the Nelder-Mead simplex algorithm, the method of steepest descent, and a quasi-Newton method. One surprising conclusion of his numerical tests is that the Nelder-Mead simplex algorithm is not robust. He closes with some comments about future directions of research.
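    A minimal serial sketch of a Torczon-style multi-directional search iteration (reflection, expansion, and contraction of all non-best vertices through the best vertex); the reflected trial evaluations are independent of one another, which is where the parallelism comes from. The step factors and test problem are illustrative assumptions, not the author's exact implementation:

    ```python
    import numpy as np

    def multi_directional_search(f, simplex, iterations=200):
        """Simplified multi-directional search on an (n+1)-vertex simplex."""
        V = np.array(simplex, dtype=float)          # (n+1) x n vertices
        fvals = np.array([f(v) for v in V])
        for _ in range(iterations):
            order = np.argsort(fvals)
            V, fvals = V[order], fvals[order]       # V[0] is the current best vertex
            best, fbest = V[0], fvals[0]
            refl = 2 * best - V[1:]                 # reflect all other vertices through the best
            frefl = np.array([f(v) for v in refl])  # independent -> could run in parallel
            if frefl.min() < fbest:
                exp = 3 * best - 2 * V[1:]          # expansion step (factor 2)
                fexp = np.array([f(v) for v in exp])
                if fexp.min() < frefl.min():
                    V[1:], fvals[1:] = exp, fexp
                else:
                    V[1:], fvals[1:] = refl, frefl
            else:
                V[1:] = (best + V[1:]) / 2          # contract toward the best vertex
                fvals[1:] = np.array([f(v) for v in V[1:]])
        return V[np.argmin(fvals)]

    # Minimise a simple quadratic from an initial right-angled simplex
    f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
    print(multi_directional_search(f, [[0, 0], [1, 0], [0, 1]]))  # approx [1, -2]
    ```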

  16. Corrective Action Decision Document for Corrective Action Unit 168: Areas 25 and 26 Contaminated Materials and Waste Dumps, Nevada Test Site, Nevada: Revision 0, Including Record of Technical Change No. 1

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2003-08-08

    This Corrective Action Decision Document identifies and rationalizes the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's selection of recommended corrective action alternatives (CAAs) to facilitate the closure of Corrective Action Unit (CAU) 168: Areas 25 and 26 Contaminated Materials and Waste Dumps, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. Located in Areas 25 and 26 at the NTS in Nevada, CAU 168 is comprised of twelve Corrective Action Sites (CASs). Review of data collected during the corrective action investigation, as well as consideration of current and future operations in Areas 25 and 26 of the NTS, led to the development of three CAAs for consideration: Alternative 1 - No Further Action; Alternative 2 - Clean Closure; and Alternative 3 - Close in Place with Administrative Controls. As a result of this evaluation, a combination of all three CAAs is recommended for this CAU. Alternative 1 was the preferred CAA for three CASs, Alternative 2 was the preferred CAA for six CASs (and nearly all of one other CAS), and Alternative 3 was the preferred CAA for two CASs (and a portion of one other CAS) to complete the closure at the CAU 168 sites. These alternatives were judged to meet all requirements for the technical components evaluated as well as all applicable state and federal regulations for closure of the sites and elimination of potential future exposure pathways to the contaminated soils at CAU 168.

  17. Corrective Action Investigation Plan for Corrective Action Unit 322: Areas 1 and 3 Release Sites and Injection Wells, Nevada Test Site, Nevada: Revision 0, Including Record of Technical Change No. 1

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2003-07-16

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's approach to collect the data necessary to evaluate corrective action alternatives (CAAs) appropriate for the closure of Corrective Action Unit (CAU) 322, Areas 1 and 3 Release Sites and Injection Wells, Nevada Test Site, Nevada, under the Federal Facility Agreement and Consent Order. Corrective Action Unit 322 consists of three Corrective Action Sites (CASs): 01-25-01, AST Release (Area 1); 03-25-03, Mud Plant AST Diesel Release (Area 3); 03-20-05, Injection Wells (Area 3). Corrective Action Unit 322 is being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. The investigation of three CASs in CAU 322 will determine if hazardous and/or radioactive constituents are present at concentrations and locations that could potentially pose a threat to human health and the environment. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document.

  18. Corrective Action Investigation Plan for Corrective Action Unit 527: Horn Silver Mine, Nevada Test Site, Nevada: Revision 1 (Including Records of Technical Change No.1, 2, 3, and 4)

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office

    2002-12-06

    This Corrective Action Investigation Plan contains the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 527, Horn Silver Mine, Nevada Test Site, Nevada, under the Federal Facility Agreement and Consent Order. Corrective Action Unit 527 consists of one Corrective Action Site (CAS): 26-20-01, Contaminated Waste Dump No. 1. The site is an abandoned mine in Area 26 (the most arid part of the NTS), approximately 65 miles northwest of Las Vegas. Historical documents may refer to this site as CAU 168, CWD-1, the Wingfield mine (or shaft), and the Wahmonie mine (or shaft). Historical documentation indicates that between 1959 and the 1970s, nonliquid classified material and unclassified waste were placed in the Horn Silver Mine's shaft. Some of the waste is known to be radioactive. Documentation indicates that the waste is present from a depth of 150 feet to the bottom of the mine (500 feet below ground surface). This CAU is being investigated because hazardous constituents migrating from materials and/or wastes disposed of in the Horn Silver Mine may pose a threat to human health and the environment, and to assess the potential impacts associated with any releases from the waste. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document.

  19. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve unconstrained or constrained optimization problems, uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the resulting GA is best suited to the problem. We also stress the need for such a preprocessor, both for the quality (error) and for the cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already available), and the physical environment. It also includes information that can be generated through any means, whether deterministic, nondeterministic, or graphical. Instead of attempting a solution straight away through a GA without using knowledge of the character of the system, we do a consciously better job of producing a solution by using the information generated in the very first step of the preprocessor. We therefore advocate the use of such a preprocessor for real-world optimization problems, including NP-complete ones, before applying the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
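    A minimal real-coded GA sketch showing the kind of parameters such a preprocessor would tune (population size, crossover and mutation probabilities, search space); all defaults and the test function are arbitrary illustrative choices, not outputs of the paper's preprocessor:

    ```python
    import random

    def genetic_algorithm(fitness, bounds, pop_size=40, generations=100,
                          crossover_p=0.9, mutation_p=0.1, seed=0):
        """Minimal real-coded GA for one-dimensional unconstrained maximisation."""
        rng = random.Random(seed)
        lo, hi = bounds
        pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
        for _ in range(generations):
            def select():                               # binary tournament selection
                a, b = rng.choice(pop), rng.choice(pop)
                return a if fitness(a) > fitness(b) else b
            children = []
            while len(children) < pop_size:
                p1, p2 = select(), select()
                if rng.random() < crossover_p:          # arithmetic crossover
                    w = rng.random()
                    child = w * p1 + (1 - w) * p2
                else:
                    child = p1
                if rng.random() < mutation_p:           # Gaussian mutation
                    child += rng.gauss(0, 0.1 * (hi - lo))
                children.append(min(max(child, lo), hi))
            pop = children
        return max(pop, key=fitness)

    # Maximise a smooth one-dimensional test function on [-10, 10]
    best = genetic_algorithm(lambda x: -(x - 3.0) ** 2, bounds=(-10.0, 10.0))
    print(round(best, 2))  # should land near 3.0
    ```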

  20. On the new GPCC gridded reference data sets of observed (daily) monthly land-surface precipitation since (1988) 1901 published in 2014 including an all seasons open source test product

    NASA Astrophysics Data System (ADS)

    Ziese, Markus; Andreas, Becker; Peter, Finger; Anja, Meyer-Christoffer; Kirstin, Schamm; Udo, Schneider

    2014-05-01

    compared to other data sets like CRU or GHCN is based on the fact that GPCC does not hold the copyright for the station data supplied to it. Therefore, GPCC cannot make public the original data underlying its analysis products. Still, to allow users to check GPCC's re-processing and interpolation methods, a new Interpolation Test Dataset (ITD) will be released. The ITD will be based on a subset of publicly available station data and will cover only one year. Both the gridded data and the underlying copyright-free station data will be provided with the ITD, addressing open-source demands.