Science.gov

Sample records for algorithms tested include

  1. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  2. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
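
    The quoted significance figures come from a binomial-style null model: if random "alarms" were to capture each strong earthquake independently with some probability p, the chance of scoring at least k hits out of n is a binomial tail. A minimal sketch of that calculation, with p a purely hypothetical capture probability (the paper's actual null hypothesis uses random assignment of predictions rather than a fitted p):

    ```python
    from math import comb

    def binom_tail(n, k, p):
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    p = 0.55  # hypothetical capture probability per earthquake, for illustration only
    print(f"P(>= 8 of 10 hits | p={p}) = {binom_tail(10, 8, p):.4f}")
    print(f"P(>= 5 of 9 hits  | p={p}) = {binom_tail(9, 5, p):.4f}")
    ```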

  3. A generalized memory test algorithm

    NASA Technical Reports Server (NTRS)

    Milner, E. J.

    1982-01-01

A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
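
    A minimal sketch of the kind of set/clear check the abstract describes, run against a simulated memory array; this is an illustrative brute-force version, not the report's concise 384-cycle procedure:

    ```python
    def test_memory(mem, word_bits=16):
        """Check that every bit of every word can be set and cleared without
        disturbing any other word of the (simulated) memory."""
        mask = (1 << word_bits) - 1
        for addr in range(len(mem)):
            saved = list(mem)
            for bit in range(word_bits):
                mem[addr] = 1 << bit                 # set a single bit
                if mem[addr] != 1 << bit:
                    return False, addr, bit
                mem[addr] = mask & ~(1 << bit)       # clear that bit, set the rest
                if mem[addr] != mask & ~(1 << bit):
                    return False, addr, bit
                # verify no other word changed
                if any(mem[a] != saved[a] for a in range(len(mem)) if a != addr):
                    return False, addr, bit
            mem[addr] = saved[addr]
        return True, None, None

    ok, addr, bit = test_memory([0] * 256)
    print("memory OK" if ok else f"fault at word {addr}, bit {bit}")
    ```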

  4. Effects of Including Humor in Test Items.

    ERIC Educational Resources Information Center

    McMorris, Robert F.; And Others

    Two 50-item multiple-choice forms of a grammar test were developed differing only in humor being included in 20 items of one form. One hundred twenty-six (126) eighth graders received the test plus alternate forms of a questionnaire. Humor inclusion did not affect grammar scores on matched humorous/nonhumorous items nor on common post-treatment…

  5. Component evaluation testing and analysis algorithms.

    SciTech Connect

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  6. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  7. Quantum Statistical Testing of a QRNG Algorithm

    SciTech Connect

    Humble, Travis S; Pooser, Raphael C; Britt, Keith A

    2013-01-01

    We present the algorithmic design of a quantum random number generator, the subsequent synthesis of a physical design and its verification using quantum statistical testing. We also describe how quantum statistical testing can be used to diagnose channel noise in QKD protocols.

  8. Sequential Testing Algorithms for Multiple Fault Diagnosis

    NASA Technical Reports Server (NTRS)

    Shakeri, Mojdeh; Raghavan, Vijaya; Pattipati, Krishna R.; Patterson-Hine, Ann

    1997-01-01

    In this paper, we consider the problem of constructing optimal and near-optimal test sequencing algorithms for multiple fault diagnosis. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and AND/OR graph search, we present several test sequencing algorithms for the multiple fault isolation problem. These algorithms provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a diagnostic directed graph (digraph), instead of a diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. The algorithms developed herein have been successfully applied to several real-world systems. Computational results indicate that the size of a multiple fault strategy is strictly related to the structure of the system.
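
    The information-theoretic core of such test sequencing can be illustrated with a greedy selector that picks, at each step, the test whose outcome yields the largest expected reduction in entropy over the candidate fault set. A minimal single-fault-style sketch with perfect tests and hypothetical priors (the paper's AND/OR-graph algorithms for the multiple-fault case are considerably more involved):

    ```python
    import math

    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def best_test(fault_probs, tests):
        """fault_probs: {fault: prior}; tests: {test: set of faults it detects}.
        Return the test with the highest expected entropy reduction."""
        h0 = entropy(fault_probs.values())
        best, best_gain = None, -1.0
        for t, detects in tests.items():
            p_pass = sum(p for f, p in fault_probs.items() if f not in detects)
            p_fail = 1.0 - p_pass
            h = 0.0
            for outcome_p, keep in ((p_pass, lambda f: f not in detects),
                                    (p_fail, lambda f: f in detects)):
                if outcome_p > 0:
                    cond = [p / outcome_p for f, p in fault_probs.items() if keep(f)]
                    h += outcome_p * entropy(cond)
            gain = h0 - h
            if gain > best_gain:
                best, best_gain = t, gain
        return best, best_gain

    faults = {"f1": 0.5, "f2": 0.3, "f3": 0.2}   # hypothetical fault priors
    tests = {"t1": {"f1"}, "t2": {"f1", "f2"}}   # hypothetical test coverage
    print(best_test(faults, tests))
    ```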

  9. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  10. 8. VIEW OF RADIOGRAPHY EQUIPMENT, TEST METHODS INCLUDED RADIOGRAPHY AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. VIEW OF RADIOGRAPHY EQUIPMENT, TEST METHODS INCLUDED RADIOGRAPHY AND BETA BACKSCATTERING. (7/13/56) - Rocky Flats Plant, Non-Nuclear Production Facility, South of Cottonwood Avenue, west of Seventh Avenue & east of Building 460, Golden, Jefferson County, CO

  11. Testing Intelligently Includes Double-Checking Wechsler IQ Scores

    ERIC Educational Resources Information Center

    Kuentzel, Jeffrey G.; Hetterscheidt, Lesley A.; Barnett, Douglas

    2011-01-01

    The rigors of standardized testing make for numerous opportunities for examiner error, including simple computational mistakes in scoring. Although experts recommend that test scoring be double-checked, the extent to which independent double-checking would reduce scoring errors is not known. A double-checking procedure was established at a…

  12. 13. Historic drawing of rocket engine test facility layout, including ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. Historic drawing of rocket engine test facility layout, including Buildings 202, 205, 206, and 206A, February 3, 1984. NASA GRC drawing number CF-101539. On file at NASA Glenn Research Center. - Rocket Engine Testing Facility, NASA Glenn Research Center, Cleveland, Cuyahoga County, OH

  13. Datasets for radiation network algorithm development and testing

    SciTech Connect

    Rao, Nageswara S; Sen, Satyabrata; Berry, M. L..; Wu, Qishi; Grieme, M.; Brooks, Richard R; Cordone, G.

    2016-01-01

The Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) program supported the development of networks of commercial-off-the-shelf (COTS) radiation counters for detecting, localizing, and identifying low-level radiation sources. Under this program, a series of indoor and outdoor tests were conducted with multiple source strengths and types, different background profiles, and various types of source and detector movements. Following the tests, network algorithms were replayed in various re-constructed scenarios using sub-networks. These measurements and algorithm traces together provide a rich collection of highly valuable datasets for testing the current and next generation radiation network algorithms, including the ones (to be) developed by broader R&D communities such as distributed detection, information fusion, and sensor networks. From this multi-terabyte IRSS database, we distilled out and packaged the first batch of canonical datasets for public release. They include measurements from ten indoor and two outdoor tests which represent increasingly challenging baseline scenarios for robustly testing radiation network algorithms.

  14. Reliability based design including future tests and multiagent approaches

    NASA Astrophysics Data System (ADS)

    Villanueva, Diane

The initial stages of reliability-based design optimization involve the formulation of objective functions and constraints, and building a model to estimate the reliability of the design with quantified uncertainties. However, even experienced hands often overlook important objective functions and constraints that affect the design. In addition, uncertainty reduction measures, such as tests and redesign, are often not considered in reliability calculations during the initial stages. This research considers two areas that concern the design of engineering systems: 1) the trade-off of the effect of a test and post-test redesign on reliability and cost, and 2) the search for multiple candidate designs as insurance against unforeseen faults in some designs. In this research, a methodology was developed to estimate the effect of a single future test and post-test redesign on reliability and cost. The methodology uses assumed distributions of computational and experimental errors with re-design rules to simulate alternative future test and redesign outcomes to form a probabilistic estimate of the reliability and cost for a given design. Further, it was explored how modeling a future test and redesign provides a company an opportunity to balance development costs versus performance by simultaneously choosing the design and the post-test redesign rules during the initial design stage. The second area of this research considers the use of dynamic local surrogates, or surrogate-based agents, to locate multiple candidate designs. Surrogate-based global optimization algorithms often require search in multiple candidate regions of design space, expending most of the computation needed to define multiple alternate designs. Thus, focusing solely on locating the best design may be wasteful. We extended adaptive sampling surrogate techniques to locate multiple optima by building local surrogates in sub-regions of the design space to identify optima. The efficiency of this method

  15. Full motion video geopositioning algorithm integrated test bed

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Braun, Aaron; Theiss, Henry; Gurson, Adam

    2015-05-01

    In order to better understand the issues associated with Full Motion Video (FMV) geopositioning and to develop corresponding strategies and algorithms, an integrated test bed is required. It is used to evaluate the performance of various candidate algorithms associated with registration of the video frames and subsequent geopositioning using the registered frames. Major issues include reliable error propagation or predicted solution accuracy, optimal vs. suboptimal vs. divergent solutions, robust processing in the presence of poor or non-existent a priori estimates of sensor metadata, difficulty in the measurement of tie points between adjacent frames, poor imaging geometry including small field-of-view and little vertical relief, and no control (points). The test bed modules must be integrated with appropriate data flows between them. The test bed must also ingest/generate real and simulated data and support evaluation of corresponding performance based on module-internal metrics as well as comparisons to real or simulated "ground truth". Selection of the appropriate modules and algorithms must be both operator specifiable and specifiable as automatic. An FMV test bed has been developed and continues to be improved with the above characteristics. The paper describes its overall design as well as key underlying algorithms, including a recent update to "A matrix" generation, which allows for the computation of arbitrary inter-frame error cross-covariance matrices associated with Kalman filter (KF) registration in the presence of dynamic state vector definition, necessary for rigorous error propagation when the contents/definition of the KF state vector changes due to added/dropped tie points. Performance of a tested scenario is also presented.

  16. Development and Testing of Data Mining Algorithms for Earth Observation

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

The new algorithms developed under this project included a principled procedure for classification of objects, events or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables, called the Markov Blanket, sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD style algorithms to the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: Short time series of geographically proximate climate variables predicting agricultural effects in California, and longer duration climate measurements of temperature teleconnections.

  17. An Evaluation of a Flight Deck Interval Management Algorithm Including Delayed Target Trajectories

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Underwood, Matthew C.; Barmore, Bryan; Leonard, Robert D.

    2014-01-01

NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature air traffic management technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise in-trail spacing. During high demand operations, TMA-TM may produce a schedule and corresponding aircraft trajectories that include delay to ensure that a particular aircraft will be properly spaced from other aircraft at each schedule waypoint. These delayed trajectories are not communicated to the automation onboard the aircraft, forcing the IM aircraft to use the published speeds to estimate the target aircraft's time of arrival. As a result, the aircraft performing IM operations may follow an aircraft whose TMA-TM generated trajectories have substantial speed deviations from the speeds expected by the spacing algorithm. Previous spacing algorithms were not designed to handle this magnitude of uncertainty. A simulation was conducted to examine a modified spacing algorithm with the ability to follow aircraft flying delayed trajectories. The simulation investigated the use of the new spacing algorithm with various delayed speed profiles and wind conditions, as well as several other variables designed to simulate real-life variability. The results and conclusions of this study indicate that the new spacing algorithm generally exhibits good performance; however, some types of target aircraft speed profiles can cause the spacing algorithm to command less than optimal speed control behavior.

  18. A Study of a Network-Flow Algorithm and a Noncorrecting Algorithm for Test Assembly.

    ERIC Educational Resources Information Center

    Armstrong, R. D.; And Others

    1996-01-01

    When the network-flow algorithm (NFA) and the average growth approximation algorithm (AGAA) were used for automated test assembly with American College Test and Armed Services Vocational Aptitude Battery item banks, results indicate that reasonable error in item parameters is not harmful for test assembly using NFA or AGAA. (SLD)

  19. ALGORITHMS AND PROGRAMS FOR STRONG GRAVITATIONAL LENSING IN KERR SPACE-TIME INCLUDING POLARIZATION

    SciTech Connect

    Chen, Bin; Maddumage, Prasad; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie

    2015-05-15

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.

  20. Algorithms and Programs for Strong Gravitational Lensing In Kerr Space-time Including Polarization

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie; Maddumage, Prasad

    2015-05-01

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.

  1. A Boundary Condition Relaxation Algorithm for Strongly Coupled, Ablating Flows Including Shape Change

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Johnston, Christopher O.

    2011-01-01

    Implementations of a model for equilibrium, steady-state ablation boundary conditions are tested for the purpose of providing strong coupling with a hypersonic flow solver. The objective is to remove correction factors or film cooling approximations that are usually applied in coupled implementations of the flow solver and the ablation response. Three test cases are considered - the IRV-2, the Galileo probe, and a notional slender, blunted cone launched at 10 km/s from the Earth's surface. A successive substitution is employed and the order of succession is varied as a function of surface temperature to obtain converged solutions. The implementation is tested on a specified trajectory for the IRV-2 to compute shape change under the approximation of steady-state ablation. Issues associated with stability of the shape change algorithm caused by explicit time step limits are also discussed.

  2. Algorithms for Multiple Fault Diagnosis With Unreliable Tests

    NASA Technical Reports Server (NTRS)

    Shakeri, Mojdeh; Raghavan, Vijaya; Pattipati, Krishna R.; Patterson-Hine, Ann

    1997-01-01

    In this paper, we consider the problem of constructing optimal and near-optimal multiple fault diagnosis (MFD) in bipartite systems with unreliable (imperfect) tests. It is known that exact computation of conditional probabilities for multiple fault diagnosis is NP-hard. The novel feature of our diagnostic algorithms is the use of Lagrangian relaxation and subgradient optimization methods to provide: (1) near optimal solutions for the MFD problem, and (2) upper bounds for an optimal branch-and-bound algorithm. The proposed method is illustrated using several examples. Computational results indicate that: (1) our algorithm has superior computational performance to the existing algorithms (approximately three orders of magnitude improvement), (2) the near optimal algorithm generates the most likely candidates with a very high accuracy, and (3) our algorithm can find the most likely candidates in systems with as many as 1000 faults.

  3. Comparative testing of DNA segmentation algorithms using benchmark simulations.

    PubMed

    Elhaik, Eran; Graur, Dan; Josic, Kresimir

    2010-05-01

    Numerous segmentation methods for the detection of compositionally homogeneous domains within genomic sequences have been proposed. Unfortunately, these methods yield inconsistent results. Here, we present a benchmark consisting of two sets of simulated genomic sequences for testing the performances of segmentation algorithms. Sequences in the first set are composed of fixed-sized homogeneous domains, distinct in their between-domain guanine and cytosine (GC) content variability. The sequences in the second set are composed of a mosaic of many short domains and a few long ones, distinguished by sharp GC content boundaries between neighboring domains. We use these sets to test the performance of seven segmentation algorithms in the literature. Our results show that recursive segmentation algorithms based on the Jensen-Shannon divergence outperform all other algorithms. However, even these algorithms perform poorly in certain instances because of the arbitrary choice of a segmentation-stopping criterion.
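
    A minimal sketch of recursive segmentation driven by the Jensen-Shannon divergence on a two-symbol (G+C versus A+T) encoding; the published algorithms differ mainly in the stopping criterion, which is precisely the arbitrariness the authors note:

    ```python
    import math

    def entropy(p):
        return -sum(x * math.log2(x) for x in p if x > 0)

    def gc_freq(seq):
        gc = sum(1 for c in seq if c in "GC")
        return [gc / len(seq), 1 - gc / len(seq)]

    def js_divergence(seq, i):
        """Weighted Jensen-Shannon divergence for splitting seq at position i."""
        n, left, right = len(seq), seq[:i], seq[i:]
        return (entropy(gc_freq(seq))
                - len(left) / n * entropy(gc_freq(left))
                - len(right) / n * entropy(gc_freq(right)))

    def segment(seq, threshold=0.02, min_len=50):
        """Recursively split at the maximal-divergence point until the gain
        drops below an (arbitrarily chosen) threshold."""
        if len(seq) < 2 * min_len:
            return [seq]
        cut, best = max(((i, js_divergence(seq, i))
                         for i in range(min_len, len(seq) - min_len)),
                        key=lambda t: t[1])
        if best < threshold:
            return [seq]
        return segment(seq[:cut], threshold, min_len) + segment(seq[cut:], threshold, min_len)

    print([len(s) for s in segment("AT" * 200 + "GC" * 200)])  # two homogeneous domains
    ```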

  4. A Sparse Reconstruction Algorithm for Ultrasonic Images in Nondestructive Testing

    PubMed Central

    Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Junior, Flávio Neves; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst

    2015-01-01

Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, the quality of images depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares using an l1 regularization norm. The method is tested to reconstruct an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with that of four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT and regularized least squares (RLS). The method demonstrates significant resolution improvement when compared with B-scan—about 91% using real data. The proposed scheme also outperforms traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700
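
    For readers unfamiliar with l1-regularized least squares, the reconstruction problem min_x ||Hx - y||^2 + lam*||x||_1 can be solved by simple iterative shrinkage. A minimal ISTA sketch with a made-up random measurement operator H (the paper's operator is built from the pulse-echo physics; the dimensions and regularization weight here are assumptions for illustration):

    ```python
    import numpy as np

    def ista(H, y, lam, n_iter=500):
        """Iterative shrinkage-thresholding for min ||Hx - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(H, 2) ** 2            # spectral norm squared; step is 1/(2L)
        x = np.zeros(H.shape[1])
        for _ in range(n_iter):
            z = x - H.T @ (H @ x - y) / L        # gradient step on the quadratic term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(0)
    H = rng.standard_normal((200, 50))           # hypothetical measurement operator
    x_true = np.zeros(50); x_true[[10, 30, 45]] = [1.0, -0.5, 0.8]
    y = H @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = ista(H, y, lam=0.1)
    print(np.flatnonzero(np.abs(x_hat) > 0.1))   # indices of the recovered reflectors
    ```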

  5. Optimal Configuration of a Square Array Group Testing Algorithm

    PubMed Central

    Hudgens, Michael G.; Kim, Hae-Young

    2009-01-01

    We consider the optimal configuration of a square array group testing algorithm (denoted A2) to minimize the expected number of tests per specimen. For prevalence greater than 0.2498, individual testing is shown to be more efficient than A2. For prevalence less than 0.2498, closed form lower and upper bounds on the optimal group sizes for A2 are given. Arrays of dimension 2 × 2, 3 × 3, and 4 × 4 are shown to never be optimal. The results are illustrated by considering the design of a specimen pooling algorithm for detection of recent HIV infections in Malawi. PMID:21218195
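
    The expected number of tests per specimen for a square array is straightforward to check by simulation: test the n row pools and n column pools, then retest individually every specimen lying in both a positive row and a positive column. A minimal Monte Carlo sketch of that accounting for one common version of A2 (the paper derives its bounds analytically):

    ```python
    import numpy as np

    def a2_tests_per_specimen(n, prevalence, n_sim=20000, seed=1):
        """Monte Carlo estimate of expected tests per specimen for an n x n array:
        2n pool tests plus individual retests at positive row/column intersections."""
        rng = np.random.default_rng(seed)
        total = 0
        for _ in range(n_sim):
            grid = rng.random((n, n)) < prevalence
            retests = int(grid.any(axis=1).sum()) * int(grid.any(axis=0).sum())
            total += 2 * n + retests
        return total / (n_sim * n * n)

    for n in (5, 8, 10, 12):
        print(n, round(a2_tests_per_specimen(n, prevalence=0.05), 3))
    ```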

  6. A Procedure for Empirical Initialization of Adaptive Testing Algorithms.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    In constrained adaptive testing, the numbers of constraints needed to control the content of the tests can easily run into the hundreds. Proper initialization of the algorithm becomes a requirement because the presence of large numbers of constraints slows down the convergence of the ability estimator. In this paper, an empirical initialization of…

  7. Automated segment matching algorithm-theory, test, and evaluation

    NASA Technical Reports Server (NTRS)

    Kalcic, M. T. (Principal Investigator)

    1982-01-01

Results of automating the U.S. Department of Agriculture's process of segment shifting to within one-half pixel accuracy are presented. Given an initial registration, the digitized segment is shifted until a more precise fit to the LANDSAT data is found. The algorithm automates the shifting process and performs certain tests for matching and accepting the computed shift numbers. Results indicate the algorithm can obtain results within one-half pixel accuracy.

  8. TS: a test-split algorithm for inductive learning

    NASA Astrophysics Data System (ADS)

    Wu, Xindong

    1993-09-01

This paper presents a new attribute-based learning algorithm, TS. Different from ID3, AQ11, and HCV in strategy, this algorithm operates in cycles of test and split. It uses those attribute values which occur only in positives but not in negatives to straightforwardly discriminate positives against negatives, and chooses the attribute with the least number of different values to split example sets. TS is natural, easy to implement, and low-order polynomial in time complexity.
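
    A minimal sketch of the two operations the abstract describes: a "test" step that turns attribute values occurring only in positive examples into discriminating rules, and a "split" step on the attribute with the fewest distinct values. This is an interpretation of the abstract, not the published TS implementation:

    ```python
    def test_step(positives, negatives, attributes):
        """Return {attribute: values seen only in positive examples}."""
        rules = {}
        for a in attributes:
            only_pos = {ex[a] for ex in positives} - {ex[a] for ex in negatives}
            if only_pos:
                rules[a] = only_pos
        return rules

    def split_attribute(examples, attributes):
        """Pick the attribute with the fewest distinct values to split on."""
        return min(attributes, key=lambda a: len({ex[a] for ex in examples}))

    positives = [{"shape": "round", "color": "red"},
                 {"shape": "square", "color": "red"}]
    negatives = [{"shape": "square", "color": "blue"}]
    attrs = ["shape", "color"]
    print(test_step(positives, negatives, attrs))      # values unique to positives
    print(split_attribute(positives + negatives, attrs))
    ```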

  9. Tests of the PSF reconstruction algorithm for NACO/VLT

    NASA Astrophysics Data System (ADS)

    Clénet, Yann; Lidman, Christopher; Gendron, Eric; Rousset, Gérard; Fusco, Thierry; Kornweibel, Nick; Kasper, Markus; Ageorges, Nancy

    2008-07-01

We have developed a PSF reconstruction algorithm for the NAOS adaptive optics system that is coupled with CONICA at ESO/VLT. We have modified the algorithm of Véran et al. (1997), originally written for PUEO at CFHT, to make use of the specific real-time wavefront-related data that observers with NACO receive together with their scientific images. In addition, we use the Vii algorithm introduced by Clénet et al. (2006) and Gendron et al. (2006) instead of the Uij algorithm originally used by Véran et al. (1997). Until now, tests on NAOS have been undertaken during technical time thanks to the NACO team at Paranal. A first test has been successfully performed to calibrate the orientation of reconstructed PSFs with respect to NACO images. We have also obtained two sets of PSF reconstruction test data with NACO in November 2006 and September 2007 to reconstruct PSFs. Discrepancies exist between the observed and reconstructed PSFs: their Strehl ratios are ~31% and ~39% respectively in Nov. 2006, ~31% and ~19% respectively in Sept. 2007. These differences may be at least partly explained by reconstructions that either did not account for the aliasing contribution or poorly estimated the noise contribution with the available noise information at that time. We have additionally just started to test our algorithm using the AO bench Sésame, at LESIA. Results are promising but need to be extended to a larger set of atmospheric conditions or AO correction qualities.

  10. An enhanced bacterial foraging algorithm approach for optimal power flow problem including FACTS devices considering system loadability.

    PubMed

    Belwin Edward, J; Rajasekar, N; Sathiyasekar, K; Senthilnathan, N; Sarjila, R

    2013-09-01

Obtaining an optimal power flow solution is a strenuous task for any power system engineer. The inclusion of FACTS devices in the power system network adds to its complexity. The dual objective of OPF with fuel cost minimization along with FACTS device location for the IEEE 30 bus system is considered and solved using the proposed Enhanced Bacterial Foraging Algorithm (EBFA). The conventional Bacterial Foraging Algorithm (BFA) has the difficulty of optimal parameter selection. Hence, in this paper, BFA is enhanced by including the Nelder-Mead (NM) algorithm for better performance. A MATLAB code for EBFA is developed and the problem of optimal power flow with inclusion of FACTS devices is solved. After several runs with different initial values, it is found that the inclusion of FACTS devices such as SVC and TCSC in the network reduces the generation cost and increases voltage stability limits. It is also observed that the proposed algorithm requires less computational time than earlier proposed algorithms.

  11. Testing of hardware implementation of infrared image enhancing algorithm

    NASA Astrophysics Data System (ADS)

Dulski, R.; Sosnowski, T.; Piątkowski, T.; Trzaskawka, P.; Kastek, M.; Kucharz, J.

    2012-10-01

The interpretation of IR images depends on the radiative properties of the observed objects and surrounding scenery. The skills and experience of the observer are also of great importance. One way to improve the effectiveness of observation is to apply an image-enhancing algorithm capable of improving image quality and, in turn, the effectiveness of object detection. The paper presents results of testing the hardware implementation of an IR image enhancing algorithm based on histogram processing. The main issue in hardware implementation of complex image-enhancing procedures is their high computational cost. As a result, implementation of complex algorithms using general purpose processors and software usually does not bring satisfactory results. Because of high efficiency requirements and the need for parallel operation, the ALTERA EP2C35F672 FPGA device was used. It provides sufficient processing speed combined with relatively low power consumption. A digital image processing and control module was designed and constructed around two main integrated circuits: an FPGA device and a microcontroller. The programmable FPGA device performs the image data processing operations, which require considerable computing power. It also generates the control signals for array readout, performs NUC correction and bad pixel mapping, generates the control signals for the display module, and finally executes the complex image processing algorithms. The implemented adaptive algorithm is based on plateau histogram equalization. Tests were performed on real IR images of different types of objects registered in different spectral bands. The simulations and laboratory experiments proved the correct operation of the designed system in executing the sophisticated image enhancement.
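
    Plateau histogram equalization differs from ordinary equalization only in that the histogram is clipped at a plateau value before the cumulative mapping is built, which keeps large uniform backgrounds from consuming most of the output range. A minimal NumPy sketch for 14-bit IR data mapped to 8-bit display values (the plateau selection in the adaptive algorithm described above is more elaborate):

    ```python
    import numpy as np

    def plateau_equalize(image, plateau, in_levels=2**14, out_levels=256):
        """Clip the histogram at `plateau`, then remap intensities via the clipped CDF."""
        hist, _ = np.histogram(image, bins=in_levels, range=(0, in_levels))
        clipped = np.minimum(hist, plateau)
        cdf = np.cumsum(clipped).astype(np.float64)
        cdf /= cdf[-1]                                   # normalize to [0, 1]
        lut = np.round(cdf * (out_levels - 1)).astype(np.uint8)
        return lut[image]

    rng = np.random.default_rng(0)
    ir_frame = rng.integers(6000, 7000, size=(240, 320))  # synthetic low-contrast frame
    display = plateau_equalize(ir_frame, plateau=50)
    print(display.min(), display.max(), display.dtype)
    ```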

  12. Development and Application of a Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.

  13. BROMOCEA Code: An Improved Grand Canonical Monte Carlo/Brownian Dynamics Algorithm Including Explicit Atoms.

    PubMed

    Solano, Carlos J F; Pothula, Karunakar R; Prajapati, Jigneshkumar D; De Biase, Pablo M; Noskov, Sergei Yu; Kleinekathöfer, Ulrich

    2016-05-10

    All-atom molecular dynamics simulations have a long history of applications studying ion and substrate permeation across biological and artificial pores. While offering unprecedented insights into the underpinning transport processes, MD simulations are limited in time-scales and ability to simulate physiological membrane potentials or asymmetric salt solutions and require substantial computational power. While several approaches to circumvent all of these limitations were developed, Brownian dynamics simulations remain an attractive option to the field. The main limitation, however, is an apparent lack of protein flexibility important for the accurate description of permeation events. In the present contribution, we report an extension of the Brownian dynamics scheme which includes conformational dynamics. To achieve this goal, the dynamics of amino-acid residues was incorporated into the many-body potential of mean force and into the Langevin equations of motion. The developed software solution, called BROMOCEA, was applied to ion transport through OmpC as a test case. Compared to fully atomistic simulations, the results show a clear improvement in the ratio of permeating anions and cations. The present tests strongly indicate that pore flexibility can enhance permeation properties which will become even more important in future applications to substrate translocation.

  14. A Review of Optimisation Techniques for Layered Radar Materials Including the Genetic Algorithm

    DTIC Science & Technology

    2004-11-01

Only fragments of the report text survive in this record (DRDC Atlantic TM 2004-260): optimisation of Jaumann layers by the genetic algorithm and by other methods (finite element, FDTD, and Taguchi); the Taguchi method of optimization was used as a means of exploring…

  15. JPSS Cryosphere Algorithms: Integration and Testing in Algorithm Development Library (ADL)

    NASA Astrophysics Data System (ADS)

    Tsidulko, M.; Mahoney, R. L.; Meade, P.; Baldwin, D.; Tschudi, M. A.; Das, B.; Mikles, V. J.; Chen, W.; Tang, Y.; Sprietzer, K.; Zhao, Y.; Wolf, W.; Key, J.

    2014-12-01

JPSS is a next generation satellite system that is planned to be launched in 2017. The satellites will carry a suite of sensors that are already on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The NOAA/NESDIS/STAR Algorithm Integration Team (AIT) works within the Algorithm Development Library (ADL) framework which mimics the operational JPSS Interface Data Processing Segment (IDPS). The AIT contributes to the development, integration, and testing of scientific algorithms employed in the IDPS. This presentation discusses cryosphere-related activities performed in ADL. The addition of a new ancillary data set - NOAA Global Multisensor Automated Snow/Ice data (GMASI) - with ADL code modifications is described. Preliminary GMASI impact on the gridded Snow/Ice product is estimated. Several modifications to the Ice Age algorithm, which mis-classifies ice type for certain areas/time periods, are tested in the ADL. Sensitivity runs for daytime, nighttime, and the terminator zone are performed and presented. Comparisons between the original and modified versions of the Ice Age algorithm are also presented.

  16. Improved Ant Algorithms for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi

    2014-01-01

Ant colony optimization (ACO) for software testing cases generation is a very popular domain in software testing engineering. However, the traditional ACO has flaws: early-search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and precocity. This paper introduces improved ACO for software testing cases generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO), which is based on all three of the above methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve the search efficiency, restrain precocity, promote case coverage, and reduce the number of iterations. PMID:24883391

  17. Improved ant algorithms for software testing cases generation.

    PubMed

    Yang, Shunkun; Man, Tianlong; Xu, Jiaqi

    2014-01-01

Ant colony optimization (ACO) for software testing cases generation is a very popular domain in software testing engineering. However, the traditional ACO has flaws: early-search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and precocity. This paper introduces improved ACO for software testing cases generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO), which is based on all three of the above methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve the search efficiency, restrain precocity, promote case coverage, and reduce the number of iterations.
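
    For readers unfamiliar with the pheromone bookkeeping being tuned here, a minimal sketch of the two quantities the papers modify, the global pheromone update and the volatilization (evaporation) coefficient, on a generic path-construction problem; this is illustrative only, and the IPVACO/IGPACO rules for test-case generation differ in detail:

    ```python
    def aco_global_update(pheromone, paths_with_scores, rho=0.1, q=1.0):
        """One global pheromone update: evaporate with coefficient rho,
        then deposit q*score on every edge of each constructed path."""
        for edge in pheromone:
            pheromone[edge] *= (1.0 - rho)               # volatilization
        for path, score in paths_with_scores:
            for edge in zip(path, path[1:]):
                pheromone[edge] = pheromone.get(edge, 0.0) + q * score
        return pheromone

    # Hypothetical 4-node graph and two candidate paths with coverage scores.
    edges = [(a, b) for a in range(4) for b in range(4) if a != b]
    pher = {e: 1.0 for e in edges}
    pher = aco_global_update(pher, [([0, 1, 3], 0.8), ([0, 2, 3], 0.5)])
    print(sorted(pher.items(), key=lambda kv: -kv[1])[:3])  # most reinforced edges
    ```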

  18. Testing PEPT Algorithm on a Medical PET Scanner

    NASA Astrophysics Data System (ADS)

    Sadrmomtaz, Alireza

The basis of Positron Emission Tomography (PET) is the detection of the photons produced when a positron annihilates with an electron. Conservation of energy and momentum then require that two 511 keV gamma rays are emitted almost back to back (180° apart). This method is used to determine the spatial distribution of a positron emitting fluid. Verifying the position of a single emitting particle in an object, instead of determining the distribution of a positron emitting fluid, is the basis of another technique, named positron emitting particle tracking (PEPT), which has been developed at Birmingham University. Birmingham University has recently obtained the PET scanner from Hammersmith Hospital which was installed there in 1987. This scanner consists of 32 detector buckets, each of which includes 128 bismuth germanate detection elements, configured in 8 rings. This scanner has been rebuilt in a flexible geometry and will be used for PEPT studies. Testing the PEPT algorithm on the ECAT scanner gives a high data rate, allows approximately accurate tracking at high speed, and also offers the possibility of making measurements on large vessels.

  19. Computerized Scoring Algorithms for the Autobiographical Memory Test.

    PubMed

    Takano, Keisuke; Gutenbrunner, Charlotte; Martens, Kris; Salmon, Karen; Raes, Filip

    2017-04-03

Reduced specificity of autobiographical memories is a hallmark of depressive cognition. Autobiographical memory (AM) specificity is typically measured by the Autobiographical Memory Test (AMT), in which respondents are asked to describe personal memories in response to emotional cue words. Due to this free descriptive responding format, the AMT relies on experts' hand scoring for subsequent statistical analyses. This manual coding potentially impedes research activities in big data analytics such as large epidemiological studies. Here, we propose computerized algorithms to automatically score AM specificity for the Dutch (adult participants) and English (youth participants) versions of the AMT by using natural language processing and machine learning techniques. The algorithms showed reliable performances in discriminating specific and nonspecific (e.g., overgeneralized) autobiographical memories in independent testing data sets (area under the receiver operating characteristic curve > .90). Furthermore, outcome values of the algorithms (i.e., decision values of support vector machines) showed a gradient across similar (e.g., specific and extended memories) and different (e.g., specific memory and semantic associates) categories of AMT responses, suggesting that, for both adults and youth, the algorithms well capture the extent to which a memory has features of specific memories.

  20. A New Computer Algorithm for Simultaneous Test Construction of Two-Stage and Multistage Testing.

    ERIC Educational Resources Information Center

    Wu, Ing-Long

    2001-01-01

    Presents two binary programming models with a special network structure that can be explored computationally for simultaneous test construction. Uses an efficient special purpose network algorithm to solve these models. An empirical study illustrates the approach. (SLD)

  1. Testing the race model inequality: an algorithm and computer programs.

    PubMed

    Ulrich, Rolf; Miller, Jeff; Schröter, Hannes

    2007-05-01

    In divided-attention tasks, responses are faster when two target stimuli are presented, and thus one is redundant, than when only a single target stimulus is presented. Raab (1962) suggested an account of this redundant-targets effect in terms of a race model in which the response to redundant target stimuli is initiated by the faster of two separate target detection processes. Such models make a prediction about the probability distributions of reaction times that is often called the race model inequality, and it is often of interest to test this prediction. In this article, we describe a precise algorithm that can be used to test the race model inequality and present MATLAB routines and a Pascal program that implement this algorithm.
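
    The inequality itself is simple to state and to check numerically: for every time t, the redundant-target CDF must not exceed the sum of the two single-target CDFs, F_R(t) <= F_A(t) + F_B(t). A minimal sketch using empirical CDFs and synthetic reaction times (the published algorithm and its MATLAB/Pascal routines handle percentile estimation and ties more carefully):

    ```python
    import numpy as np

    def ecdf(sample, t):
        """Empirical CDF of `sample` evaluated at the times in t."""
        sample = np.sort(np.asarray(sample))
        return np.searchsorted(sample, t, side="right") / len(sample)

    def race_model_violated(rt_redundant, rt_a, rt_b, t_grid=None):
        """True if F_R(t) > F_A(t) + F_B(t) at any evaluated time t."""
        if t_grid is None:
            t_grid = np.unique(np.concatenate([rt_redundant, rt_a, rt_b]))
        return bool(np.any(ecdf(rt_redundant, t_grid) >
                           ecdf(rt_a, t_grid) + ecdf(rt_b, t_grid)))

    rng = np.random.default_rng(0)
    rt_a = rng.normal(420, 50, 200)     # hypothetical single-target RTs (ms)
    rt_b = rng.normal(440, 50, 200)
    rt_r = rng.normal(360, 40, 200)     # hypothetical redundant-target RTs
    print(race_model_violated(rt_r, rt_a, rt_b))
    ```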

  2. An Algorithm for Testing the Efficient Market Hypothesis

    PubMed Central

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH). PMID:24205148
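
    For reference, the indicator building blocks named above are simple recursions; a minimal sketch of EMA and MACD computation (the paper's genetic algorithm searches over indicator parameters and trading rules, which is not shown here):

    ```python
    def ema(prices, period):
        """Exponential moving average with smoothing factor 2 / (period + 1)."""
        alpha = 2.0 / (period + 1)
        out = [prices[0]]
        for p in prices[1:]:
            out.append(alpha * p + (1 - alpha) * out[-1])
        return out

    def macd(prices, fast=12, slow=26, signal=9):
        """MACD line (fast EMA - slow EMA) and its signal-line EMA."""
        macd_line = [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]
        return macd_line, ema(macd_line, signal)

    prices = [1.10, 1.11, 1.12, 1.10, 1.09, 1.11, 1.13, 1.12, 1.14, 1.15]  # hypothetical EUR/USD closes
    line, sig = macd(prices)
    print(round(line[-1], 5), round(sig[-1], 5))
    ```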

  3. Testing trivializing maps in the Hybrid Monte Carlo algorithm

    PubMed Central

    Engel, Georg P.; Schaefer, Stefan

    2011-01-01

We test a recent proposal to use approximate trivializing maps in a field theory to speed up Hybrid Monte Carlo simulations. Simulating the CP^(N-1) model, we find a small improvement with the leading order transformation, which is however compensated by the additional computational overhead. The scaling of the algorithm towards the continuum is not changed. In particular, the effect of the topological modes on the autocorrelation times is studied. PMID:21969733

  4. GPS 3-D cockpit displays: Sensors, algorithms, and flight testing

    NASA Astrophysics Data System (ADS)

    Barrows, Andrew Kevin

Tunnel-in-the-Sky 3-D flight displays have been investigated for several decades as a means of enhancing aircraft safety and utility. However, high costs have prevented commercial development and seriously hindered research into their operational benefits. The rapid development of Differential Global Positioning Systems (DGPS), inexpensive computing power, and ruggedized displays is now changing this situation. A low-cost prototype system was built and flight tested to investigate implementation and operational issues. The display provided an "out the window" 3-D perspective view of the world, letting the pilot see the horizon, runway, and desired flight path even in instrument flight conditions. The flight path was depicted as a tunnel through which the pilot flew the airplane, while predictor symbology provided guidance to minimize path-following errors. Positioning data was supplied by various DGPS sources, including the Stanford Wide Area Augmentation System (WAAS) testbed. A combination of GPS and low-cost inertial sensors provided vehicle heading, pitch, and roll information. Architectural and sensor fusion tradeoffs made during system implementation are discussed. Computational algorithms used to provide guidance on curved paths over the earth geoid are outlined along with display system design issues. It was found that current technology enables low-cost Tunnel-in-the-Sky display systems with a target cost of $20,000 for large-scale commercialization. Extensive testing on Piper Dakota and Beechcraft Queen Air aircraft demonstrated enhanced accuracy and operational flexibility on a variety of complex flight trajectories. These included curved and segmented approaches, traffic patterns flown on instruments, and skywriting by instrument reference. Overlays to existing instrument approaches at airports in California and Alaska were flown and compared with current instrument procedures. These overlays demonstrated improved utility and situational awareness for

  5. Effect of Restricting Perimetry Testing Algorithms to Reliable Sensitivities on Test-Retest Variability

    PubMed Central

    Gardiner, Stuart K.; Mansberger, Steven L.

    2016-01-01

Purpose We have previously shown that sensitivities obtained at severely damaged visual field locations (<15–19 dB) are unreliable and highly variable. This study evaluates a testing algorithm that does not present very high contrast stimuli (above approximately 1000% contrast) in damaged locations, but instead concentrates on more precise estimation at remaining locations. Methods A trained ophthalmic technician tested 36 eyes of 36 participants twice with each of two different testing algorithms: ZEST0, which allowed sensitivities within the range 0 to 35 dB, and ZEST15, which allowed sensitivities between 15 and 35 dB but was otherwise identical. The difference between the two runs for the same algorithm was used as a measure of test-retest variability. These were compared between algorithms using a random effects model with homoscedastic within-group errors whose variance was allowed to differ between algorithms. Results The estimated test-retest variance for ZEST15 was 53.1% of the test-retest variance for ZEST0, with 95% confidence interval (50.5%–55.7%). Among locations whose sensitivity was ≥17 dB on all tests, the variability of ZEST15 was 86.4% of the test-retest variance for ZEST0, with 95% confidence interval (79.3%–94.0%). Conclusions Restricting the range of possible sensitivity estimates reduced test-retest variability, not only at locations with severe damage but also at locations with higher sensitivity. Future visual field algorithms should avoid high-contrast stimuli in severely damaged locations. Given that low sensitivities cannot be measured reliably enough for most clinical uses, it appears to be more efficient to concentrate on more precise testing of less damaged locations. PMID:27784065

  6. Monte Carlo tests of the ELIPGRID-PC algorithm

    SciTech Connect

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
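    The validation idea is straightforward to reproduce in outline: place an elliptical hot spot at a random position and orientation relative to a square sampling grid, check whether any grid node falls inside it, and repeat. The sketch below is such a generic Monte Carlo check (grid geometry, aspect ratio, and sample count are arbitrary illustrative choices), not the ELIPGRID-PC algorithm itself.

```python
import numpy as np

def hotspot_detection_prob(grid_spacing, semi_major, shape, n_sims=200_000, seed=1):
    """Monte Carlo estimate of the probability that a square sampling grid
    hits an elliptical hot spot with random location and orientation.
    shape = semi-minor / semi-major axis ratio (1.0 gives a circular hot spot)."""
    rng = np.random.default_rng(seed)
    a, b = semi_major, semi_major * shape
    # By symmetry, only the hot-spot centre within one grid cell matters.
    cx = rng.uniform(0.0, grid_spacing, n_sims)
    cy = rng.uniform(0.0, grid_spacing, n_sims)
    theta = rng.uniform(0.0, np.pi, n_sims)
    # Enough nearby grid nodes to cover the largest possible hot spot extent.
    k = int(np.ceil(semi_major / grid_spacing)) + 1
    nodes = [(i * grid_spacing, j * grid_spacing)
             for i in range(-k, k + 2) for j in range(-k, k + 2)]
    hits = np.zeros(n_sims, dtype=bool)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    for gx, gy in nodes:
        dx, dy = gx - cx, gy - cy
        # Rotate into the ellipse frame and test the ellipse inequality.
        u = dx * cos_t + dy * sin_t
        v = -dx * sin_t + dy * cos_t
        hits |= (u / a) ** 2 + (v / b) ** 2 <= 1.0
    return hits.mean()

# Hot spot with semi-major axis equal to half the grid spacing, aspect ratio 0.5.
print(hotspot_detection_prob(grid_spacing=10.0, semi_major=5.0, shape=0.5))
```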

  7. Oscillation Detection Algorithm Development Summary Report and Test Plan

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang

    2009-10-03

    Measurement-based modal analysis algorithms have been developed. They include Prony analysis, the Regularized Robust Recursive Least Square (R3LS) algorithm, the Yule-Walker algorithm, the Yule-Walker Spectrum algorithm, and the N4SID algorithm. Each has been shown to be effective for certain situations, but not as effective for some other situations. For example, the traditional Prony analysis works well for disturbance data but not for ambient data, while Yule-Walker is designed for ambient data only. Even in an algorithm that works for both disturbance data and ambient data, such as R3LS, latency resulting from the time window used in the algorithm is an issue for timely estimation of oscillation modes. For ambient data, the time window needs to be longer to accumulate information for a reasonably accurate estimation; while for disturbance data, the time window can be significantly shorter so the latency in estimation can be much less. In addition, adding a known input signal such as noise probing signals can increase the knowledge of system oscillatory properties and thus improve the quality of mode estimation. System situations change over time. Disturbances can occur at any time, and probing signals can be added for a certain time period and then removed. All these observations point to the need to add intelligence to ModeMeter applications. That is, a ModeMeter needs to adaptively select different algorithms and adjust parameters for various situations. This project aims to develop systematic approaches for algorithm selection and parameter adjustment. The very first step is to detect occurrence of oscillations so the algorithm and parameters can be changed accordingly. The proposed oscillation detection approach is based on the signal-to-noise ratio of measurements.
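    As a concrete illustration of one of the listed methods, the sketch below fits a Yule-Walker autoregressive model to ambient (noise-driven) data and converts the AR poles to mode frequency and damping estimates. The model order, frequency band of interest, and the synthetic 0.4 Hz test mode are assumptions for illustration; they are not taken from the ModeMeter implementation.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker_modes(y, dt, order=6, band=(0.1, 2.0)):
    """Fit an AR model via the Yule-Walker equations and convert its poles to
    continuous-time oscillation modes (frequency in Hz, damping ratio)."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(y)
    r = np.array([np.dot(y[:n - k], y[k:]) / n for k in range(order + 1)])
    a = solve_toeplitz(r[:-1], r[1:])                 # AR coefficients
    poles = np.roots(np.concatenate(([1.0], -a)))     # discrete-time poles
    poles = poles[poles.imag > 0]                     # keep one of each conjugate pair
    s = np.log(poles) / dt                            # map to continuous time
    freq = s.imag / (2 * np.pi)
    damping = -s.real / np.abs(s)
    keep = (freq > band[0]) & (freq < band[1])
    return sorted(zip(freq[keep], damping[keep]))

# Synthetic ambient data: white noise driving a 0.4 Hz mode with 5% damping.
rng = np.random.default_rng(0)
dt, n = 0.1, 6000
wn, zeta = 2 * np.pi * 0.4, 0.05
z = np.exp((-zeta * wn + 1j * wn * np.sqrt(1 - zeta ** 2)) * dt)
y = np.zeros(n)
e = rng.normal(size=n)
for k in range(2, n):
    y[k] = 2 * z.real * y[k - 1] - abs(z) ** 2 * y[k - 2] + e[k]

print(yule_walker_modes(y, dt))   # should recover roughly (0.4 Hz, 0.05)
```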

  8. A modified WTC algorithm for the Painlevé test of nonlinear variable-coefficient PDEs

    NASA Astrophysics Data System (ADS)

    Zhao, Yin-Long; Liu, Yin-Ping; Li, Zhi-Bin

    2009-11-01

    A modified WTC algorithm for the Painlevé test of nonlinear PDEs with variable coefficients is proposed. Compared with Kruskal's simplification algorithm, the modified algorithm further simplifies the computation in the third step of the Painlevé test for variable-coefficient PDEs to some extent. Two examples illustrate the proposed modified algorithm.

  9. Use of synthetic data to test biometric algorithms

    NASA Astrophysics Data System (ADS)

    Murphy, Thomas M.; Broussard, Randy; Rakvic, Ryan; Ngo, Hau; Ives, Robert W.; Schultz, Robert; Aguayo, Joseph T.

    2016-07-01

    For digital imagery, face detection and identification are functions of great importance in wide-ranging applications, including full facial recognition systems. The development and evaluation of unique and existing face detection and face identification applications require a significant amount of data. Increased availability of such data volumes could benefit the formulation and advancement of many biometric algorithms. Here, the utility of using synthetically generated face data to evaluate facial biometry methodologies to a precision that would be unrealistic for a parametrically uncontrolled dataset is demonstrated. Particular attention is given to similarity metrics, symmetry within and between recognition algorithms, discriminatory power and optimality of pan and/or tilt in reference images or libraries, susceptibilities to variations, identification confidence, meaningful identification mislabelings, sensitivity, specificity, and threshold values. The face identification results, in particular, could be generalized to address shortcomings in various applications and help to inform the design of future strategies.

  10. ecode - Electron Transport Algorithm Testing v. 1.0

    SciTech Connect

    Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene; Laub, Thomas W.; Crawford, Martin J; Kenseck, Ronald P.; Prinja, Anil

    2016-10-05

    ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
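    The abstract describes intentionally simple transport physics. The snippet below is an analogous toy: an analog Monte Carlo simulation of mono-energetic particles crossing a 1-D slab with isotropic scattering, tallying transmission, reflection, and absorption. It is a generic illustration of this class of algorithm, not part of ecode.

```python
import numpy as np

def slab_transmission(sigma_t=1.0, scatter_ratio=0.5, thickness=2.0,
                      n_particles=100_000, seed=0):
    """Toy analog Monte Carlo transport through a 1-D slab with isotropic
    scattering: count transmitted, reflected, and absorbed particles."""
    rng = np.random.default_rng(seed)
    transmitted = reflected = absorbed = 0
    for _ in range(n_particles):
        x, mu = 0.0, 1.0                               # start at left face, moving right
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)   # sample free-flight distance
            if x < 0.0:
                reflected += 1
                break
            if x > thickness:
                transmitted += 1
                break
            if rng.random() < scatter_ratio:
                mu = rng.uniform(-1.0, 1.0)            # isotropic scatter: new direction cosine
            else:
                absorbed += 1
                break
    n = n_particles
    return transmitted / n, reflected / n, absorbed / n

print(slab_transmission())
```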

  11. Faith in the algorithm, part 1: beyond the turing test

    SciTech Connect

    Rodriguez, Marko A; Pepe, Alberto

    2009-01-01

    Since the Turing test was first proposed by Alan Turing in 1950, the goal of artificial intelligence has been predicated on the ability for computers to imitate human intelligence. However, the majority of uses for the computer can be said to fall outside the domain of human abilities and it is exactly outside of this domain where computers have demonstrated their greatest contribution. Another definition for artificial intelligence is one that is not predicated on human mimicry, but instead, on human amplification, where the algorithms that are best at accomplishing this are deemed the most intelligent. This article surveys various systems that augment human and social intelligence.

  12. MST Fitness Index and implicit data narratives: A comparative test on alternative unsupervised algorithms

    NASA Astrophysics Data System (ADS)

    Buscema, Massimo; Sacco, Pier Luigi

    2016-11-01

    In this paper, we introduce a new methodology for the evaluation of alternative algorithms in capturing the deep statistical structure of datasets of different types and nature, called MST Fitness, and based on the notion of Minimum Spanning Tree (MST). We test this methodology on six different databases, some of which are artificial and widely used in similar experiments, and some of which relate to real-world phenomena. Our test set consists of eight different algorithms, including some widely known and used, such as Principal Component Analysis, Linear Correlation, or Euclidean Distance. We moreover consider more sophisticated Artificial Neural Network based algorithms, such as the Self-Organizing Map (SOM) and a relatively new algorithm called Auto-Contractive Map (AutoCM). We find that, for our benchmark of datasets, AutoCM performs consistently better than all other algorithms for all of the datasets, and that its global performance is superior to that of the others by several orders of magnitude. It remains to be checked in future research whether AutoCM can be considered a truly general-purpose algorithm for the analysis of heterogeneous categories of datasets.
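    The exact MST Fitness index is defined in the paper; the fragment below only illustrates the underlying machinery it builds on: turning each algorithm's pairwise association structure into a distance matrix, extracting its minimum spanning tree, and comparing a simple summary (here, total MST edge weight). The dataset and the two candidate metrics are arbitrary stand-ins.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_total_weight(distance_matrix):
    """Total edge weight of the minimum spanning tree of a pairwise distance matrix."""
    return minimum_spanning_tree(distance_matrix).sum()

# Toy dataset: 50 records with 4 variables each.
rng = np.random.default_rng(0)
data = rng.normal(size=(50, 4))

# Two candidate association structures: Euclidean distance and (1 - |correlation|).
d_euclid = squareform(pdist(data, metric="euclidean"))
d_corr = 1.0 - np.abs(np.corrcoef(data))
np.fill_diagonal(d_corr, 0.0)

print("MST weight (Euclidean):  ", mst_total_weight(d_euclid))
print("MST weight (correlation):", mst_total_weight(d_corr))
```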

  13. Empirical Testing of an Algorithm for Defining Somatization in Children

    PubMed Central

    Eisman, Howard D.; Fogel, Joshua; Lazarovich, Regina; Pustilnik, Inna

    2007-01-01

    Introduction A previous article proposed an algorithm for defining somatization in children by classifying them into three categories: well, medically ill, and somatizer; the authors suggested further empirical validation of the algorithm (Postilnik et al., 2006). We use the Child Behavior Checklist (CBCL) to provide this empirical validation. Method Parents of children seen in pediatric clinics completed the CBCL (n=126). The physicians of these children completed specially-designed questionnaires. The sample comprised 62 boys and 64 girls (age range 2 to 15 years). Classification categories included: well (n=53), medically ill (n=55), and somatizer (n=18). Analysis of variance (ANOVA) was used for statistical comparisons. Discriminant function analysis was conducted with the CBCL subscales. Results There were significant differences between the classification categories for the somatic complaints (p<0.001), social problems (p=0.004), thought problems (p=0.01), attention problems (p=0.006), and internalizing (p=0.003) subscales, and also the total (p=0.001) and total-t (p=0.001) scales of the CBCL. Discriminant function analysis showed that 78% of somatizers and 66% of well were accurately classified, while only 35% of medically ill were accurately classified. Conclusion The somatization classification algorithm proposed by Postilnik et al. (2006) shows promise for classification of children and adolescents with somatic symptoms. PMID:18421368

  14. An Algorithm of Making Switching Operation Sequence for Fault Testing using Tree Structured Data

    NASA Astrophysics Data System (ADS)

    Shiota, Masatoshi; Komai, Kenji; Yamanishi, Asao

    This paper describes an algorithm for making switching operation sequences for fault testing using tree-structured data. When the faulty section is not isolated exactly, a candidate section is tested by energizing it to check whether the fault exists in that section. The proposed algorithm can determine an appropriate order of components for fault testing and a valid switching operation sequence for each fault test. An example shows the effectiveness of the proposed algorithm. The proposed algorithm is used at actual control centers.

  15. A Fano cavity test for Monte Carlo proton transport algorithms

    SciTech Connect

    Sterpin, Edmond; Sorriaux, Jefferson; Souris, Kevin; Vynckier, Stefaan; Bouchard, Hugo

    2014-01-15

    Purpose: In the scope of reference dosimetry of radiotherapy beams, Monte Carlo (MC) simulations are widely used to compute ionization chamber dose response accurately. Uncertainties related to the transport algorithm can be verified performing self-consistency tests, i.e., the so-called “Fano cavity test.” The Fano cavity test is based on the Fano theorem, which states that under charged particle equilibrium conditions, the charged particle fluence is independent of the mass density of the media as long as the cross-sections are uniform. Such tests have not been performed yet for MC codes simulating proton transport. The objectives of this study are to design a new Fano cavity test for proton MC and to implement the methodology in two MC codes: Geant4 and PENELOPE extended to protons (PENH). Methods: The new Fano test is designed to evaluate the accuracy of proton transport. Virtual particles with an energy of E₀ and a mass macroscopic cross section of Σ/ρ are transported, having the ability to generate protons with kinetic energy E₀ and to be restored after each interaction, thus providing proton equilibrium. To perform the test, the authors use a simplified simulation model and rigorously demonstrate that the computed cavity dose per incident fluence must equal ΣE₀/ρ, as expected in classic Fano tests. The implementation of the test is performed in Geant4 and PENH. The geometry used for testing is a 10 × 10 cm² parallel virtual field and a cavity (2 × 2 × 0.2 cm³ size) in a water phantom with dimensions large enough to ensure proton equilibrium. Results: For conservative user-defined simulation parameters (leading to small step sizes), both Geant4 and PENH pass the Fano cavity test within 0.1%. However, differences of 0.6% and 0.7% were observed for PENH and Geant4, respectively, using larger step sizes. For PENH, the difference is attributed to the random-hinge method that introduces an artificial energy

  16. Development of a Smart Release Algorithm for Mid-Air Separation of Parachute Test Articles

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is currently developing an autonomous method to separate a capsule-shaped parachute test vehicle from an air-drop platform for use in the test program to develop and validate the parachute system for the Orion spacecraft. The CPAS project seeks to perform air-drop tests of an Orion-like boilerplate capsule. Delivery of the boilerplate capsule to the test condition has proven to be a critical and complicated task. In the current concept, the boilerplate vehicle is extracted from an aircraft on top of a Type V pallet and then separated from the pallet in mid-air. The attitude of the vehicles at separation is critical to avoiding re-contact and successfully deploying the boilerplate into a heatshield-down orientation. Neither the pallet nor the boilerplate has an active control system. However, the attitude of the mated vehicle as a function of time is somewhat predictable. CPAS engineers have designed an avionics system to monitor the attitude of the mated vehicle as it is extracted from the aircraft and command a release when the desired conditions are met. The algorithm includes contingency capabilities designed to release the test vehicle before undesirable orientations occur. The algorithm was verified with simulation and ground testing. The pre-flight development and testing is discussed and limitations of ground testing are noted. The CPAS project performed a series of three drop tests as a proof-of-concept of the release technique. These tests helped to refine the attitude instrumentation and software algorithm to be used on future tests. The drop tests are described in detail and the evolution of the release system with each test is described.

  17. Testing and Development of the Onsite Earthquake Early Warning Algorithm to Reduce Event Uncertainties

    NASA Astrophysics Data System (ADS)

    Andrews, J. R.; Cochran, E. S.; Hauksson, E.; Felizardo, C.; Liu, T.; Ross, Z.; Heaton, T. H.

    2015-12-01

    Primary metrics for measuring earthquake early warning (EEW) system and algorithm performance are the rate of false alarms and the uncertainty in earthquake parameters. The Onsite algorithm, currently one of three EEW algorithms implemented in ShakeAlert, uses the ground-motion period parameter (τc) and peak initial displacement parameter (Pd) to estimate the magnitude and expected ground shaking of an ongoing earthquake. It is the only algorithm originally designed to issue single station alerts, necessitating that results from individual stations be as reliable and accurate as possible. The ShakeAlert system has been undergoing testing on continuous real-time data in California for several years, and the latest version of the Onsite algorithm for several months. This permits analysis of the response to a range of signals, from environmental noise to hardware testing and maintenance procedures to moderate or large earthquake signals at varying distances from the networks. We find that our existing discriminator, relying only on τc and Pd, while performing well to exclude large teleseismic events, is less effective for moderate regional events and can also incorrectly exclude data from local events. Motivated by these experiences, we use a collection of waveforms from potentially problematic 'noise' events and real earthquakes to explore methods to discriminate real and false events, using the ground motion and period parameters available in Onsite's processing methodology. Once an event is correctly identified, a magnitude and location estimate is critical to determining the expected ground shaking. Scatter in the measured parameters translates to higher than desired uncertainty in Onsite's current calculations. We present an overview of alternative methods, including incorporation of polarization information, to improve parameter determination for a test suite including both large (M4 to M7) events and three years of small to moderate events across California.
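    The two parameters at the heart of Onsite are simple to compute from the early P-wave displacement record. The sketch below follows Wu and Kanamori-style definitions of τc and Pd over a 3 s window; the preprocessing (integration to displacement, high-pass filtering) is assumed to have been done already, and the synthetic check signal is for illustration only.

```python
import numpy as np

def tau_c_and_pd(disp, dt, window_s=3.0):
    """Compute the period parameter tau_c and peak initial displacement Pd from
    the first few seconds of vertical P-wave displacement. `disp` is assumed to
    be a detrended, high-pass-filtered trace starting at the P arrival."""
    n = int(window_s / dt)
    u = np.asarray(disp[:n], dtype=float)
    udot = np.gradient(u, dt)
    r = np.sum(udot ** 2) / np.sum(u ** 2)     # the dt factors cancel in the ratio
    tau_c = 2.0 * np.pi / np.sqrt(r)           # characteristic period (s)
    pd = np.max(np.abs(u))                     # peak initial displacement
    return tau_c, pd

# Synthetic check: a pure 1 Hz displacement signal gives tau_c close to 1 s.
dt = 0.01
t = np.arange(0.0, 3.0, dt)
u = 1e-4 * np.sin(2 * np.pi * 1.0 * t)
print(tau_c_and_pd(u, dt))
```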

  18. A Review of Scoring Algorithms for Multiple-Choice Tests.

    ERIC Educational Resources Information Center

    Kurz, Terri Barber

    Multiple-choice tests are generally scored using a conventional number right scoring method. While this method is easy to use, it has several weaknesses. These weaknesses include decreased validity due to guessing and failure to credit partial knowledge. In an attempt to address these weaknesses, psychometricians have developed various scoring…

  19. Development of a computer algorithm for the analysis of variable-frequency AC drives: Case studies included

    NASA Technical Reports Server (NTRS)

    Kankam, M. David; Benjamin, Owen

    1991-01-01

    The development of computer software for performance prediction and analysis of voltage-fed, variable-frequency AC drives for space power applications is discussed. The AC drives discussed include the pulse width modulated inverter (PWMI), a six-step inverter and the pulse density modulated inverter (PDMI), each individually connected to a wound-rotor induction motor. Various d-q transformation models of the induction motor are incorporated for user-selection of the most applicable model for the intended purpose. Simulation results of selected AC drives correlate satisfactorily with published results. Future additions to the algorithm are indicated. These improvements should enhance the applicability of the computer program to the design and analysis of space power systems.
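    The d-q models mentioned above all rest on the Park transformation of the three-phase machine quantities into a rotating reference frame. A minimal amplitude-invariant version is sketched below; this convention (and the 50 Hz balanced-current check) is one common choice, not necessarily the one used in the reported software.

```python
import numpy as np

def abc_to_dq0(i_a, i_b, i_c, theta):
    """Amplitude-invariant Park transformation of three-phase quantities into
    the rotating d-q-0 reference frame at electrical angle theta (radians)."""
    two_thirds = 2.0 / 3.0
    d = two_thirds * (i_a * np.cos(theta)
                      + i_b * np.cos(theta - 2 * np.pi / 3)
                      + i_c * np.cos(theta + 2 * np.pi / 3))
    q = -two_thirds * (i_a * np.sin(theta)
                       + i_b * np.sin(theta - 2 * np.pi / 3)
                       + i_c * np.sin(theta + 2 * np.pi / 3))
    zero = (i_a + i_b + i_c) / 3.0
    return d, q, zero

# Balanced sinusoidal currents should map to constant d-q values (d = 1, q = 0).
t = np.linspace(0.0, 0.04, 5)
w = 2 * np.pi * 50.0
ia = np.cos(w * t)
ib = np.cos(w * t - 2 * np.pi / 3)
ic = np.cos(w * t + 2 * np.pi / 3)
print(abc_to_dq0(ia, ib, ic, w * t))
```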

  20. Adaptive testing for psychological assessment: how many items are enough to run an adaptive testing algorithm?

    PubMed

    Wagner-Menghin, Michaela M; Masters, Geoff N

    2013-01-01

    Although the principles of adaptive testing were established in the psychometric literature many years ago (e.g., Weiss, 1977), and the practice of adaptive testing is established in educational assessment, it is not yet widespread in psychological assessment. One obstacle to adaptive psychological testing is a lack of clarity about the necessary number of items to run an adaptive algorithm. The study explores the relationship between item bank size, test length and measurement precision. Simulated adaptive test runs (allowing a maximum of 30 items per person) out of an item bank with 10 items per ability level (covering 0.5 logits, 150 items total) yield a standard error of measurement (SEM) of 0.47 (0.39) after an average of 20 (29) items for 85-93% (64-82%) of the simulated rectangular sample. Expanding the bank to 20 items per level (300 items total) did not improve the algorithm's performance significantly. With a small item bank (5 items per ability level, 75 items total) it is possible to reach the same SEM as with a conventional test, but with fewer items, or a better SEM with the same number of items.
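    To make the simulation setup concrete, the sketch below runs a toy adaptive test under the Rasch model against an item bank of 10 items at each of 15 difficulty levels spaced 0.5 logits apart (150 items total), stopping at a target SEM of 0.47 or 30 items. The item-selection rule, the ML scoring with Newton steps, and the stopping thresholds are generic choices, not the exact algorithm of the study.

```python
import numpy as np

def simulate_cat(true_theta, item_bank, max_items=30, target_sem=0.47, seed=0):
    """Toy computerized adaptive test under the Rasch model: administer the most
    informative remaining item, simulate a response, re-estimate ability by
    maximum likelihood, and stop once the SEM target is reached."""
    rng = np.random.default_rng(seed)
    available = list(range(len(item_bank)))
    administered, responses = [], []
    theta = 0.0
    for _ in range(max_items):
        idx = min(available, key=lambda i: abs(item_bank[i] - theta))
        available.remove(idx)
        administered.append(item_bank[idx])
        responses.append(rng.random() < 1.0 / (1.0 + np.exp(-(true_theta - item_bank[idx]))))
        b = np.array(administered)
        x = np.array(responses, dtype=float)
        for _ in range(10):                      # a few Newton-Raphson steps
            p = 1.0 / (1.0 + np.exp(-(theta - b)))
            info = np.sum(p * (1.0 - p))
            theta = np.clip(theta + np.sum(x - p) / max(info, 1e-6), -4.0, 4.0)
        sem = 1.0 / np.sqrt(max(np.sum(p * (1.0 - p)), 1e-6))
        if len(administered) >= 5 and sem <= target_sem:
            break
    return theta, sem, len(administered)

# Item bank: 10 items at each of 15 difficulty levels, 0.5 logits apart (150 items).
bank = np.repeat(np.arange(-3.5, 4.0, 0.5), 10)
print(simulate_cat(true_theta=0.8, item_bank=bank))
```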

  1. New algorithms for phase unwrapping: implementation and testing

    NASA Astrophysics Data System (ADS)

    Kotlicki, Krzysztof

    1998-11-01

    In this paper it is shown how regularization theory was used for new noise-immune algorithms for phase unwrapping. The algorithms were developed by M. Servin, J.L. Marroquin and F.J. Cuevas at Centro de Investigaciones en Optica A.C. and Centro de Investigacion en Matematicas A.C. in Mexico. The theory is presented. The objective of the work was to implement the algorithms in software able to perform off-line unwrapping of fringe patterns. The algorithms are presented, as well as the results and the software developed for the implementation.
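    For contrast with the regularized, noise-immune approach described above, the snippet below implements the basic Itoh 1-D unwrapping step (add ±2π wherever the wrapped phase jumps by more than π between neighbouring samples). It is exactly this simple rule that fails in the presence of noise and undersampling, which is what motivates regularization-based methods.

```python
import numpy as np

def unwrap_1d(wrapped):
    """Basic Itoh 1-D phase unwrapping: add multiples of 2*pi whenever the
    wrapped phase jumps by more than pi between neighbouring samples."""
    unwrapped = np.array(wrapped, dtype=float)
    for i in range(1, len(unwrapped)):
        d = unwrapped[i] - unwrapped[i - 1]
        if d > np.pi:
            unwrapped[i:] -= 2 * np.pi
        elif d < -np.pi:
            unwrapped[i:] += 2 * np.pi
    return unwrapped

# A smooth quadratic phase ramp, wrapped into (-pi, pi], then recovered.
true_phase = np.linspace(0, 6 * np.pi, 200) ** 2 / (6 * np.pi)
wrapped = np.angle(np.exp(1j * true_phase))
recovered = unwrap_1d(wrapped)
print(np.allclose(recovered, true_phase, atol=1e-8))
```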

  2. A Test of Genetic Algorithms in Relevance Feedback.

    ERIC Educational Resources Information Center

    Lopez-Pujalte, Cristina; Guerrero Bote, Vicente P.; Moya Anegon, Felix de

    2002-01-01

    Discussion of information retrieval, query optimization techniques, and relevance feedback focuses on genetic algorithms, which are derived from artificial intelligence techniques. Describes an evaluation of different genetic algorithms using a residual collection method and compares results with the Ide dec-hi method (Salton and Buckley, 1990…

  3. Evaluating Knowledge Structure-Based Adaptive Testing Algorithms and System Development

    ERIC Educational Resources Information Center

    Wu, Huey-Min; Kuo, Bor-Chen; Yang, Jinn-Min

    2012-01-01

    In recent years, many computerized test systems have been developed for diagnosing students' learning profiles. Nevertheless, it remains a challenging issue to find an adaptive testing algorithm to both shorten testing time and precisely diagnose the knowledge status of students. In order to find a suitable algorithm, four adaptive testing…

  4. LPT. Plot plan and site layout. Includes shield test pool/EBOR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LPT. Plot plan and site layout. Includes shield test pool/EBOR facility (TAN-645 and -646), low power test building (TAN-640 and -641), water storage tanks, guard house (TAN-642), pump house (TAN-644), driveways, well, chlorination building (TAN-643), septic system. Ralph M. Parsons 1229-12 ANP/GE-7-102. November 1956. Approved by INEEL Classification Office for public release. INEEL index code no. 038-0102-00-693-107261 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  5. An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming

    2017-02-01

    In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). We first perturb the IP core assignment for each TAM to produce a new solution for SA, allocate the TAM width for each TAM using a greedy algorithm, and calculate the corresponding testing time. The core assignment is then accepted or rejected according to the simulated annealing criterion, and the optimum solution is finally attained. We ran the test scheduling experiment with the international reference circuits provided by the International Test Conference 2002 (ITC’02), and the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), the simulated annealing algorithm (SA) and the genetic algorithm (GA). When the TAM width reaches 48, 56 and 64, the testing time based on our algorithm is less than that of the classic methods, with optimization rates of 30.74%, 3.32% and 16.13%, respectively. Moreover, the testing time based on our algorithm is very close to that of the improved genetic algorithm (IGA), which is the current state of the art.
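    A stripped-down version of such an annealing loop is sketched below, with the problem reduced to assigning core tests to TAMs so that the overall completion time (makespan) is minimized; the greedy TAM-width reallocation described in the paper is omitted, and the core test times, gain, and cooling schedule are made-up illustrative values.

```python
import math
import random

def sa_schedule(test_times, n_tams, iters=20000, t0=50.0, alpha=0.999, seed=42):
    """Simulated-annealing sketch for SoC test scheduling: assign core tests to
    TAMs so that the maximum TAM load (overall test time) is minimized."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_tams) for _ in test_times]

    def makespan(a):
        loads = [0.0] * n_tams
        for core, tam in enumerate(a):
            loads[tam] += test_times[core]
        return max(loads)

    best = cur = makespan(assign)
    best_assign = list(assign)
    temp = t0
    for _ in range(iters):
        core = rng.randrange(len(test_times))
        old_tam = assign[core]
        assign[core] = rng.randrange(n_tams)        # perturb: move one core
        cand = makespan(assign)
        if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
            cur = cand                              # accept (Metropolis criterion)
            if cur < best:
                best, best_assign = cur, list(assign)
        else:
            assign[core] = old_tam                  # reject: undo the move
        temp *= alpha                               # cool down
    return best, best_assign

# Hypothetical core test times (clock cycles) spread over 3 TAMs.
times = [120, 340, 95, 410, 220, 180, 60, 305, 150, 275]
print(sa_schedule(times, n_tams=3))
```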

  6. Testing and validation of a Ḃ algorithm for cubesat satellites

    NASA Astrophysics Data System (ADS)

    Böttcher, M.; Eshghi, S.; Varatharajoo, R.

    2016-10-01

    For most satellite missions, it is essential to decrease the satellite angular velocity. The Ḃ (B-dot) algorithm is a common algorithm for stabilizing the spacecraft using magnetorquers. Controlling the satellite with the magnetorquers is part of the attitude control subsystem detumbling mode. Due to oscillating disturbances in the space environment, the required initial conditions need analysis. As a consequence, the satellite stays in Ḃ detumbling mode for the entire operation. In the detumbling mode, the spacecraft oscillates around its spatial axes. The purpose of this paper is to extend the Ḃ algorithm with a disturbance compensation module and to achieve a reduction of the satellite's angular velocity. The developed algorithm is found to be able to reduce the satellite's angular velocity by up to 10-11 degrees.
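    For reference, the classical Ḃ control law that the paper extends can be written in a few lines: command a magnetic dipole proportional to the negative rate of change of the measured body-frame field, saturated to the magnetorquer capability. The gain, dipole limit, and magnetometer samples below are illustrative assumptions.

```python
import numpy as np

def b_dot_control(b_body_prev, b_body_now, dt, k_gain=5e4, m_max=0.2):
    """Classical B-dot detumbling law: command a magnetic dipole opposite to the
    measured rate of change of the body-frame magnetic field, saturated per axis
    to the magnetorquer limit. Gains and limits here are illustrative only."""
    b_dot = (np.asarray(b_body_now) - np.asarray(b_body_prev)) / dt  # finite difference (T/s)
    m_cmd = -k_gain * b_dot                                          # commanded dipole (A*m^2)
    return np.clip(m_cmd, -m_max, m_max)

def magnetic_torque(m_cmd, b_body):
    """Torque produced on the spacecraft: tau = m x B."""
    return np.cross(m_cmd, b_body)

# Two consecutive magnetometer samples (tesla) taken 0.1 s apart, hypothetical values.
b_prev = np.array([22e-6, -5e-6, 31e-6])
b_now = np.array([21e-6, -4e-6, 32e-6])
m = b_dot_control(b_prev, b_now, dt=0.1)
print(m, magnetic_torque(m, b_now))
```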

  7. Computational Analysis of Arc-Jet Wedge Tests Including Ablation and Shape Change

    NASA Technical Reports Server (NTRS)

    Goekcen, Tahir; Chen, Yih-Kanq; Skokova, Kristina A.; Milos, Frank S.

    2010-01-01

    Coupled fluid-material response analyses of arc-jet wedge ablation tests conducted in a NASA Ames arc-jet facility are considered. These tests were conducted using blunt wedge models placed in a free jet downstream of the 6-inch diameter conical nozzle in the Ames 60-MW Interaction Heating Facility. The fluid analysis includes computational Navier-Stokes simulations of the nonequilibrium flowfield in the facility nozzle and test box as well as the flowfield over the models. The material response analysis includes simulation of two-dimensional surface ablation and internal heat conduction, thermal decomposition, and pyrolysis gas flow. For ablating test articles undergoing shape change, the material response and fluid analyses are coupled in order to calculate the time dependent surface heating and pressure distributions that result from shape change. The ablating material used in these arc-jet tests was Phenolic Impregnated Carbon Ablator. Effects of the test article shape change on fluid and material response simulations are demonstrated, and computational predictions of surface recession, shape change, and in-depth temperatures are compared with the experimental measurements.

  8. Fast mode decision algorithm in MPEG-2 to H.264/AVC transcoding including group of picture structure conversion

    NASA Astrophysics Data System (ADS)

    Lee, Kangjun; Jeon, Gwanggil; Jeong, Jechang

    2009-05-01

    The H.264/AVC baseline profile is used in many applications, including digital multimedia broadcasting, Internet protocol television, and storage devices, while the MPEG-2 main profile is widely used in applications, such as high-definition television and digital versatile disks. The MPEG-2 main profile supports B pictures for bidirectional motion prediction. Therefore, transcoding the MPEG-2 main profile to the H.264/AVC baseline is necessary for universal multimedia access. In the cascaded pixel domain transcoder architecture, the calculation of the rate distortion cost as part of the mode decision process in the H.264/AVC encoder requires extremely complex computations. To reduce the complexity inherent in the implementation of a real-time transcoder, we propose a fast mode decision algorithm based on complexity information from the reference region that is used for motion compensation. In this study, an adaptive mode decision process was used based on the modes assigned to the reference regions. Simulation results indicated that a significant reduction in complexity was achieved without significant degradation of video quality.

  9. Statistical algorithms for a comprehensive test ban treaty discrimination framework

    SciTech Connect

    Foote, N.D.; Anderson, D.N.; Higbee, K.T.; Miller, N.E.; Redgate, T.; Rohay, A.C.; Hagedorn, D.N.

    1996-10-01

    Seismic discrimination is the process of identifying a candidate seismic event as an earthquake or explosion using information from seismic waveform features (seismic discriminants). In the CTBT setting, low energy seismic activity must be detected and identified. A defensible CTBT discrimination decision requires an understanding of false-negative (declaring an event to be an earthquake given it is an explosion) and false-positive (declaring an event to be an explosion given it is an earthquake) rates. These rates are derived from a statistical discrimination framework. A discrimination framework can be as simple as a single statistical algorithm or it can be a mathematical construct that integrates many different types of statistical algorithms and CTBT technologies. In either case, the result is the identification of an event and the numerical assessment of the accuracy of an identification, that is, false-negative and false-positive rates. In Anderson et al., eight statistical discrimination algorithms are evaluated relative to their ability to give results that effectively contribute to a decision process and to be interpretable with physical (seismic) theory. These algorithms can be discrimination frameworks individually or components of a larger framework. The eight algorithms are linear discrimination (LDA), quadratic discrimination (QDA), variably regularized discrimination (VRDA), flexible discrimination (FDA), logistic discrimination, K-th nearest neighbor (KNN), kernel discrimination, and classification and regression trees (CART). In this report, the performance of these eight algorithms, as applied to regional seismic data, is documented. Based on the findings in Anderson et al. and this analysis, CART is an appropriate algorithm for an automated CTBT setting.
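    As a schematic of how such a framework produces false-negative and false-positive rates, the sketch below trains a CART-style classification tree on two synthetic stand-in discriminants and reports cross-validated accuracy plus both error rates. The features are fabricated for illustration; they are not the regional seismic discriminants used in the study.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for seismic discriminants (e.g., magnitude or spectral
# ratios); the real study used regional waveform features.
rng = np.random.default_rng(0)
n = 400
earthquakes = np.column_stack([rng.normal(1.2, 0.3, n), rng.normal(0.0, 0.5, n)])
explosions = np.column_stack([rng.normal(0.6, 0.3, n), rng.normal(1.0, 0.5, n)])
X = np.vstack([earthquakes, explosions])
y = np.array([0] * n + [1] * n)   # 0 = earthquake, 1 = explosion

# CART-style classification tree, evaluated with cross-validation.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
print("CV accuracy:", cross_val_score(tree, X, y, cv=5).mean())

# False-positive / false-negative rates from a simple split-half refit.
tree.fit(X[::2], y[::2])
pred = tree.predict(X[1::2])
truth = y[1::2]
fp = np.mean(pred[truth == 0] == 1)   # earthquakes declared explosions
fn = np.mean(pred[truth == 1] == 0)   # explosions declared earthquakes
print("false-positive rate:", fp, "false-negative rate:", fn)
```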

  10. A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests

    NASA Astrophysics Data System (ADS)

    Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars

    2015-09-01

    The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is bigger than the viscous relaxation time. The small time step required for stability (< 2 Kyr) makes this type of model computationally intensive, so there remains a need to develop strategies that mitigate the stability problem by making larger (at least ∼10 Kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with either Eulerian or Lagrangian grids. It includes α and β parameters to respectively control both the vertical and the horizontal slope-dependent penalization terms, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate

  11. Comparison of Marketed Cosmetic Products Constituents with the Antigens Included in Cosmetic-related Patch Test

    PubMed Central

    Cheong, Seung Hyun; Choi, You Won; Myung, Ki Bum

    2010-01-01

    Background Currently, cosmetic series (Chemotechnique Diagnostics, Sweden) is the most widely used cosmetic-related patch test in Korea. However, no studies have been conducted on how accurately it reflects the constituents of the cosmetics in Korea. Objective We surveyed the constituents of various cosmetics and compared them with the cosmetic series, to investigate whether it is accurate in determining allergic contact dermatitis caused by cosmetics sold in Korea. Methods Cosmetics were classified into 11 categories and the survey was conducted on the constituents of 55 cosmetics, with 5 cosmetics in each category. The surveyed constituents were classified by chemical function and compared with the antigens of cosmetic series. Results 155 constituents were found in 55 cosmetics, and 74 (47.7%) of the constituents were included as antigens. Among them, only 20 constituents (27.0%) were included in cosmetic series. A significant number of constituents, such as fragrances, vehicles and surfactants, were not included. Only 41.7% of antigens in cosmetic series were found to be in the cosmetics sampled. Conclusion Constituents that are not included in the patch test but possess antigenicity are widely used in cosmetics. Therefore, the patch test should be modified to reflect ingredients in the marketed products that may stimulate allergies. PMID:20711261

  12. Testing Algorithmic Skills in Traditional and Non-Traditional Programming Environments

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Máth, János; Abari, Kálmán

    2015-01-01

    The Testing Algorithmic and Application Skills (TAaAS) project was launched in the 2011/2012 academic year to test first year students of Informatics, focusing on their algorithmic skills in traditional and non-traditional programming environments, and on the transference of their knowledge of Informatics from secondary to tertiary education. The…

  13. Development of Online Cognitive and Algorithm Tests as Assessment Tools in Introductory Computer Science Courses

    ERIC Educational Resources Information Center

    Avancena, Aimee Theresa; Nishihara, Akinori; Vergara, John Paul

    2012-01-01

    This paper presents the online cognitive and algorithm tests, which were developed in order to determine if certain cognitive factors and fundamental algorithms correlate with the performance of students in their introductory computer science course. The tests were implemented among Management Information Systems majors from the Philippines and…

  14. The Comparability of the Statistical Characteristics of Test Items Generated by Computer Algorithms.

    ERIC Educational Resources Information Center

    Meisner, Richard; And Others

    This paper presents a study on the generation of mathematics test items using algorithmic methods. The history of this approach is briefly reviewed and is followed by a survey of the research to date on the statistical parallelism of algorithmically generated mathematics items. Results are presented for 8 parallel test forms generated using 16…

  15. Photo Library of the Nevada Site Office (Includes historical archive of nuclear testing images)

    DOE Data Explorer

    The Nevada Site Office makes available publicly released photos from their archive that includes photos from both current programs and historical activities. The historical collections include atmospheric and underground nuclear testing photos and photos of other events and people related to the Nevada Test Site. Current collections are focused on homeland security, stockpile stewardship, and environmental management and restoration. See also the Historical Film Library at http://www.nv.doe.gov/library/films/testfilms.aspx and the Current Film Library at http://www.nv.doe.gov/library/films/current.aspx. Current films can be viewed online, but only short clips of the historical films are viewable. They can be ordered via an online request form for a very small shipping and handling fee.

  16. An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.

    PubMed

    Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D

    2016-05-01

    Over the past decades, many studies have been published on the extraction of non-invasive foetal electrocardiogram (NI-FECG) from abdominal recordings. Most of these contributions claim to obtain excellent results in detecting foetal QRS (FQRS) complexes in terms of location. A small subset of authors have investigated the extraction of morphological features from the NI-FECG. However, due to the shortage of available public databases, the large variety of performance measures employed and the lack of open-source reference algorithms, most contributions cannot be meaningfully assessed. This article attempts to address these issues by presenting a standardised methodology for stress testing NI-FECG algorithms, including absolute data, as well as extraction and evaluation routines. To that end, a large database of realistic artificial signals was created, totaling 145.8 h of multichannel data and over one million FQRS complexes. An important characteristic of this dataset is the inclusion of several non-stationary events (e.g. foetal movements, uterine contractions and heart rate fluctuations) that are critical for evaluating extraction routines. To demonstrate our testing methodology, three classes of NI-FECG extraction algorithms were evaluated: blind source separation (BSS), template subtraction (TS) and adaptive methods (AM). Experiments were conducted to benchmark the performance of eight NI-FECG extraction algorithms on the artificial database focusing on: FQRS detection and morphological analysis (foetal QT and T/QRS ratio). The overall median FQRS detection accuracies (i.e. considering all non-stationary events) for the best performing methods in each group were 99.9% for BSS, 97.9% for AM and 96.0% for TS. Both FQRS detections and morphological parameters were shown to heavily depend on the extraction techniques and signal-to-noise ratio. Particularly, it is shown that their evaluation in the source domain, obtained after using a BSS technique, should be

  17. An Efficient Functional Test Generation Method For Processors Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Hudec, Ján; Gramatová, Elena

    2015-07-01

    The paper presents a new functional test generation method for processor testing based on genetic algorithms and evolutionary strategies. The tests are generated over an instruction set architecture and a processor description. Such functional tests belong to software-oriented testing. Quality of the tests is evaluated by code coverage of the processor description using simulation. The presented test generation method uses VHDL models of processors and the professional simulator ModelSim. The rules, parameters and fitness functions were defined for various genetic algorithms used in automatic test generation. Functionality and effectiveness were evaluated using the RISC-type processor DP32.

  18. Perceptual Tests of an Algorithm for Musical Key-Finding

    ERIC Educational Resources Information Center

    Schmuckler, Mark A.; Tomovski, Robert

    2005-01-01

    Perceiving the tonality of a musical passage is a fundamental aspect of the experience of hearing music. Models for determining tonality have thus occupied a central place in music cognition research. Three experiments investigated 1 well-known model of tonal determination: the Krumhansl-Schmuckler key-finding algorithm. In Experiment 1,…
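    The Krumhansl-Schmuckler algorithm itself is compact: correlate the duration-weighted pitch-class distribution of a passage with each of the 24 rotated major and minor key profiles and pick the key with the highest correlation. The sketch below uses the commonly cited Krumhansl-Kessler probe-tone values (quoted approximately) and a made-up input distribution.

```python
import numpy as np

# Commonly cited Krumhansl-Kessler probe-tone profiles (major and minor);
# treat the exact values as approximate here.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def key_find(pc_durations):
    """Krumhansl-Schmuckler style key-finding: correlate a duration-weighted
    pitch-class distribution with all 24 rotated key profiles and return the
    best-matching key."""
    pc = np.asarray(pc_durations, dtype=float)
    scores = []
    for tonic in range(12):
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            r = np.corrcoef(pc, np.roll(profile, tonic))[0, 1]
            scores.append((r, NOTE_NAMES[tonic] + " " + mode))
    return max(scores)

# Duration-weighted pitch classes of a C-major fragment (more weight on C and G).
durations = [2, 0, 1, 0, 1, 1, 0, 2, 0, 1, 0, 1]
print(key_find(durations))
```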

  19. A Bi-objective Model Inspired Greedy Algorithm for Test Suite Minimization

    NASA Astrophysics Data System (ADS)

    Parsa, Saeed; Khalilian, Alireza

    Regression testing is a critical activity which occurs during the maintenance stage of the software lifecycle. However, it requires large numbers of test cases to assure the attainment of a certain degree of quality. As a result, test suite sizes may grow significantly. To address this issue, Test Suite Reduction techniques have been proposed. However, suite size reduction may lead to significant loss of fault detection efficacy. To deal with this problem, a greedy algorithm is presented in this paper. This algorithm attempts to select a test case which satisfies the maximum number of testing requirements while having minimum overlap in requirements coverage with other test cases. In order to evaluate the proposed algorithm, experiments have been conducted on the Siemens suite and the Space program. The results demonstrate the effectiveness of the proposed algorithm by retaining the fault detection capability of the suites while achieving significant suite size reduction.
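    The selection rule described above can be sketched in a few lines of greedy, set-cover-style code: at each step pick the test case that satisfies the most still-uncovered requirements, breaking ties in favour of the least overlap with requirements already covered. The coverage mapping below is hypothetical.

```python
def minimize_suite(coverage):
    """Greedy test-suite reduction sketch inspired by the described bi-objective
    rule: repeatedly pick the test case covering the most still-unsatisfied
    requirements, breaking ties by the least overlap with covered ones."""
    remaining = set().union(*coverage.values())
    covered, selected = set(), []
    while remaining:
        def score(tc):
            reqs = coverage[tc]
            return (len(reqs & remaining), -len(reqs & covered))
        best = max((tc for tc in coverage if tc not in selected), key=score)
        gained = coverage[best] & remaining
        if not gained:
            break
        selected.append(best)
        covered |= coverage[best]
        remaining -= coverage[best]
    return selected

# Hypothetical mapping of test cases to the requirements they satisfy.
coverage = {
    "t1": {"r1", "r2", "r3"},
    "t2": {"r1", "r4"},
    "t3": {"r3", "r4", "r5"},
    "t4": {"r5"},
    "t5": {"r2", "r6"},
}
print(minimize_suite(coverage))   # e.g. ['t1', 't3', 't5']
```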

  20. Applications of Assignment Algorithms to Nonparametric Tests for Homogeneity

    DTIC Science & Technology

    2009-09-01

    of alternatives but is less powerful than the Mann-Whitney test (Conover, 1999), is the Wald-Wolfowitz runs test (Wald and Wolfowitz, 1940) ... from different distributions. Like the Mann-Whitney test, the Wald-Wolfowitz test is asymptotically normal. Two other tests that are consistent ... and multivariate cases: techniques based on rank permutations (such as Mann-Whitney and Wald-Wolfowitz) and tests based on distribution function

  1. Characterization of variables that may influence ozenoxacin in susceptibility testing, including MIC and MBC values.

    PubMed

    Tato, Marta; López, Yuly; Morosini, Maria Isabel; Moreno-Bofarull, Ana; Garcia-Alonso, Fernando; Gargallo-Viola, Domingo; Vila, Jordi; Cantón, Rafael

    2014-03-01

    Ozenoxacin is a new des-fluoro-(6)-quinolone active against pathogens involved in skin and skin structure infections, including Gram-positives resistant to fluoroquinolones. The in vitro bacteriostatic and bactericidal activity of ozenoxacin, ciprofloxacin, and levofloxacin was studied against 40 clinical isolates and 16 ATCC quality control strains under different test conditions, including cation supplementation, pH, inoculum size, inoculum preparation, incubation time, human serum, and CO2 incubation. The activity of ozenoxacin was unaffected by cation test medium supplementation, inoculum preparation, incubation time, and the increasing CO2 environment. In contrast, ozenoxacin activity was decreased by a high inoculum (10^7 CFU/mL), an increased presence of human serum in the medium, and increased pH. The last effect was different for ciprofloxacin and levofloxacin, whose activity decreased when pH decreased. The bactericidal mode of action of ozenoxacin and control drugs was consistently maintained (MBC/MIC ratios ≤4) in spite of variations of their activity under different test conditions.

  2. Particle-In-Cell Multi-Algorithm Numerical Test-Bed

    NASA Astrophysics Data System (ADS)

    Meyers, M. D.; Yu, P.; Tableman, A.; Decyk, V. K.; Mori, W. B.

    2015-11-01

    We describe a numerical test-bed that allows for the direct comparison of different numerical simulation schemes using only a single code. It is built from the UPIC Framework, which is a set of codes and modules for constructing parallel PIC codes. In this test-bed code, Maxwell's equations are solved in Fourier space in two dimensions. One can readily examine the numerical properties of a real space finite difference scheme by including its operators' Fourier space representations in the Maxwell solver. The fields can be defined at the same location in a simulation cell or can be offset appropriately by half-cells, as in the Yee finite difference time domain scheme. This allows for the accurate comparison of numerical properties (dispersion relations, numerical stability, etc.) across finite difference schemes, or against the original spectral scheme. We have also included different options for the charge and current deposits, including a strict charge conserving current deposit. The test-bed also includes options for studying the analytic time domain scheme, which eliminates numerical dispersion errors in vacuum. We will show examples from the test-bed that illustrate how the properties of some numerical instabilities vary between different PIC algorithms. Work supported by the NSF grant ACI 1339893 and DOE grant DE-SC0008491.

  3. Applying 3D measurements and computer matching algorithms to two firearm examination proficiency tests.

    PubMed

    Ott, Daniel; Thompson, Robert; Song, Junfeng

    2017-02-01

    In order for a crime laboratory to assess a firearms examiner's training, skills, experience, and aptitude, it is necessary for the examiner to participate in proficiency testing. As computer algorithms for comparisons of pattern evidence become more prevalent, it is of interest to test algorithm performance as well, using these same proficiency examinations. This article demonstrates the use of the Congruent Matching Cell (CMC) algorithm to compare 3D topography measurements of breech face impressions and firing pin impressions from a previously distributed firearms proficiency test. In addition, the algorithm is used to analyze the distribution of many comparisons from a collection of cartridge cases used to construct another recent set of proficiency tests. These results are provided along with visualizations that help to relate the features used in optical comparisons by examiners to the features used by computer comparison algorithms.

  4. Small-scale rotor test rig capabilities for testing vibration alleviation algorithms

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.; Leyland, Jane Anne

    1987-01-01

    A test was conducted to assess the capabilities of a small scale rotor test rig for implementing higher harmonic control and stability augmentation algorithms. The test rig uses three high speed actuators to excite the swashplate over a range of frequencies. The actuator position signals were monitored to measure the response amplitudes at several frequencies. The ratio of response amplitude to excitation amplitude was plotted as a function of frequency. In addition to actuator performance, acceleration from six accelerometers placed on the test rig was monitored to determine whether a linear relationship exists between the harmonics of the N/Rev control input and the measured vibration response. The least square error (LSE) identification technique was used to identify local and global transfer matrices for two rotor speeds at two batch sizes each. It was determined that the multicyclic control computer system interfaced very well with the rotor system and kept track of the input accelerometer signals and their phase angles. However, the current high speed actuators were found to be incapable of providing sufficient control authority at the higher excitation frequencies.
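    The transfer-matrix identification mentioned above is, in essence, a linear least-squares problem: with the quasi-static model z = z0 + Tθ relating harmonic control inputs θ to measured vibration harmonics z, a batch of input/response pairs determines T and z0. The sketch below shows that batch LSE step on synthetic data; it is a generic formulation, not the rig's actual identification code.

```python
import numpy as np

def identify_t_matrix(thetas, vibrations):
    """Least-squares identification of the transfer matrix T (and the baseline
    vibration z0) in the linear model z = z0 + T * theta, from a batch of
    harmonic control inputs and measured vibration responses."""
    thetas = np.asarray(thetas, dtype=float)          # (n_samples, n_controls)
    vibrations = np.asarray(vibrations, dtype=float)  # (n_samples, n_outputs)
    # Augment with a column of ones so the baseline z0 is estimated too.
    A = np.hstack([thetas, np.ones((thetas.shape[0], 1))])
    coeffs, *_ = np.linalg.lstsq(A, vibrations, rcond=None)
    T = coeffs[:-1].T            # (n_outputs, n_controls)
    z0 = coeffs[-1]              # baseline vibration
    return T, z0

# Synthetic batch: 3 harmonic control inputs, 6 accelerometer harmonics.
rng = np.random.default_rng(0)
T_true = rng.normal(size=(6, 3))
z0_true = rng.normal(size=6)
theta = rng.normal(size=(20, 3))
z = theta @ T_true.T + z0_true + 0.01 * rng.normal(size=(20, 6))
T_est, z0_est = identify_t_matrix(theta, z)
print(np.allclose(T_est, T_true, atol=0.05), np.allclose(z0_est, z0_true, atol=0.05))
```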

  5. More Than Just Accuracy: A Novel Method to Incorporate Multiple Test Attributes in Evaluating Diagnostic Tests Including Point of Care Tests

    PubMed Central

    Weigl, Bernhard; Fitzpatrick, Annette; Ide, Nicole

    2016-01-01

    Current frameworks for evaluating diagnostic tests are constrained by a focus on diagnostic accuracy, and assume that all aspects of the testing process and test attributes are discrete and equally important. Determining the balance between the benefits and harms associated with new or existing tests has been overlooked. Yet, this is critically important information for stakeholders involved in developing, testing, and implementing tests. This is particularly important for point of care tests (POCTs) where tradeoffs exist between numerous aspects of the testing process and test attributes. We developed a new model that multiple stakeholders (e.g., clinicians, patients, researchers, test developers, industry, regulators, and health care funders) can use to visualize the multiple attributes of tests, the interactions that occur between these attributes, and their impacts on health outcomes. We use multiple examples to illustrate interactions between test attributes (test availability, test experience, and test results) and outcomes, including several POCTs. The model could be used to prioritize research and development efforts, and inform regulatory submissions for new diagnostics. It could potentially provide a way to incorporate the relative weights that various subgroups or clinical settings might place on different test attributes. Our model provides a novel way that multiple stakeholders can use to visualize test attributes, their interactions, and impacts on individual and population outcomes. We anticipate that this will facilitate more informed decision making around diagnostic tests. PMID:27574576

  6. Incremental Yield of Including Determine-TB LAM Assay in Diagnostic Algorithms for Hospitalized and Ambulatory HIV-Positive Patients in Kenya

    PubMed Central

    Ferlazzo, Gabriella; Bevilacqua, Paolo; Kirubi, Beatrice; Ardizzoni, Elisa; Wanjala, Stephen; Sitienei, Joseph; Bonnet, Maryline

    2017-01-01

    Background Determine-TB LAM assay is a urine point-of-care test useful for TB diagnosis in HIV-positive patients. We assessed the incremental diagnostic yield of adding LAM to algorithms based on clinical signs, sputum smear-microscopy, chest X-ray and Xpert MTB/RIF in HIV-positive patients with symptoms of pulmonary TB (PTB). Methods Prospective observational cohort of ambulatory (either severely ill or CD4 <200 cells/μl or with Body Mass Index <17 kg/m²) and hospitalized symptomatic HIV-positive adults in Kenya. Incremental diagnostic yield of adding LAM was the difference in the proportion of confirmed TB patients (positive Xpert or MTB culture) diagnosed by the algorithm with LAM compared to the algorithm without LAM. The multivariable mortality model was adjusted for age, sex, clinical severity, BMI, CD4, ART initiation, LAM result and TB confirmation. Results Among 474 patients included, 44.1% were severely ill, 69.6% had CD4 <200 cells/μl, 59.9% had initiated ART, 23.2% could not produce sputum. LAM, smear-microscopy, Xpert and culture in sputum were positive in 39.0% (185/474), 21.6% (76/352), 29.1% (102/350) and 39.7% (92/232) of the patients tested, respectively. Of 156 patients with confirmed TB, 65.4% were LAM positive. Of those classified as non-TB, 84.0% were LAM negative. Adding LAM increased the diagnostic yield of the algorithms by 36.6%, from 47.4% (95%CI:39.4–55.6) to 84.0% (95%CI:77.3–89.4%), when using clinical signs and X-ray; by 19.9%, from 62.2% (95%CI:54.1–69.8) to 82.1% (95%CI:75.1–87.7), when using clinical signs and microscopy; and by 13.4%, from 74.4% (95%CI:66.8–81.0) to 87.8% (95%CI:81.6–92.5), when using clinical signs and Xpert. LAM positive patients had an increased risk of 2-month mortality (aOR:2.7; 95%CI:1.5–4.9). Conclusion LAM should be included in TB diagnostic algorithms in parallel to microscopy or Xpert request for HIV-positive patients either ambulatory (severely ill or CD4 <200 cells/μl) or hospitalized. LAM

  7. Low voltage 30-cm ion thruster development. [including performance and structural integrity (vibration) tests

    NASA Technical Reports Server (NTRS)

    King, H. J.

    1974-01-01

    The basic goal was to advance the development status of the 30-cm electron bombardment ion thruster from a laboratory model to a flight-type engineering model (EM) thruster. This advancement included the more conventional aspects of mechanical design and testing for launch loads, weight reduction, fabrication process development, reliability and quality assurance, and interface definition, as well as a relatively significant improvement in thruster total efficiency. The achievement of this goal was demonstrated by the successful completion of a series of performance and structural integrity (vibration) tests. In the course of the program, essentially every part and feature of the original 30-cm Thruster was critically evaluated. These evaluations, led to new or improved designs for the ion optical system, discharge chamber, cathode isolator vaporizer assembly, main isolator vaporizer assembly, neutralizer assembly, packaging for thermal control, electrical terminations and structure.

  8. An Algorithm for Real-Time Optimal Photocurrent Estimation Including Transient Detection for Resource-Constrained Imaging Applications

    NASA Astrophysics Data System (ADS)

    Zemcov, Michael; Crill, Brendan; Ryan, Matthew; Staniszewski, Zak

    2016-06-01

    Mega-pixel charge-integrating detectors are common in near-IR imaging applications. Optimal signal-to-noise ratio estimates of the photocurrents, which are particularly important in the low-signal regime, are produced by fitting linear models to sequential reads of the charge on the detector. Algorithms that solve this problem have a long history, but can be computationally intensive. Furthermore, the cosmic ray background is appreciable for these detectors in Earth orbit, particularly above the Earth’s magnetic poles and the South Atlantic Anomaly, and on-board reduction routines must be capable of flagging affected pixels. In this paper, we present an algorithm that generates optimal photocurrent estimates and flags random transient charge generation from cosmic rays, and is specifically designed to fit on a computationally restricted platform. We take as a case study the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx), a NASA Small Explorer astrophysics experiment concept, and show that the algorithm can easily fit in the resource-constrained environment of such a restricted platform. Detailed simulations of the input astrophysical signals and detector array performance are used to characterize the fitting routines in the presence of complex noise properties and charge transients. We use both Hubble Space Telescope Wide Field Camera-3 and Wide-field Infrared Survey Explorer to develop an empirical understanding of the susceptibility of near-IR detectors in low earth orbit and build a model for realistic cosmic ray energy spectra and rates. We show that our algorithm generates an unbiased estimate of the true photocurrent that is identical to that from a standard line fitting package, and characterize the rate, energy, and timing of both detected and undetected transient events. This algorithm has significant potential for imaging with charge-integrating detectors in astrophysics, earth science, and remote
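    A bare-bones version of the two ingredients described, slope fitting of the sampled-up-the-ramp reads and flagging of a charge jump from a cosmic-ray hit, is sketched below. It uses an ordinary (unweighted) line fit and a robust outlier test on the read-to-read differences, so it is a simplification of the optimal, resource-constrained estimator the paper develops; the ramp parameters are invented.

```python
import numpy as np

def fit_ramp_with_transient_flag(reads, dt, jump_sigma=5.0):
    """Fit a straight line to non-destructive detector reads to estimate the
    photocurrent (slope), flagging a cosmic-ray-like transient when one
    read-to-read difference is a strong outlier; fit only the pre-jump segment."""
    reads = np.asarray(reads, dtype=float)
    t = np.arange(len(reads)) * dt
    diffs = np.diff(reads)
    med = np.median(diffs)
    mad = np.median(np.abs(diffs - med))
    jumps = np.where(np.abs(diffs - med) > jump_sigma * max(1.4826 * mad, 1e-12))[0]
    if jumps.size:
        cut = max(jumps[0] + 1, 2)        # keep at least two reads for the fit
        t, reads = t[:cut], reads[:cut]
    slope, _intercept = np.polyfit(t, reads, 1)
    return slope, bool(jumps.size)

# Simulated ramp: 2 e-/s photocurrent, 1 e- read noise, cosmic-ray hit at read 12.
rng = np.random.default_rng(1)
dt = 1.5
ramp = 2.0 * np.arange(20) * dt + rng.normal(0.0, 1.0, 20)
ramp[12:] += 300.0
print(fit_ramp_with_transient_flag(ramp, dt))   # slope near 2, transient flagged
```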

  9. Clinical features of congenital adrenal insufficiency including growth patterns and significance of ACTH stimulation test.

    PubMed

    Koh, Ji Won; Kim, Gu Hwan; Yoo, Han Wook; Yu, Jeesuk

    2013-11-01

    Congenital adrenal insufficiency is caused by specific genetic mutations. Early suspicion and definite diagnosis are crucial because the disease can precipitate a life-threatening hypovolemic shock without prompt treatment. This study was designed to understand the clinical manifestations including growth patterns and to find the usefulness of ACTH stimulation test. Sixteen patients with confirmed genotyping were subdivided into three groups according to the genetic study results: congenital adrenal hyperplasia due to 21-hydroxylase deficiency (CAH, n=11), congenital lipoid adrenal hyperplasia (n=3) and X-linked adrenal hypoplasia congenita (n=2). Bone age advancement was prominent in patients with CAH especially after 60 months of chronologic age (n=6, 67%). They were diagnosed in older ages in group with bone age advancement (P<0.05). Comorbid conditions such as obesity, mental retardation, and central precocious puberty were also prominent in this group. In conclusion, this study showed the importance of understanding the clinical symptoms as well as genetic analysis for early diagnosis and management of congenital adrenal insufficiency. ACTH stimulation test played an important role to support the diagnosis and serum 17-hydroxyprogesterone levels were significantly elevated in all of the CAH patients. The test will be important for monitoring growth and puberty during follow up of patients with congenital adrenal insufficiency.

  10. Clinical Features of Congenital Adrenal Insufficiency Including Growth Patterns and Significance of ACTH Stimulation Test

    PubMed Central

    Koh, Ji Won; Kim, Gu Hwan; Yoo, Han Wook

    2013-01-01

    Congenital adrenal insufficiency is caused by specific genetic mutations. Early suspicion and definite diagnosis are crucial because the disease can precipitate a life-threatening hypovolemic shock without prompt treatment. This study was designed to understand the clinical manifestations including growth patterns and to find the usefulness of ACTH stimulation test. Sixteen patients with confirmed genotyping were subdivided into three groups according to the genetic study results: congenital adrenal hyperplasia due to 21-hydroxylase deficiency (CAH, n=11), congenital lipoid adrenal hyperplasia (n=3) and X-linked adrenal hypoplasia congenita (n=2). Bone age advancement was prominent in patients with CAH especially after 60 months of chronologic age (n=6, 67%). They were diagnosed in older ages in group with bone age advancement (P<0.05). Comorbid conditions such as obesity, mental retardation, and central precocious puberty were also prominent in this group. In conclusion, this study showed the importance of understanding the clinical symptoms as well as genetic analysis for early diagnosis and management of congenital adrenal insufficiency. ACTH stimulation test played an important role to support the diagnosis and serum 17-hydroxyprogesterone levels were significantly elevated in all of the CAH patients. The test will be important for monitoring growth and puberty during follow up of patients with congenital adrenal insufficiency. PMID:24265530

  11. FY 2016 Status Report: Documentation of All CIRFT Data including Hydride Reorientation Tests (Draft M2)

    SciTech Connect

    Wang, Jy-An John; Wang, Hong; Jiang, Hao; Yan, Yong; Bevard, Bruce B.; Scaglione, John M.

    2016-09-04

    The first portion of this report provides a detailed description of fiscal year (FY) 2015 test result corrections and analysis updates based on FY 2016 updates to the Cyclic Integrated Reversible-Bending Fatigue Tester (CIRFT) program methodology, which is used to evaluate the vibration integrity of spent nuclear fuel (SNF) under normal conditions of transport (NCT). The CIRFT consists of a U-frame test setup and a real-time curvature measurement method. The three-component U-frame setup of the CIRFT has two rigid arms and linkages connecting to a universal testing machine. The curvature SNF rod bending is obtained through a three-point deflection measurement method. Three linear variable differential transformers (LVDTs) are clamped to the side connecting plates of the U-frame and used to capture deformation of the rod. The second portion of this report provides the latest CIRFT data, including data for the hydride reorientation test. The variations in fatigue life are provided in terms of moment, equivalent stress, curvature, and equivalent strain for the tested SNFs. The equivalent stress plot collapsed the data points from all of the SNF samples into a single zone. A detailed examination revealed that, at the same stress level, fatigue lives display a descending order as follows: H. B. Robinson Nuclear Power Station (HBR), LMK, and mixed uranium-plutonium oxide (MOX). Just looking at the strain, LMK fuel has a slightly longer fatigue life than HBR fuel, but the difference is subtle. The third portion of this report provides finite element analysis (FEA) dynamic deformation simulation of SNF assemblies . In a horizontal layout under NCT, the fuel assembly’s skeleton, which is formed by guide tubes and spacer grids, is the primary load bearing apparatus carrying and transferring vibration loads within an SNF assembly. These vibration loads include interaction forces between the SNF assembly and the canister basket walls. Therefore, the integrity of the guide

  12. Simple and Effective Algorithms: Computer-Adaptive Testing.

    ERIC Educational Resources Information Center

    Linacre, John Michael

    Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…

  13. Synthesizing ocean bottom pressure records including seismic wave and tsunami contributions: Toward realistic tests of monitoring systems

    NASA Astrophysics Data System (ADS)

    Saito, Tatsuhiko; Tsushima, Hiroaki

    2016-11-01

    The present study proposes a method for synthesizing the ocean bottom pressure records during a tsunamigenic earthquake. First, a linear seismic wave simulation is conducted with a kinematic earthquake fault model as a source. Then, a nonlinear tsunami simulation is conducted using the sea bottom movement calculated in the seismic wave simulation. By using these simulation results, this method can provide realistic ocean bottom pressure change data, including both seismic and tsunami contributions. A simple theoretical consideration indicates that the dynamic pressure change caused by the sea bottom acceleration can contribute significantly until the duration of 90 s for a depth of 4000 m in the ocean. The performance of a tsunami monitoring system was investigated using the synthesized ocean bottom pressure records. It indicates that the system based on the hydrostatic approximation could not measure the actual tsunami height when the time does not elapse enough. The dynamic pressure change and the permanent sea bottom deformation inside the source region break the condition of a simple hydrostatic approximation. A tsunami source estimation method of tFISH is also examined. Even though the synthesized records contain a large dynamic pressure change, which is not considered in the algorithm, tFISH showed a satisfactory performance 5 min after the earthquake occurrence. The pressure records synthesized in this study, including both seismic wave and tsunami contributions, are more practical for evaluating the performance of our monitoring ability, whereas most tsunami monitoring tests neglect the seismic wave contribution.
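
    As a rough illustration of why the dynamic term matters, the hydrostatic tsunami signal scales as rho*g*eta, while the bottom-acceleration contribution for a water column of depth h scales roughly as rho*h*a. These are standard order-of-magnitude approximations used here for illustration, not the authors' derivation, and the example numbers are arbitrary.

        # Compare the hydrostatic tsunami pressure term with a rough estimate of
        # the dynamic term driven by sea-bottom acceleration (illustrative only).
        RHO = 1025.0   # seawater density, kg/m^3
        G = 9.81       # gravitational acceleration, m/s^2

        def hydrostatic_pressure(eta_m):
            """Pressure change (Pa) from a sea-surface elevation eta_m (m)."""
            return RHO * G * eta_m

        def dynamic_pressure(depth_m, accel_ms2):
            """Approximate pressure change (Pa) from sea-bottom acceleration."""
            return RHO * depth_m * accel_ms2

        # A 0.1 m tsunami vs. a 0.001 m/s^2 bottom acceleration at 4000 m depth.
        print(hydrostatic_pressure(0.1))        # ~1.0e3 Pa
        print(dynamic_pressure(4000.0, 1e-3))   # ~4.1e3 Pa, comparable or larger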

  14. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…

  15. Collaborative Research Developing, Testing and Validating Brain Alignment Algorithm using Geometric Analysis

    DTIC Science & Technology

    2013-11-13

    This is the final report by the University of Southern California on an AFOSR grant, part of a joint program with Harvard University (PI, Shing-Tung...the algorithm was the task assigned to Harvard University). Finally, we were to test and validate the algorithm once it had been developed.

  16. ZEUS-2D: A radiation magnetohydrodynamics code for astrophysical flows in two space dimensions. I - The hydrodynamic algorithms and tests.

    NASA Astrophysics Data System (ADS)

    Stone, James M.; Norman, Michael L.

    1992-06-01

    A detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows including a self-consistent treatment of the effects of magnetic fields and radiation transfer is presented. Attention is given to the hydrodynamic (HD) algorithms which form the foundation for the more complex MHD and radiation HD algorithms. The effect of self-gravity on the flow dynamics is accounted for by an iterative solution of the sparse-banded matrix resulting from discretizing the Poisson equation in multidimensions. The results of an extensive series of HD test problems are presented. A detailed description of the MHD algorithms in ZEUS-2D is presented. A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-constrained transport method provides for the accurate evolution of all modes of MHD wave families.

  17. The classification and diagnostic algorithm for primary lymphatic dysplasia: an update from 2010 to include molecular findings.

    PubMed

    Connell, F C; Gordon, K; Brice, G; Keeley, V; Jeffery, S; Mortimer, P S; Mansour, S; Ostergaard, P

    2013-10-01

    Historically, primary lymphoedema was classified into just three categories depending on the age of onset of swelling; congenital, praecox and tarda. Developments in clinical phenotyping and identification of the genetic cause of some of these conditions have demonstrated that primary lymphoedema is highly heterogenous. In 2010, we introduced a new classification and diagnostic pathway as a clinical and research tool. This algorithm has been used to delineate specific primary lymphoedema phenotypes, facilitating the discovery of new causative genes. This article reviews the latest molecular findings and provides an updated version of the classification and diagnostic pathway based on this new knowledge.

  18. A Test Generation Framework for Distributed Fault-Tolerant Algorithms

    NASA Technical Reports Server (NTRS)

    Goodloe, Alwyn; Bushnell, David; Miner, Paul; Pasareanu, Corina S.

    2009-01-01

    Heavyweight formal methods such as theorem proving have been successfully applied to the analysis of safety critical fault-tolerant systems. Typically, the models and proofs performed during such analysis do not inform the testing process of actual implementations. We propose a framework for generating test vectors from specifications written in the Prototype Verification System (PVS). The methodology uses a translator to produce a Java prototype from a PVS specification. Symbolic (Java) PathFinder is then employed to generate a collection of test cases. A small example is employed to illustrate how the framework can be used in practice.

  19. Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

    2008-01-01

    In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…
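
    The article's genetic algorithm is not described in the truncated abstract; the following is a generic, mutation-only GA sketch for the same kind of task, using a hypothetical bank of 2PL items and a target test information function (TIF) evaluated at three ability levels. Item parameters, population size, and the fitness function are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical 2PL item bank: item information I(theta) = a^2 * p * (1 - p).
        N_ITEMS, FORM_LEN = 200, 30
        a = rng.uniform(0.5, 2.0, N_ITEMS)
        b = rng.normal(0.0, 1.0, N_ITEMS)
        THETAS = np.array([-1.0, 0.0, 1.0])

        def item_info(theta):
            p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
            return a**2 * p * (1 - p)

        INFO = np.vstack([item_info(t) for t in THETAS])   # shape (3, N_ITEMS)
        TARGET = np.array([6.0, 8.0, 6.0])                 # desired TIF values

        def fitness(mask):
            tif = INFO @ mask
            return -np.sum((tif - TARGET) ** 2)            # closer to target is better

        def random_form():
            mask = np.zeros(N_ITEMS)
            mask[rng.choice(N_ITEMS, FORM_LEN, replace=False)] = 1
            return mask

        def mutate(mask):
            child = mask.copy()
            on, off = np.flatnonzero(child == 1), np.flatnonzero(child == 0)
            child[rng.choice(on)], child[rng.choice(off)] = 0, 1   # swap one item
            return child

        # Plain (mu + lambda)-style loop with mutation only, for brevity.
        pop = [random_form() for _ in range(40)]
        for _ in range(300):
            pop.sort(key=fitness, reverse=True)
            pop = pop[:20] + [mutate(p) for p in pop[:20]]
        best = max(pop, key=fitness)
        print("squared TIF deviation:", -fitness(best))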

  20. Flight test results of failure detection and isolation algorithms for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Motyka, P. R.; Bailey, M. L.

    1990-01-01

    Flight test results for two sensor fault-tolerant algorithms developed for a redundant strapdown inertial measurement unit are presented. The inertial measurement unit (IMU) consists of four two-degrees-of-freedom gyros and accelerometers mounted on the faces of a semi-octahedron. Fault tolerance is provided by edge vector test and generalized likelihood test algorithms, each of which can provide dual fail-operational capability for the IMU. To detect the wide range of failure magnitudes in inertial sensors, which provide flight crucial information for flight control and navigation, failure detection and isolation are developed in terms of a multi level structure. Threshold compensation techniques, developed to enhance the sensitivity of the failure detection process to navigation level failures, are presented. Four flight tests were conducted in a commercial transport-type environment to compare and determine the performance of the failure detection and isolation methods. Dual flight processors enabled concurrent tests for the algorithms. Failure signals such as hard-over, null, or bias shift, were added to the sensor outputs as simple or multiple failures during the flights. Both algorithms provided timely detection and isolation of flight control level failures. The generalized likelihood test algorithm provided more timely detection of low-level sensor failures, but it produced one false isolation. Both algorithms demonstrated the capability to provide dual fail-operational performance for the skewed array of inertial sensors.

  1. Considerations When Including Students with Disabilities in Test Security Policies. NCEO Policy Directions. Number 23

    ERIC Educational Resources Information Center

    Lazarus, Sheryl; Thurlow, Martha

    2015-01-01

    Sound test security policies and procedures are needed to ensure test security and confidentiality, and to help prevent cheating. In this era when cheating on tests draws regular media attention, there is a need for thoughtful consideration of the ways in which possible test security measures may affect accessibility for some students with…

  2. Equivalency of Spanish language versions of the trail making test part B including or excluding "CH".

    PubMed

    Cherner, Mariana; Suarez, Paola; Posada, Carolina; Fortuny, Lidia Artiola I; Marcotte, Thomas; Grant, Igor; Heaton, Robert

    2008-07-01

    Spanish speakers commonly use two versions of the alphabet, one that includes the sound "Ch" between C and D and another that goes directly to D, as in English. Versions of the Trail Making Test Part B (TMT-B) have been created accordingly to accommodate this preference. The pattern and total number of circles to be connected are identical between versions. However, the equivalency of these alternate forms has not been reported. We compared the performance of 35 healthy Spanish speakers who completed the "Ch" form (CH group) to that of 96 individuals who received the standard form (D group), based on whether they mentioned "Ch" in their oral recitation of the alphabet. The groups had comparable demographic characteristics and overall neuropsychological performance. There were no significant differences in TMT-B scores between the CH and D groups, and relationships with demographic variables were comparable. The findings suggest that both versions are equivalent and can be administered to Spanish speakers based on their preference without sacrificing comparability.

  3. A Review of Scoring Algorithms for Ability and Aptitude Tests.

    ERIC Educational Resources Information Center

    Chevalier, Shirley A.

    In conventional practice, most educators and educational researchers score cognitive tests using a dichotomous right-wrong scoring system. Although simple and straightforward, this method does not take into consideration other factors, such as partial knowledge or guessing tendencies and abilities. This paper discusses alternative scoring models:…

  4. Vertical drop test of a transport fuselage center section including the wheel wells

    NASA Technical Reports Server (NTRS)

    Williams, M. S.; Hayduk, R. J.

    1983-01-01

    A Boeing 707 fuselage section was drop tested to measure structural, seat, and anthropomorphic dummy response to vertical crash loads. The specimen had nominally zero pitch, roll and yaw at impact with a sink speed of 20 ft/sec. Results from this drop test and other drop tests of different transport sections will be used to prepare for a full-scale crash test of a B-720.

  5. Computationally efficient algorithms for the two-dimensional Kolmogorov Smirnov test

    NASA Astrophysics Data System (ADS)

    Lopes, R. H. C.; Hobson, P. R.; Reid, I. D.

    2008-07-01

    Goodness-of-fit statistics measure the compatibility of random samples against some theoretical or reference probability distribution function. The classical one-dimensional Kolmogorov-Smirnov test is a non-parametric statistic for comparing two empirical distributions which defines the largest absolute difference between the two cumulative distribution functions as a measure of disagreement. Adapting this test to more than one dimension is a challenge because there are 2^d - 1 independent ways of ordering a cumulative distribution function in d dimensions. We discuss Peacock's version of the Kolmogorov-Smirnov test for two-dimensional data sets which computes the differences between cumulative distribution functions in 4n^2 quadrants. We also examine Fasano and Franceschini's variation of Peacock's test, Cooke's algorithm for Peacock's test, and ROOT's version of the two-dimensional Kolmogorov-Smirnov test. We establish a lower-bound limit on the work for computing Peacock's test of Ω(n^2 lg n), introducing optimal algorithms for both this and Fasano and Franceschini's test, and show that Cooke's algorithm is not a faithful implementation of Peacock's test. We also discuss and evaluate parallel algorithms for Peacock's test.
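
    For concreteness, a brute-force sketch in the spirit of the Fasano and Franceschini variant mentioned above: each data point serves as the origin of four quadrants, and the statistic is the largest difference between the two samples' quadrant fractions. This is the naive O(n^2) approach, not the optimal algorithms introduced in the paper.

        import numpy as np

        def ks2d_statistic(x1, y1, x2, y2):
            """Brute-force two-sample 2-D KS statistic: use each data point as a
            quadrant origin and take the largest difference in quadrant fractions
            between the two samples (Fasano-Franceschini style)."""
            def max_quadrant_diff(xo, yo):
                d = 0.0
                for xc, yc in zip(xo, yo):
                    for gx, gy in ((np.greater, np.greater), (np.greater, np.less),
                                   (np.less, np.greater), (np.less, np.less)):
                        f1 = np.mean(gx(x1, xc) & gy(y1, yc))
                        f2 = np.mean(gx(x2, xc) & gy(y2, yc))
                        d = max(d, abs(f1 - f2))
                return d
            # Average the statistics obtained using each sample's points as origins.
            return 0.5 * (max_quadrant_diff(x1, y1) + max_quadrant_diff(x2, y2))

        rng = np.random.default_rng(1)
        x1, y1 = rng.normal(size=200), rng.normal(size=200)
        x2, y2 = rng.normal(0.5, 1.0, 200), rng.normal(size=200)
        print(ks2d_statistic(x1, y1, x2, y2))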

  6. Testing the Spectral Deconvolution Algorithm Tool (SDAT) with Xe Spectra

    DTIC Science & Technology

    2007-09-01

    import spectra, analyze the data for Xe concentrations, and graphically display the results. This tool has been tested with data generated via MCNPX ...characteristics, e.g., the sample Xe gas volume from which the total sampled atmospheric volume is calculated. The sample histogram will be deconvolved...SDAT window. It contains the concentration coefficients and errors of each radioxenon of interest calculated with and without the use of data

  7. Interpretation of Colloid-Homologue Tracer Test 10-03, Including Comparisons to Test 10-01

    SciTech Connect

    Reimus, Paul W.

    2012-06-26

    This presentation covers the interpretations of colloid-homologue tracer test 10-03 conducted at the Grimsel Test Site, Switzerland, in 2010. It also provides a comparison of the interpreted test results with those of tracer test 10-01, which was conducted in the same fracture flow system and using the same tracers as test 10-03, but at a higher extraction flow rate. A method of correcting for apparent uranine degradation in test 10-03 is presented. Conclusions are: (1) Uranine degradation occurred in test 10-03, but not in 10-01; (2) Uranine correction based on apparent degradation rate in injection loop in test 11-02 seems reasonable when applied to data from test 10-03; (3) Colloid breakthrough curves were quite similar in the two tests, with similar recoveries relative to uranine (after correction); and (4) Much slower apparent desorption of homologues in test 10-03 than in 10-01 (any effect of residual homologues from test 10-01 in test 10-03?).

  8. DEVELOPMENT AND TESTING OF FAULT-DIAGNOSIS ALGORITHMS FOR REACTOR PLANT SYSTEMS

    SciTech Connect

    Grelle, Austin L.; Park, Young S.; Vilim, Richard B.

    2016-06-26

    Argonne National Laboratory is further developing fault diagnosis algorithms for use by the operator of a nuclear plant to aid in improved monitoring of overall plant condition and performance. The objective is better management of plant upsets through more timely, informed decisions on control actions with the ultimate goal of improved plant safety, production, and cost management. Integration of these algorithms with visual aids for operators is taking place through a collaboration under the concept of an operator advisory system. This is a software entity whose purpose is to manage and distill the enormous amount of information an operator must process to understand the plant state, particularly in off-normal situations, and how the state trajectory will unfold in time. The fault diagnosis algorithms were exhaustively tested using computer simulations of twenty different faults introduced into the chemical and volume control system (CVCS) of a pressurized water reactor (PWR). The algorithms are unique in that each new application to a facility requires providing only the piping and instrumentation diagram (PID) and no other plant-specific information; a subject-matter expert is not needed to install and maintain each instance of an application. The testing approach followed accepted procedures for verifying and validating software. It was shown that the code satisfies its functional requirement which is to accept sensor information, identify process variable trends based on this sensor information, and then to return an accurate diagnosis based on chains of rules related to these trends. The validation and verification exercise made use of GPASS, a one-dimensional systems code, for simulating CVCS operation. Plant components were failed and the code generated the resulting plant response. Parametric studies with respect to the severity of the fault, the richness of the plant sensor set, and the accuracy of sensors were performed as part of the validation

  9. Classifying performance impairment in response to sleep loss using pattern recognition algorithms on single session testing

    PubMed Central

    St. Hilaire, Melissa A.; Sullivan, Jason P.; Anderson, Clare; Cohen, Daniel A.; Barger, Laura K.; Lockley, Steven W.; Klerman, Elizabeth B.

    2012-01-01

    There is currently no “gold standard” marker of cognitive performance impairment resulting from sleep loss. We utilized pattern recognition algorithms to determine which features of data collected under controlled laboratory conditions could most reliably identify cognitive performance impairment in response to sleep loss using data from only one testing session, such as would occur in the “real world” or field conditions. A training set for testing the pattern recognition algorithms was developed using objective Psychomotor Vigilance Task (PVT) and subjective Karolinska Sleepiness Scale (KSS) data collected from laboratory studies during which subjects were sleep deprived for 26 – 52 hours. The algorithm was then tested in data from both laboratory and field experiments. The pattern recognition algorithm was able to identify performance impairment with a single testing session in individuals studied under laboratory conditions using PVT, KSS, length of time awake and time of day information with sensitivity and specificity as high as 82%. When this algorithm was tested on data collected under real-world conditions from individuals whose data were not in the training set, accuracy of predictions for individuals categorized with low performance impairment were as high as 98%. Predictions for medium and severe performance impairment were less accurate. We conclude that pattern recognition algorithms may be a promising method for identifying performance impairment in individuals using only current information about the individual’s behavior. Single testing features (e.g., number of PVT lapses) with high correlation with performance impairment in the laboratory setting may not be the best indicators of performance impairment under real-world conditions. Pattern recognition algorithms should be further tested for their ability to be used in conjunction with other assessments of sleepiness in real-world conditions to quantify performance impairment in

  10. The Langley thermal protection system test facility: A description including design operating boundaries

    NASA Technical Reports Server (NTRS)

    Klich, G. F.

    1976-01-01

    A description of the Langley thermal protection system test facility is presented. This facility was designed to provide realistic environments and times for testing thermal protection systems proposed for use on high speed vehicles such as the space shuttle. Products from the combustion of methane-air-oxygen mixtures, having a maximum total enthalpy of 10.3 MJ/kg, are used as a test medium. Test panels with maximum dimensions of 61 cm x 91.4 cm are mounted in the side wall of the test region. Static pressures in the test region can range from .005 to .1 atm and calculated equilibrium temperatures of test panels range from 700 K to 1700 K. Test times can be as long as 1800 sec. Some experimental data obtained while using combustion products of methane-air mixtures are compared with theory, and calibration of the facility is being continued to verify calculated values of parameters which are within the design operating boundaries.

  11. A Runs-Test Algorithm: Contingent Reinforcement and Response Run Structures

    ERIC Educational Resources Information Center

    Hachiga, Yosuke; Sakagami, Takayuki

    2010-01-01

    Four rats' choices between two levers were differentially reinforced using a runs-test algorithm. On each trial, a runs-test score was calculated based on the last 20 choices. In Experiment 1, the onset of stimulus lights cued when the runs score was smaller than criterion. Following cuing, the correct choice was occasionally reinforced with food,…
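
    The exact scoring rule used in the experiment is not given in the truncated abstract; as an illustration, a standard Wald-Wolfowitz runs-test z score over a window of recent binary choices could be computed as follows (the window length and any criterion threshold are assumptions).

        import math

        def runs_test_z(choices):
            """Wald-Wolfowitz runs-test z score for a binary choice sequence
            (e.g. the last 20 left/right lever presses)."""
            n1 = sum(choices)
            n2 = len(choices) - n1
            if n1 == 0 or n2 == 0:
                return 0.0
            runs = 1 + sum(a != b for a, b in zip(choices, choices[1:]))
            mean = 1 + 2 * n1 * n2 / (n1 + n2)
            var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / \
                  ((n1 + n2) ** 2 * (n1 + n2 - 1))
            return (runs - mean) / math.sqrt(var)

        # A strictly alternating sequence yields a large positive z (more runs than
        # chance); a blocked sequence yields a large negative z.
        print(runs_test_z([0, 1] * 10))
        print(runs_test_z([0] * 10 + [1] * 10))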

  12. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  13. Unified framework for development, deployment and robust testing of neuroimaging algorithms.

    PubMed

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H; Papademetris, Xenophon

    2011-03-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software--BioImage Suite (bioimagesuite.org).

  14. Operational feasibility of using whole blood in the rapid HIV testing algorithm of a resource-limited settings like Bangladesh

    PubMed Central

    Munshi, Saif U.; Oyewale, Tajudeen O.; Begum, Shahnaz; Uddin, Ziya; Tabassum, Shahina

    2016-01-01

    Background The serum-based rapid HIV testing algorithm in Bangladesh constitutes an operational challenge to scaling up HIV testing and counselling (HTC) in the country. This study explored the operational feasibility of using whole blood as an alternative to serum for rapid HIV testing in Bangladesh. Methods Whole blood specimens were collected from two study groups. The groups included HIV-positive patients (n = 200) and HIV-negative individuals (n = 200) presenting at the reference laboratory in Dhaka, Bangladesh. The specimens were subjected to rapid HIV tests using the national algorithm with A1 = Alere Determine (United States), A2 = Uni-Gold (Ireland), and A3 = First Response (India). The sensitivity and specificity of the test results, and the operational cost, were compared with current serum-based testing. Results The sensitivities [95% confidence interval (CI)] for the A1, A2, and A3 tests using whole blood were 100% (CI: 99.1–100%), 100% (CI: 99.1–100%), and 97% (CI: 96.4–98.2%), respectively, and the specificities of all test kits were 100% (CI: 99.1–100%). Significant (P < 0.05) reductions in the cost of establishing an HTC centre and of consumables, by 94 and 61% respectively, were observed. The costs of administration and external quality assurance were reduced by 39 and 43%, respectively. Overall, there was a 36% reduction in the total operational cost of rapid HIV testing with blood when compared with serum. Conclusion Considering the similar sensitivity and specificity of the two specimens, and the significant cost reduction, rapid HIV testing with whole blood is feasible. A review of the national HIV rapid testing algorithm with whole blood will contribute toward improving HTC coverage in Bangladesh. PMID:26945143
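
    As a minimal illustration of the reported accuracy metrics, sensitivity and specificity with simple Wald confidence intervals can be computed from the 2x2 counts; the counts below are hypothetical, and the interval method used in the paper is not necessarily the Wald interval shown here.

        import math

        def sensitivity_specificity(tp, fn, tn, fp, z=1.96):
            """Point estimates and Wald 95% confidence intervals for the
            sensitivity and specificity of a rapid test."""
            def prop_ci(k, n):
                p = k / n
                half = z * math.sqrt(p * (1 - p) / n)
                return p, max(0.0, p - half), min(1.0, p + half)
            return {"sensitivity": prop_ci(tp, tp + fn),
                    "specificity": prop_ci(tn, tn + fp)}

        # Hypothetical counts resembling the study design
        # (200 HIV-positive and 200 HIV-negative specimens):
        print(sensitivity_specificity(tp=194, fn=6, tn=200, fp=0))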

  15. Risk algorithms that include pathology adjustment for HER2 amplification need to make further downward adjustments in likelihood scores.

    PubMed

    Evans, D G; Woodward, E R; Howell, S J; Verhoef, S; Howell, A; Lalloo, F

    2017-04-01

    To assess the need for adjustment in the likelihood of germline BRCA1/2 mutations in women with HER2+ breast cancers. We analysed primary mutation screens on women with breast cancer with unequivocal HER2 overexpression and assessed the likelihood of BRCA1/BRCA2 mutations by age, oestrogen receptor status and Manchester score. Of 1111 primary BRCA screens with confirmed HER2 status, only 4/161 (2.5%) of women with HER2 amplification had a BRCA1 mutation identified and 5/161 (3.1%) a BRCA2 mutation. The pathology-adjusted Manchester score at the 10-19% and 20%+ thresholds resulted in detection rates of only 6.5 and 15%, respectively. BOADICEA examples appeared to make even less downward adjustment. There is a very low detection rate of BRCA1 and BRCA2 mutations in women with HER2-amplified breast cancers. The Manchester score and BOADICEA do not make sufficient downward adjustment for HER2 amplification. For unaffected women, assessment of breast cancer risk and BRCA1/2 probability should take into account the pathology of the most relevant close relative. Unaffected women undergoing mutation testing for BRCA1/2 should be advised that there is limited reassurance from a negative test result if their close relative had a HER2+ breast cancer.

  16. Including State Excitation in the Fixed-Interval Smoothing Algorithm and Implementation of the Maneuver Detection Method Using Error Residuals

    DTIC Science & Technology

    1990-12-01

    Naval Postgraduate School thesis, Monterey, California (DTIC accession AD-A246 336). Subject terms: filter, smoothing, noise process, maneuver detection. Only a fragmentary abstract is available; it notes that the smoothed error covariance at the final data point must equal the filtered covariance (citing Meditch) and covers maneuver detection using error residuals.

  17. Manufacture of fiber-epoxy test specimens: Including associated jigs and instrumentation

    NASA Technical Reports Server (NTRS)

    Mathur, S. B.; Felbeck, D. K.

    1980-01-01

    Experimental work on the manufacture and strength of graphite-epoxy composites is considered. Correct data, and thus a true assessment of the strength properties, depend on properly and scientifically modeled test specimens with engineered design, construction, and manufacture; reported optimized values show a very broad spread. Such behavior is mainly due to inadequate control during manufacture of test specimens, improper curing, and uneven scatter in the fiber orientation. The graphite fibers are strong but brittle. Even with various epoxy matrices and volume fractions, the fracture toughness is still relatively low. Graphite-epoxy prepreg tape was investigated as a sandwich construction with intermittent interlaminar bonding between the laminates in order to produce high strength, high fracture toughness composites. The quality and control of manufacture of the multilaminate test specimen blanks was emphasized. The dimensions, orientation and cure must be meticulous in order to produce the desired mix.

  18. Nuclear Rocket Test Facility Decommissioning Including Controlled Explosive Demolition of a Neutron-Activated Shield Wall

    SciTech Connect

    Michael Kruzic

    2007-09-01

    Located in Area 25 of the Nevada Test Site, the Test Cell A Facility was used in the 1960s for the testing of nuclear rocket engines, as part of the Nuclear Rocket Development Program. The facility was decontaminated and decommissioned (D&D) in 2005 using the Streamlined Approach For Environmental Restoration (SAFER) process, under the Federal Facilities Agreement and Consent Order (FFACO). Utilities and process piping were verified void of contents, hazardous materials were removed, concrete with removable contamination decontaminated, large sections mechanically demolished, and the remaining five-foot, five-inch thick radiologically-activated reinforced concrete shield wall demolished using open-air controlled explosive demolition (CED). CED of the shield wall was closely monitored and resulted in no radiological exposure or atmospheric release.

  19. Drop and Flight Tests on NY-2 Landing Gears Including Measurements of Vertical Velocities at Landing

    NASA Technical Reports Server (NTRS)

    Peck, W D; Beard, A P

    1933-01-01

    This investigation was conducted to obtain quantitative information on the effectiveness of three landing gears for the NY-2 (Consolidated training) airplane. The investigation consisted of static, drop, and flight tests on landing gears of the oleo-rubber-disk and the Mercury rubber-cord types, and flight tests only on a landing gear of the conventional split-axle rubber-cord type. The results show that the oleo gear is the most effective of the three landing gears in minimizing impact forces and in dissipating the energy taken.

  20. Public interest in predictive genetic testing, including direct-to-consumer testing, for susceptibility to major depression: preliminary findings.

    PubMed

    Wilde, Alex; Meiser, Bettina; Mitchell, Philip B; Schofield, Peter R

    2010-01-01

    The past decade has seen rapid advances in the identification of associations between candidate genes and a range of common multifactorial disorders. This paper evaluates public attitudes towards the complexity of genetic risk prediction in psychiatry involving susceptibility genes, uncertain penetrance and gene-environment interactions on which successful molecular-based mental health interventions will depend. A qualitative approach was taken to enable the exploration of the views of the public. Four structured focus groups were conducted with a total of 36 participants. The majority of participants indicated interest in having a genetic test for susceptibility to major depression, if it was available. Having a family history of mental illness was cited as a major reason. After discussion of perceived positive and negative implications of predictive genetic testing, nine of 24 participants initially interested in having such a test changed their mind. Fear of genetic discrimination and privacy issues predominantly influenced change of attitude. All participants still interested in having a predictive genetic test for risk for depression reported they would only do so through trusted medical professionals. Participants were unanimously against direct-to-consumer genetic testing marketed through the Internet, although some would consider it if there was suitable protection against discrimination. The study highlights the importance of general practitioner and public education about psychiatric genetics, and the availability of appropriate treatment and support services prior to implementation of future predictive genetic testing services.

  1. Test driving ToxCast: endocrine profiling for 1858 chemicals included in phase II

    EPA Science Inventory

    Introduction: Identifying chemicals to test for potential endocrine disruption beyond those already implicated in the peer-reviewed literature is a challenge. This review is intended to help by summarizing findings from the Environmental Protection Agency’s (EPA) ToxCast™ high th...

  2. Development of potential methods for testing congestion control algorithm implemented in vehicle to vehicle communications.

    PubMed

    Hsu, Chung-Jen; Fikentscher, Joshua; Kreeb, Robert

    2017-03-21

    Objective A channel congestion problem might occur when the traffic density increases since the number of basic safety messages carried on the communication channel also increases in vehicle-to-vehicle communications. A remedy algorithm proposed in SAE J2945/1 is designed to address the channel congestion issue by decreasing transmission frequency and radiated power. This study aims to develop potential test procedures for evaluating or validating the congestion control algorithm. Methods Simulations of a reference unit transmitting at a higher frequency are implemented to emulate a number of Onboard Equipment (OBE) transmitting at the normal interval of 100 milliseconds (10 Hz). When the transmitting interval is reduced to 1.25 milliseconds (800 Hz), the reference unit emulates 80 vehicles transmitting at 10 Hz. By increasing the number of reference units transmitting at 800 Hz in the simulations, the corresponding channel busy percentages are obtained. An algorithm for GPS data generation of virtual vehicles is developed for facilitating the validation of transmission intervals in the congestion control algorithm. Results Channel busy percentage is the channel busy time over a specified period of time. Three or four reference units are needed to generate channel busy percentages between 50% and 80%, and five reference units can generate channel busy percentages above 80%. The proposed test procedures can verify the operation of the congestion control algorithm when channel busy percentages are between 50% and 80%, and above 80%. By using the GPS data generation algorithm, the test procedures can also verify the transmission intervals when traffic densities are 80 and 200 vehicles within a radius of 100 m. A suite of test tools with functional requirements is also proposed for facilitating the implementation of test procedures. Conclusions The potential test procedures for the congestion control algorithm are developed based on the simulation results of channel busy
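
    A rough sketch of the channel-busy-percentage bookkeeping described in the Methods: each reference unit transmitting at 800 Hz stands in for 80 vehicles at 10 Hz, and the busy percentage is the total on-air time over an observation window. The per-message air time and the no-overlap assumption below are illustrative guesses, not values from the study.

        def channel_busy_percentage(num_units, rate_hz, msg_duration_s, window_s=1.0):
            """Rough channel-busy percentage assuming non-overlapping transmissions:
            total on-air time of all messages divided by the observation window."""
            busy_time = num_units * rate_hz * window_s * msg_duration_s
            return min(100.0, 100.0 * busy_time / window_s)

        # Assumed air time per basic safety message (a guess, not a measured value).
        MSG_DURATION = 250e-6
        for n_ref in (3, 4, 5):
            print(n_ref, "reference units ->",
                  round(channel_busy_percentage(n_ref, 800, MSG_DURATION), 1), "% busy")
        # The resulting percentages fall in the same general range as those reported
        # above, though the true values depend on the actual message air time.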

  3. Three-dimensional array-based group testing algorithms with one-stage

    NASA Astrophysics Data System (ADS)

    Martins, João Paulo; Felgueiras, Miguel; Santos, Rui

    2015-12-01

    The use of three-dimensional array-based testing algorithms is, in some situations, more efficient and accurate than other more commonly used protocols for testing pooled samples. We evaluate the advantages of using these complex one-stage pooling schemes for the problem of estimating the prevalence rate of some disease. Using simulation work, we show that there does not seem to be any advantage in using three- or even higher-dimensional arrays for this type of problem.

  4. The algorithm of crack and crack tip coordinates detection in optical images during fatigue test

    NASA Astrophysics Data System (ADS)

    Panin, S. V.; Chemezov, V. O.; Lyubutin, P. S.; Titkov, V. V.

    2017-02-01

    An algorithm of crack detection during fatigue testing of materials, designed to automate the process of cyclic loading and tracking the crack tip, is proposed and tested. The ultimate goal of the study is aimed at controlling the displacements of the optical system with regard to the specimen under fatigue loading to ensure observation of the ‘area of interest’. It is shown that the image region that contains the crack may be detected and positioned with an average error of 1.93%. In terms of determining the crack tip position, the algorithm provides the accuracy of its localization with the average error value of 56 pixels.

  5. Testing and Adapting a Daytime Four Band Satellite Ash Detection Algorithm for Eruptions in Alaska and the Kamchatka Peninsula, Russia

    NASA Astrophysics Data System (ADS)

    Andrup-Henriksen, G.; Skoog, R. A.

    2007-12-01

    Volcanic ash is detectable from satellite remote sensing due to the differences in spectral signatures compared to meteorological clouds. Recently, a new global daytime ash detection algorithm was developed at the University of Wisconsin-Madison. The algorithm is based on four spectral bands with the central wavelengths 0.65, 3.75, 11 and 12 micrometers that are common on weather satellite sensors including MODIS, AVHRR, GOES and MTSAT. The initial development of the algorithm was primarily based on MODIS data with global coverage. We have tested it using three years of AVHRR data in Alaska and the Kamchatka Peninsula, Russia. All the AVHRR data have been manually analyzed and recorded into an observational database during the daily monitoring performed by the remote sensing group at the Alaska Volcano Observatory (AVO). By taking the manual observations as accurate, we were able to examine the accuracy of the four-channel algorithm for daytime data. The results were also compared to the current automated ash alarm used by AVO, based on the reverse absorption technique, also known as the split window method, with a threshold of -1.7 K. This comparison indicates that the four-band technique has a higher sensitivity to volcanic ash, but a greater number of false alarms. The algorithm was modified to achieve a false alarm rate comparable to the current ash alarm while still maintaining increased sensitivity.
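
    The four-band algorithm itself is not specified in the abstract, but the baseline it is compared against is: the reverse absorption (split window) test flags a pixel as ash when the 11 minus 12 micrometer brightness temperature difference drops below a threshold such as the -1.7 K value quoted above. A minimal sketch:

        import numpy as np

        def reverse_absorption_flag(bt11_k, bt12_k, threshold_k=-1.7):
            """Split-window (reverse absorption) ash test: flag pixels where the
            11 um minus 12 um brightness temperature difference falls below the
            threshold (-1.7 K is the operational value quoted in the abstract)."""
            btd = np.asarray(bt11_k) - np.asarray(bt12_k)
            return btd < threshold_k

        # Example: one ash-like pixel (negative BTD) and one meteorological cloud.
        print(reverse_absorption_flag([268.0, 255.0], [270.5, 253.0]))  # [ True False]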

  6. A parameter estimation algorithm for spatial sine testing - Theory and evaluation

    NASA Technical Reports Server (NTRS)

    Rost, R. W.; Deblauwe, F.

    1992-01-01

    This paper presents the theory and an evaluation of a spatial sine testing parameter estimation algorithm that uses directly the measured forced mode of vibration and the measured force vector. The parameter estimation algorithm uses an ARMA model and a recursive QR algorithm is applied for data reduction. In this first evaluation, the algorithm has been applied to a frequency response matrix (which is a particular set of forced mode of vibration) using a sliding frequency window. The objective of the sliding frequency window is to execute the analysis simultaneously with the data acquisition. Since the pole values and the modal density are obtained from this analysis during the acquisition, the analysis information can be used to help determine the forcing vectors during the experimental data acquisition.

  7. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-state duty cycles, including ramped-modal testing? 1039.505 Section 1039.505 Protection of Environment... duty cycles, including ramped-modal testing? This section describes how to test engines under steady-state conditions. In some cases, we allow you to choose the appropriate steady-state duty cycle for...

  8. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-state duty cycles, including ramped-modal testing? 1039.505 Section 1039.505 Protection of Environment... duty cycles, including ramped-modal testing? This section describes how to test engines under steady-state conditions. In some cases, we allow you to choose the appropriate steady-state duty cycle for...

  9. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-state duty cycles, including ramped-modal testing? 1039.505 Section 1039.505 Protection of Environment... duty cycles, including ramped-modal testing? This section describes how to test engines under steady-state conditions. In some cases, we allow you to choose the appropriate steady-state duty cycle for...

  10. [Long QT syndrome: a brief review of the electrocardiographical diagnosis including Viskin's test].

    PubMed

    Márquez, Manlio F

    2012-01-01

    The QT interval measures both repolarization and depolarization. Learning to measure the QT interval and knowing how to correct it (QTc) for heart rate (HR) is essential for the diagnosis of long QT syndrome (LQTS). The QTc interval changes in duration and even morphology depending on the time of day and on a day-to-day basis. A diminished adaptive response of the QTc interval to changes in HR is known as QT hysteresis. Viskin has introduced a very simple clinical test to confirm the diagnosis of LQTS based on the "hypoadaptation" of the QT when standing. This phenomenon gives the appearance of a "stretching of the QT" on the surface ECG. Likewise, he has coined the term "QT stunning" to refer to the phenomenon that the QTc interval does not return to baseline despite recovery of baseline HR after standing. This article shows some examples of Viskin's test.
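
    The article does not specify which rate-correction formula it uses; as an assumed illustration, the commonly taught Bazett correction divides the measured QT by the square root of the RR interval.

        import math

        def qtc_bazett(qt_ms, heart_rate_bpm):
            """Heart-rate corrected QT interval (ms) using Bazett's formula,
            QTc = QT / sqrt(RR), with RR expressed in seconds."""
            rr_s = 60.0 / heart_rate_bpm
            return qt_ms / math.sqrt(rr_s)

        # Example: a 400 ms QT at 75 bpm (RR = 0.8 s) gives a QTc of about 447 ms;
        # values persistently above roughly 460-480 ms raise suspicion of LQTS.
        print(round(qtc_bazett(400, 75)))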

  11. Simulation analysis of the EUSAMA Plus suspension testing method including the impact of the vehicle untested side

    NASA Astrophysics Data System (ADS)

    Dobaj, K.

    2016-09-01

    The work deals with a simulation analysis of the influence of half-car vehicle model parameters on suspension testing results. The Matlab simulation software was used. The considered model parameters are the shock absorber damping coefficient, the tire radial stiffness, the car width, and the rocker arm length. Consistent vibration of both test plates was considered. Both wheels of the car were subjected to identical vibration, with the frequency varied in a manner similar to the EUSAMA Plus principle. The shock absorber damping coefficient (for several values of the car width and rocker arm length) was changed on one side and on both sides of the vehicle. The obtained results are essential for the new suspension testing algorithm (based on the EUSAMA Plus principle), which will be the aim of the author's further work.

  12. Pilot's Guide to an Airline Career, Including Sample Pre-Employment Tests.

    ERIC Educational Resources Information Center

    Traylor, W.L.

    Occupational information for persons considering a career as an airline pilot includes a detailed description of the pilot's duties and material concerning preparation for occupational entry and determining the relative merits of available jobs. The book consists of four parts: Part I, The Job, provides an overview of a pilot's duties in his daily…

  13. Solar Energy Education. Home economics: teacher's guide. Field test edition. [Includes glossary

    SciTech Connect

    Not Available

    1981-06-01

    An instructional aid is provided for home economics teachers who wish to integrate the subject of solar energy into their classroom activities. This teacher's guide was produced along with the student activities book for home economics by the US Department of Energy Solar Energy Education. A glossary of solar energy terms is included. (BCS)

  14. Solar Energy Education. Industrial arts: teacher's guide. Field test edition. [Includes glossary

    SciTech Connect

    Not Available

    1981-05-01

    An instructional aid is presented which integrates the subject of solar energy into the classroom study of industrial arts. This guide for teachers was produced in addition to the student activities book for industrial arts by the USDOE Solar Energy Education. A glossary of solar energy terms is included. (BCS)

  15. An evaluation of the NASA Tech House, including live-in test results, volume 1

    NASA Technical Reports Server (NTRS)

    Abbott, I. H. A.; Hopping, K. A.; Hypes, W. D.

    1979-01-01

    The NASA Tech House was designed and constructed at the NASA Langley Research Center, Hampton, Virginia, to demonstrate and evaluate new technology potentially applicable for conservation of energy and resources and for improvements in safety and security in a single-family residence. All technology items, including solar-energy systems and a waste-water-reuse system, were evaluated under actual living conditions for a 1 year period with a family of four living in the house in their normal lifestyle. Results are presented which show overall savings in energy and resources compared with requirements for a defined similar conventional house under the same conditions. General operational experience and performance data are also included for all the various items and systems of technology incorporated into the house design.

  16. Directionally solidified lamellar eutectic superalloys by edge-defined, film-fed growth. [including tensile tests

    NASA Technical Reports Server (NTRS)

    Hurley, G. F.

    1975-01-01

    A program was performed to scale up the edge-defined, film-fed growth (EFG) method for the gamma/gamma prime-beta eutectic alloy of the nominal composition Ni-19.7 Cb - 6 Cr-2.5 Al. Procedures and problem areas are described. Flat bars approximately 12 x 1.7 x 200 mm were grown, mostly at speeds of 38 mm/hr, and tensile tests on these bars at 25 and 1000 C showed lower strength than expected. The feasibility of growing hollow airfoils was also demonstrated by growing bars over 200 mm long with a teardrop shaped cross-section, having a major dimension of 12 mm and a maximum width of 5 mm.

  17. Algorithms for Developing Test Questions from Sentences in Instructional Materials: an Extension of an Earlier Study

    DTIC Science & Technology

    1980-01-01

    age were developed using the following procedure: 1. The selected material was computer-analyzed to identify high-information words—those that an...frequencies (keyword and rare singletons), (4) the two foil types (writer's choice and algorithmic), and (5) the two test occasions (pretest and

  18. An approach to a comprehensive test framework for analysis and evaluation of text line segmentation algorithms.

    PubMed

    Brodic, Darko; Milivojevic, Dragan R; Milivojevic, Zoran N

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of the mismatch, the reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-like text samples and real handwritten text as well. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency based on the obtained error type classification are proposed. The first is based on the segmentation line error description, while the second one incorporates well-known signal detection theory. Each of them has different capabilities and convenience, but they can be used as supplements to make the evaluation process efficient. Overall the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe measurement procedures.

  19. Algorithms for Developing Test Questions from Sentences in Instructional Materials. Interim Report, January-September 1977.

    ERIC Educational Resources Information Center

    Roid, Gale; Finn, Patrick

    The feasibility of generating multiple-choice test questions by transforming sentences from prose instructional materials was examined. A computer-based algorithm was used to analyze prose subject matter and to identify high-information words. Sentences containing selected words were then transformed into multiple-choice items by four writers who…

  20. Algorithms for Developing Test Questions from Sentences in Instructional Materials: An Extension of an Earlier Study.

    ERIC Educational Resources Information Center

    Roid, Gale H.; And Others

    An earlier study was extended and replicated to examine the feasibility of generating multiple-choice test questions by transforming sentences from prose instructional material. In the first study, a computer-based algorithm was used to analyze prose subject matter and to identify high-information words. Sentences containing selected words were…

  1. An Algorithm to Improve Test Answer Copying Detection Using the Omega Statistic

    ERIC Educational Resources Information Center

    Maeda, Hotaka; Zhang, Bo

    2017-01-01

    The omega (ω) statistic is reputed to be one of the best indices for detecting answer copying on multiple choice tests, but its performance relies on the accurate estimation of copier ability, which is challenging because responses from the copiers may have been contaminated. We propose an algorithm that aims to identify and delete the suspected…

  2. Development, analysis, and testing of robust nonlinear guidance algorithms for space applications

    NASA Astrophysics Data System (ADS)

    Wibben, Daniel R.

    not identical. Finally, this work has a large focus on the application of these various algorithms to a large number of space based applications. These include applications to powered-terminal descent for landing on planetary bodies such as the moon and Mars and to proximity operations (landing, hovering, or maneuvering) about small bodies such as an asteroid or a comet. Further extensions of these algorithms have allowed for adaptation of a hybrid control strategy for planetary landing, and the combined modeling and simultaneous control of both the vehicle's position and orientation implemented within a full six degree-of-freedom spacecraft simulation.

  3. A test of a modified algorithm for computing spherical harmonic coefficients using an FFT

    NASA Technical Reports Server (NTRS)

    Elowitz, Mark; Hill, Frank; Duvall, Thomas L., Jr.

    1989-01-01

    The Dilts (1985) algorithm for computing the spherical harmonic expansion coefficients for a function on a sphere, on the basis of a two-dimensional FFT, is presently modified, tested, and found to eliminate problems of overflow and large storage requirements associated with the encounter of harmonic degree values greater than 16. Results from timing tests show the Dilts program to be impractical, however, for the computation of spherical harmonic expansion coefficients for large harmonic degree values.

  4. Experimental infrared point-source detection using an iterative generalized likelihood ratio test algorithm.

    PubMed

    Nichols, J M; Waterman, J R

    2017-03-01

    This work documents the performance of a recently proposed generalized likelihood ratio test (GLRT) algorithm in detecting thermal point-source targets against a sky background. A calibrated source is placed above the horizon at various ranges and then imaged using a mid-wave infrared camera. The proposed algorithm combines a so-called "shrinkage" estimator of the background covariance matrix and an iterative maximum likelihood estimator of the point-source parameters to produce the GLRT statistic. It is clearly shown that the proposed approach results in better detection performance than either standard energy detection or previous implementations of the GLRT detector.

  5. Bees Algorithm for Construction of Multiple Test Forms in E-Testing

    ERIC Educational Resources Information Center

    Songmuang, Pokpong; Ueno, Maomi

    2011-01-01

    The purpose of this research is to automatically construct multiple equivalent test forms that have equivalent qualities indicated by test information functions based on item response theory. There has been a trade-off in previous studies between the computational costs and the equivalent qualities of test forms. To alleviate this problem, we…

  6. Simple but novel test method for quantitatively comparing robot mapping algorithms using SLAM and dead reckoning

    NASA Astrophysics Data System (ADS)

    Davey, Neil S.; Godil, Haris

    2013-05-01

    This article presents a comparative study between a well-known SLAM (Simultaneous Localization and Mapping) algorithm, called Gmapping, and a standard Dead-Reckoning algorithm; the study is based on experimental results of both approaches by using a commercial skid-based turning robot, P3DX. Five main base-case scenarios are conducted to evaluate and test the effectiveness of both algorithms. The results show that SLAM outperformed the Dead Reckoning in terms of map-making accuracy in all scenarios but one, since SLAM did not work well in a rapidly changing environment. Although the main conclusion about the excellence of SLAM is not surprising, the presented test method is valuable to professionals working in this area of mobile robots, as it is highly practical, and provides solid and valuable results. The novelty of this study lies in its simplicity. The simple but novel test method for quantitatively comparing robot mapping algorithms using SLAM and Dead Reckoning and some applications using autonomous robots are being patented by the authors in U.S. Patent Application Nos. 13/400,726 and 13/584,862.
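
    Gmapping is a full particle-filter SLAM package and is not reproduced here; the dead-reckoning baseline it was compared against, however, amounts to integrating wheel-odometry (or commanded) velocities under a unicycle model, roughly as sketched below. The velocities and timestep are arbitrary example values.

        import math

        def dead_reckon(pose, v, omega, dt):
            """One dead-reckoning update for a differential/skid-steer robot:
            integrate linear and angular velocity from odometry or commands.
            pose = (x, y, theta); returns the new pose."""
            x, y, theta = pose
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
            theta += omega * dt
            return (x, y, theta)

        # Example: drive forward at 0.5 m/s while turning at 0.1 rad/s for 10 s.
        pose = (0.0, 0.0, 0.0)
        for _ in range(100):
            pose = dead_reckon(pose, v=0.5, omega=0.1, dt=0.1)
        print(pose)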

  7. First principles molecular dynamics of Li: Test of a new algorithm

    NASA Astrophysics Data System (ADS)

    Wentzcovitch, Renata M.; Martins, José Luís

    1991-06-01

    We have tested a new algorithm to perform first-principles molecular dynamics simulations. This new scheme differs from the Car-Parrinello method and is based on the calculation of the self-consistent solutions of the Kohn-Sham equations at each molecular dynamics timestep, using a fast iterative diagonalization algorithm. We do not use fictitious electron dynamics, and therefore the molecular dynamics timesteps can be considerably larger in our method than in the Car-Parrinello algorithm. Furthermore, the number of basis functions is variable, which makes this method particularly suited to deal with simulations involving a cell with variable shape and volume. Application of this method to liquid Li offers results that are in excellent agreement with experiment and indicates that it is basically comparable in efficiency to the Car-Parrinello method.

  8. Reader reaction: A note on the evaluation of group testing algorithms in the presence of misclassification.

    PubMed

    Malinovsky, Yaakov; Albert, Paul S; Roy, Anindya

    2016-03-01

    In the context of group testing screening, McMahan, Tebbs, and Bilder (2012, Biometrics 68, 287-296) proposed a two-stage procedure in a heterogeneous population in the presence of misclassification. In earlier work published in Biometrics, Kim, Hudgens, Dreyfuss, Westreich, and Pilcher (2007, Biometrics 63, 1152-1162) also proposed group testing algorithms in a homogeneous population with misclassification. In both cases, the authors evaluated the performance of the algorithms based on the expected number of tests per person, with the optimal design being defined by minimizing this quantity. The purpose of this article is to show that although the expected number of tests per person is an appropriate evaluation criterion for group testing when there is no misclassification, it may be problematic when there is misclassification. Specifically, a valid criterion needs to take into account the amount of correct classification and not just the number of tests. We propose a more suitable objective function that accounts for not only the expected number of tests, but also the expected number of correct classifications. We then show how using this objective function that accounts for correct classification is important for design when considering group testing under misclassification. We also present novel analytical results which characterize the optimal Dorfman (1943) design under misclassification.
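
    A minimal numerical illustration of the point being made: under simple assumptions (Dorfman two-stage pools of size k, prevalence p, and a test with the same sensitivity and specificity at both stages, applied independently and without dilution effects), the expected number of tests per person and the expected proportion of correct classifications can be computed side by side. The formulas below follow that simplified model, not the exact evaluation in the cited papers.

      def dorfman_metrics(p, k, se, sp):
          """Expected tests per person and expected correct classifications per person for
          Dorfman two-stage group testing: pool size k, prevalence p, sensitivity se,
          specificity sp (same test characteristics at both stages, independent errors)."""
          q_all = (1.0 - p) ** k              # probability the pool contains no positives
          q_others = (1.0 - p) ** (k - 1)     # probability the other k-1 members are all negative
          p_pool_pos = se * (1.0 - q_all) + (1.0 - sp) * q_all
          tests_per_person = 1.0 / k + p_pool_pos
          # A true positive is classified correctly if the pool and the retest are both positive.
          correct_pos = se * se
          # A true negative is classified correctly if the pool is negative, or positive with a negative retest.
          p_pool_pos_given_neg = se * (1.0 - q_others) + (1.0 - sp) * q_others
          correct_neg = (1.0 - p_pool_pos_given_neg) + p_pool_pos_given_neg * sp
          correct_per_person = p * correct_pos + (1.0 - p) * correct_neg
          return tests_per_person, correct_per_person

      for k in (3, 5, 10):
          print(k, dorfman_metrics(p=0.05, k=k, se=0.95, sp=0.98))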

  9. Derivation and Testing of Computer Algorithms for Automatic Real-Time Determination of Space Vehicle Potentials in Various Plasma Environments

    DTIC Science & Technology

    1988-05-31

    Spiegel, S. L., "Derivation and Testing of Computer Algorithms for Automatic Real-Time Determination of Space Vehicle Potentials in Various Plasma Environments," report, May 31, 1988.

  10. Application of a Smart Parachute Release Algorithm to the CPAS Test Architecture

    NASA Technical Reports Server (NTRS)

    Bledsoe, Kristin

    2013-01-01

    One of the primary test vehicles for the Capsule Parachute Assembly System (CPAS) is the Parachute Test Vehicle (PTV), a capsule-shaped structure similar to the Orion design but truncated to fit in the cargo area of a C-17 aircraft. The PTV has a full Orion-like parachute compartment and similar aerodynamics; however, because of the single-point attachment of the CPAS parachutes and the lack of an Orion-like Reaction Control System (RCS), the PTV has the potential to reach significant body rates. High body rates at the time of the Drogue release may cause the PTV to flip while the parachutes deploy, which may result in the severing of the Pilot or Main risers. In order to prevent high rates at the time of Drogue release, a "smart release" algorithm was implemented in the PTV avionics system. This algorithm, which was developed for the Orion Flight system, triggers the Drogue parachute release when the body rates are near a minimum. This paper discusses the development and testing of the smart release algorithm; its implementation in the PTV avionics and the pretest simulation; and the results of its use on two CPAS tests.
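
    For intuition only, the fragment below shows one simple way a "release near a rate minimum" trigger could be expressed: scan a body-rate magnitude time history and fire when the rate is below a threshold and is a local minimum over a short window. The threshold, window, and synthetic rate profile are made up for the example; this is not the Orion or CPAS flight algorithm.

      import numpy as np

      def smart_release_index(body_rate_mag, threshold, window=5):
          """Return the first sample where the rate magnitude is below threshold and at a local minimum."""
          for i in range(window, len(body_rate_mag) - window):
              seg = body_rate_mag[i - window:i + window + 1]
              if body_rate_mag[i] < threshold and body_rate_mag[i] == seg.min():
                  return i
          return None

      t = np.linspace(0.0, 10.0, 1000)
      rate = np.abs(20.0 * np.cos(2 * np.pi * 0.4 * t)) + 2.0   # oscillating body-rate magnitude (deg/s)
      idx = smart_release_index(rate, threshold=6.0)
      print(t[idx], rate[idx])                                  # release time and the (low) rate at release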

  11. Hypercoagulable states: an algorithmic approach to laboratory testing and update on monitoring of direct oral anticoagulants

    PubMed Central

    Nakashima, Megan O.

    2014-01-01

    Hypercoagulability can result from a variety of inherited and, more commonly, acquired conditions. Testing for the underlying cause of thrombosis in a patient is complicated both by the number and variety of clinical conditions that can cause hypercoagulability as well as the many potential assay interferences. Using an algorithmic approach to hypercoagulability testing provides the ability to tailor assay selection to the clinical scenario. It also reduces the number of unnecessary tests performed, saving cost and time, and preventing potential false results. New oral anticoagulants are powerful tools for managing hypercoagulable patients; however, their use introduces new challenges in terms of test interpretation and therapeutic monitoring. The coagulation laboratory plays an essential role in testing for and treating hypercoagulable states. The input of laboratory professionals is necessary to guide appropriate testing and synthesize interpretation of results. PMID:25025009

  12. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large complex systems engineering challenge being addressed in part by focusing on the specific subsystems handling of off-nominal mission and fault tolerance. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA also has formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S

  13. Using modified fruit fly optimisation algorithm to perform the function test and case studies

    NASA Astrophysics Data System (ADS)

    Pan, Wen-Tsao

    2013-06-01

    Evolutionary computation is a computing mode established by practically simulating natural evolutionary processes based on the concept of Darwinian Theory, and it is a common research method. The main contribution of this paper was to reinforce the function of searching for the optimised solution using the fruit fly optimization algorithm (FOA), in order to avoid the acquisition of local extremum solutions. The evolutionary computation has grown to include the concepts of animal foraging behaviour and group behaviour. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimization algorithm (MFOA). It further investigated the ability of the three mathematical functions in computing extreme values, as well as the algorithm execution speed and the forecast ability of the forecasting model built using the optimised general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimization and the MFOA in regards to the ability to compute extreme values; however, they were both better than the artificial fish swarm algorithm and FOA. In addition, the MFOA performed better than the particle swarm optimization in regards to the algorithm execution speed, and the forecast ability of the forecasting model built using the MFOA's GRNN parameters was better than that of the other three forecasting models.
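
    For readers unfamiliar with the basic fruit fly optimization algorithm that the study modifies, a minimal sketch of the unmodified FOA is given below, minimizing a toy one-dimensional objective. The swarm size, iteration count, and objective are arbitrary choices, and the MFOA modifications described in the paper are not reproduced.

      import numpy as np

      def fitness(s):
          return (s - 0.35) ** 2                        # toy objective; minimum at s = 0.35

      rng = np.random.default_rng(4)
      x_axis, y_axis = rng.uniform(0, 1, size=2)        # initial swarm location
      n_flies, n_iter = 30, 200
      best_val = np.inf
      for _ in range(n_iter):
          x = x_axis + rng.uniform(-1, 1, n_flies)      # random search direction/distance per fly
          y = y_axis + rng.uniform(-1, 1, n_flies)
          dist = np.sqrt(x ** 2 + y ** 2)
          s = 1.0 / dist                                # "smell concentration judgment" value
          vals = fitness(s)
          i = np.argmin(vals)
          if vals[i] < best_val:                        # swarm flies toward the best-smelling position
              best_val = vals[i]
              x_axis, y_axis = x[i], y[i]
      print(best_val, 1.0 / np.sqrt(x_axis ** 2 + y_axis ** 2))   # objective value and recovered s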

  14. Factors associated with completion of bowel cancer screening and the potential effects of simplifying the screening test algorithm

    PubMed Central

    Kearns, Benjamin; Whyte, Sophie; Seaman, Helen E; Snowball, Julia; Halloran, Stephen P; Butler, Piers; Patnick, Julietta; Nickerson, Claire; Chilcott, Jim

    2016-01-01

    Background: The primary colorectal cancer screening test in England is a guaiac faecal occult blood test (gFOBt). The NHS Bowel Cancer Screening Programme (BCSP) interprets tests on six samples on up to three test kits to determine a definitive positive or negative result. However, the test algorithm fails to achieve a definitive result for a significant number of participants because they do not comply with the programme requirements. This study identifies factors associated with failed compliance and modifications to the screening algorithm that will improve the clinical effectiveness of the screening programme. Methods: The BCSP Southern Hub data for screening episodes started in 2006–2012 were analysed for participants aged 60–69 years. The variables included age, sex, level of deprivation, gFOBt results and clinical outcome. Results: The data set included 1 409 335 screening episodes; 95.08% of participants had a definitively normal result on kit 1 (no positive spots). Among participants asked to complete a second or third gFOBt, 5.10% and 4.65%, respectively, failed to return a valid kit. Among participants referred for follow up, 13.80% did not comply. Older age was associated with compliance at repeat testing, but non-compliance at follow up. Increasing levels of deprivation were associated with non-compliance at repeat testing and follow up. Modelling a reduction in the threshold for immediate referral led to a small increase in completion of the screening pathway. Conclusions: Reducing the number of positive spots required on the first gFOBt kit for referral for follow-up and targeted measures to improve compliance with follow-up may improve completion of the screening pathway. PMID:26766733

  15. Knowledge-based interpretation of toxoplasmosis serology test results including fuzzy temporal concepts--the ToxoNet system.

    PubMed

    Kopecky, D; Hayde, M; Prusa, A R; Adlassnig, K P

    2001-01-01

    Transplacental transmission of Toxoplasma gondii from an infected, pregnant woman to the unborn child, which occurs with a probability of about 60 percent [1], results in fetal damage to a degree depending on the gestational age. The computer system ToxoNet processes the results of serological antibody tests performed during pregnancy by means of a knowledge base containing medical knowledge on the interpretation of toxoplasmosis serology tests. By applying this knowledge, ToxoNet generates interpretive reports consisting of a diagnostic interpretation and recommendations for therapy and further testing. For that purpose, it matches the results of all serological investigations of maternal blood against the content of the knowledge base, returning complete textual interpretations for all given findings. From these, the interpretation algorithm derives the stage of maternal infection, which is then used to infer the degree of fetal threat. To account for varying immune responses of particular patients, certain time intervals have to be kept between two subsequent tests in order to guarantee a correct interpretation of the test results. These time intervals are modelled as fuzzy sets, since they allow the formal description of the temporal uncertainties. ToxoNet comprises the knowledge base, an interpretation system, and a program for the creation and modification of the knowledge base. It is available from the World Wide Web by starting a standard browser such as Internet Explorer or Netscape Navigator. Thus, ToxoNet supports the physician in toxoplasmosis diagnostics and, in addition, allows the decision-making process to be adapted to the characteristics of a particular laboratory by modifying the underlying knowledge base.

  16. Development of a Comprehensive Human Immunodeficiency Virus Type 1 Screening Algorithm for Discovery and Preclinical Testing of Topical Microbicides▿

    PubMed Central

    Lackman-Smith, Carol; Osterling, Clay; Luckenbaugh, Katherine; Mankowski, Marie; Snyder, Beth; Lewis, Gareth; Paull, Jeremy; Profy, Albert; Ptak, Roger G.; Buckheit, Robert W.; Watson, Karen M.; Cummins, James E.; Sanders-Beer, Brigitte E.

    2008-01-01

    Topical microbicides are self-administered, prophylactic products for protection against sexually transmitted pathogens. A large number of compounds with known anti-human immunodeficiency virus type 1 (HIV-1) inhibitory activity have been proposed as candidate topical microbicides. To identify potential leads, an in vitro screening algorithm was developed to evaluate candidate microbicides in assays that assess inhibition of cell-associated and cell-free HIV-1 transmission, entry, and fusion. The algorithm advances compounds by evaluation in a series of defined assays that generate measurements of relative antiviral potency to determine advancement or failure. Initial testing consists of a dual determination of inhibitory activity in the CD4-dependent CCR5-tropic cell-associated transmission inhibition assay and in the CD4/CCR5-mediated HIV-1 entry assay. The activity is confirmed by repeat testing, and identified actives are advanced to secondary screens to determine their effect on transmission of CXCR4-tropic viruses in the presence or absence of CD4 and their ability to inhibit CXCR4- and CCR5-tropic envelope-mediated cell-to-cell fusion. In addition, confirmed active compounds are also evaluated in the presence of human seminal plasma, in assays incorporating a pH 4 to 7 transition, and for growth inhibition of relevant strains of lactobacilli. Leads may then be advanced for specialized testing, including determinations in human cervical explants and in peripheral blood mononuclear cells against primary HIV subtypes, combination testing with other inhibitors, and additional cytotoxicity assays. PRO 2000 and SPL7013 (the active component of VivaGel), two microbicide products currently being evaluated in human clinical trials, were tested in this in vitro algorithm and were shown to be highly active against CCR5- and CXCR4-tropic HIV-1 infection. PMID:18316528

  17. Formal analysis, hardness, and algorithms for extracting internal structure of test-based problems.

    PubMed

    Jaśkowski, Wojciech; Krawiec, Krzysztof

    2011-01-01

    Problems in which some elementary entities interact with each other are common in computational intelligence. This scenario, typical for coevolving artificial life agents, learning strategies for games, and machine learning from examples, can be formalized as a test-based problem and conveniently embedded in the common conceptual framework of coevolution. In test-based problems, candidate solutions are evaluated on a number of test cases (agents, opponents, examples). It has recently been shown that every test of such a problem can be regarded as a separate objective, and the whole problem as multi-objective optimization. Research on reducing the number of such objectives while preserving the relations between candidate solutions and tests led to the notions of underlying objectives and internal problem structure, which can be formalized as a coordinate system that spatially arranges candidate solutions and tests. The coordinate system that spans the minimal number of axes determines the so-called dimension of a problem and, being an inherent property of every problem, is of particular interest. In this study, we investigate in-depth the formalism of a coordinate system and its properties, relate them to properties of partially ordered sets, and design an exact algorithm for finding a minimal coordinate system. We also prove that this problem is NP-hard and come up with a heuristic which is superior to the best algorithm proposed so far. Finally, we apply the algorithms to three abstract problems and demonstrate that the dimension of the problem is typically much lower than the number of tests, and for some problems converges to the intrinsic parameter of the problem, its a priori dimension.

  18. Developments of aerosol retrieval algorithm for Geostationary Environmental Monitoring Spectrometer (GEMS) and the retrieval accuracy test

    NASA Astrophysics Data System (ADS)

    KIM, M.; Kim, J.; Jeong, U.; Ahn, C.; Bhartia, P. K.; Torres, O.

    2013-12-01

    A scanning UV-Visible spectrometer, the GEMS (Geostationary Environment Monitoring Spectrometer) onboard the GEO-KOMPSAT2B (Geostationary Korea Multi-Purpose Satellite), is planned to be launched in geostationary orbit in 2018. The GEMS employs hyper-spectral imaging with 0.6 nm resolution to observe solar backscatter radiation in the UV and Visible range. In the UV range, the low surface contribution to the backscattered radiation and strong interaction between aerosol absorption and molecular scattering can be advantageous in retrieving aerosol optical properties such as aerosol optical depth (AOD) and single scattering albedo (SSA). Taking advantage of this, the OMI UV aerosol algorithm has provided information on absorbing aerosols (Torres et al., 2007; Ahn et al., 2008). This study presents a UV-VIS algorithm to retrieve AOD and SSA from GEMS. The algorithm is based on the general inversion method, which uses a pre-calculated look-up table with assumed aerosol properties and measurement conditions. To assess the retrieval accuracy, the error of the look-up table method caused by the interpolation of pre-calculated radiances is estimated using a reference dataset, and the uncertainties about aerosol type and height are evaluated. Also, the GEMS aerosol algorithm is tested with measured normalized radiance from OMI, a provisional data set for GEMS measurement, and the results are compared with the values from AERONET measurements over Asia. Additionally, a method for simultaneous retrieval of AOD and aerosol height is discussed.
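
    The core look-up-table inversion step can be illustrated very simply: for a fixed aerosol model and viewing geometry, a pre-computed table of radiance versus AOD is inverted by interpolation. The node values below are invented for illustration and do not come from the GEMS or OMI tables.

      import numpy as np

      # Hypothetical pre-computed look-up table: top-of-atmosphere normalized radiance vs AOD
      # for one aerosol model and one viewing geometry (values are illustrative only).
      aod_nodes = np.array([0.0, 0.2, 0.5, 1.0, 2.0])
      radiance_nodes = np.array([0.080, 0.095, 0.118, 0.150, 0.205])

      def retrieve_aod(measured_radiance):
          """Invert the LUT by 1-D interpolation (radiance must be monotonic in AOD here)."""
          return np.interp(measured_radiance, radiance_nodes, aod_nodes)

      print(retrieve_aod(0.105))   # an AOD between 0.2 and 0.5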

  19. In Situ Estuarine and Marine Toxicity Testing: A Review, Including Recommendations for Future Use in Ecological Risk Assessment

    DTIC Science & Technology

    2009-09-01

    A typical in situ test chamber consists of polycarbonate or polyvinyl chloride (PVC) materials; methods used to remove endemic organisms from test sediments include sieving, autoclaving, freezing, antibiotics, mercuric chloride, and gamma irradiation (ASTM 2000).

  20. Complex Demodulation in Monitoring Earth Rotation by VLBI: Testing the Algorithm by Analysis of Long Periodic EOP Components

    NASA Astrophysics Data System (ADS)

    Wielgosz, A.; Brzeziński, A.; Böhm, S.

    2016-12-01

    The complex demodulation (CD) algorithm is an efficient tool for extracting the diurnal and subdiurnal components of Earth rotation from the routine VLBI observations (Brzeziński, 2012). This algorithm was implemented by Böhm et al (2012b) into a dedicated version of the VLBI analysis software VieVs. The authors processed around 3700 geodetic 24-hour observing sessions in 1984.0-2010.5 and estimated simultaneously the time series of the long period components as well as diurnal, semidiurnal, terdiurnal and quarterdiurnal components of polar motion (PM) and universal time UT1. This paper describes the tests of the CD algorithm by checking consistency of the low frequency components of PM and UT1 estimated by VieVS CD and those from the IERS and IVS combined solutions. Moreover, the retrograde diurnal component of PM demodulated from VLBI observations has been compared to the celestial pole offsets series included in the IERS and IVS solutions. We found for all three components a good agreement of the results based on the CD approach and those based on the standard parameterization recommended by the IERS Conventions (IERS, 2010) and applied by the IERS and IVS. We conclude that an application of the CD parameterization in VLBI data analysis does not change those components of EOP which are included in the standard adjustment, while enabling simultaneous estimation of the high frequency components from the routine VLBI observations. Moreover, we deem that the CD algorithm can also be implemented in analysis of other space geodetic observations, like GNSS or SLR, enabling retrieval of subdiurnal signals in EOP from the past data.
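
    A generic complex demodulation step, independent of the VieVS implementation, can be sketched as follows: heterodyne the band of interest to zero frequency and low-pass filter to recover its slowly varying amplitude and phase. The sampling, smoothing window, and synthetic diurnal signal below are illustrative assumptions.

      import numpy as np

      def complex_demodulate(x, t, period, window=None):
          """Shift the band near 1/period to zero frequency and low-pass with a running mean."""
          omega = 2.0 * np.pi / period
          shifted = x * np.exp(-1j * omega * t)                 # heterodyne the target band to DC
          if window is None:
              window = max(int(round(3 * period / (t[1] - t[0]))), 1)
          kernel = np.ones(window) / window
          return 2.0 * np.convolve(shifted, kernel, mode="same")   # factor 2 restores amplitude

      # Synthetic test: a diurnal signal with slowly varying amplitude plus noise (time in days).
      t = np.arange(0.0, 30.0, 1.0 / 24.0)
      amp = 1.0 + 0.3 * np.sin(2 * np.pi * t / 15.0)
      x = amp * np.cos(2 * np.pi * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
      demod = complex_demodulate(x, t, period=1.0)
      print(np.abs(demod)[200:205])                             # tracks the slowly varying amplitude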

  1. Scoring Divergent Thinking Tests by Computer With a Semantics-Based Algorithm

    PubMed Central

    Beketayev, Kenes; Runco, Mark A.

    2016-01-01

    Divergent thinking (DT) tests are useful for the assessment of creative potentials. This article reports the semantics-based algorithmic (SBA) method for assessing DT. This algorithm is fully automated: Examinees receive DT questions on a computer or mobile device and their ideas are immediately compared with norms and semantic networks. This investigation compared the scores generated by the SBA method with the traditional methods of scoring DT (i.e., fluency, originality, and flexibility). Data were collected from 250 examinees using the “Many Uses Test” of DT. The most important finding involved the flexibility scores from both scoring methods. This was critical because semantic networks are based on conceptual structures, and thus a high SBA score should be highly correlated with the traditional flexibility score from DT tests. Results confirmed this correlation (r = .74). This supports the use of algorithmic scoring of DT. The nearly immediate computation time required by the SBA method may make it the method of choice, especially when it comes to moderate- and large-scale DT assessment investigations. Correlations between SBA scores and GPA were insignificant, providing evidence of the discriminant and construct validity of SBA scores. Limitations of the present study and directions for future research are offered.

  2. Cost-effectiveness of algorithms for confirmation test of human African trypanosomiasis.

    PubMed

    Lutumba, Pascal; Meheus, Filip; Robays, Jo; Miaka, Constantin; Kande, Victor; Büscher, Philippe; Dujardin, Bruno; Boelaert, Marleen

    2007-10-01

    The control of Trypanosoma brucei gambiense human African trypanosomiasis (HAT) is compromised by low sensitivity of the routinely used parasitologic confirmation tests. More sensitive alternatives, such as mini-anion exchange centrifugation technique (mAECT) or capillary tube centrifugation (CTC), are more expensive. We used formal decision analysis to assess the cost-effectiveness of alternative HAT confirmation algorithms in terms of cost per life saved. The effectiveness of the standard method, a combination of lymph node puncture (LNP), fresh blood examination (FBE), and thick blood film (TBF), was 36.8%; the LNP-FBE-CTC-mAECT sequence reached almost 80%. The cost per person examined ranged from €1.56 for LNP-FBE-TBF to €2.99 for LNP-TBF-CTC-mAECT-CATT (card agglutination test for trypanosomiasis) titration. LNP-TBF-CTC-mAECT was the most cost-effective in terms of cost per life saved. HAT confirmation algorithms that incorporate concentration techniques are more effective and efficient than the algorithms that are currently and routinely used by several T.b. gambiense control programs.

  3. Activity recognition in planetary navigation field tests using classification algorithms applied to accelerometer data.

    PubMed

    Song, Wen; Ade, Carl; Broxterman, Ryan; Barstow, Thomas; Nelson, Thomas; Warren, Steve

    2012-01-01

    Accelerometer data provide useful information about subject activity in many different application scenarios. For this study, single-accelerometer data were acquired from subjects participating in field tests that mimic tasks that astronauts might encounter in reduced gravity environments. The primary goal of this effort was to apply classification algorithms that could identify these tasks based on features present in their corresponding accelerometer data, where the end goal is to establish methods to unobtrusively gauge subject well-being based on sensors that reside in their local environment. In this initial analysis, six different activities that involve leg movement are classified. The k-Nearest Neighbors (kNN) algorithm was found to be the most effective, with an overall classification success rate of 90.8%.
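
    A bare-bones k-nearest-neighbors classifier of the kind referred to above is easy to sketch; the toy features and the two activity classes below are stand-ins, not the study's accelerometer data or feature set.

      import numpy as np

      def knn_predict(train_x, train_y, query, k=5):
          """Classify a query feature vector by majority vote among its k nearest training samples."""
          dists = np.linalg.norm(train_x - query, axis=1)
          nearest = train_y[np.argsort(dists)[:k]]
          labels, counts = np.unique(nearest, return_counts=True)
          return labels[np.argmax(counts)]

      # Toy stand-ins for per-window accelerometer features (e.g., mean, standard deviation, dominant frequency).
      rng = np.random.default_rng(0)
      walking = rng.normal([0.2, 1.5, 2.0], 0.2, size=(40, 3))
      kneeling = rng.normal([0.1, 0.3, 0.5], 0.2, size=(40, 3))
      train_x = np.vstack([walking, kneeling])
      train_y = np.array([0] * 40 + [1] * 40)                   # 0 = walking, 1 = kneeling
      print(knn_predict(train_x, train_y, np.array([0.15, 1.4, 1.9])))   # expect class 0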

  4. The Research on Web-Based Testing Environment Using Simulated Annealing Algorithm

    PubMed Central

    2014-01-01

    Computerized evaluation is now one of the most important methods to diagnose learning; with the application of artificial intelligence techniques in the field of evaluation, computerized adaptive testing has gradually become one of the most important evaluation methods. In this kind of test, the computer dynamically updates the learner's ability level and selects tailored items from the item pool. Meeting the needs of the test requires that the system execute with relatively high efficiency. To solve this problem, we proposed a novel web-based testing environment based on a simulated annealing algorithm. In the development of the system, through a series of experiments, we compared the efficiency and efficacy of the simulated annealing method with those of other methods. The experimental results show that this method selects nearly optimal items from the item bank for learners, meets a variety of assessment needs, is reliable, and provides valid judgments of learners' ability. In addition, using the simulated annealing algorithm to manage the computational complexity of the system greatly improves the efficiency of item selection and yields near-optimal solutions.
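
    As a sketch of the underlying idea only, and under invented numbers, simulated annealing can select a fixed-length set of items whose summed information approaches a target value by repeatedly swapping one item and accepting worse moves with a temperature-dependent probability:

      import numpy as np

      rng = np.random.default_rng(0)
      n_items, test_len = 200, 20
      item_info = rng.uniform(0.1, 1.0, size=n_items)    # stand-in for item information at the current ability
      target_info = 12.0                                  # desired test information

      def energy(selected):
          return abs(item_info[selected].sum() - target_info)

      # Simulated annealing over item subsets: swap one selected item for an unselected one.
      selected = rng.choice(n_items, size=test_len, replace=False)
      temperature = 1.0
      for step in range(5000):
          candidate = selected.copy()
          out_idx = rng.integers(test_len)
          pool = np.setdiff1d(np.arange(n_items), selected)
          candidate[out_idx] = rng.choice(pool)
          delta = energy(candidate) - energy(selected)
          if delta < 0 or rng.random() < np.exp(-delta / temperature):
              selected = candidate
          temperature *= 0.999                            # geometric cooling schedule

      print(energy(selected))                             # close to 0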

  5. Evaluation and Comparison of Multiple Test Methods, Including Real-time PCR, for Legionella Detection in Clinical Specimens

    PubMed Central

    Peci, Adriana; Winter, Anne-Luise; Gubbay, Jonathan B.

    2016-01-01

    Legionella is a Gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires’ disease, a more severe illness. We aimed to compare the performance of urine antigen, culture, and polymerase chain reaction (PCR) test methods and to determine if sputum is an acceptable alternative to the use of more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at Public Health Ontario Laboratories from 1st January, 2010 to 30th April, 2014, as part of routine clinical testing. We found the sensitivity of the urinary antigen test (UAT) compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8%, and negative predictive value (NPV) 98.5%. Sensitivity of UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7%, and NPV 98.1%. Out of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Both sputum and BAL yield similar results regardless of testing method (Fisher exact p-values = 1.0 for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine when BAL submission is not practical for patients being tested for Legionella.
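
    The comparison metrics quoted above come straight from a 2x2 table against the reference method; a minimal helper is shown below. The cell counts are invented so that the output roughly reproduces the quoted UAT-versus-culture percentages; they are not the study's actual data.

      def diagnostic_metrics(tp, fp, fn, tn):
          """Standard 2x2-table metrics used to compare a test method against a reference."""
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
          }

      # Illustrative counts only: UAT versus culture as the reference.
      print(diagnostic_metrics(tp=60, fp=34, fn=9, tn=610))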

  6. Antiphospholipid antibody testing for the antiphospholipid syndrome: a comprehensive practical review including a synopsis of challenges and recent guidelines.

    PubMed

    Favaloro, Emmanuel J; Wong, Richard C W

    2014-10-01

    The antiphospholipid (antibody) syndrome (APS) is an autoimmune condition characterised by a wide range of clinical features, but primarily identified as thrombotic and/or obstetric related adverse events. APS is associated with the presence of antiphospholipid antibodies (aPL), including the so-called lupus anticoagulant (LA). These aPL are heterogeneous in nature, detected with varying sensitivity and specificity by a diverse range of laboratory tests. All these tests are unfortunately imperfect, suffer from poor assay reproducibility (inter-method and inter-laboratory) and a lack of standardisation and harmonisation. Clinicians and laboratory personnel may struggle to keep abreast of these factors, as well as the expanding range of available aPL tests, and consequent result interpretation. Therefore, APS remains a significant diagnostic challenge for many clinicians across a wide range of clinical specialities, due to these issues related to laboratory testing as well as the ever-expanding range of reported clinical manifestations. This review is primarily focussed on issues related to laboratory testing for APS in regards to the currently available assays, and summarises recent international consensus guidelines for aPL testing, both for the liquid phase functional LA assays and the solid phase assays (anticardiolipin and anti-beta-2-Glycoprotein-I).

  7. Development and Implementation of a Hardware In-the-Loop Test Bed for Unmanned Aerial Vehicle Control Algorithms

    NASA Technical Reports Server (NTRS)

    Nyangweso, Emmanuel; Bole, Brian

    2014-01-01

    Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.

  8. Fast conical surface evaluation via randomized algorithm in the null-screen test

    NASA Astrophysics Data System (ADS)

    Aguirre-Aguirre, D.; Díaz-Uribe, R.; Villalobos-Mendoza, B.

    2017-01-01

    This work shows a method to recover the shape of a surface via randomized algorithms when the null-screen test is used, instead of the integration process that is commonly performed; this is because the majority of the errors are added during the reconstruction of the surface (or the integration process). Such large surfaces are widely used in the aerospace sector and in industry in general, and testing them is a significant problem. The null-screen method is a low-cost test, and a complete surface analysis can be done using it. In this paper, we show simulations for the analysis of fast conic surfaces, which demonstrate that the quality and shape of a surface under study can be recovered with a percentage error of less than 2.

  9. GUEST EDITORS' INTRODUCTION: Testing inversion algorithms against experimental data: inhomogeneous targets

    NASA Astrophysics Data System (ADS)

    Belkebir, Kamal; Saillard, Marc

    2005-12-01

    This special section deals with the reconstruction of scattering objects from experimental data. A few years ago, inspired by the Ipswich database [1 4], we started to build an experimental database in order to validate and test inversion algorithms against experimental data. In the special section entitled 'Testing inversion algorithms against experimental data' [5], preliminary results were reported through 11 contributions from several research teams. (The experimental data are free for scientific use and can be downloaded from the web site.) The success of this previous section has encouraged us to go further and to design new challenges for the inverse scattering community. Taking into account the remarks formulated by several colleagues, the new data sets deal with inhomogeneous cylindrical targets and transverse electric (TE) polarized incident fields have also been used. Among the four inhomogeneous targets, three are purely dielectric, while the last one is a `hybrid' target mixing dielectric and metallic cylinders. Data have been collected in the anechoic chamber of the Centre Commun de Ressources Micro-ondes in Marseille. The experimental setup as well as the layout of the files containing the measurements are presented in the contribution by J-M Geffrin, P Sabouroux and C Eyraud. The antennas did not change from the ones used previously [5], namely wide-band horn antennas. However, improvements have been achieved by refining the mechanical positioning devices. In order to enlarge the scope of applications, both TE and transverse magnetic (TM) polarizations have been carried out for all targets. Special care has been taken not to move the target under test when switching from TE to TM measurements, ensuring that TE and TM data are available for the same configuration. All data correspond to electric field measurements. In TE polarization the measured component is orthogonal to the axis of invariance. Contributions A Abubakar, P M van den Berg and T M

  10. LOTOS code for local earthquake tomographic inversion: benchmarks for testing tomographic algorithms

    NASA Astrophysics Data System (ADS)

    Koulakov, I. Yu.

    2009-04-01

    We present the LOTOS-07 code for performing local earthquake tomographic (LET) inversion, which is freely available at www.ivan-art.com/science/LOTOS_07. The initial data for the code are the arrival times from local seismicity and coordinates of the stations. It does not require any information about the sources. The calculations start from absolute location of sources and estimates of an optimal 1D velocity model. Then the sources are relocated simultaneously with the 3D velocity distribution during iterative coupled tomographic inversions. The code allows results to be compared based on node or cell parameterizations. Both Vp-Vs and Vp - Vp/Vs inversion schemes can be performed by the LOTOS code. The working ability of the LOTOS code is illustrated with different real and synthetic datasets. Some of the tests are used to disprove existing stereotypes of LET schemes such as using trade-off curves for evaluation of damping parameters and GAP criterion for selection of events. We also present a series of synthetic datasets with unknown sources and velocity models (www.ivan-art.com/science/benchmark) that can be used as blind benchmarks for testing different tomographic algorithms. We encourage other users of tomography algorithms to join the program on creating benchmarks that can be used to check existing codes. The program codes and testing datasets will be freely distributed during the poster presentation.

  11. Application of the HWVP measurement error model and feed test algorithms to pilot scale feed testing

    SciTech Connect

    Adams, T.L.

    1996-03-01

    The purpose of the feed preparation subsystem in the Hanford Waste Vitrification Plant (HWVP) is to provide for control of the properties of the slurry sent to the melter. The slurry properties are adjusted so that two classes of constraints are satisfied. Processability constraints guarantee that the process conditions required by the melter can be obtained. For example, there are processability constraints associated with electrical conductivity and viscosity. Acceptability constraints guarantee that the processed glass can be safely stored in a repository. An example of an acceptability constraint is the durability of the product glass. The primary control focus for satisfying both processability and acceptability constraints is the composition of the slurry. The primary mechanism for adjusting the composition of the slurry is mixing the waste slurry with frit of known composition. Spent frit from canister decontamination is also recycled by adding it to the melter feed. A number of processes in addition to mixing are used to condition the waste slurry prior to melting, including evaporation and the addition of formic acid. These processes also have an effect on the feed composition.

  12. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    SciTech Connect

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea; Koehler, Katrina Elizabeth; Henzl, Vladimir; Henzlova, Daniela; Parker, Robert Francis; Croft, Stephen

    2015-12-01

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed in order to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.
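
    As background to the "higher order moments" mentioned above, the reduced factorial moments of a measured multiplicity histogram (quantities related to the singles, doubles, triples, and quads) can be computed as below; the histogram values are invented, and the advanced dead-time correction itself is not shown.

      import numpy as np
      from math import comb

      def factorial_moment(counts, order):
          """k-th reduced factorial moment of a multiplicity histogram: sum_n C(n, k) p(n)."""
          p = counts / counts.sum()
          return sum(comb(n, order) * p[n] for n in range(len(p)))

      # Illustrative multiplicity histogram (index = neutrons per gate, value = number of gates).
      hist = np.array([5200, 2900, 1100, 320, 75, 12, 2])
      for k in (1, 2, 3, 4):                 # related to singles-, doubles-, triples-, quads-type moments
          print(k, factorial_moment(hist, k))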

  13. Preliminary test results of a flight management algorithm for fuel conservative descents in a time based metered traffic environment. [flight tests of an algorithm to minimize fuel consumption of aircraft based on flight time

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1979-01-01

    A flight management algorithm designed to improve the accuracy of delivering the airplane fuel efficiently to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B 737 airplane to make an idle thrust, clean configured descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms and the results of the flight tests are discussed.

  14. A topography-based scaling algorithm for soil hydraulic parameters at hillslope scales: Field testing

    NASA Astrophysics Data System (ADS)

    Jana, Raghavendra B.; Mohanty, Binayak P.

    2012-02-01

    Soil hydraulic parameters were upscaled from a 30 m resolution to a 1 km resolution using a new aggregation scheme (described in the companion paper) where the scale parameter was based on the topography. When soil hydraulic parameter aggregation or upscaling schemes ignore the effect of topography, their application becomes limited at hillslope scales and beyond, where topography plays a dominant role in soil deposition and formation. Hence the new upscaling algorithm was tested at the hillslope scale (1 km) across two locations: (1) the Little Washita watershed in Oklahoma, and (2) the Walnut Creek watershed in Iowa. The watersheds were divided into pixels of 1 km resolution and the effective soil hydraulic parameters obtained for each pixel. Each pixel/domain was then simulated using the physically based HYDRUS-3-D modeling platform. In order to account for the surface (runoff/on) and subsurface fluxes between pixels, an algorithm to route infiltration-excess runoff onto downstream pixels at daily time steps and to update the soil moisture states of the downstream pixels was applied. Simulated soil moisture states were compared across scales, and the coarse scale values compared against the airborne soil moisture data products obtained during the hydrology experiment field campaign periods (SGP97 and SMEX02) for selected pixels with different topographic complexities, soil distributions, and land cover. Results from these comparisons show good correlations between simulated and observed soil moisture states across time, topographic variations, location, elevation, and land cover. Stream discharge comparisons made at two gauging stations in the Little Washita watershed also provide reasonably good results as to the suitability of the upscaling algorithm used. Based only on the topography of the domain, the new upscaling algorithm was able to provide coarse resolution values for soil hydraulic parameters which effectively captured the variations in soil moisture

  15. A Super-Resolution Algorithm for Enhancement of FLASH LIDAR Data: Flight Test Results

    NASA Technical Reports Server (NTRS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2014-01-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: the Moon, Mars, and asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, laboratory experiments, and a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed from independent measurements to be used for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information. Namely, the six-degree-of-freedom state vector of the instrument as a function of time was restored from super-resolution data. The results of comparisons show that the super-resolution method can construct high quality DEMs and allows for identifying hazards such as rocks and craters in accordance with ALHAT requirements.

  16. Laboratory detection of intestinal carriage of carbapenemase-producing Enterobacteriaceae - A comparison of algorithms using the Carba NP test.

    PubMed

    Knox, James; Gregory, Claire; Prendergast, Louise; Perera, Chandrika; Robson, Jennifer; Waring, Lynette

    2017-01-01

    Stool specimens spiked with a panel of 46 carbapenemase-producing Enterobacteriaceae (CPE) and 59 non-carbapenemase producers were used to compare the diagnostic accuracy of 4 testing algorithms for the detection of intestinal carriage of CPE: (1) culture on Brilliance ESBL agar followed by the Carba NP test; (2) Brilliance ESBL followed by the Carba NP test, plus chromID OXA-48 agar with no Carba NP test; (3) chromID CARBA agar followed by the Carba NP test; (4) chromID CARBA followed by the Carba NP test, plus chromID OXA-48 with no Carba NP test. All algorithms were 100% specific. When comparing algorithms (1) and (3), Brilliance ESBL agar followed by the Carba NP test was significantly more sensitive than the equivalent chromID CARBA algorithm at the lower of 2 inoculum strengths tested (84.8% versus 63.0%, respectively [P<0.02]). With the addition of chromID OXA-48 agar, the sensitivity of these algorithms was marginally increased.

  17. The Cyborg Astrobiologist: testing a novelty detection algorithm on two mobile exploration systems at Rivas Vaciamadrid in Spain and at the Mars Desert Research Station in Utah

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Gross, C.; Wendt, L.; Bonnici, A.; Souza-Egipsy, V.; Ormö, J.; Díaz-Martínez, E.; Foing, B. H.; Bose, R.; Walter, S.; Oesker, M.; Ontrup, J.; Haschke, R.; Ritter, H.

    2010-01-01

    In previous work, a platform was developed for testing computer-vision algorithms for robotic planetary exploration. This platform consisted of a digital video camera connected to a wearable computer for real-time processing of images at geological and astrobiological field sites. The real-time processing included image segmentation and the generation of interest points based upon uncommonness in the segmentation maps. Also in previous work, this platform for testing computer-vision algorithms has been ported to a more ergonomic alternative platform, consisting of a phone camera connected via the Global System for Mobile Communications (GSM) network to a remote-server computer. The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone camera has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon colour, (ii) integrate a field-capable digital microscope on the wearable computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform, in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. We acquired sequences of images of geologic outcrops in Utah and Spain consisting of various rock types and colours to test this algorithm. The algorithm robustly recognized previously observed units by their colour, while requiring only a single image or a few images to

  18. Some steady and oscillating airfoil test results, including the effects of sweep, from the tunnel spanning wing

    NASA Technical Reports Server (NTRS)

    Carta, F. O.; St.hilaire, A. O.; Rorke, J. B.; Jepson, W. D.

    1979-01-01

    A large scale tunnel spanning wing was built and tested. The model can be operated as either a swept or unswept wing and can be tested in steady state or oscillated sinusoidally in pitch about its quarter chord. Data is taken at mid-span with an internal 6-component balance and is also obtained from miniature pressure transducers distributed near the center span region. A description is given of the system and a brief discussion of some of the steady and unsteady results obtained to date. These are the steady load behavior to Mach numbers of approximately 1.1 and unsteady loads, including drag, at a reduced frequency of approximately 0.1.

  19. Angles-centroids fitting calibration and the centroid algorithm applied to reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Zhao, Zhu; Hui, Mei; Xia, Zhengzheng; Dong, Liquan; Liu, Ming; Liu, Xiaohua; Kong, Lingqin; Zhao, Yuejin

    2017-02-01

    In this paper, we develop an angles-centroids fitting (ACF) system and a centroid algorithm to calibrate the reverse Hartmann test (RHT) with sufficient precision. The essence of ACF calibration is to establish the relationship between ray angles and detector coordinates. Centroid computation is used to find correspondences between the rays of datum marks and detector pixels. Here, the point spread function of the RHT is classified as a circle of confusion (CoC), and the fitting of a CoC spot with a 2D Gaussian profile to identify the centroid forms the basis of the centroid algorithm. Theoretical and experimental results of centroid computation demonstrate that the Gaussian fitting method yields a smaller centroid shift, or the shift grows at a slower pace, when the quality of the image is reduced. In ACF tests, the optical instrument alignments reach an overall accuracy of 0.1 pixel with the application of a laser-spot centroid tracking program. Locating the crystal at different positions, the feasibility and accuracy of the ACF calibration are further validated to a root-mean-square error of the calibration differences of 10^-6 to 10^-4 rad.
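
    A minimal version of the centroid step, fitting a circularly symmetric 2D Gaussian to a noisy spot with scipy.optimize.curve_fit, is sketched below; the spot size, noise level, and true centroid are arbitrary test values, not the paper's data.

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian_2d(coords, amp, x0, y0, sigma, offset):
          """Circularly symmetric 2D Gaussian evaluated on flattened (x, y) coordinates."""
          x, y = coords
          return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset

      # Synthetic circle-of-confusion-like spot with noise.
      y, x = np.mgrid[0:32, 0:32]
      rng = np.random.default_rng(2)
      spot = gaussian_2d((x, y), 1.0, 17.3, 14.6, 3.0, 0.05) + 0.02 * rng.normal(size=x.shape)

      p0 = (float(spot.max()), 16.0, 16.0, 2.0, 0.0)
      popt, _ = curve_fit(gaussian_2d, (x.ravel(), y.ravel()), spot.ravel(), p0=p0)
      print("centroid:", popt[1], popt[2])                      # close to (17.3, 14.6)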

  20. A procedure for testing the quality of LANDSAT atmospheric correction algorithms

    NASA Technical Reports Server (NTRS)

    Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.

    1982-01-01

    There are two basic methods for testing the quality of an algorithm to minimize atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, initially the image contrast is examined for a series of parameter combinations. The contrast improves for better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably in time. A few examples using this proposed procedure are presented.
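
    The two selection criteria described, image contrast and the correlation between repeat sub-images, are simple to compute; the synthetic "hazy" scene and the idealized correction below are obviously stand-ins for real LANDSAT data.

      import numpy as np

      def contrast(image):
          """A simple global contrast measure: standard deviation of the pixel values."""
          return float(np.std(image))

      def correlation(image_a, image_b):
          """Correlation coefficient between two co-registered sub-images of the same scene."""
          return float(np.corrcoef(image_a.ravel(), image_b.ravel())[0, 1])

      rng = np.random.default_rng(3)
      scene = rng.uniform(size=(64, 64))
      hazy = 0.6 * scene + 0.3                          # crude stand-in for an uncorrected, hazy acquisition
      corrected = (hazy - 0.3) / 0.6                    # candidate atmospheric correction
      print(contrast(hazy), contrast(corrected))        # contrast improves after correction
      print(correlation(scene, corrected))              # near 1 for a good correction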

  1. A runs-test algorithm: contingent reinforcement and response run structures.

    PubMed

    Hachiga, Yosuke; Sakagami, Takayuki

    2010-01-01

    Four rats' choices between two levers were differentially reinforced using a runs-test algorithm. On each trial, a runs-test score was calculated based on the last 20 choices. In Experiment 1, the onset of stimulus lights cued when the runs score was smaller than criterion. Following cuing, the correct choice was occasionally reinforced with food, and the incorrect choice resulted in a blackout. Results indicated that this contingency reduced sequential dependencies among successive choice responses. With one exception, subjects' choice rule was well described as biased coin flipping. In Experiment 2, cuing was removed and the reinforcement criterion was changed to a percentile score based on the last 20 reinforced responses. The results replicated those of Experiment 1 in successfully eliminating first-order dependencies in all subjects. For 2 subjects, choice allocation was approximately consistent with nonbiased coin flipping. These results suggest that sequential dependencies may be a function of reinforcement contingency.
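
    The standard Wald-Wolfowitz runs-test score over a 20-trial binary choice window can be computed as below; whether this matches the exact scoring and criterion used in the experiment is not specified in the abstract, so treat it as a generic illustration.

      import math

      def runs_test_z(choices):
          """Wald-Wolfowitz runs-test z-score for a binary choice sequence (e.g., the last 20 trials)."""
          n1 = sum(choices)
          n2 = len(choices) - n1
          if n1 == 0 or n2 == 0:
              return 0.0
          runs = 1 + sum(1 for a, b in zip(choices, choices[1:]) if a != b)
          n = n1 + n2
          expected = 2.0 * n1 * n2 / n + 1.0
          variance = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n ** 2 * (n - 1))
          return (runs - expected) / math.sqrt(variance)

      alternating = [0, 1] * 10                          # too many runs -> large positive z
      clumped = [0] * 10 + [1] * 10                      # too few runs -> large negative z
      print(runs_test_z(alternating), runs_test_z(clumped))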

  2. DATA SUMMARY REPORT SMALL SCALE MELTER TESTING OF HLW ALGORITHM GLASSES MATRIX1 TESTS VSL-07S1220-1 REV 0 7/25/07

    SciTech Connect

    KRUGER AA; MATLACK KS; PEGG IL

    2011-12-29

    Eight tests using different HLW feeds were conducted on the DM100-BL to determine the effect of variations in glass properties and feed composition on processing rates and melter conditions (off-gas characteristics, glass processing, foaming, cold cap, etc.) at constant bubbling rate. In over seven hundred hours of testing, the property extremes of glass viscosity, electrical conductivity, and T1%, as well as minimum and maximum concentrations of several major and minor glass components were evaluated using glass compositions that have been tested previously at the crucible scale. Other parameters evaluated with respect to glass processing properties were +/-15% batching errors in the addition of glass forming chemicals (GFCs) to the feed, and variation in the sources of boron and sodium used in the GFCs. Tests evaluating batching errors and GFC source employed variations on the HLW98-86 formulation (a glass composition formulated for HLW C-106/AY-102 waste and processed in several previous melter tests) in order to best isolate the effect of each test variable. These tests are outlined in a Test Plan that was prepared in response to the Test Specification for this work. The present report provides summary level data for all of the tests in the first test matrix (Matrix 1) in the Test Plan. Summary results from the remaining tests, investigating minimum and maximum concentrations of major and minor glass components employing variations on the HLW98-86 formulation and glasses generated by the HLW glass formulation algorithm, will be reported separately after those tests are completed. The test data summarized herein include glass production rates, the type and amount of feed used, a variety of measured melter parameters including temperatures and electrode power, feed sample analysis, measured glass properties, and gaseous emissions rates. More detailed information and analysis from the melter tests with complete emission chemistry, glass durability, and

  3. Flight test results of a vector-based failure detection and isolation algorithm for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Bailey, M. L.; Motyka, P. R.

    1988-01-01

    Flight test results of a vector-based fault-tolerant algorithm for a redundant strapdown inertial measurement unit are presented. Because the inertial sensors provide flight-critical information for flight control and navigation, failure detection and isolation is developed in terms of a multi-level structure. Threshold compensation techniques for gyros and accelerometers, developed to enhance the sensitivity of the failure detection process to low-level failures, are presented. Four flight tests, conducted in a commercial transport-type environment, were used to determine the ability of the failure detection and isolation algorithm to detect failure signals, such as hard-over, null, or bias shifts. The algorithm provided timely detection and correct isolation of flight control- and low-level failures. The flight tests of the vector-based algorithm demonstrated its capability to provide false-alarm-free dual fail-operational performance for the skewed array of inertial sensors.

  4. Experimental test of a hot water storage system including a macro-encapsulated phase change material (PCM)

    NASA Astrophysics Data System (ADS)

    Mongibello, L.; Atrigna, M.; Bianco, N.; Di Somma, M.; Graditi, G.; Risi, N.

    2017-01-01

    Thermal energy storage systems (TESs) are of fundamental importance for many energy systems, essentially because they permit a certain degree of decoupling between the production of heat or cold and its use. In recent years, many works have analysed the addition of a PCM inside a hot water storage tank, as storing thermal energy as latent heat can allow a reduction of the size of the storage tank and, as a consequence, of its cost and bulk. The present work focuses on experimental tests performed with an indoor facility in order to analyse the dynamic behaviour of a hot water storage tank including PCM modules during a charging phase. A commercial bio-based PCM with a melting temperature of 58°C has been used for the purpose. The experimental results for the hot water tank including the PCM modules are presented in terms of the temporal evolution of the axial temperature profile, heat transfer and stored energy, and are compared with those obtained using only water as the energy storage material. Insights into the estimation of the percentage of melted PCM at the end of the experimental test are presented and discussed.
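
    The quantity being compared (stored energy with and without PCM) can be estimated, in a back-of-the-envelope way, as the sensible heat of the water plus the sensible and latent heat of the PCM. All masses and PCM properties below are placeholders; only the 58 °C melting temperature comes from the abstract.

    ```python
    # Rough estimate of energy stored during a charging phase of a water tank
    # with PCM modules.  Property values are illustrative assumptions.
    def stored_energy(m_water, m_pcm, T_start, T_end, melt_fraction,
                      cp_water=4186.0, cp_pcm=2000.0, latent=200e3):
        """Return stored energy in joules for a charge from T_start to T_end (degC)."""
        q_water = m_water * cp_water * (T_end - T_start)
        q_pcm_sensible = m_pcm * cp_pcm * (T_end - T_start)
        q_pcm_latent = m_pcm * latent * melt_fraction   # only the melted share
        return q_water + q_pcm_sensible + q_pcm_latent

    # e.g. 150 kg of water, 20 kg of PCM (melting at ~58 degC), charged from
    # 20 degC to 65 degC with 80% of the PCM melted
    print(stored_energy(150, 20, 20.0, 65.0, 0.8) / 3.6e6, "kWh")
    ```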

  5. Synthetic Source Inversion Tests with the Full Complexity of Earthquake Source Processes, Including Both Supershear Rupture and Slip Reactivation

    NASA Astrophysics Data System (ADS)

    Song, Seok Goo; Dalguer, Luis A.

    2017-03-01

    Recent studies in dynamic source modeling and kinematic source inversion show that earthquake rupture may contain greater complexity than we previously anticipated, including multiple slipping at a given point on a fault. Finite source inversion methods suffer from the nonuniqueness of solutions, and it may become more serious if we aim to resolve more complex rupture models. In this study, we perform synthetic inversion tests with dynamically generated complex rupture models, including both supershear rupture and slip reactivation, to understand the possibility of resolving complex rupture processes by inverting seismic waveform data. We adopt a linear source inversion method with multiple windows, allowing for slipping from the nucleation of rupture to the termination at all locations along a fault. We regularize the model space effectively in the Bayesian framework and perform multiple inversion tests by considering the effect of inaccurate Green's functions and station distributions. We also perform a spectral stability analysis. Our results show that it may be possible to resolve both a supershear rupture front and reactivated secondary slipping using the linear inversion method if those complex features are well separated from the main rupture and produce a fair amount of seismic energy. It may be desirable to assume the full complexity of an earthquake rupture when we first develop finite source models after a major event occurs and then assume a simple rupture model for stability if the estimated models do not show a clear pattern of complex rupture processes.
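
    To make the inversion setup concrete, the toy example below solves a damped linear system d = G m, loosely analogous to a multi-window kinematic inversion with Bayesian-style regularization. The matrix G, the slip model, and the noise level are synthetic and purely illustrative; a real G would hold Green's functions for every sub-fault and time window.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_data, n_model = 400, 120           # hypothetical data and model sizes
    G = rng.standard_normal((n_data, n_model))
    m_true = np.zeros(n_model)
    m_true[30:40] = 1.0                  # "main rupture" slip
    m_true[80:84] = 0.5                  # weaker, well-separated secondary slip
    d = G @ m_true + 0.05 * rng.standard_normal(n_data)

    lam = 5.0                            # regularization weight (trade-off parameter)
    L = np.eye(n_model)                  # identity damping; a difference operator would smooth
    A = G.T @ G + lam**2 * (L.T @ L)
    m_est = np.linalg.solve(A, G.T @ d)  # damped least-squares solution

    print(np.round(m_est[30:40], 2))     # recovered main slip
    print(np.round(m_est[80:84], 2))     # recovered secondary slip
    ```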

  6. Overview of Non-nuclear Testing of the Safe, Affordable 30-kW Fission Engine, Including End-to-End Demonstrator Testing

    NASA Technical Reports Server (NTRS)

    VanDyke, M. K.; Martin, J. J.; Houts, M. G.

    2003-01-01

    Successful development of space fission systems will require an extensive program of affordable and realistic testing. In addition to tests related to design/development of the fission system, realistic testing of the actual flight unit must also be performed. At the power levels under consideration (3-300 kW electric power), almost all technical issues are thermal or stress related and will not be strongly affected by the radiation environment. These issues can be resolved more thoroughly, less expensively, and in a more timely fashion with nonnuclear testing, provided it is prototypic of the system in question. This approach was used for the safe, affordable fission engine test article development program and accomplished via cooperative efforts with Department of Energy laboratories, industry, universities, and other NASA centers. This Technical Memorandum covers the analysis, testing, and data reduction of a 30-kW simulated reactor as well as an end-to-end demonstrator, including a power conversion system and an electric propulsion engine, the first of its kind in the United States.

  7. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM

  8. An E-M algorithm and testing strategy for multiple-locus haplotypes.

    PubMed Central

    Long, J C; Williams, R C; Urbanek, M

    1995-01-01

    This paper gives an expectation maximization (EM) algorithm to obtain allele frequencies, haplotype frequencies, and gametic disequilibrium coefficients for multiple-locus systems. It permits high polymorphism and null alleles at all loci. This approach effectively deals with the primary estimation problems associated with such systems; that is, there is not a one-to-one correspondence between phenotypic and genotypic categories, and sample sizes tend to be much smaller than the number of phenotypic categories. The EM method provides maximum-likelihood estimates and therefore allows hypothesis tests using likelihood ratio statistics that have chi-square distributions with large sample sizes. We also suggest a data resampling approach to estimate test statistic sampling distributions. The resampling approach is more computer intensive, but it is applicable to all sample sizes. A strategy to test hypotheses about aggregate groups of gametic disequilibrium coefficients is recommended. This strategy minimizes the number of necessary hypothesis tests while at the same time describing the structure of disequilibrium. These methods are applied to three unlinked dinucleotide repeat loci in Navajo Indians and to three linked HLA loci in Gila River (Pima) Indians. The likelihood functions of both data sets are shown to be maximized by the EM estimates, and the testing strategy provides a useful description of the structure of gametic disequilibrium. Following these applications, a number of simulation experiments are performed to test how well the likelihood-ratio statistic distributions are approximated by chi-square distributions. In most circumstances the chi-square approximation grossly underestimated the probability of type I errors; however, at times it also overestimated the type I error probability. Accordingly, we recommend hypothesis tests that use the resampling method. PMID:7887436
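
    The following minimal sketch shows the E-M idea for the simplest case of two biallelic loci, where only the double heterozygote is phase-ambiguous. The genotype counts are invented, and the paper's method additionally handles many alleles, null alleles, and more loci.

    ```python
    import numpy as np

    # genotype_counts[i][j]: i = copies of allele 'a' at locus 1, j = copies of
    # allele 'b' at locus 2 (0, 1 or 2 of each); the numbers are made up.
    genotype_counts = np.array([[20, 30, 10],
                                [25, 40, 15],
                                [ 5, 20, 35]], dtype=float)

    def em_haplotypes(counts, n_iter=300):
        """EM estimates of the four haplotype frequencies (order: AB, Ab, aB, ab)."""
        hap_index = {("A", "B"): 0, ("A", "b"): 1, ("a", "B"): 2, ("a", "b"): 3}
        p = np.full(4, 0.25)
        n_chrom = 2.0 * counts.sum()
        for _ in range(n_iter):
            h = np.zeros(4)                      # E-step: expected haplotype counts
            for i in range(3):
                for j in range(3):
                    n = counts[i, j]
                    if n == 0:
                        continue
                    if i == 1 and j == 1:
                        # double heterozygote: phase AB/ab versus Ab/aB is unknown
                        w = p[0] * p[3] / (p[0] * p[3] + p[1] * p[2])
                        h[[0, 3]] += n * w
                        h[[1, 2]] += n * (1.0 - w)
                    else:
                        locus1 = ["A"] * (2 - i) + ["a"] * i
                        locus2 = ["B"] * (2 - j) + ["b"] * j
                        for a1, a2 in zip(locus1, locus2):   # pairing is unique here
                            h[hap_index[(a1, a2)]] += n
            p = h / n_chrom                      # M-step: new relative frequencies
        return p

    print(dict(zip(["AB", "Ab", "aB", "ab"],
                   np.round(em_haplotypes(genotype_counts), 3))))
    ```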

  9. Field tests and machine learning approaches for refining algorithms and correlations of driver's model parameters.

    PubMed

    Tango, Fabio; Minin, Luca; Tesauri, Francesco; Montanari, Roberto

    2010-03-01

    This paper describes the field tests on a driving simulator carried out to validate the algorithms and correlations for two dynamic parameters, driving task demand and driver distraction, that are used to predict drivers' intentions. These parameters belong to the driver's model developed in the AIDE (Adaptive Integrated Driver-vehicle InterfacE) European Integrated Project. Drivers' behavioural data were collected from the simulator tests to model and validate these parameters using machine learning techniques, specifically adaptive neuro-fuzzy inference systems (ANFIS) and artificial neural networks (ANN). Two models of task demand and distraction have been developed, one for each adopted technique. The paper provides an overview of the driver's model, a description of the task demand and distraction modelling, and the tests conducted to validate these parameters. A test comparing predicted and expected outcomes of the modelled parameters for each machine learning technique has been carried out; for distraction, in particular, promising results (low prediction errors) were obtained with the artificial neural network.

  10. Doppler Imaging with a Clean-Like Approach - Part One - a Newly Developed Algorithm Simulations and Tests

    NASA Astrophysics Data System (ADS)

    Kurster, M.

    1993-07-01

    A newly developed method for the Doppler imaging of star spot distributions on active late-type stars is presented. It comprises an algorithm particularly adapted to the (discrete) Doppler imaging problem (including eclipses) and is very efficient in determining the positions and shapes of star spots. A variety of tests demonstrates the capabilities as well as the limitations of the method by investigating the effects that uncertainties in various stellar parameters have on the image reconstruction. Any systematic errors within the reconstructed image are found to be a result of the ill-posed nature of the Doppler imaging problem and not a consequence of the adopted approach. The largest uncertainties are found with respect to the dynamical range of the image (brightness or temperature contrast). This kind of uncertainty is of little effect for studies of star spot migrations with the objectives of determining differential rotation and butterfly diagrams for late-type stars.

  11. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    Terrestrial laser technique (TLS) is becoming a common tool in Geosciences, with clear applications ranging from the generation of high resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, different critical parameters interact with the scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied in point cloud data treatment, from alignment to monitoring. To this end, we built, in the MATLAB(c) environment, a LiDAR point cloud simulator able to recreate the multiple sources of errors related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error of a single laser pulse by modelling the influence of range and incidence angle on single point data accuracy. In a second step, we simulated the scanning part of the system in order to analyze the shifting and angular error effects. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and viewing position on the Iterative Closest Point (ICP) alignment, and also on some deformation tracking algorithms applied to the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes in different environments
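
    The snippet below sketches the core of such a simulator: perturbing each simulated return with a range- and incidence-angle-dependent error before feeding the cloud to alignment or change-detection tests. The noise model and its coefficients are illustrative assumptions, not the ones calibrated in this study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_scan(points, scanner_pos, normals,
                      sigma0=0.005, k_range=1e-5, k_angle=0.01):
        """Return noisy copies of `points` (N x 3) as seen from `scanner_pos`."""
        vecs = points - scanner_pos
        ranges = np.linalg.norm(vecs, axis=1)
        dirs = vecs / ranges[:, None]
        # incidence angle between the laser direction and the local surface normal
        cos_inc = np.abs(np.sum(dirs * normals, axis=1)).clip(1e-3, 1.0)
        inc = np.arccos(cos_inc)
        sigma = sigma0 + k_range * ranges + k_angle * np.tan(inc)   # per-point std (m)
        noisy_ranges = ranges + rng.normal(0.0, sigma)
        return scanner_pos + dirs * noisy_ranges[:, None]

    # flat 10 m x 10 m wall sampled on a grid, scanned from 30 m away
    xx, yy = np.meshgrid(np.linspace(-5, 5, 50), np.linspace(0, 10, 50))
    wall = np.column_stack([xx.ravel(), np.zeros(xx.size), yy.ravel()])
    normals = np.tile([0.0, 1.0, 0.0], (wall.shape[0], 1))
    cloud = simulate_scan(wall, scanner_pos=np.array([0.0, 30.0, 1.5]), normals=normals)
    ```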

  12. Near infrared spectrometric technique for testing fruit quality: optimisation of regression models using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Isingizwe Nturambirwe, J. Frédéric; Perold, Willem J.; Opara, Umezuruike L.

    2016-02-01

    Near infrared (NIR) spectroscopy has gained extensive use in quality evaluation. It is arguably one of the most advanced spectroscopic tools in non-destructive quality testing of food stuff, from measurement to data analysis and interpretation. NIR spectral data are interpreted through means often involving multivariate statistical analysis, sometimes associated with optimisation techniques for model improvement. The objective of this research was to explore the extent to which genetic algorithms (GA) can be used to enhance model development, for predicting fruit quality. Apple fruits were used, and NIR spectra in the range from 12000 to 4000 cm-1 were acquired on both bruised and healthy tissues, with different degrees of mechanical damage. GAs were used in combination with partial least squares regression methods to develop bruise severity prediction models, and compared to PLS models developed using the full NIR spectrum. A classification model was developed, which clearly separated bruised from unbruised apple tissue. GAs helped improve prediction models by over 10%, in comparison with full spectrum-based models, as evaluated in terms of error of prediction (Root Mean Square Error of Cross-validation). PLS models to predict internal quality, such as sugar content and acidity were developed and compared to the versions optimized by genetic algorithm. Overall, the results highlighted the potential use of GA method to improve speed and accuracy of fruit quality prediction.
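
    As an illustration of coupling a GA with PLS regression for wavelength selection, the sketch below evolves boolean band masks whose fitness is the cross-validated error of a PLS model restricted to the selected bands. The spectra, targets and GA settings are synthetic placeholders rather than the data or configuration used in the study.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_samples, n_wavelengths = 120, 200
    X = rng.standard_normal((n_samples, n_wavelengths))       # synthetic "spectra"
    informative = slice(50, 60)                               # only a few bands carry signal
    y = X[:, informative].sum(axis=1) + 0.1 * rng.standard_normal(n_samples)

    def fitness(mask):
        """Negative cross-validated RMSE of a PLS model on the selected bands."""
        if mask.sum() < 2:
            return -np.inf
        pls = PLSRegression(n_components=min(5, int(mask.sum())))
        scores = cross_val_score(pls, X[:, mask], y, cv=5,
                                 scoring="neg_root_mean_squared_error")
        return scores.mean()

    pop_size, n_gen, p_mut = 30, 25, 0.02
    population = rng.random((pop_size, n_wavelengths)) < 0.1   # boolean band masks

    for _ in range(n_gen):
        fit = np.array([fitness(ind) for ind in population])
        order = np.argsort(fit)[::-1]
        parents = population[order[: pop_size // 2]]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_wavelengths)                # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_wavelengths) < p_mut          # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])

    best = population[np.argmax([fitness(ind) for ind in population])]
    print("selected bands:", np.flatnonzero(best))
    ```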

  13. Scattering correction algorithm for neutron radiography and tomography tested at facilities with different beam characteristics

    NASA Astrophysics Data System (ADS)

    Hassanein, René; de Beer, Frikkie; Kardjilov, Nikolay; Lehmann, Eberhard

    2006-11-01

    A precise quantitative analysis with the neutron radiography technique of materials with a high neutron-scattering cross section, imaged at small distances from the detector, is impossible if the scattering contribution from the investigated material onto the detector is not eliminated correctly. Samples with a high neutron-scattering cross section, e.g. hydrogenous materials such as water, cause a significant scattering component in their radiographs. Background scattering, spectral effects and detector characteristics are identified as additional causes for disturbances. A scattering correction algorithm based on Monte Carlo simulations has been developed and implemented to take these effects into account. The corrected radiographs can be used for a subsequent tomographic reconstruction. From the results one can obtain quantitative information, in order to detect e.g. inhomogeneity patterns within materials, or to measure differences of the mass thickness in these materials. Within an IAEA-CRP collaboration the algorithms have been tested for applicability on results obtained at the South African SANRAD facility at Necsa, the Swiss NEUTRA facilities at PSI as well as the German CONRAD facility at HMI, all with different initial neutron spectra. Results of a set of dedicated neutron radiography experiments are reported.

  14. Subsonic flight test evaluation of a performance seeking control algorithm on an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B.; Orme, John S.

    1992-01-01

    The subsonic flight test evaluation phase of the NASA F-15 (powered by F100 engines) performance seeking control program was completed for single-engine operation at part- and military-power settings. The subsonic performance seeking control algorithm optimizes the quasi-steady-state performance of the propulsion system for three modes of operation: the minimum fuel flow mode minimizes fuel consumption, the minimum fan turbine inlet temperature mode reduces turbine operating temperature, and the maximum thrust mode maximizes thrust at military power. Decreases in thrust-specific fuel consumption of 1 to 2 percent were measured in the minimum fuel flow mode; these fuel savings are significant, especially for supersonic cruise aircraft. Decreases of up to approximately 100 degrees R in fan turbine inlet temperature were measured in the minimum temperature mode. Temperature reductions of this magnitude would more than double turbine life if inlet temperature were the only life factor. Measured thrust increases of up to approximately 15 percent in the maximum thrust mode caused substantial increases in aircraft acceleration. The system dynamics of the closed-loop algorithm operation were good. The subsonic flight phase has validated the performance seeking control technology, which can significantly benefit the next generation of fighter and transport aircraft.

  15. ZEUS-2D: A Radiation Magnetohydrodynamics Code for Astrophysical Flows in Two Space Dimensions. II. The Magnetohydrodynamic Algorithms and Tests

    NASA Astrophysics Data System (ADS)

    Stone, James M.; Norman, Michael L.

    1992-06-01

    In this, the second of a series of three papers, we continue a detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows in astrophysics including a self-consistent treatment of the effects of magnetic fields and radiation transfer. In this paper, we give a detailed description of the magnetohydrodynamical (MHD) algorithms in ZEUS-2D. The recently developed constrained transport (CT) algorithm is implemented for the numerical evolution of the components of the magnetic field for MHD simulations. This formalism guarantees the numerically evolved field components will satisfy the divergence-free constraint at all times. We find, however, that the method used to compute the electromotive forces must be chosen carefully to propagate accurately all modes of MHD wave families (in particular shear Alfvén waves). A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-CT method provides for the accurate evolution of all modes of MHD wave families.
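
    The bookkeeping that makes constrained transport divergence-free can be shown in a few lines: face-centred field components are updated from corner-centred EMFs, so the discrete divergence of every cell stays at its initial value to round-off. The EMFs below are random stand-ins for whatever the solver (e.g. the MOC scheme) would supply, and the grid is periodic.

    ```python
    import numpy as np

    nx, ny, dx, dy, dt = 64, 64, 1.0, 1.0, 0.1
    rng = np.random.default_rng(0)
    roll = lambda a, ax: np.roll(a, -1, axis=ax)     # value at index i+1 (periodic)

    Az = rng.standard_normal((nx, ny))               # vector potential at cell corners
    Bx = (roll(Az, 1) - Az) / dy                     # x-faces: Bx =  dAz/dy
    By = -(roll(Az, 0) - Az) / dx                    # y-faces: By = -dAz/dx

    def div_b(Bx, By):
        """Cell-centred discrete divergence of the face-centred field."""
        return (roll(Bx, 0) - Bx) / dx + (roll(By, 1) - By) / dy

    for _ in range(10):                              # ten CT updates with random EMFs
        Ez = rng.standard_normal((nx, ny))           # EMF at cell corners
        Bx = Bx - dt / dy * (roll(Ez, 1) - Ez)
        By = By + dt / dx * (roll(Ez, 0) - Ez)

    print("max |div B| =", np.abs(div_b(Bx, By)).max())   # stays at ~1e-15
    ```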

  16. Rainfall estimation from soil moisture data: crash test for SM2RAIN algorithm

    NASA Astrophysics Data System (ADS)

    Brocca, Luca; Albergel, Clement; Massari, Christian; Ciabatta, Luca; Moramarco, Tommaso; de Rosnay, Patricia

    2015-04-01

    Soil moisture governs the partitioning of mass and energy fluxes between the land surface and the atmosphere and, hence, it represents a key variable for many applications in hydrology and earth science. In recent years, it was demonstrated that soil moisture observations from ground and satellite sensors contain important information useful for improving rainfall estimation. Indeed, soil moisture data have been used for correcting rainfall estimates from state-of-the-art satellite sensors (e.g. Crow et al., 2011), and also for improving flood prediction through a dual data assimilation approach (e.g. Massari et al., 2014; Chen et al., 2014). Brocca et al. (2013; 2014) developed a simple algorithm, called SM2RAIN, which allows estimating rainfall directly from soil moisture data. SM2RAIN has been applied successfully to in situ and satellite observations. Specifically, by using three satellite soil moisture products from ASCAT (Advanced SCATterometer), AMSR-E (Advanced Microwave Scanning Radiometer for Earth Observation) and SMOS (Soil Moisture and Ocean Salinity), it was found that the SM2RAIN-derived rainfall products are as accurate as state-of-the-art products, e.g., the real-time version of the TRMM (Tropical Rainfall Measuring Mission) product. Notwithstanding these promising results, a detailed study investigating the physical basis of the SM2RAIN algorithm, its range of applicability and its limitations on a global scale has still to be carried out. In this study, we carried out a crash test of the SM2RAIN algorithm on a global scale by performing a synthetic experiment. Specifically, modelled soil moisture data are obtained from the HTESSEL model (Hydrology Tiled ECMWF Scheme for Surface Exchanges over Land) forced by ERA-Interim near-surface meteorology. Afterwards, the modelled soil moisture data are used as input to the SM2RAIN algorithm to test whether or not the resulting rainfall estimates are able to reproduce the ERA-Interim rainfall data. Correlation, root
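
    For orientation, the sketch below implements the basic SM2RAIN-style inversion of the soil-water balance, recovering rainfall from soil-moisture increments plus a drainage-type loss term. The parameter values and the example series are placeholders, since in practice Z, a and b are calibrated against reference rainfall.

    ```python
    import numpy as np

    def sm2rain(sm, dt_hours, Z=80.0, a=2.0, b=10.0):
        """Estimate rainfall (mm per step) from relative soil moisture sm in [0, 1].

        P(t) ~ Z * dS/dt + a * S(t)**b, floored at zero.  Z, a, b are
        illustrative placeholder values, not calibrated parameters.
        """
        sm = np.asarray(sm, dtype=float)
        ds = np.diff(sm)                                  # moisture change per step
        losses = a * ((sm[:-1] + sm[1:]) / 2.0) ** b      # drainage-type losses (mm/h)
        rain = Z * ds + losses * dt_hours
        return np.clip(rain, 0.0, None)

    # e.g. a wetting-up event followed by a dry-down in a daily series
    sm_series = [0.30, 0.31, 0.55, 0.62, 0.60, 0.57, 0.55]
    print(np.round(sm2rain(sm_series, dt_hours=24.0), 1))
    ```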

  17. A comparison of two position estimate algorithms that use ILS localizer and DME information. Simulation and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Vicroy, D. D.; Scanlon, C.

    1984-01-01

    Simulation and flight tests were conducted to compare the accuracy of two algorithms designed to compute a position estimate with an airborne navigation computer. Both algorithms used ILS localizer and DME radio signals to compute a position difference vector to be used as an input to the navigation computer position estimate filter. The results of these tests show that the position estimate accuracy and response to artificially induced errors are improved when the position estimate is computed by an algorithm that geometrically combines DME and ILS localizer information to form a single component of error rather than by an algorithm that produces two independent components of error, one from a DME input and the other from the ILS localizer input.

  18. Nondestructive characterization of tie-rods by means of dynamic testing, added masses and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Gentilini, C.; Marzani, A.; Mazzotti, M.

    2013-01-01

    The structural characterization of tie-rods is crucial for the safety assessments of historical buildings. The main parameters that characterize the behavior of tie-rods are the tensile force, the modulus of elasticity of the material and the rotational stiffness at both restraints. Several static, static-dynamic and pure dynamic nondestructive methods have been proposed in the last decades to identify such parameters. However, none of them is able to characterize all four of these parameters. To fill this gap, in this work a procedure based on dynamic testing, added masses and genetic algorithms (GA) is proposed. The identification is driven by the GA, whose objective function is a metric of the discrepancy between the experimentally determined (by dynamic impact testing) and the numerically computed (by a fast and reliable finite element formulation) frequencies of vibration of modified systems obtained from the tie-rod by adding a concentrated mass at specific positions. A comprehensive numerical testing campaign, covering cases ranging from short, low-stressed, almost hinged tie-rods to long, highly tensioned, nearly clamped tie-rods, shows that the proposed strategy is reliable in identifying the four unknowns. Finally, the procedure has been applied to characterize a metallic tie-rod located in Palazzo Paleotti, Bologna (Italy).

  19. Tests of a Semi-Analytical Case 1 and Gelbstoff Case 2 SeaWiFS Algorithm with a Global Data Set

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.; Hawes, Steve K.; Lee, Zhongping

    1997-01-01

    A semi-analytical algorithm was tested with a total of 733 points of either unpackaged or packaged-pigment data, with corresponding algorithm parameters for each data type. The 'unpackaged' type consisted of data sets that were generally consistent with the Case 1 CZCS algorithm and other well calibrated data sets. The 'packaged' type consisted of data sets apparently containing somewhat more packaged pigments, requiring modification of the absorption parameters of the model consistent with the CalCOFI study area. This resulted in two equally divided data sets. A more thorough scrutiny of these and other data sets using a semianalytical model requires improved knowledge of the phytoplankton and gelbstoff of the specific environment studied. Since the semi-analytical algorithm is dependent upon four spectral channels including the 412 nm channel, while most other algorithms are not, a means of testing data sets for consistency was sought. A numerical filter was developed to classify data sets into the above classes. The filter uses reflectance ratios, which can be determined from space. The sensitivity of such numerical filters to measurement errors resulting from atmospheric correction and sensor noise requires further study. The semi-analytical algorithm performed superbly on each of the data sets after classification, resulting in RMS1 errors of 0.107 and 0.121, respectively, for the unpackaged and packaged data-set classes, with little bias and slopes near 1.0. In combination, the RMS1 performance was 0.114. While these numbers appear rather sterling, one must bear in mind what misclassification does to the results. Using an average or compromise parameterization on the modified global data set yielded an RMS1 error of 0.171, while using the unpackaged parameterization on the global evaluation data set yielded an RMS1 error of 0.284. So, without classification, the algorithm performs better globally using the average parameters than it does using the unpackaged

  20. Assessing the performance of vessel wall tracking algorithms: the importance of the test phantom

    NASA Astrophysics Data System (ADS)

    Ramnarine, K. V.; Kanber, B.; Panerai, R. B.

    2004-01-01

    There is widespread clinical interest in assessing the mechanical properties of tissues and vessel walls. This study investigated the importance of the test phantom in providing a realistic assessment of clinical wall tracking performance for a variety of ultrasound modalities. B-mode, colour Doppler and Tissue Doppler Imaging (TDI) cineloop images were acquired using a Philips HDI5000 scanner and L12-5 probe. In-vivo longitudinal sections of 30 common carotid arteries and in-vitro images of pulsatile flow of a blood mimicking fluid through walled and wall-less tissue and vessel mimicking flow phantoms were analysed. Vessel wall tracking performance was assessed for our new probabilistic B-mode algorithm (PROBAL), and 3 different techniques implemented by Philips Medical Systems, based on B-mode edge detection (LDOT), colour Doppler (CVIQ) and TDI (TDIAWM). Precision (standard deviation/mean) of the peak systole dilations for respective PROBAL, LDOT, CVIQ and TDIAWM techniques were: 15.4 +/- 8.4%, 23 +/- 12.7%, 10 +/- 10% and 10.3 +/- 8.1% for the common carotid arteries; 6.4%, 22%, 11.6% and 34.5% for the wall-less flow phantom, 5.3%, 9.8%, 23.4% and 2.7% for the C-flex walled phantom and 3.9%, 2.6%, 1% and 3.2% for the latex walled phantom. The test phantom design and construction had a significant effect on the measurement of wall tracking performance.

  1. Test of a processing algorithm for NIR-laser-diode-based pulse oximetry

    NASA Astrophysics Data System (ADS)

    Lopez Silva, Sonnia M.; Silveira, Juan Pedro; Dotor, Maria Luisa

    2003-04-01

    Pulse oximeters are used for the non-invasive monitoring of arterial blood hemoglobin oxygen saturation. The technique is based on the time-variable optical attenuation of a vascular bed due to the cardiac pumping action (photoplethysmography) and the differential optical absorption of oxy- and deoxy-hemoglobin. The photoplethysmographic (PPG) signals measured at two specific wavelengths are decomposed into their variable or pulsating component (EAC) and their constant or non-pulsating component (EDC) in order to derive a parameter related to the arterial blood oxygen saturation (So2). A signal-processing algorithm for a near-infrared (NIR) laser-diode-based transmittance pulse oximetry system has been reported previously. The main difficulties in extracting information from the PPG signals are the small size of the signal variation relative to its constant value, and the presence of artefacts caused by macro- and micro-movements of the body part under analysis. The proposed algorithm permits the numerical separation of the variable and constant parts of the signals for both wavelengths. The EDC is obtained by low-pass filtering, and the EAC by band-pass filtering followed by a non-linear filtering based on histogram reduction. The present work analyses the influence of processing parameters, such as the filter cut-off frequencies and the histogram-reduction percentage, on the derived So2 values. The tests have been conducted on both real and simulated PPG signals. The real PPG signals were recorded in experimental studies with human subjects using the NIR laser-diode-based transmittance pulse oximetry system. The sources of artefacts and noise in the laser-diode PPG signals are discussed.
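
    The sketch below shows the generic filtering and "ratio of ratios" step that this kind of processing builds on: split each wavelength's PPG into DC and AC components and combine them into a single saturation-related parameter. The cut-off frequencies, synthetic signals and the calibration line are illustrative assumptions, and the paper's non-linear histogram-reduction step is not reproduced.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 100.0                                   # sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    heart = np.sin(2 * np.pi * 1.2 * t)          # ~72 bpm pulsation
    ppg_red = 2.00 + 0.020 * heart + 0.002 * np.random.randn(t.size)
    ppg_ir  = 1.80 + 0.030 * heart + 0.002 * np.random.randn(t.size)

    def ac_dc(signal, fs, dc_cut=0.5, band=(0.5, 5.0)):
        """Return (peak-to-peak AC, mean DC) for one PPG channel."""
        b_lo, a_lo = butter(2, dc_cut / (fs / 2), btype="lowpass")
        b_bp, a_bp = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
        dc = filtfilt(b_lo, a_lo, signal)
        ac = filtfilt(b_bp, a_bp, signal)
        return np.ptp(ac), np.mean(dc)

    ac_r, dc_r = ac_dc(ppg_red, fs)
    ac_ir, dc_ir = ac_dc(ppg_ir, fs)
    R = (ac_r / dc_r) / (ac_ir / dc_ir)          # "ratio of ratios"
    spo2 = 110.0 - 25.0 * R                      # commonly quoted empirical line, not
    print(round(R, 3), round(spo2, 1))           # the calibration of this system
    ```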

  2. Flight Testing of the Space Launch System (SLS) Adaptive Augmenting Control (AAC) Algorithm on an F/A-18

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.; VanZwieten, Tannen S.; Hanson, Curtis E.; Wall, John H.; Miller, Chris J.; Gilligan, Eric T.; Orr, Jeb S.

    2014-01-01

    The Marshall Space Flight Center (MSFC) Flight Mechanics and Analysis Division developed an adaptive augmenting control (AAC) algorithm for launch vehicles that improves robustness and performance on an as-needed basis by adapting a classical control algorithm to unexpected environments or variations in vehicle dynamics. This was baselined as part of the Space Launch System (SLS) flight control system. The NASA Engineering and Safety Center (NESC) was asked to partner with the SLS Program and the Space Technology Mission Directorate (STMD) Game Changing Development Program (GCDP) to flight test the AAC algorithm on a manned aircraft that can achieve a high level of dynamic similarity to a launch vehicle and raise the technology readiness of the algorithm early in the program. This document reports the outcome of the NESC assessment.

  3. Testing a Firefly-Inspired Synchronization Algorithm in a Complex Wireless Sensor Network.

    PubMed

    Hao, Chuangbo; Song, Ping; Yang, Cheng; Liu, Xiongjun

    2017-03-08

    Data acquisition is the foundation of soft sensors and data fusion. Distributed data acquisition and its synchronization are important technologies for ensuring the accuracy of soft sensors. As a research topic in bionic science, the firefly-inspired algorithm has attracted widespread attention as a new synchronization method. Aiming at reducing the design difficulty of firefly-inspired synchronization algorithms for Wireless Sensor Networks (WSNs) with complex topologies, this paper presents a firefly-inspired synchronization algorithm based on a multiscale discrete phase model that can optimize the performance tradeoff between the network scalability and synchronization capability in a complex wireless sensor network. The synchronization process can be regarded as a Markov state transition, which ensures the stability of this algorithm. Compared with the Mirollo and Strogatz model and the Reachback Firefly Algorithm, the proposed algorithm obtains better stability and performance. Finally, its practicality has been experimentally confirmed using 30 nodes in a real multi-hop topology with low-quality links.
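
    The toy simulation below conveys the basic pulse-coupled ("firefly") mechanism with a discrete phase counter and all-to-all flashes; it does not reproduce the paper's multiscale discrete phase model or its multi-hop, lossy-link behaviour, and all settings are arbitrary illustrations.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_nodes, period, coupling = 30, 100, 0.12
    phase = rng.integers(0, period, size=n_nodes).astype(float)

    for tick in range(40 * period):
        phase += 1.0                                  # every node advances its counter
        fired = phase >= period
        while fired.any():
            phase[fired] = 0.0                        # these nodes flash and reset
            listeners = ~fired & (phase > 0.0)
            phase[listeners] *= 1.0 + coupling        # excitatory phase jump on hearing a flash
            fired = phase >= period                   # nodes pushed past threshold flash too

    print("final phase spread:", phase.max() - phase.min())   # ~0 once synchronized
    ```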

  4. [Interpretation of pair diagnosis with the Giessen test. An algorithm and a computer program to determine types].

    PubMed

    Kubinger, K D; Wagner, M M; Alexandrowicz, R

    1999-07-01

    An algorithm is given to quantify the similarity of a couple's Giessen-Test profile to the 16 typical test profiles identified by Brähler and Brähler (1993). The German Giessen-Test is a personality inventory based on psychoanalysis. The Euclidean distance was chosen as the similarity measure. The typical test profile to which a given couple belongs can thus be identified very easily, a task that is otherwise possible only with difficulty. However, any allocation is merely descriptive and is not based on statistical decisions. As a special service, the corresponding computer programme is made available to everyone.
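
    The allocation step reduces to a nearest-profile assignment, as in the small sketch below; the reference profiles, the couple's scores, and the profile length are invented for illustration, whereas the real procedure uses the 16 published typical profiles.

    ```python
    import numpy as np

    # 16 hypothetical reference profiles and one hypothetical couple profile
    reference_profiles = np.random.default_rng(0).normal(50, 10, size=(16, 12))
    couple_profile = np.random.default_rng(1).normal(50, 10, size=12)

    distances = np.linalg.norm(reference_profiles - couple_profile, axis=1)
    assigned_type = int(np.argmin(distances)) + 1       # types numbered 1..16
    print("assigned to type", assigned_type, "at distance", round(float(distances.min()), 1))
    ```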

  5. Exploring New Ways to Deliver Value to Healthcare Organizations: Algorithmic Testing, Data Integration, and Diagnostic E-consult Service.

    PubMed

    Risin, Semyon A; Chang, Brian N; Welsh, Kerry J; Kidd, Laura R; Moreno, Vanessa; Chen, Lei; Tholpady, Ashok; Wahed, Amer; Nguyen, Nghia; Kott, Marylee; Hunter, Robert L

    2015-01-01

    As the US health care system undergoes transformation and transitions to value-based models, it is critical for laboratory medicine/clinical pathology physicians to explore opportunities, find new ways to deliver value, and become an integral part of the healthcare team. This is also essential for ensuring the financial health and stability of the profession when the payment paradigm changes from fee-for-service to fee-for-performance. About 5 years ago we started searching for ways to achieve this goal. Among other approaches, the search included addressing the laboratory work-ups for specialists' referrals in the Harris Health System, a major safety-net health care organization serving the mostly indigent and underserved population of Harris County, TX. We present here our experience in improving the efficiency of laboratory testing for the referral process and in building a prototype of a diagnostic e-consult service, using rheumatologic diseases as a starting point. The service incorporates algorithmic testing; integration of clinical, laboratory and imaging data; issuing of structured, comprehensive consultation reports incorporating all the relevant information; and maintenance of personal contacts and an electronic line of communication with the primary providers and referral center personnel. An ongoing survey of providers attests to the value of the service in facilitating their work and increasing productivity. Analysis of the cost effectiveness and of other value indicators is currently underway. We also discuss our pioneering experience in building pathology resident and fellow training in an integrated diagnostic consulting service.

  6. Industrial Sites Work Plan for Leachfield Corrective Action Units: Nevada Test Site and Tonopah Test Range, Nevada (including Record of Technical Change Nos. 1, 2, 3, and 4)

    SciTech Connect

    DOE /NV

    1998-12-18

    This Leachfield Corrective Action Units (CAUs) Work Plan has been developed in accordance with the Federal Facility Agreement and Consent Order (FFACO) that was agreed to by the U.S. Department of Energy, Nevada Operations Office (DOE/NV); the State of Nevada Division of Environmental Protection (NDEP); and the U.S. Department of Defense (FFACO, 1996). Under the FFACO, a work plan is an optional planning document that provides information for a CAU or group of CAUs where significant commonality exists. A work plan may be developed that can be referenced by leachfield Corrective Action Investigation Plans (CAIPs) to eliminate redundant CAU documentation. This Work Plan includes FFACO-required management, technical, quality assurance (QA), health and safety, public involvement, field sampling, and waste management documentation common to several CAUs with similar site histories and characteristics, namely the leachfield systems at the Nevada Test Site (NTS) and the Tonopah Test Range (TTR). For each CAU, a CAIP will be prepared to present detailed, site-specific information regarding contaminants of potential concern (COPCs), sampling locations, and investigation methods.

  7. A sequential nonparametric pattern classification algorithm based on the Wald SPRT. [Sequential Probability Ratio Test

    NASA Technical Reports Server (NTRS)

    Poage, J. L.

    1975-01-01

    A sequential nonparametric pattern classification procedure is presented. The method presented is an estimated version of the Wald sequential probability ratio test (SPRT). This method utilizes density function estimates, and the density estimate used is discussed, including a proof of convergence in probability of the estimate to the true density function. The classification procedure proposed makes use of the theory of order statistics, and estimates of the probabilities of misclassification are given. The procedure was tested on discriminating between two classes of Gaussian samples and on discriminating between two kinds of electroencephalogram (EEG) responses.
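
    For reference, the plain (parametric) Wald SPRT that the estimated version builds on can be sketched as follows: accumulate the log-likelihood ratio sample by sample and stop as soon as it crosses one of the two error-rate thresholds. Here the class densities are known Gaussians for simplicity, whereas the paper replaces them with nonparametric density estimates.

    ```python
    import numpy as np
    from scipy.stats import norm

    def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
        """Classic Wald SPRT between two Gaussian hypotheses."""
        upper = np.log((1 - beta) / alpha)       # accept H1 above this
        lower = np.log(beta / (1 - alpha))       # accept H0 below this
        llr = 0.0
        for n, x in enumerate(samples, start=1):
            llr += norm.logpdf(x, mu1, sigma) - norm.logpdf(x, mu0, sigma)
            if llr >= upper:
                return "H1", n                   # decision and samples used
            if llr <= lower:
                return "H0", n
        return "undecided", len(samples)

    rng = np.random.default_rng(7)
    print(sprt(rng.normal(1.0, 1.0, size=100)))  # data drawn from class H1
    ```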

  8. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    ERIC Educational Resources Information Center

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…

  9. Universal test fixture for monolithic mm-wave integrated circuits calibrated with an augmented TRD algorithm

    NASA Technical Reports Server (NTRS)

    Romanofsky, Robert R.; Shalkhauser, Kurt A.

    1989-01-01

    The design and evaluation of a novel fixturing technique for characterizing millimeter wave solid state devices is presented. The technique utilizes a cosine-tapered ridge guide fixture and a one-tier de-embedding procedure to produce accurate and repeatable device level data. Advanced features of this technique include nondestructive testing, full waveguide bandwidth operation, universality of application, and rapid, yet repeatable, chip-level characterization. In addition, only one set of calibration standards is required regardless of the device geometry.

  10. Evaluation of a New Method of Fossil Retrodeformation by Algorithmic Symmetrization: Crania of Papionins (Primates, Cercopithecidae) as a Test Case

    PubMed Central

    Tallman, Melissa; Amenta, Nina; Delson, Eric; Frost, Stephen R.; Ghosh, Deboshmita; Klukkert, Zachary S.; Morrow, Andrea; Sawyer, Gary J.

    2014-01-01

    Diagenetic distortion can be a major obstacle to collecting quantitative shape data on paleontological specimens, especially for three-dimensional geometric morphometric analysis. Here we utilize the recently published algorithmic symmetrization method of fossil reconstruction and compare it to the more traditional reflection & averaging approach. In order to have an objective test of this method, five casts of a female cranium of Papio hamadryas kindae were manually deformed while the plaster hardened. These were subsequently “retrodeformed” using both algorithmic symmetrization and reflection & averaging and then compared to the original, undeformed specimen. We found that in all cases, algorithmic retrodeformation improved the shape of the deformed cranium and in four out of five cases, the algorithmically symmetrized crania were more similar in shape to the original crania than the reflected & averaged reconstructions. In three out of five cases, the difference between the algorithmically symmetrized crania and the original cranium could be contained within the magnitude of variation among individuals in a single subspecies of Papio. Instances of asymmetric distortion, such as breakage on one side, or bending in the axis of symmetry, were well handled, whereas symmetrical distortion remained uncorrected. This technique was further tested on a naturally deformed and fossilized cranium of Paradolichopithecus arvernensis. Results, based on a principal components analysis and Procrustes distances, showed that the algorithmically symmetrized Paradolichopithecus cranium was more similar to other, less-deformed crania from the same species than was the original. These results illustrate the efficacy of this method of retrodeformation by algorithmic symmetrization for the correction of asymmetrical distortion in fossils. Symmetrical distortion remains a problem for all currently developed methods of retrodeformation. PMID:24992483

  11. Evaluation of a new method of fossil retrodeformation by algorithmic symmetrization: crania of papionins (Primates, Cercopithecidae) as a test case.

    PubMed

    Tallman, Melissa; Amenta, Nina; Delson, Eric; Frost, Stephen R; Ghosh, Deboshmita; Klukkert, Zachary S; Morrow, Andrea; Sawyer, Gary J

    2014-01-01

    Diagenetic distortion can be a major obstacle to collecting quantitative shape data on paleontological specimens, especially for three-dimensional geometric morphometric analysis. Here we utilize the recently-published algorithmic symmetrization method of fossil reconstruction and compare it to the more traditional reflection & averaging approach. In order to have an objective test of this method, five casts of a female cranium of Papio hamadryas kindae were manually deformed while the plaster hardened. These were subsequently "retrodeformed" using both algorithmic symmetrization and reflection & averaging and then compared to the original, undeformed specimen. We found that in all cases, algorithmic retrodeformation improved the shape of the deformed cranium and in four out of five cases, the algorithmically symmetrized crania were more similar in shape to the original crania than the reflected & averaged reconstructions. In three out of five cases, the difference between the algorithmically symmetrized crania and the original cranium could be contained within the magnitude of variation among individuals in a single subspecies of Papio. Instances of asymmetric distortion, such as breakage on one side, or bending in the axis of symmetry, were well handled, whereas symmetrical distortion remained uncorrected. This technique was further tested on a naturally deformed and fossilized cranium of Paradolichopithecus arvernensis. Results, based on a principal components analysis and Procrustes distances, showed that the algorithmically symmetrized Paradolichopithecus cranium was more similar to other, less-deformed crania from the same species than was the original. These results illustrate the efficacy of this method of retrodeformation by algorithmic symmetrization for the correction of asymmetrical distortion in fossils. Symmetrical distortion remains a problem for all currently developed methods of retrodeformation.

  12. Compilation, design tests: Energetic particles Satellite S-3 including design tests for S-3A, S-3B and S-3C

    NASA Technical Reports Server (NTRS)

    Ledoux, F. N.

    1973-01-01

    A compilation is presented of the engineering design tests conducted in support of the Energetic Particle Satellite S-3, S-3A, and S-3B programs. The purpose of the tests was to determine the adequacy and reliability of the Energetic Particles series of satellite designs. The various tests consisted of: (1) moments of inertia, (2) functional reliability, (3) component and structural integrity, (4) initiators and explosives tests, and (5) acceptance tests.

  13. 78 FR 28633 - Prometric, Inc., a Subsidiary of Educational Testing Service, Including On-Site Leased Workers...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-15

    ..., Including On-Site Leased Workers From Office Team St. Paul, Minnesota; Amended Certification Regarding... workers of the subject firm. The company reports that workers leased from Office Team were employed on... these findings, the Department is amending this certification to include workers leased from Office...

  14. Objective markers for sleep propensity: comparison between the Multiple Sleep Latency Test and the Vigilance Algorithm Leipzig.

    PubMed

    Olbrich, Sebastian; Fischer, Marie M; Sander, Christian; Hegerl, Ulrich; Wirtz, Hubert; Bosse-Henck, Andrea

    2015-08-01

    The regulation of wakefulness is important for higher-order organisms. Its dysregulation is involved in the pathomechanism of several psychiatric disorders. Thus, a tool for its objective yet time-efficient assessment would be of importance. The Vigilance Algorithm Leipzig allows the objective measurement of sleep propensity, based on a single resting state electroencephalogram. To compare the Vigilance Algorithm Leipzig with the standard for objective assessment of excessive daytime sleepiness, a four-trial Multiple Sleep Latency Test in 25 healthy subjects was conducted. Between the first two trials, a 15-min, 25-channel resting electroencephalogram was recorded, and Vigilance Algorithm Leipzig was used to classify the sleep propensity (i.e., type of vigilance regulation) of each subject. The results of both methods showed significant correlations with the Epworth Sleepiness Scale (ρ = -0.70; ρ = 0.45, respectively) and correlated with each other (ρ = -0.54). Subjects with a stable electroencephalogram-vigilance regulation yielded significantly increased sleep latencies compared with an unstable regulation (multiple sleep latency 898.5 s versus 549.9 s; P = 0.03). Further, Vigilance Algorithm Leipzig classifications allowed the identification of subjects with average sleep latencies <6 min with a sensitivity of 100% and a specificity of 77%. Thus, Vigilance Algorithm Leipzig provides similar information on wakefulness regulation in comparison to the much more cost- and time-consuming Multiple Sleep Latency Test. Due to its high sensitivity and specificity for high sleep propensity, Vigilance Algorithm Leipzig could be an effective and reliable alternative to the Multiple Sleep Latency Test, for example for screening purposes in large cohorts, where objective information about wakefulness regulation is needed.

  15. Disk diffusion antimicrobial susceptibility testing of members of the family Legionellaceae including erythromycin-resistant variants of Legionella micdadei.

    PubMed Central

    Dowling, J N; McDevitt, D A; Pasculle, A W

    1984-01-01

    Disk diffusion antimicrobial susceptibility testing of members of the family Legionellaceae was accomplished on buffered charcoal yeast extract agar by allowing the bacteria to grow for 6 h before placement of the disks, followed by an additional 42-h incubation period before the inhibitory zones were measured. This system was standardized by comparing the zone sizes with the MICs for 20 antimicrobial agents of nine bacterial strains in five Legionella species and of 19 laboratory-derived, erythromycin-resistant variants of Legionella micdadei. A high, linear correlation between zone size and MIC was found for erythromycin, trimethoprim, penicillin, ampicillin, carbenicillin, cephalothin, cefamandole, cefoxitin, moxalactam, chloramphenicol, vancomycin, and clindamycin. Disk susceptibility testing could be employed to screen Legionella isolates for resistance to any of these antimicrobial agents, of which only erythromycin is known to be efficacious in the treatment of legionellosis. With selected antibiotics, disk susceptibility patterns also appeared to accurately identify the legionellae to the species level. The range of the MICs of the legionellae for rifampin and the aminoglycosides was too small to determine whether the correlation of zone size with MIC was linear. However, laboratory-derived, high-level rifampin-resistant variants of L. micdadei demonstrated no inhibition zone around the rifampin disk, indicating that disk susceptibility testing would likely identify a rifampin-resistant clinical isolate. Of the antimicrobial agents tested, the only agents for which disk susceptibility testing was definitely not possible on buffered charcoal yeast extract agar were oxacillin, the tetracyclines, and the sulfonamides. PMID:6565706

  16. Segmentation of diesel spray images with log-likelihood ratio test algorithm for non-Gaussian distributions.

    PubMed

    Pastor, José V; Arrègle, Jean; García, José M; Zapata, L Daniel

    2007-02-20

    A methodology for processing images of diesel sprays under different experimental situations is presented. The new approach has been developed for cases where the background does not follow a Gaussian distribution but a positive bias appears. In such cases, the lognormal and the gamma probability density functions have been considered for the background digital level distributions. Two different algorithms have been compared with the standard log-likelihood ratio test (LRT): a threshold defined from the cumulative probability density function of the background shows a sensitive improvement, but the best results are obtained with modified versions of the LRT algorithm adapted to non-Gaussian cases.
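
    A hedged sketch of the per-pixel likelihood-ratio idea for a positively biased background is given below, with a lognormal background model and a Gaussian spray model fitted from seed regions. The synthetic image, the spray model and the zero threshold are illustrative and do not reproduce the paper's adapted LRT algorithms.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    h, w = 120, 160
    image = rng.lognormal(mean=2.0, sigma=0.4, size=(h, w))        # biased background
    image[40:80, 60:110] += rng.normal(40.0, 5.0, size=(40, 50))   # synthetic "spray"

    # fit the two models from user-supplied seed regions (assumed known here)
    bg_seed = image[:20, :].ravel()                 # assumed spray-free strip
    spray_seed = image[50:70, 70:100].ravel()       # assumed spray-covered patch
    shape, loc, scale = stats.lognorm.fit(bg_seed, floc=0.0)
    mu_s, sigma_s = stats.norm.fit(spray_seed)

    # per-pixel log-likelihood ratio: spray model versus lognormal background
    llr = (stats.norm.logpdf(image, mu_s, sigma_s)
           - stats.lognorm.logpdf(image, shape, loc=loc, scale=scale))
    mask = llr > 0.0                                # equal-likelihood threshold
    print("spray pixels found:", int(mask.sum()), "of", 40 * 50, "injected")
    ```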

  17. Evaluation of a wind-tunnel gust response technique including correlations with analytical and flight test results

    NASA Technical Reports Server (NTRS)

    Redd, L. T.; Hanson, P. W.; Wynne, E. C.

    1979-01-01

    A wind tunnel technique for obtaining gust frequency response functions for use in predicting the response of flexible aircraft to atmospheric turbulence is evaluated. The tunnel test results for a dynamically scaled cable supported aeroelastic model are compared with analytical and flight data. The wind tunnel technique, which employs oscillating vanes in the tunnel throat section to generate a sinusoidally varying flow field around the model, was evaluated by use of a 1/30 scale model of the B-52E airplane. Correlation between the wind tunnel results, flight test results, and analytical predictions for response in the short period and wing first elastic modes of motion are presented.

  18. Social and Prevocational Information Battery. [Includes Test Book, User's Guide, Examiner's Manual, Technical Report, Answer Key, and Class Record Sheet].

    ERIC Educational Resources Information Center

    Halpern, Andrew; And Others

    The Social and Prevocational Information Battery (SPIB) consists of a series of nine tests designed to assess knowledge of skills and competencies widely regarded as important for the ultimate community adjustment of educable mentally retarded students. The nine areas are purchasing, budgeting, banking, job related behavior, job search skills,…

  19. Spectral-Based Volume Sensor Prototype, Post-VS4 Test Series Algorithm Development

    DTIC Science & Technology

    2009-04-30

    Å (NIR), solar-blind UV (UV), and 4.3 μm (IR)) and five EVENT algorithms (EVENT, PDSMOKE, FIRE, FIRE_FOV, and WELDING) generating alarm events for... detector are not currently used by any algorithm and, where present, are recorded only for future research and development. The UV units (upper unit...in Figure 2-1) are designed around a standard UV-only OFD (Vibrometer, Inc.). The OmniGuard 860 Optical Flame Detector (Vibrometer, Inc.) used in

  20. Multispectral imaging burn wound tissue classification system: a comparison of test accuracies between several common machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2016-03-01

    The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care
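
    The comparison protocol can be reproduced in outline with scikit-learn, evaluating a subset of the listed classifiers by 10-fold cross-validation on synthetic six-class data standing in for the multispectral ground-truth database (the weighted-LDA and ensemble-LDA variants are omitted because they are not standard scikit-learn estimators).

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    # synthetic stand-in for the six-class multispectral training database
    X, y = make_classification(n_samples=1200, n_features=8, n_informative=6,
                               n_classes=6, n_clusters_per_class=1, random_state=0)

    models = {
        "KNN": KNeighborsClassifier(),
        "DT": DecisionTreeClassifier(random_state=0),
        "LDA": LinearDiscriminantAnalysis(),
        "QDA": QuadraticDiscriminantAnalysis(),
        "EN-KNN": BaggingClassifier(KNeighborsClassifier(), n_estimators=25, random_state=0),
        "EN-DT": BaggingClassifier(DecisionTreeClassifier(), n_estimators=25, random_state=0),
    }

    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=10, scoring="accuracy")
        print(f"{name:7s} mean accuracy = {acc.mean():.3f} +/- {acc.std():.3f}")
    ```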

  1. Planning fuel-conservative descents in an airline environment using a small programmable calculator: algorithm development and flight test results

    SciTech Connect

    Knox, C.E.; Vicroy, D.D.; Simmon, D.A.

    1985-05-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  2. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

    1985-01-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.
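
    A back-of-the-envelope version of the time-mode planning problem is sketched below: choose the top-of-descent point so that a fixed-rate, fixed-speed descent crosses the metering fix at the assigned time. All performance numbers are invented, and the real algorithm additionally handles the Mach/CAS schedule, gross weight, wind gradients, and nonstandard temperatures.

    ```python
    def plan_descent(cruise_alt_ft, fix_alt_ft, cruise_gs_kt, descent_gs_kt,
                     descent_rate_fpm, dist_to_fix_nm, time_to_fix_min):
        """Return (descent_distance_nm, required_cruise_ground_speed_kt, feasible)."""
        alt_to_lose = cruise_alt_ft - fix_alt_ft
        descent_time_min = alt_to_lose / descent_rate_fpm
        descent_dist_nm = descent_gs_kt * descent_time_min / 60.0
        cruise_dist_nm = dist_to_fix_nm - descent_dist_nm
        cruise_time_min = time_to_fix_min - descent_time_min
        required_gs = 60.0 * cruise_dist_nm / cruise_time_min
        feasible = abs(required_gs - cruise_gs_kt) < 30.0   # crude speed-adjust margin
        return descent_dist_nm, required_gs, feasible

    tod, gs, ok = plan_descent(cruise_alt_ft=35000, fix_alt_ft=10000,
                               cruise_gs_kt=460, descent_gs_kt=330,
                               descent_rate_fpm=2200, dist_to_fix_nm=120,
                               time_to_fix_min=19.0)
    print(f"start descent {tod:.1f} nm from the fix; "
          f"hold about {gs:.0f} kt ground speed before it (feasible: {ok})")
    ```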

  3. Testing a Firefly-Inspired Synchronization Algorithm in a Complex Wireless Sensor Network

    PubMed Central

    Hao, Chuangbo; Song, Ping; Yang, Cheng; Liu, Xiongjun

    2017-01-01

    Data acquisition is the foundation of soft sensing and data fusion. Distributed data acquisition and its synchronization are important technologies for ensuring the accuracy of soft sensors. As a research topic in bionic science, the firefly-inspired algorithm has attracted widespread attention as a new synchronization method. Aiming at reducing the design difficulty of firefly-inspired synchronization algorithms for Wireless Sensor Networks (WSNs) with complex topologies, this paper presents a firefly-inspired synchronization algorithm based on a multiscale discrete phase model that can optimize the performance tradeoff between network scalability and synchronization capability in a complex wireless sensor network. The synchronization process can be regarded as a Markov state transition, which ensures the stability of this algorithm. Compared with the Miroll and Steven model and Reachback Firefly Algorithm, the proposed algorithm obtains better stability and performance. Finally, its practicality has been experimentally confirmed using 30 nodes in a real multi-hop topology with low-quality links. PMID:28282899
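
    The record above describes the synchronization scheme only qualitatively. The toy simulation below is not the paper's multiscale discrete phase model; it only shows the general pulse-coupled "firefly" idea on an assumed ring topology, with purely illustrative constants.

```python
# Toy sketch, not the paper's model: pulse-coupled "firefly" nodes on a ring,
# each cycling through a discrete phase. When a node wraps around it flashes,
# and neighbours that hear the flash jump their own phase forward
# multiplicatively, so nodes close to flashing catch up faster. Over many
# cycles the firing times typically cluster together. Network size, phase
# resolution, and coupling strength are illustrative assumptions only.
import random

random.seed(7)
N, PHASES, EPS = 12, 100, 0.25
phase = [random.randrange(PHASES) for _ in range(N)]
neighbours = {i: ((i - 1) % N, (i + 1) % N) for i in range(N)}   # ring topology

print("distinct phases before:", len(set(phase)))
for _ in range(20000):
    flashers = []
    for i in range(N):
        phase[i] = (phase[i] + 1) % PHASES
        if phase[i] == 0:
            flashers.append(i)                  # node i flashes this tick
    for i in flashers:
        for j in neighbours[i]:
            if phase[j] != 0:
                bumped = int(phase[j] * (1 + EPS)) + 1
                phase[j] = 0 if bumped >= PHASES else bumped   # absorb or advance
print("distinct phases after: ", len(set(phase)))
```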

  4. Determination of the relative economic impact of different molecular-based laboratory algorithms for respiratory viral pathogen detection, including Pandemic (H1N1), using a secure web based platform

    PubMed Central

    2011-01-01

    Background During periods of crisis, laboratory planners may be faced with a need to make operational and clinical decisions in the face of limited information. To avoid this dilemma, our laboratory utilizes a secure web-based platform, Data Integration for Alberta Laboratories (DIAL), to make near real-time decisions. This manuscript utilizes the data collected by DIAL as well as laboratory test cost modeling to identify the relative economic impact of four proposed scenarios of testing for Pandemic H1N1 (2009) and other respiratory viral pathogens. Methods Historical data were collected from the two waves of the pandemic using DIAL. Four proposed molecular testing scenarios were generated: A) Luminex respiratory virus panel (RVP) first with/without US Centers for Disease Control Influenza A Matrix gene assay (CDC-M), B) CDC-M first with/without RVP, C) RVP only, and D) CDC-M only. Relative cost estimates of the different testing algorithms were generated from a review of historical costs in the lab and were based on 2009 Canadian dollars. Results Scenarios A and B had similar costs when the rate of influenza A was low (< 10%), with a higher relative cost in Scenario A with increasing incidence. Scenario A provided more information about mixed respiratory virus infection as compared with Scenario B. Conclusions No one approach is applicable to all conditions. Testing costs will vary depending on the test volume, the prevalence of influenza A strains, and other circulating viruses, and a more costly algorithm involving a combination of different tests may be chosen to ensure that test results are returned to the clinician more quickly. Costing should not be the only consideration for determination of laboratory algorithms. PMID:21645365
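
    As a rough illustration of the kind of cost modeling described above (not the study's actual figures or reflex rules), the sketch below computes an expected per-specimen cost for two scenarios as a function of influenza A prevalence under one plausible reflex-testing interpretation; the unit costs are invented.

```python
# Illustrative only: expected per-specimen cost of two of the scenarios above as
# a function of influenza A prevalence, under one plausible reflex rule
# (scenario A confirms RVP influenza A positives with CDC-M; scenario B reflexes
# CDC-M negatives to RVP). Unit costs are invented placeholders, not the
# study's 2009 Canadian-dollar figures.
RVP_COST, CDCM_COST = 200.0, 30.0      # hypothetical per-test costs

def scenario_a(prevalence):
    """RVP on every specimen; CDC-M only on the fraction flagged as influenza A."""
    return RVP_COST + prevalence * CDCM_COST

def scenario_b(prevalence):
    """CDC-M on every specimen; RVP only on the CDC-M-negative fraction."""
    return CDCM_COST + (1.0 - prevalence) * RVP_COST

for p in (0.05, 0.10, 0.25, 0.50):
    print(f"prevalence {p:4.0%}:  A = {scenario_a(p):6.2f}   B = {scenario_b(p):6.2f}")
```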

  5. Statistical Analysis of a Large Sample Size Pyroshock Test Data Set Including Post Flight Data Assessment. Revision 1

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; McNelis, Anne M.

    2010-01-01

    The Earth Observing System (EOS) Terra spacecraft was launched on an Atlas IIAS launch vehicle on its mission to observe planet Earth in late 1999. Prior to launch, the new design of the spacecraft's pyroshock separation system was characterized by a series of 13 separation ground tests. The analysis methods used to evaluate this unusually large amount of shock data will be discussed in this paper, with particular emphasis on population distributions and finding statistically significant families of data, leading to an overall shock separation interface level. The wealth of ground test data also allowed a derivation of a Mission Assurance level for the flight. All of the flight shock measurements were below the EOS Terra Mission Assurance level thus contributing to the overall success of the EOS Terra mission. The effectiveness of the statistical methodology for characterizing the shock interface level and for developing a flight Mission Assurance level from a large sample size of shock data is demonstrated in this paper.

  6. Including Bioconcentration Kinetics for the Prioritization and Interpretation of Regulatory Aquatic Toxicity Tests of Highly Hydrophobic Chemicals.

    PubMed

    Kwon, Jung-Hwan; Lee, So-Young; Kang, Hyun-Joong; Mayer, Philipp; Escher, Beate I

    2016-11-01

    Worldwide, regulations of chemicals require short-term toxicity data for evaluating hazards and risks of the chemicals. Current data requirements on the registration of chemicals are primarily based on tonnage and do not yet consider properties of chemicals. For example, short-term ecotoxicity data are required for chemicals with production volume greater than 1 or 10 ton/y according to REACH, without considering chemical properties. Highly hydrophobic chemicals are characterized by low water solubility and slow bioconcentration kinetics, which may hamper the interpretation of short-term toxicity experiments. In this work, internal concentrations of highly hydrophobic chemicals were predicted for standard acute ecotoxicity tests at three trophic levels, algae, invertebrate, and fish. As demonstrated by comparison with maximum aqueous concentrations at water solubility, chemicals with an octanol-water partition coefficient (Kow) greater than 10^6 are not expected to reach sufficiently high internal concentrations for exerting effects within the test duration of acute tests with fish and invertebrates, even though they might be intrinsically toxic. This toxicity cutoff was explained by the slow uptake, i.e., by kinetics, not by thermodynamic limitations. Predictions were confirmed by data entries of the OECD's screening information data set (SIDS) (n = 746), apart from a few exceptions concerning mainly organometallic substances and those with inconsistency between water solubility and Kow. Taking error propagation and model assumptions into account, we thus propose a revision of data requirements for highly hydrophobic chemicals with log Kow > 7.4: Short-term toxicity tests can be limited to algae that generally have the highest uptake rate constants, whereas the primary focus of the assessment should be on persistence, bioaccumulation, and long-term effects.
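
    A minimal sketch of the kinetic argument above, assuming simple first-order one-compartment bioconcentration with invented rate constants (not the authors' model), shows how little of the steady-state internal concentration is reached within a typical 48-96 h acute test when elimination is very slow.

```python
# Minimal sketch, not the authors' model: first-order one-compartment
# bioconcentration, showing why slow uptake kinetics can keep the internal
# concentration of a very hydrophobic chemical far below its steady state
# within a 48-96 h acute test. All rate constants are illustrative assumptions.
import math

def internal_conc(c_water, k_uptake, k_elim, t_hours):
    """C_int(t) = (k_u / k_e) * C_w * (1 - exp(-k_e * t))  (first-order kinetics)."""
    return (k_uptake / k_elim) * c_water * (1.0 - math.exp(-k_elim * t_hours))

c_w = 1e-6            # aqueous concentration near water solubility (arbitrary units)
k_u = 500.0           # uptake rate constant (L kg^-1 h^-1), assumed
k_e = 0.001           # elimination rate constant (h^-1), assumed small for high Kow

for t in (48, 96, 10_000):
    frac_of_steady_state = 1.0 - math.exp(-k_e * t)
    print(f"t = {t:>6} h: C_int = {internal_conc(c_w, k_u, k_e, t):.3e}, "
          f"{frac_of_steady_state:.1%} of steady state")
```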

  7. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex system engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further insure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that

  8. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... test, perform the steady-state test according to this section after an appropriate warm-up period ... Duty-cycle table excerpt: mode 6, intermediate test, 10 percent torque, weighting factor 0.10; mode 7, warm idle, 0 percent torque, weighting factor 0.15. Speed terms are defined in 40 CFR part 1065, and percent torque is relative to the maximum torque at the given speed. Ramped-modal excerpt: mode 1a, steady-state, 119 s, warm idle, 0 percent torque; mode 1b, a 20 s linear transition ...

  9. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... test, perform the steady-state test according to this section after an appropriate warm-up period ... Duty-cycle table excerpt: mode 6, intermediate test, 10 percent torque, weighting factor 0.10; mode 7, warm idle, 0 percent torque, weighting factor 0.15. Speed terms are defined in 40 CFR part 1065, and percent torque is relative to the maximum torque at the given speed. Ramped-modal excerpt: mode 1a, steady-state, 119 s, warm idle, 0 percent torque; mode 1b, a 20 s linear transition ...

  10. Potential sources of 2-aminoacetophenone to confound the Pseudomonas aeruginosa breath test, including analysis of a food challenge study.

    PubMed

    Scott-Thomas, Amy; Pearson, John; Chambers, Stephen

    2011-12-01

    2-Aminoacetophenone can be detected in the breath of Pseudomonas aeruginosa colonized cystic fibrosis patients; however, low levels were also detected in a small proportion of healthy subjects. It was hypothesized that food, beverages, cosmetics or medications could be a source of contamination of 2-aminoacetophenone in breath. To determine the potential confounding of these products on 2-aminoacetophenone breath analysis, screening for this volatile was performed in the laboratory by gas chromatography/mass spectrometry and a food challenge study carried out. 2-Aminoacetophenone was detected in four of the 78 samples tested in vitro: corn chips and canned tuna (high pmol mol⁻¹) and egg white and one of the three beers (low pmol mol⁻¹). No 2-aminoacetophenone was detected in the CF medication or cosmetics tested. Twenty-eight out of 30 environmental air samples were negative for 2-aminoacetophenone (below 50 pmol mol⁻¹). A challenge study with ten healthy subjects was performed to determine if 2-aminoacetophenone from corn chips was detectable on the breath after consumption. Analysis of mixed breath samples reported that the levels of 2-aminoacetophenone were immediately elevated after corn chip consumption, but after 2 h the level of 2-aminoacetophenone had reduced back to the 'baseline' for each subject.

  11. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.

  12. Nondestructive tablet hardness testing by near-infrared spectroscopy: a new and robust spectral best-fit algorithm.

    PubMed

    Kirsch, J D; Drennen, J K

    1999-03-01

    A new algorithm using common statistics was proposed for nondestructive near-infrared (near-IR) spectroscopic tablet hardness testing over a range of tablet potencies. The spectral features that allow near-IR tablet hardness testing were evaluated. Cimetidine tablets of 1-20% potency and 1-7 kp hardness were used for the development and testing of a new spectral best-fit algorithm for tablet hardness prediction. Actual tablet hardness values determined via a destructive diametral crushing test were used for construction of calibration models using principal component analysis/principal component regression (PCA/PCR) or the new algorithm. Both methods allowed the prediction of tablet hardness over the range of potencies studied. The spectral best-fit method compared favorably to the multivariate PCA/PCR method, but was easier to develop. The new approach offers advantages over wavelength-based regression models because the calculation of a spectral slope averages out the influence of individual spectral absorbance bands. The ability to generalize the hardness calibration over a range of potencies confirms the robust nature of the method.
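
    The published best-fit algorithm is not reproduced here; the sketch below only illustrates the general idea of relating tablet hardness to an overall spectral slope with a simple linear calibration. The spectra, hardness values, and wavelength range are synthetic assumptions.

```python
# Illustrative sketch only (not the published best-fit algorithm): relate tablet
# hardness to the overall slope of each NIR spectrum, which rises with tablet
# density/scatter, then predict hardness from that single slope by linear
# regression. Spectra and hardness values below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
wavelengths = np.linspace(1100, 2500, 200)              # nm, typical NIR range
hardness = rng.uniform(1.0, 7.0, size=40)               # kp, synthetic labels

# synthetic spectra: baseline slope proportional to hardness + band structure + noise
bands = 0.05 * np.sin(wavelengths / 60.0)
spectra = (hardness[:, None] * 1e-4) * (wavelengths - 1100) + bands + \
          rng.normal(scale=0.01, size=(40, wavelengths.size))

# spectral slope of each spectrum (first-order polynomial fit over wavelength)
slopes = np.polyfit(wavelengths, spectra.T, deg=1)[0]

# calibrate hardness against slope and report the fit quality
coef = np.polyfit(slopes, hardness, deg=1)
predicted = np.polyval(coef, slopes)
print("correlation of predicted vs. actual hardness:",
      round(float(np.corrcoef(predicted, hardness)[0, 1]), 3))
```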

  13. Numerical Solution and Algorithm Analysis for the Unsteady Navier-Stokes Equations on Dynamic Multiblock Grids Including Chemical Equilibrium. Volume 2

    DTIC Science & Technology

    1992-10-01

    a set comprising air, Ar, CO2, CO, N2, O2, H2 and steam. Debugging and testing of the code was accomplished using an ideally dissociating oxygen model...complex air model employs seventeen species, the thirteen above plus C, C+, CO and CO2. Three more independent reactions are needed 2CO2 ⇌ 2CO + O2...conserving this element, and the additional nonelemental species are C+, CO and CO2. The 17-Species Air Model employs consistent equilibrium constants only

  14. Generalization of the Lord-Wingersky Algorithm to Computing the Distribution of Summed Test Scores Based on Real-Number Item Scores

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2013-01-01

    With known item response theory (IRT) item parameters, Lord and Wingersky provided a recursive algorithm for computing the conditional frequency distribution of number-correct test scores, given proficiency. This article presents a generalized algorithm for computing the conditional distribution of summed test scores involving real-number item…
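
    For reference, a minimal sketch of the classic Lord-Wingersky recursion for dichotomous items is shown below (the article's generalization to real-number item scores is not reproduced); the item probabilities are arbitrary illustrative values.

```python
# Minimal sketch of the classic Lord-Wingersky recursion for dichotomous items:
# given each item's probability of a correct response at a fixed proficiency,
# build the conditional distribution of the number-correct score. The item
# probabilities below are arbitrary illustrative values.
def number_correct_distribution(p_correct):
    """Return P(summed score = s | proficiency) for s = 0..n items."""
    dist = [1.0]                                  # zero items -> score 0 w.p. 1
    for p in p_correct:
        new = [0.0] * (len(dist) + 1)
        for score, prob in enumerate(dist):
            new[score] += prob * (1.0 - p)        # item answered incorrectly
            new[score + 1] += prob * p            # item answered correctly
        dist = new
    return dist

probs = [0.9, 0.7, 0.6, 0.4]                      # item P(correct | theta), assumed
for s, prob in enumerate(number_correct_distribution(probs)):
    print(f"P(score = {s}) = {prob:.4f}")
```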

  15. Automatic Traffic Advisory and Resolution Service (ATARS) Algorithms Including Resolution-Advisory-Register Logic. Volume 2. Sections 12 through 19. Appendices,

    DTIC Science & Technology

    1981-06-01


  16. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-state test according to this section after an appropriate warm-up period, consistent with 40 CFR part... idle mode, operate the engine at its warm idle speed as described in 40 CFR part 1065. (d) For...

  17. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-state test according to this section after an appropriate warm-up period, consistent with 40 CFR part... idle mode, operate the engine at its warm idle speed as described in 40 CFR part 1065. (d) For...

  18. Corrective Action Decision Document for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada: Revision 0, Including Errata Sheet

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2004-04-01

    This Corrective Action Decision Document identifies the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's corrective action alternative recommendation for each of the corrective action sites (CASs) within Corrective Action Unit (CAU) 204: Storage Bunkers, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. An evaluation of analytical data from the corrective action investigation, review of current and future operations at each CAS, and a detailed comparative analysis of potential corrective action alternatives were used to determine the appropriate corrective action for each CAS. There are six CASs in CAU 204, which are all located between Areas 1, 2, 3, and 5 on the NTS. The No Further Action alternative was recommended for CASs 01-34-01, 02-34-01, 03-34-01, and 05-99-02; and a Closure in Place with Administrative Controls recommendation was the preferred corrective action for CASs 05-18-02 and 05-33-01. These alternatives were judged to meet all requirements for the technical components evaluated as well as applicable state and federal regulations for closure of the sites and will eliminate potential future exposure pathways to the contaminated media at CAU 204.

  19. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

    DOE PAGES

    Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

    2016-07-20

    A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.
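
    As a rough sketch of a dependency-aware measure in the spirit of the record above (not the paper's statistic), the code below builds a first-order-neighbourhood GMRF precision matrix on a small lattice and scores a model-minus-observation residual field with a quadratic form; the grid size and the parameters tau and rho are assumptions.

```python
# Hedged sketch (not the paper's statistic): build a Gaussian Markov random
# field precision matrix with a first-order grid-point neighbourhood on a small
# lattice and score a model-minus-observation residual field with the quadratic
# form r' Q r. Grid size and the parameters tau and rho are assumptions.
import numpy as np

ny, nx = 6, 8
n = ny * nx

def idx(i, j):
    return i * nx + j

# adjacency for the first-order (rook) neighbourhood
A = np.zeros((n, n))
for i in range(ny):
    for j in range(nx):
        for di, dj in ((1, 0), (0, 1)):
            ii, jj = i + di, j + dj
            if ii < ny and jj < nx:
                A[idx(i, j), idx(ii, jj)] = A[idx(ii, jj), idx(i, j)] = 1.0

tau, rho = 1.0, 0.9                       # precision scale and spatial dependence
D = np.diag(A.sum(axis=1))
Q = tau * (D - rho * A)                   # positive definite for |rho| < 1

rng = np.random.default_rng(2)
residual = rng.normal(scale=0.5, size=n)  # stand-in model-minus-obs field
score = residual @ Q @ residual           # larger -> worse fit under this GMRF
print("GMRF quadratic-form score:", round(float(score), 3))
```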

  20. Inventory of forest and rangeland resources, including forest stress. [Atlanta, Georgia, Black Hills, and Manitou, Colorado test sites

    NASA Technical Reports Server (NTRS)

    Heller, R. C.; Aldrich, R. C.; Weber, F. P.; Driscoll, R. S. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Some current beetle-killed ponderosa pine can be detected on S190-B photography imaged over the Bear Lodge mountains in the Black Hills National Forest. Detections were made on SL-3 imagery (September 13, 1973) using a zoom lens microscope to view the photography. At this time correlations have not been made to all of the known infestation spots in the Bear Lodge mountains; rather, known infestations have been located on the SL-3 imagery. It was determined that the beetle-killed trees were current kills by stereo viewing of SL-3 imagery on one side and SL-2 on the other. A successful technique was developed for mapping current beetle-killed pine using MSS imagery from mission 247 flown by the C-130 over the Black Hills test site in September 1973. Color enhancement processing on the NASA/JSC, DAS system using three MSS channels produced an excellent quality detection map for current kill pine. More importantly it provides a way to inventory the dead trees by relating PCM counts to actual numbers of dead trees.

  1. Optimizing tuning masses for helicopter rotor blade vibration reduction including computed airloads and comparison with test data

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.; Walsh, Joanne L.; Wilbur, Matthew L.

    1992-01-01

    The development and validation of an optimization procedure to systematically place tuning masses along a rotor blade span to minimize vibratory loads are described. The masses and their corresponding locations are the design variables that are manipulated to reduce the harmonics of hub shear for a four-bladed rotor system without adding a large mass penalty. The procedure incorporates a comprehensive helicopter analysis to calculate the airloads. Predicting changes in airloads due to changes in design variables is an important feature of this research. The procedure was applied to a one-sixth, Mach-scaled rotor blade model to place three masses and then again to place six masses. In both cases the added mass was able to achieve significant reductions in the hub shear. In addition, the procedure was applied to place a single mass of fixed value on a blade model to reduce the hub shear for three flight conditions. The analytical results were compared to experimental data from a wind tunnel test performed in the Langley Transonic Dynamics Tunnel. The correlation of the mass location was good and the trend of the mass location with respect to flight speed was predicted fairly well. However, it was noted that the analysis was not entirely successful at predicting the absolute magnitudes of the fixed system loads.

  2. Testing an alternative search algorithm for compound identification with the 'Wiley Registry of Tandem Mass Spectral Data, MSforID'.

    PubMed

    Oberacher, Herbert; Whitley, Graeme; Berger, Bernd; Weinmann, Wolfgang

    2013-04-01

    A tandem mass spectral database system consists of a library of reference spectra and a search program. State-of-the-art search programs show a high tolerance for variability in compound-specific fragmentation patterns produced by collision-induced decomposition and enable sensitive and specific 'identity search'. In this communication, performance characteristics of two search algorithms combined with the 'Wiley Registry of Tandem Mass Spectral Data, MSforID' (Wiley Registry MSMS, John Wiley and Sons, Hoboken, NJ, USA) were evaluated. The search algorithms tested were the MSMS search algorithm implemented in the NIST MS Search program 2.0g (NIST, Gaithersburg, MD, USA) and the MSforID algorithm (John Wiley and Sons, Hoboken, NJ, USA). Sample spectra were acquired on different instruments and, thus, covered a broad range of possible experimental conditions or were generated in silico. For each algorithm, more than 30,000 matches were performed. Statistical evaluation of the library search results revealed that principally both search algorithms can be combined with the Wiley Registry MSMS to create a reliable identification tool. It appears, however, that a higher degree of spectral similarity is necessary to obtain a correct match with the NIST MS Search program. This characteristic of the NIST MS Search program has a positive effect on specificity as it helps to avoid false positive matches (type I errors), but reduces sensitivity. Thus, particularly with sample spectra acquired on instruments differing in their setup from tandem-in-space type fragmentation, a comparably higher number of false negative matches (type II errors) were observed by searching the Wiley Registry MSMS.
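
    Neither the MSforID nor the NIST scoring function is reproduced here; the sketch below only illustrates the generic idea of ranking library spectra by an intensity-weighted cosine similarity after m/z alignment, with made-up spectra and an assumed tolerance.

```python
# Generic illustration only (not the MSforID or NIST MSMS algorithm): rank
# library entries by a simple intensity-weighted cosine (dot-product) similarity
# after pairing fragment m/z values within a tolerance. All spectra are made up.
import math

def match_peaks(query, reference, tol=0.02):
    """Pair each query peak with the nearest reference peak within tol (m/z units)."""
    pairs = []
    for mz_q, int_q in query:
        best = min(reference, key=lambda peak: abs(peak[0] - mz_q))
        if abs(best[0] - mz_q) <= tol:
            pairs.append((int_q, best[1]))
    return pairs

def cosine_similarity(query, reference, tol=0.02):
    pairs = match_peaks(query, reference, tol)
    dot = sum(a * b for a, b in pairs)
    norm_q = math.sqrt(sum(i * i for _, i in query))
    norm_r = math.sqrt(sum(i * i for _, i in reference))
    return dot / (norm_q * norm_r) if norm_q and norm_r else 0.0

query = [(91.05, 100.0), (119.08, 45.0), (147.04, 20.0)]
library = {
    "compound A": [(91.05, 95.0), (119.08, 50.0), (147.04, 18.0)],
    "compound B": [(72.08, 80.0), (105.07, 60.0)],
}
for name, spectrum in library.items():
    print(name, round(cosine_similarity(query, spectrum), 3))
```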

  3. Design and Initial In-Water Testing of Advanced Non-Linear Control Algorithms onto an Unmanned Underwater Vehicle (UUV)

    DTIC Science & Technology

    2007-10-01

    Design and initial in-water testing of advanced non-linear control algorithms onto an Unmanned Underwater Vehicle (UUV) Vladimir Djapic Unmanned...attitude or translating in a direction different from that of the surface. Non-linear controller that compensates for non-linear forces (such as drag...loop" non-linear controller (outputs the appropriate thrust values) is the same for all mission scenarios while an appropriate "outer-loop" non

  4. Open framework for objective evaluation of crater detection algorithms with first test-field subsystem based on MOLA data

    NASA Astrophysics Data System (ADS)

    Salamunićcar, G.; Lončarić, S.

    2008-07-01

    Crater Detection Algorithm (CDA) applications range from estimation of lunar/planetary surface age to autonomous landing on planets and asteroids and advanced statistical analyses. A large amount of work on CDAs has already been published. However, problems arise when evaluation results of some new CDA have to be compared with already published evaluation results. The problem is that different authors use different test-fields, different Ground-Truth (GT) catalogues, and even different methodologies for evaluation of their CDAs. Re-implementation of already published CDAs or their evaluation environments is a time-consuming and impractical solution to this problem. In addition, implementation details are often insufficiently described in publications. As a result, there is a need in the research community to develop a framework for objective evaluation of CDAs. A scientific question is how CDAs should be evaluated so that the results are easily and reliably comparable. In an attempt to solve this issue, we first analyzed previously published work on CDAs. In this paper, we propose a framework for solution of the problem of objective CDA evaluation. The framework includes: (1) a definition of the measure for differences between craters; (2) test-field topography based on the 1/64° MOLA data; (3) the GT catalogue wherein each of 17,582 craters is aligned with MOLA data and confirmed with catalogues by N.G. Barlow et al. and J.F. Rodionova et al.; (4) selection of methodology for training and testing; and (5) Free-response Receiver Operating Characteristic (F-ROC) curves as a way to measure CDA performance. The handling of possible improvements of the framework in the future is additionally addressed as a part of discussion of results. Possible extensions with additional test-field subsystems based on visual images, data sets for other planets, evaluation methodologies for CDAs developed for different purposes than cataloguing of craters, are proposed as well. The goal of

  5. Development of gemifloxacin in vitro susceptibility test methods for gonococci including quality control guidelines. The Quality Control Study Group.

    PubMed

    Jones, R N; Erwin, M E

    2000-07-01

    Gemifloxacin (formerly SB-265805 or LB20304a) is a new fluoronaphthyridone with documented activity against Gram-positive and -negative organisms. The activity of gemifloxacin was tested against 150 Neisseria gonorrhoeae strains, using reference agar dilution, standardized disk diffusion, and Etest (AB BIODISK, Solna, Sweden) methods. Gemifloxacin was very potent against ciprofloxacin (CIPRO)-susceptible strains (MIC90, 0.008 microg/ml) but was significantly less active against the CIPRO-resistant gonococci (MIC90, 0.12 microg/ml). Etest and reference agar dilution MIC results showed excellent correlation (r = 0.96), and 98.7% of MICs were within +/- one log2 dilution. Agar dilution MICs were also compared to zone diameters obtained using gemifloxacin 5-microg disks, and complete intermethod categorical agreement (100%) was achieved applying breakpoints proposed as follows: < or =0.25 microg/ml (zone, > or =25 mm) for susceptible and > or =1 microg/ml (zone, < or =21 mm) for resistant. Gemifloxacin MIC and disk diffusion test quality control (QC) ranges were established for N. gonorrhoeae ATCC 49226. Data were collected from > or = seven laboratories, three GC agar medium lots for both agar MICs and disk methods, and two lots each of the 5- and 10-microg disks. The proposed MIC QC range was 0.002 to 0.016 microg/ml and the calculated mm zone ranges (median +/- 0.5x average mm range) for both disks were similar, but contained only 88.1 to 91.9% of participant results. To achieve the acceptable > or = 95% of all study results within range, limits of 43 to 54 mm (5-microg disks) were necessary. The excellent broad-spectrum activity and a low reported adverse effects profile of gemifloxacin show a potential for treatment of fluoroquinolone-resistant gonorrhea.

  6. A platform for testing and comparing of real-time decision-support algorithms in mobile environments.

    PubMed

    Khitrov, Maxim Y; Rutishauser, Matthew; Montgomery, Kevin; Reisner, Andrew T; Reifman, Jaques

    2009-01-01

    The unavailability of a flexible system for real-time testing of decision-support algorithms in a pre-hospital clinical setting has limited their use. In this study, we describe a plug-and-play platform for real-time testing of decision-support algorithms during the transport of trauma casualties en route to a hospital. The platform integrates a standard-of-care vital-signs monitor, which collects numeric and waveform physiologic time-series data, with a rugged ultramobile personal computer. The computer time-stamps and stores data received from the monitor, and performs analysis on the collected data in real-time. Prior to field deployment, we assessed the performance of each component of the platform by using an emulator to simulate a number of possible fault scenarios that could be encountered in the field. Initial testing with the emulator allowed us to identify and fix software inconsistencies and showed that the platform can support a quick development cycle for real-time decision-support algorithms.

  7. Multi-color space threshold segmentation and self-learning k-NN algorithm for surge test EUT status identification

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Liu, Gui-xiong

    2016-09-01

    The identification of targets varies in different surge tests. A multi-color space threshold segmentation and self-learning k-nearest neighbor (k-NN) algorithm for equipment-under-test (EUT) status identification was proposed, because the previous approach, which used feature matching to identify equipment status, had to train new patterns every time before testing. First, the color space (L*a*b*, hue saturation lightness (HSL), or hue saturation value (HSV)) used for segmentation was selected according to the high-luminance-point ratio and white-luminance-point ratio of the image. Second, the unknown-class sample Sr was classified by the k-NN algorithm with training set Tz according to the feature vector, which was formed from the number of pixels, eccentricity ratio, compactness ratio, and Euler's number. Last, when the classification confidence coefficient equaled k, Sr was added as one sample of the pre-training set Tz'. The training set Tz was expanded to Tz+1 from Tz' once Tz' was saturated. In nine series of illuminant, indicator light, screen, and disturbance samples (a total of 21600 frames), the algorithm had a 98.65% identification accuracy and, by itself, selected five groups of samples to enlarge the training set from T0 to T5.
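
    A simplified sketch of the self-learning idea (not the authors' implementation) is shown below: samples classified with a unanimous k-NN vote are folded back into the training set; the feature vectors are 4-D random stand-ins.

```python
# Simplified sketch of the self-learning idea described above (not the authors'
# implementation): classify a feature vector with k-NN and, when all k nearest
# neighbours agree, add the newly labelled sample back into the training set so
# later frames benefit from it. Feature vectors here are 4-D stand-ins for
# (pixel count, eccentricity ratio, compactness ratio, Euler number).
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, x, k=3):
    order = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
    votes = Counter(train_y[i] for i in order)
    label, count = votes.most_common(1)[0]
    return label, count                     # count == k means a unanimous vote

rng = np.random.default_rng(3)
train_X = np.vstack([rng.normal(0, 0.3, (20, 4)), rng.normal(2, 0.3, (20, 4))])
train_y = np.array([0] * 20 + [1] * 20)

for x in rng.normal(2, 0.3, (5, 4)):        # stream of new samples
    label, confidence = knn_classify(train_X, train_y, x, k=3)
    if confidence == 3:                     # unanimous -> self-learn this sample
        train_X = np.vstack([train_X, x])
        train_y = np.append(train_y, label)
    print("label:", label, "confidence:", confidence, "training size:", len(train_y))
```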

  8. Effect of yoga practices on pulmonary function tests including transfer factor of lung for carbon monoxide (TLCO) in asthma patients.

    PubMed

    Singh, Savita; Soni, Ritu; Singh, K P; Tandon, O P

    2012-01-01

    Prana is the energy; when the self-energizing force embraces the body with extension, expansion, and control, it is pranayama. It may affect the milieu at the bronchioles and the alveoli, particularly at the alveolo-capillary membrane, to facilitate diffusion and transport of gases. It may also increase oxygenation at the tissue level. The aim of our study was to compare pulmonary functions and diffusion capacity in patients of bronchial asthma before and after a yogic intervention of 2 months. Sixty stable asthmatic patients were randomized into two groups, i.e., group 1 (yoga training group) and group 2 (control group). Each group included thirty patients. Lung functions were recorded for all patients at baseline and then after two months. Group 1 subjects showed a statistically significant improvement (P<0.001) in transfer factor of the lung for carbon monoxide (TLCO), forced vital capacity (FVC), forced expiratory volume in the first second (FEV1), peak expiratory flow rate (PEFR), maximum voluntary ventilation (MVV), and slow vital capacity (SVC) after yoga practice. Quality of life also increased significantly. It was concluded that pranayama, yoga breathing, and stretching postures are used to increase respiratory stamina, relax the chest muscles, expand the lungs, raise energy levels, and calm the body.

  9. Space shuttle orbiter avionics software: Post review report for the entry FACI (First Article Configuration Inspection). [including orbital flight tests integrated system

    NASA Technical Reports Server (NTRS)

    Markos, H.

    1978-01-01

    Status of the computer programs dealing with space shuttle orbiter avionics is reported. Specific topics covered include: delivery status; SSW software; SM software; DL software; GNC software; level 3/4 testing; level 5 testing; performance analysis, SDL readiness for entry first article configuration inspection; and verification assessment.

  10. Comparison of GenomEra C. difficile and Xpert C. difficile as confirmatory tests in a multistep algorithm for diagnosis of Clostridium difficile infection.

    PubMed

    Alcalá, Luis; Reigadas, Elena; Marín, Mercedes; Fernández-Chico, Antonia; Catalán, Pilar; Bouza, Emilio

    2015-01-01

    We compared two multistep diagnostic algorithms based on C. Diff Quik Chek Complete and, as confirmatory tests, GenomEra C. difficile and Xpert C. difficile. The sensitivity, specificity, positive predictive value, and negative predictive value were 87.2%, 99.7%, 97.1%, and 98.3%, respectively, for the GenomEra-based algorithm and 89.7%, 99.4%, 95.5%, and 98.6%, respectively, for the Xpert-based algorithm. GenomEra represents an alternative to Xpert as a confirmatory test of a multistep algorithm for Clostridium difficile infection (CDI) diagnosis.
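
    For reference, the sketch below shows how sensitivity, specificity, PPV, and NPV figures of this kind are derived from a 2x2 comparison against a reference standard; the counts are hypothetical and not taken from the study.

```python
# Sketch of how the performance figures quoted above are derived from a 2x2
# comparison against the reference standard; the counts below are hypothetical.
def diagnostic_performance(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# hypothetical confirmatory-test counts (TP, FP, FN, TN) versus the reference method
for name, counts in {"assay 1": (68, 2, 10, 700), "assay 2": (70, 3, 8, 699)}.items():
    stats = diagnostic_performance(*counts)
    print(name, {k: f"{v:.1%}" for k, v in stats.items()})
```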

  11. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... confirm that the test is valid. Operate the engine and sampling system as follows: (i) Engines with lean NOX aftertreatment. For lean-burn engines that depend on aftertreatment to meet the NOX emission...) Engines without lean NOX aftertreatment. For other engines, operate the engine for at least 5...

  12. Using a Multitest Algorithm to Improve the Positive Predictive Value of Rapid HIV Testing and Linkage to HIV Care in Nonclinical HIV Test Sites

    PubMed Central

    Delaney, Kevin P.; Rurangirwa, Jacqueline; Facente, Shelley; Dowling, Teri; Janson, Mike; Knoble, Thomas; Vu, Annie; Hu, Yunyin W.; Kerndt, Peter R.; King, Jan; Scheer, Susan

    2016-01-01

    Background Use of a rapid HIV testing algorithm (RTA) in which all tests are conducted within one client appointment could eliminate off-site confirmatory testing and reduce the number of persons not receiving confirmed results. Methods An RTA was implemented in 9 sites in Los Angeles and San Francisco; results of testing at these sites were compared with 23 sites conducting rapid HIV testing with off-site confirmation. RTA clients with reactive results on more than 1 rapid test were considered HIV+ and immediately referred for HIV care. The positive predictive values (PPVs) of a single rapid HIV test and the RTA were calculated compared with laboratory-based confirmatory testing. A Poisson risk regression model was used to assess the effect of RTA on the proportion of HIV+ persons linked to HIV care within 90 days of a reactive rapid test. Results The PPV of the RTA was 100% compared with 86.4% for a single rapid test. The time between testing and receipt of RTA results was on average 8 days shorter than laboratory-based confirmatory testing. For risk groups other than men who had sex with men, the RTA increased the probability of being in care within 90 days compared with standard testing practice. Conclusions The RTA increased the PPV of rapid testing to 100%, giving providers, clients, and HIV counselors timely information about a client’s HIV-positive serostatus. Use of RTA could reduce loss to follow-up between testing positive and confirmation and increase the proportion of HIV-infected persons receiving HIV care. PMID:26284530

  13. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... discrete-mode testing: Table 1 of § 1048.505 (C2 cycle) lists mode number, engine speed, torque (percent), and weighting factors ... speed terms are defined in 40 CFR part 1065, and the percent torque is relative to the maximum torque at the given speed ... the ramped-modal table lists mode, time in mode (seconds), engine speed, and torque (percent), beginning with mode 1a, steady-state, 119 s, warm idle, 0 percent torque ...

  14. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... § 1048.505, C2 cycle: the table lists mode number, engine speed, torque (percent), and weighting factors, beginning with mode 1, maximum test speed, 25 percent torque, weighting factor 0.06 ... percent torque is relative to the maximum torque at the given engine speed. (ii) The following duty cycle ... the ramped-modal table lists time in mode, engine speed, and torque (percent), beginning with mode 1a, steady-state, 119 s, warm idle, 0 percent torque, followed by mode 1b, a 20 s linear transition ...

  15. Testing the Generalization Efficiency of Oil Slick Classification Algorithm Using Multiple SAR Data for Deepwater Horizon Oil Spill

    NASA Astrophysics Data System (ADS)

    Ozkan, C.; Osmanoglu, B.; Sunar, F.; Staples, G.; Kalkan, K.; Balık Sanlı, F.

    2012-07-01

    Marine oil spills due to releases of crude oil from tankers, offshore platforms, drilling rigs and wells, etc. seriously affect fragile marine and coastal ecosystems and cause political and environmental concern. A catastrophic explosion and subsequent fire on the Deepwater Horizon oil platform caused the platform to burn and sink, and oil leaked continuously between April 20th and July 15th of 2010, releasing about 780,000 m³ of crude oil into the Gulf of Mexico. Today, space-borne SAR sensors are extensively used for the detection of oil spills in the marine environment, as they are independent of sunlight, not affected by cloudiness, and more cost-effective than air patrolling due to covering large areas. In this study, the generalization extent of an object-based classification algorithm was tested for oil spill detection using multiple SAR imagery data. Among many geometrical, physical and textural features, some more distinctive ones were selected to distinguish oil and look-alike objects from each other. The tested classifier was constructed from a Multilayer Perceptron Artificial Neural Network trained by ABC, LM and BP optimization algorithms. The training data for the classifier were derived from SAR data covering the oil spill that originated off Lebanon in 2007. The classifier was then applied to the Deepwater Horizon oil spill data in the Gulf of Mexico on RADARSAT-2 and ALOS PALSAR images to demonstrate the generalization efficiency of the oil slick classification algorithm.

  16. Testing a polarimetric cloud imager aboard research vessel Polarstern: comparison of color-based and polarimetric cloud detection algorithms.

    PubMed

    Barta, András; Horváth, Gábor; Horváth, Ákos; Egri, Ádám; Blahó, Miklós; Barta, Pál; Bumke, Karl; Macke, Andreas

    2015-02-10

    Cloud cover estimation is an important part of routine meteorological observations. Cloudiness measurements are used in climate model evaluation, nowcasting solar radiation, parameterizing the fluctuations of sea surface insolation, and building energy transfer models of the atmosphere. Currently, the most widespread ground-based method to measure cloudiness is based on analyzing the unpolarized intensity and color distribution of the sky obtained by digital cameras. As a new approach, we propose that cloud detection can be aided by the additional use of skylight polarization measured by 180° field-of-view imaging polarimetry. In the fall of 2010, we tested such a novel polarimetric cloud detector aboard the research vessel Polarstern during expedition ANT-XXVII/1. One of our goals was to test the durability of the measurement hardware under the extreme conditions of a trans-Atlantic cruise. Here, we describe the instrument and compare the results of several different cloud detection algorithms, some conventional and some newly developed. We also discuss the weaknesses of our design and its possible improvements. The comparison with cloud detection algorithms developed for traditional nonpolarimetric full-sky imagers allowed us to evaluate the added value of polarimetric quantities. We found that (1) neural-network-based algorithms perform the best among the investigated schemes and (2) global information (the mean and variance of intensity), nonoptical information (e.g., sun-view geometry), and polarimetric information (e.g., the degree of polarization) improve the accuracy of cloud detection, albeit slightly.

  17. NWRA AVOSS Wake Vortex Prediction Algorithm. 3.1.1

    NASA Technical Reports Server (NTRS)

    Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)

    2002-01-01

    This report provides a detailed description of the wake vortex prediction algorithm used in the Demonstration Version of NASA's Aircraft Vortex Spacing System (AVOSS). The report includes all equations used in the algorithm, an explanation of how to run the algorithm, and a discussion of how the source code for the algorithm is organized. Several appendices contain important supplementary information, including suggestions for enhancing the algorithm and results from test cases.

  18. Comments on "Including the effects of temperature-dependent opacities in the implicit Monte Carlo algorithm" by N.A. Gentile [J. Comput. Phys. 230 (2011) 5100-5114

    NASA Astrophysics Data System (ADS)

    Ghosh, Karabi

    2017-02-01

    We briefly comment on a paper by N.A. Gentile [J. Comput. Phys. 230 (2011) 5100-5114] in which the Fleck factor has been modified to include the effects of temperature-dependent opacities in the implicit Monte Carlo algorithm developed by Fleck and Cummings [1,2]. Instead of the Fleck factor, f = 1 / (1 + βcΔtσP), the author derived the modified Fleck factor g = 1 / (1 + βcΔtσP − min[σP′(aTr⁴ − aT⁴)cΔt/(ρCV), 0]) to be used in the Implicit Monte Carlo (IMC) algorithm in order to obtain more accurate solutions with much larger time steps. Here β = 4aT³/(ρCV), σP is the Planck opacity, and σP′ = dσP/dT is the derivative of the Planck opacity with respect to the material temperature.
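
    A small numerical illustration of the two factors quoted above follows; all physical inputs are arbitrary assumed values in consistent units, chosen only to show that g differs from f when the Planck opacity varies with temperature and the radiation and material temperatures differ.

```python
# Numerical illustration of the Fleck factor f and the modified factor g quoted
# above. All physical inputs are arbitrary assumed values (consistent units are
# presumed); this is not a radiative-transfer calculation.
a = 7.5657e-15        # radiation constant, assumed CGS-like value
c = 2.9979e10         # speed of light (cm/s)
T, Tr = 1.0e6, 1.2e6  # material and radiation temperatures (K), assumed
rho, Cv = 1.0, 1.0e12 # density and specific heat, assumed
sigma_P = 100.0       # Planck opacity, assumed
dsigma_dT = -1.0e-4   # derivative of Planck opacity w.r.t. T, assumed
dt = 1.0e-8           # time step (s), assumed

beta = 4.0 * a * T**3 / (rho * Cv)
f = 1.0 / (1.0 + beta * c * dt * sigma_P)
correction = min(dsigma_dT * (a * Tr**4 - a * T**4) * c * dt / (rho * Cv), 0.0)
g = 1.0 / (1.0 + beta * c * dt * sigma_P - correction)
print(f"Fleck factor f = {f:.6f}, modified factor g = {g:.6f}")
```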

  19. An algorithm for circular test and improved optical configuration by two-dimensional (2D) laser heterodyne interferometer.

    PubMed

    Tang, Shanzhi; Yu, Shengrui; Han, Qingfu; Li, Ming; Wang, Zhao

    2016-09-01

    Circular test is an important tactic for assessing motion accuracy in many fields, especially for machine tools and coordinate measuring machines. Setup errors arise from directly centring the measuring instrument, both for the contact double ball bar and for existing non-contact methods. To solve this problem, an algorithm for the circular test using function construction based on matrix operations is proposed, which is used not only to solve for the radial deviation (F) but also to obtain two other evaluation parameters, especially the circular hysteresis (H). Furthermore, an improved optical configuration with a single laser is presented based on a 2D laser heterodyne interferometer. Compared with the existing non-contact method, it provides purer homogeneity of the laser sources for 2D displacement sensing in advanced metrology. The algorithm and modeling are both illustrated, and an error budget is also derived. Finally, to validate them, test experiments on motion paths are carried out on a gantry machining center; the comparative test results support the proposal.
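
    The authors' matrix-based function-construction algorithm is not reproduced here; the sketch below uses a standard linear least-squares (Kasa) circle fit on a simulated, assumed test path merely to illustrate how a radial-deviation figure can be extracted from measured points on a nominally circular path.

```python
# Not the authors' algorithm: a standard linear least-squares (Kasa) circle fit,
# shown only to illustrate how a radial-deviation figure can be extracted from
# measured points on a nominally circular path. The simulated path (radius,
# centre offset, noise) is entirely assumed.
import numpy as np

rng = np.random.default_rng(4)
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
r_nominal, cx, cy = 100.0, 0.05, -0.03          # mm, assumed test-path values
x = cx + r_nominal * np.cos(theta) + rng.normal(scale=0.002, size=theta.size)
y = cy + r_nominal * np.sin(theta) + rng.normal(scale=0.002, size=theta.size)

# Kasa fit: solve [2x 2y 1] [a b c]^T = x^2 + y^2 in the least-squares sense,
# where (a, b) is the fitted centre and radius = sqrt(c + a^2 + b^2).
A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
(a, b, cc), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
radius = np.sqrt(cc + a**2 + b**2)

radial_error = np.hypot(x - a, y - b) - radius   # deviation of each point from the fit
print(f"fitted centre ({a:.4f}, {b:.4f}) mm, radius {radius:.4f} mm")
print(f"radial deviation range ~ {radial_error.max() - radial_error.min():.4f} mm")
```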

  20. An algorithm for circular test and improved optical configuration by two-dimensional (2D) laser heterodyne interferometer

    NASA Astrophysics Data System (ADS)

    Tang, Shanzhi; Yu, Shengrui; Han, Qingfu; Li, Ming; Wang, Zhao

    2016-09-01

    Circular test is an important tactic for assessing motion accuracy in many fields, especially for machine tools and coordinate measuring machines. Setup errors arise from directly centring the measuring instrument, both for the contact double ball bar and for existing non-contact methods. To solve this problem, an algorithm for the circular test using function construction based on matrix operations is proposed, which is used not only to solve for the radial deviation (F) but also to obtain two other evaluation parameters, especially the circular hysteresis (H). Furthermore, an improved optical configuration with a single laser is presented based on a 2D laser heterodyne interferometer. Compared with the existing non-contact method, it provides purer homogeneity of the laser sources for 2D displacement sensing in advanced metrology. The algorithm and modeling are both illustrated, and an error budget is also derived. Finally, to validate them, test experiments on motion paths are carried out on a gantry machining center; the comparative test results support the proposal.

  1. A Parametric Testing Environment for Finding the Operational Envelopes of Simulated Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Barrett, Anthony

    2011-01-01

    The Problem: As NASA missions become ever more complex and subsystems become ever more complicated, testing for correctness becomes progressively more difficult. Exhaustive testing is usually impractical, so how does one select a smaller set of test cases that is effective at finding/analyzing bugs? Solution: (1) Let an analyst pose test-space coverage requirements and then refine these requirements to focus on regions of interest in response to visualized test results. (2) Instead of validating correctness around set points (with Monte Carlo analysis), find and characterize the margins of the performance envelope where the system starts to fail.

  2. Parkinson’s Disease and the Stroop Color Word Test: Processing Speed and Interference Algorithms

    PubMed Central

    Sisco, S.; Slonena, E.; Okun, M.S.; Bowers, D.; Price, C.C.

    2016-01-01

    OBJECTIVE Processing speed alters the traditional Stroop calculations of interference. Consequently, alternative algorithms for calculating Stroop interference have been introduced to control for processing speed, and have done so in a multiple sclerosis sample. This study examined how these processing speed correction algorithms change interference scores for individuals with idiopathic Parkinson's Disease (PD, n = 58) and non-PD peers (n = 68). METHOD Linear regressions controlling for demographics predicted group (PD vs. non-PD) differences for Jensen's, Golden's, relative, ratio, and residualized interference scores. To examine convergent and divergent validity, interference scores were correlated to standardized measures of processing speed and executive function. RESULTS PD vs. non-PD differences were found for Jensen's interference score, but not for Golden's score or the relative, ratio, and residualized interference scores. Jensen's score correlated significantly with standardized processing speed but not executive function measures. Relative, ratio and residualized scores correlated with executive function but not processing speed measures. Golden's score did not correlate with any other standardized measures. CONCLUSIONS The relative, ratio, and residualized scores were comparable for measuring Stroop interference in processing speed-impaired populations. Overall, the ratio interference score may be the most useful calculation method to control for processing speed in this population. PMID:27264121
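
    The exact algorithms evaluated in the article are not reproduced here; the sketch below computes several commonly used textbook Stroop interference formulations from invented raw scores, only as a hedged illustration of how such scores differ (the residualized score requires a regression across a sample and is omitted).

```python
# Hedged illustration: commonly used Stroop interference formulations computed
# from raw word-reading (W), color-naming (C), and color-word (CW) scores.
# These are textbook versions and may not match the exact algorithms evaluated
# in the article; the three scores below are invented example values.
def stroop_scores(w, c, cw):
    predicted_cw = (w * c) / (w + c)            # Golden-style predicted color-word score
    return {
        "difference (Jensen-style)": c - cw,    # simple difference from color naming
        "Golden (CW - predicted)": cw - predicted_cw,
        "relative": (c - cw) / c,               # difference scaled by color naming
        "ratio": cw / c,
    }

example = {"w": 95, "c": 70, "cw": 38}          # items completed in 45 s (invented)
for name, value in stroop_scores(**example).items():
    print(f"{name:>26}: {value:.3f}")
```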

  3. Using Lagrangian-based process studies to test satellite algorithms of vertical carbon flux in the eastern North Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Stukel, M. R.; Kahru, M.; Benitez-Nelson, C. R.; Décima, M.; Goericke, R.; Landry, M. R.; Ohman, M. D.

    2015-11-01

    The biological carbon pump is responsible for the transport of ~5-20 Pg C yr⁻¹ from the surface into the deep ocean, but its variability is poorly understood due to an incomplete mechanistic understanding of the complex underlying planktonic processes. In fact, algorithms designed to estimate carbon export from satellite products incorporate fundamentally different assumptions about the relationships between plankton biomass, productivity, and export efficiency. To test the alternate formulations of export efficiency in remote-sensing algorithms formulated by Dunne et al. (2005), Laws et al. (2011), Henson et al. (2011), and Siegel et al. (2014), we have compiled in situ measurements (temperature, chlorophyll, primary production, phytoplankton biomass and size structure, grazing rates, net chlorophyll change, and carbon export) made during Lagrangian process studies on seven cruises in the California Current Ecosystem and Costa Rica Dome. A food-web based approach formulated by Siegel et al. (2014) performs as well as or better than other empirical formulations, while simultaneously providing reasonable estimates of protozoan and mesozooplankton grazing rates. By tuning the Siegel et al. (2014) algorithm to match in situ grazing rates more accurately, we also obtain better agreement with in situ carbon export measurements. Adequate representations of food-web relationships and grazing dynamics are therefore crucial to improving the accuracy of export predictions made from satellite-derived products. Nevertheless, considerable unexplained variance in export remains and must be explored before we can reliably use remote sensing products to assess the impact of climate change on biologically mediated carbon sequestration.

  4. Characterizing and hindcasting ripple bedform dynamics: Field test of non-equilibrium models utilizing a fingerprint algorithm

    NASA Astrophysics Data System (ADS)

    DuVal, Carter B.; Trembanis, Arthur C.; Skarke, Adam

    2016-03-01

    Ripple bedform response to near bed forcing has been found to be asynchronous with rapidly changing hydrodynamic conditions. Recent models have attempted to account for this time variance through the introduction of a time offset between hydrodynamic forcing and seabed response with varying success. While focusing on temporal ripple evolution, spatial ripple variation has been partly neglected. With the fingerprint algorithm ripple bedform parameterization technique, spatial variation can be quickly and precisely characterized, and as such, this method is particularly useful for evaluation of ripple model spatio-temporal validity. Using time-series hydrodynamic data and synoptic acoustic imagery collected at an inner continental shelf site, this study compares an adapted time-varying ripple geometric model to observed field observations in light of the fingerprint algorithm results. Multiple equilibrium ripple predictors are tested within the time-varying model, with the algorithm results serving as the baseline geometric values. Results indicate that ripple bedforms, in the presence of rapidly changing high-energy conditions, reorganize at a slower rate than predicted by the models. Relict ripples were found to be near peak-forcing wavelengths after rapidly decaying storm events, and still present after months of sub-critical flow conditions.

  5. Model-based testing with UML applied to a roaming algorithm for bluetooth devices.

    PubMed

    Dai, Zhen Ru; Grabowski, Jens; Neukirchen, Helmut; Pals, Holger

    2004-11-01

    In late 2001, the Object Management Group issued a Request for Proposal to develop a testing profile for UML 2.0. In June 2003, the work on the UML 2.0 Testing Profile was finally adopted by the OMG. Since March 2004, it has become an official standard of the OMG. The UML 2.0 Testing Profile provides support for UML based model-driven testing. This paper introduces a methodology on how to use the testing profile in order to modify and extend an existing UML design model for test issues. The application of the methodology will be explained by applying it to an existing UML Model for a Bluetooth device.

  6. An MDI (Minimum Discrimination Information) Model and an Algorithm for Composite Hypotheses Testing and Estimation in Marketing. Revision 2.

    DTIC Science & Technology

    1982-09-01

    Research Report CCS 397, "An MDI Model and an Algorithm for Composite Hypotheses Testing and Estimation in Marketing," by A. Charnes, W. W. Cooper, D. B. Learner, and F. Y. Phillips, Center for Cybernetic Studies, The University of Texas, Austin, Texas 78712. Original: July 1981; revised: September 1981; second revision: September 1982.

  7. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8 and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reverse faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier

  8. Testing the robustness of the genetic algorithm on the floating building block representation

    SciTech Connect

    Lindsay, R.K.; Wu, A.S.

    1996-12-31

    Recent studies on a floating building block representation for the genetic algorithm (GA) suggest that there are many advantages to using the floating representation. This paper investigates the behavior of the GA on floating representation problems in response to three different types of pressures: (1) a reduction in the amount of genetic material available to the GA during the problem-solving process, (2) functions which have negative-valued building blocks, and (3) randomizing non-coding segments. Results indicate that the GA's performance on floating representation problems is very robust. Significant reductions in genetic material (genome length) may be made with a relatively small decrease in performance. The GA can effectively solve problems with negative building blocks. Randomizing non-coding segments appears to improve rather than harm GA performance.

  9. Surface evaluation with Ronchi test by using Malacara formula, genetic algorithms, and cubic splines

    NASA Astrophysics Data System (ADS)

    Cordero-Dávila, Alberto; González-García, Jorge

    2010-08-01

    In the manufacturing process of an optical surface with rotational symmetry, the ideal ronchigram is simulated and compared with the experimental ronchigram. From this comparison the technician, based on his or her experience, estimates the error on the surface. Quantitatively, the error on the surface can be described by a polynomial e(ρ2), whose coefficients can be estimated from data of the ronchigrams (real and ideal) by solving a system of nonlinear differential equations related to the Malacara formula for the transversal aberration. To avoid the problems inherent in the use of polynomials, it is proposed to describe the errors on the surface by means of cubic splines. The coefficients of each spline are estimated from a discrete set of errors (ρi,ei), and these are evaluated by means of genetic algorithms so as to reproduce the experimental ronchigram starting from the ideal one.

  10. Synthetic tests of passive microwave brightness temperature assimilation over snow covered land using machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Forman, B. A.

    2015-12-01

    A novel data assimilation framework is evaluated that assimilates passive microwave (PMW) brightness temperature (Tb) observations into an advanced land surface model for the purpose of improving snow depth and snow water equivalent (SWE) estimates across regional- and continental-scales. The multifrequency, multipolarization framework employs machine learning algorithms to predict PMW Tb as a function of land surface model state information and subsequently merges the predicted PMW Tb with observed PMW Tb from the Advanced Microwave Scanning Radiometer (AMSR-E). The merging procedure is predicated on conditional probabilities computed within a Bayesian statistical framework using either an Ensemble Kalman Filter (EnKF) or an Ensemble Kalman Smoother (EnKS). The data assimilation routine produces a conditioned (updated) estimate of modeled SWE that is more accurate and contains less uncertainty than the model without assimilation. A synthetic case study is presented for select locations in North America that compares model results with and without assimilation against synthetic observations of snow depth and SWE. It is shown that the data assimilation framework improves modeled estimates of snow depth and SWE during both the accumulation and ablation phases of the snow season. Further, it is demonstrated that the EnKS outperforms the EnKF implementation due to its ability to better modulate high frequency noise into the conditioned estimates. The overarching findings from this study demonstrate the feasibility of machine learning algorithms for use as an observation model operator within a data assimilation framework in order to improve model estimates of snow depth and SWE across regional- and continental-scales.
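
    The EnKF analysis step referred to above can be sketched in a few lines. The code below is an assumed, generic perturbed-observation EnKF update (state = [SWE, snow depth], one brightness-temperature observation); the linear Tb operator and all numbers are placeholders, not the study's machine-learning model or data:

```python
import numpy as np

def enkf_update(states, predicted_tb, tb_obs, obs_err_std, seed=0):
    """states: (n_ens, n_state) ensemble; predicted_tb: (n_ens,) Tb predicted for
    each member (stands in for the machine-learning observation operator)."""
    rng = np.random.default_rng(seed)
    n_ens = states.shape[0]
    X = states - states.mean(axis=0)              # state anomalies
    Y = predicted_tb - predicted_tb.mean()        # predicted-observation anomalies
    cov_xy = X.T @ Y / (n_ens - 1)
    var_yy = Y @ Y / (n_ens - 1) + obs_err_std**2
    gain = cov_xy / var_yy                        # Kalman gain for one observation
    obs_perturbed = tb_obs + rng.normal(0.0, obs_err_std, n_ens)
    return states + np.outer(obs_perturbed - predicted_tb, gain)

# Toy usage: 30 members, state = [SWE (mm), snow depth (m)], one Tb (K)
rng = np.random.default_rng(1)
ens = np.column_stack([rng.normal(120, 20, 30), rng.normal(0.5, 0.08, 30)])
tb_pred = 250.0 - 0.2 * ens[:, 0]                 # hypothetical Tb operator
updated = enkf_update(ens, tb_pred, tb_obs=225.0, obs_err_std=2.0)
print(updated.mean(axis=0))                       # conditioned ensemble-mean SWE/depth
```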

  11. Testing the portal imager GLAaS algorithm for machine quality assurance

    PubMed Central

    Nicolini, G; Vanetti, E; Clivio, A; Fogliata, A; Boka, G; Cozzi, L

    2008-01-01

    Background To report on enhancements introduced in the GLAaS calibration method, which converts raw portal imager images into absolute dose matrices, and on the application of GLAaS to routine radiation tests in linac quality assurance programmes. Methods Two characteristic effects limiting the general applicability of portal-imaging-based dosimetry are the over-flattening of images (eliminating the "horns" and "holes" in the beam profiles induced by the presence of flattening filters) and the excess of backscattered radiation originating from the detector robotic arm supports. These two effects were corrected for in the new version of the GLAaS formalism, and results are presented to prove the improvements for different beams, detectors and support arms. GLAaS was also tested for independence from dose rate (fundamental for measuring dynamic wedges). With the new corrections, it is possible to use GLAaS to perform standard tasks of linac quality assurance. Data were acquired to analyse open and wedged fields (mechanical and dynamic) in terms of output factors, MU/Gy, wedge factors, profile penumbrae, symmetry and homogeneity. In addition, 2D gamma evaluation was also applied to the measurements to expand the standard QA methods. GLAaS-based data were compared against calculations on the treatment planning system (the Varian Eclipse) and against ion chamber measurements as a consolidated benchmark. Measurements were performed mostly on 6 MV beams from Varian linacs. Detectors were the PV-as500/IAS2 and the PV-as1000/IAS3 equipped with either the robotic R- or Exact- arms. Results Corrections for flattening filter and arm backscattering were successfully tested. Percentage differences between PV-GLAaS measurements and Eclipse-calculated relative doses at 80% of the field size, for square and rectangular fields larger than 5 × 5 cm2, showed a maximum range of variation of -1.4%, +1.7% with a mean variation of <0.5%. For output factors, average percentage

  12. SU-E-T-347: Validation of the Condensed History Algorithm of Geant4 Using the Fano Test

    SciTech Connect

    Lee, H; Mathis, M; Sawakuchi, G

    2014-06-01

    Purpose: To validate the condensed history algorithm and physics of the Geant4 Monte Carlo toolkit for simulations of ionization chambers (ICs). This study is the first step to validate Geant4 for calculations of photon beam quality correction factors under the presence of a strong magnetic field for magnetic resonance guided linac system applications. Methods: The electron transport and boundary crossing algorithms of Geant4 version 9.6.p02 were tested under Fano conditions using the Geant4 example/application FanoCavity. User-defined parameters of the condensed history and multiple scattering algorithms were investigated under Fano test conditions for three scattering models (physics lists): G4UrbanMscModel95 (PhysListEmStandard-option3), G4GoudsmitSaundersonMsc (PhysListEmStandard-GS), and G4WentzelVIModel/G4CoulombScattering (PhysListEmStandard-WVI). Simulations were conducted using monoenergetic photon beams, ranging from 0.5 to 7 MeV and emphasizing energies from 0.8 to 3 MeV. Results: The GS and WVI physics lists provided consistent Fano test results (within ±0.5%) for maximum step sizes under 0.01 mm at 1.25 MeV, with improved performance at 3 MeV (within ±0.25%). The option3 physics list provided consistent Fano test results (within ±0.5%) for maximum step sizes above 1 mm. Optimal parameters for the option3 physics list were 10 km maximum step size with default values for other user-defined parameters: 0.2 dRoverRange, 0.01 mm final range, 0.04 range factor, 2.5 geometrical factor, and 1 skin. Simulations using the option3 physics list were ∼70 – 100 times faster compared to GS and WVI under optimal parameters. Conclusion: This work indicated that the option3 physics list passes the Fano test within ±0.5% when using a maximum step size of 10 km for energies suitable for IC calculations in a 6 MV spectrum without extensive computational times. Optimal user-defined parameters using the option3 physics list will be used in future IC simulations to

  13. On-sky tests of the CuReD and HWR fast wavefront reconstruction algorithms with CANARY

    NASA Astrophysics Data System (ADS)

    Bitenc, Urban; Basden, Alastair; Bharmal, Nazim Ali; Morris, Tim; Dipper, Nigel; Gendron, Eric; Vidal, Fabrice; Gratadour, Damien; Rousset, Gérard; Myers, Richard

    2015-04-01

    CuReD (Cumulative Reconstructor with domain Decomposition) and HWR (Hierarchical Wavefront Reconstructor) are novel wavefront reconstruction algorithms for the Shack-Hartmann wavefront sensor, used in single-conjugate adaptive optics. For a high-order system they are much faster than the traditional matrix-vector-multiplication method. We have developed three methods for mapping the reconstructed phase into the deformable mirror actuator commands and have tested both reconstructors with the CANARY instrument. We find that the CuReD reconstructor runs stably only if the feedback loop is operated as a leaky integrator, whereas HWR runs stably with the conventional integrator control. Using the CANARY telescope simulator we find that the Strehl ratio (SR) obtained with CuReD is slightly higher than that of the traditional least-squares estimator (LSE). We demonstrate that this is because the CuReD algorithm has a smoothing effect on the output wavefront. The SR of HWR is slightly lower than that of LSE. We have tested both reconstructors extensively on-sky. They perform well, and CuReD achieves an SR similar to that of LSE. We compare the CANARY results with those from a computer simulation and find good agreement between the two.
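
    The leaky-integrator point can be made concrete with a toy control loop. This is an assumed sketch of the distinction the authors describe, not CANARY's real-time code; the gain, leak value and 64-actuator mirror are invented:

```python
import numpy as np

def integrator_step(commands, reconstructed, gain=0.4, leak=0.0):
    """leak=0 is the conventional integrator; 0 < leak < 1 bleeds off the
    accumulated commands each frame (the 'leaky' integrator reported to be
    needed for stable closed-loop operation with CuReD)."""
    return (1.0 - leak) * commands + gain * reconstructed

commands = np.zeros(64)                    # hypothetical 64-actuator deformable mirror
rng = np.random.default_rng(0)
for frame in range(1000):
    recon = rng.normal(size=64)            # stand-in for the CuReD/HWR output
    commands = integrator_step(commands, recon, gain=0.4, leak=0.01)
print(commands[:4])
```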

  14. Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz

    2009-01-01

    This slide presentation reviews the approach to the development of the Global Precipitation Measurement algorithm. This presentation includes information about the responsibilities for the development of the algorithm, and the calibration. Also included is information about the orbit, and the sun angle. The test of the algorithm code will be done with synthetic data generated from the Precipitation Processing System (PPS).

  15. Parasitological diagnosis combining an internally controlled real-time PCR assay for the detection of four protozoa in stool samples with a testing algorithm for microscopy.

    PubMed

    Bruijnesteijn van Coppenraet, L E S; Wallinga, J A; Ruijs, G J H M; Bruins, M J; Verweij, J J

    2009-09-01

    Molecular detection of gastrointestinal protozoa is more sensitive and more specific than microscopy but, to date, has not routinely replaced time-consuming microscopic analysis. Two internally controlled real-time PCR assays for the combined detection of Entamoeba histolytica, Giardia lamblia, Cryptosporidium spp. and Dientamoeba fragilis in single faecal samples were compared with Triple Faeces Test (TFT) microscopy results from 397 patient samples. Additionally, an algorithm for complete parasitological diagnosis was created. Real-time PCR revealed 152 (38.3%) positive cases, 18 of which were double infections: one (0.3%) sample was positive for E. histolytica, 44 (11.1%) samples were positive for G. lamblia, 122 (30.7%) samples were positive for D. fragilis, and three (0.8%) samples were positive for Cryptosporidium. TFT microscopy yielded 96 (24.2%) positive cases, including five double infections: one sample was positive for E. histolytica/Entamoeba dispar, 29 (7.3%) samples were positive for G. lamblia, 69 (17.4%) samples were positive for D. fragilis, and two (0.5%) samples were positive for Cryptosporidium hominis/Cryptosporidium parvum. Retrospective analysis of the clinical patient information of 2887 TFT sets showed that eosinophilia, elevated IgE levels, adoption and travelling to (sub)tropical areas are predisposing factors for infection with non-protozoal gastrointestinal parasites. The proposed diagnostic algorithm includes application of real-time PCR to all samples, with the addition of microscopy on an unpreserved faecal sample in cases of a predisposing factor, or a repeat request for parasitological examination. Application of real-time PCR improved the diagnostic yield by 18%. A single stool sample is sufficient for complete parasitological diagnosis when an algorithm based on clinical information is applied.
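
    The proposed testing algorithm reduces to a simple decision rule, sketched below (function and argument names are illustrative, not from the paper):

```python
def parasitology_workup(predisposing_factor=False, repeat_request=False):
    """Apply real-time PCR to every sample; add microscopy of an unpreserved
    faecal sample only when a predisposing factor (eosinophilia, elevated IgE,
    adoption, travel to (sub)tropical areas) or a repeat request is present."""
    tests = ["real-time PCR: E. histolytica, G. lamblia, Cryptosporidium spp., D. fragilis"]
    if predisposing_factor or repeat_request:
        tests.append("microscopy on unpreserved faecal sample")
    return tests

print(parasitology_workup(predisposing_factor=True))
```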

  16. Evaluation of the Repeatability of the Delta Q Duct Leakage Testing TechniqueIncluding Investigation of Robust Analysis Techniques and Estimates of Weather Induced Uncertainty

    SciTech Connect

    Dickerhoff, Darryl; Walker, Iain

    2008-08-01

    for the pressure station approach. Walker and Dickerhoff also included estimates of DeltaQ test repeatability based on the results of field tests where two houses were tested multiple times. The two houses were quite leaky (20-25 Air Changes per Hour at 50 Pa (0.2 in. water) (ACH50)) and were located in the San Francisco Bay area. One house was tested on a calm day and the other on a very windy day. Results were also presented for two additional houses, tested by other researchers in Minneapolis, MN and Madison, WI, that had very tight envelopes (1.8 and 2.5 ACH50). These tight houses had internal duct systems and were tested without operating the central blower--sometimes referred to as control tests. The standard deviations between the multiple tests for all four houses were found to be about 1% of the envelope air flow at 50 Pa (0.2 in. water) (Q50), which led to the suggestion of this as a rule of thumb for estimating DeltaQ uncertainty. Because DeltaQ is based on measuring envelope air flows, it makes sense for uncertainty to scale with envelope leakage. However, these tests were on a limited data set, and one of the objectives of the current study is to increase the number of tested houses. This study focuses on answering two questions: (1) What is the uncertainty associated with changes in weather (primarily wind) conditions during DeltaQ testing? (2) How can these uncertainties be reduced? The first question addresses issues of repeatability. To study this, five houses were tested as many times as possible over a day. Weather data, including the local wind speed, were recorded on-site. The results from these five houses were combined with the two Bay Area homes from the previous studies. The variability of the tests (represented by the standard deviation) is the repeatability of the test method for that house under the prevailing weather conditions. Because the testing was performed over a day, a wide range of wind speeds was achieved following typical
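
    The repeatability statistic discussed above (standard deviation of repeated DeltaQ results expressed as a fraction of the envelope flow Q50) is straightforward to compute; the numbers below are invented for illustration only:

```python
import numpy as np

delta_q_cfm = np.array([760, 820, 715, 790, 745, 805])   # repeated DeltaQ results, one house
q50_cfm = 5000                                            # envelope air flow at 50 Pa
repeatability = delta_q_cfm.std(ddof=1) / q50_cfm
print(f"repeatability ≈ {100 * repeatability:.1f}% of Q50")   # ≈ 0.8%, near the ~1% rule of thumb
```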

  17. The feasibility test of state-of-the-art face detection algorithms for vehicle occupant detection

    NASA Astrophysics Data System (ADS)

    Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian

    2010-01-01

    Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost imposed by the replacement of airbags. Occupancy detection can also improve passenger comfort, e.g. by activating air-conditioning systems. The most promising development perspectives are seen in optical sensing systems, which have become cheaper and smaller in recent years. The most plausible way to check seat occupancy is the detection of the presence and location of heads, or more precisely, faces. This paper compares the detection performance of the three most commonly used and widely available face detection algorithms: Viola-Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames containing proprietary data collected from 39 persons of both sexes and different ages and body heights, as well as different objects such as bags and rearward/forward facing child restraint systems.
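
    Of the three detectors compared, Viola-Jones is the most readily reproduced. A generic sketch using OpenCV's stock frontal-face cascade, assuming opencv-python is installed (this is not the paper's evaluation harness, and the frame path is hypothetical):

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("cabin_frame.png")            # hypothetical in-vehicle video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)                    # mild help under non-uniform illumination
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(40, 40))
print("seat occupied" if len(faces) else "no face detected")
```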

  18. Modification of the BAX Salmonella test kit to include a hot start functionality (modification of AOAC Official Method 2003.09).

    PubMed

    Wallace, F Morgan; DiCosimo, Deana; Farnum, Andrew; Tice, George; Andaloro, Bridget; Davis, Eugene; Burns, Frank R

    2011-01-01

    In 2010, the BAX System PCR assay for Salmonella was modified to include a hot start functionality designed to keep the reaction enzyme inactive until PCR begins. To validate the assay's Official Methods of Analysis status to include this procedure modification, an evaluation was conducted on four food types that were simultaneously analyzed with the BAX System and either the U.S. Food and Drug Administration's Bacteriological Analytical Manual or the U.S. Department of Agriculture-Food Safety and Inspection Service Microbiology Laboratory Guidebook reference method for detecting Salmonella. Identical performance between the BAX System method and the reference methods was observed. Additionally, lysates were analyzed using both the BAX System Classic and BAX System Q7 instruments with identical results using both platforms for all samples tested. Of the 100 samples analyzed, 34 samples were positive for both the BAX System and reference methods, and 66 samples were negative by both the BAX System and reference methods, demonstrating 100% correlation. No instrument platform variation was observed. Additional inclusivity and exclusivity testing using the modified test kit demonstrated the test kit to be 100% accurate in evaluation of test panels of 352 Salmonella strains and 46 non-Salmonella strains.

  19. Test and evaluation of the HIDEC engine uptrim algorithm. [Highly Integrated Digital Electronic Control for aircraft

    NASA Technical Reports Server (NTRS)

    Ray, R. J.; Myers, L. P.

    1986-01-01

    The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrated engine-airframe control systems. Performance improvements will result from an adaptive engine stall margin mode, a highly integrated mode that uses the airplane flight conditions and the resulting inlet distortion to continuously compute engine stall margin. When there is excessive stall margin, the engine is uptrimmed for more thrust by increasing the engine pressure ratio (EPR). The EPR uptrim logic has been evaluated and implemented in computer simulations. Thrust improvements of over 10 percent are predicted for subsonic flight conditions. The EPR uptrim was successfully demonstrated during engine ground tests. Test results verify model predictions at the conditions tested.

  20. Ground Testing of Prototype Hardware and Processing Algorithms for a Wide Area Space Surveillance System (WASSS)

    DTIC Science & Technology

    2013-09-01

    Observatory (MRO) using the prototype WASSS camera, which has a 4×60 field-of-view, <0.05 resolution, a 2.8 cm2 aperture, and the ability to view... pixels, depending on the location within the field-of-view. Figure 4 compares the resolution during a night sky measurement at MRO with one... [Figure caption: Degradation of resolution of star imagery for the MRO field test (right) compared to imagery from October 2012 (left).] 3. MRO FIELD TEST: A series of

  1. Genetic Algorithm Based Multi-Agent System Applied to Test Generation

    ERIC Educational Resources Information Center

    Meng, Anbo; Ye, Luqing; Roy, Daniel; Padilla, Pierre

    2007-01-01

    Automatic test generating system in distributed computing context is one of the most important links in on-line evaluation system. Although the issue has been argued long since, there is not a perfect solution to it so far. This paper proposed an innovative approach to successfully addressing such issue by the seamless integration of genetic…

  2. 40 CFR 51.357 - Test procedures and standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... invalid test condition, unsafe conditions, fast pass/fail algorithms, or, in the case of the on-board... using approved fast pass or fast fail algorithms and multiple pass/fail algorithms may be used during the test cycle to eliminate false failures. The transient test procedure, including algorithms...

  3. Improved pulse laser ranging algorithm based on high speed sampling

    NASA Astrophysics Data System (ADS)

    Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang

    2016-10-01

    Narrow pulse laser ranging achieves long-range target detection using laser pulses with low-divergence beams. Pulse laser ranging is widely used in the military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm based on high speed sampling is studied. Firstly, theoretical simulation models of the laser emission and the pulse laser ranging algorithm were built and analyzed, and an improved pulse ranging algorithm was developed. This new algorithm combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system was set up to implement the improved algorithm. The laser ranging hardware system includes a laser diode, a laser detector and a high-sample-rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm was implemented in an FPGA chip as a fusion of the matched filter algorithm and the CFD algorithm. Finally, a laser ranging experiment was carried out to test the ranging performance of the improved algorithm against the matched filter algorithm and the CFD algorithm alone, using the laser ranging hardware system. The test results demonstrate that the laser ranging hardware system realized high speed processing and high speed sampling data transmission. The analysis shows that the improved algorithm achieves 0.3 m ranging precision. This meets the expected performance and is consistent with the theoretical simulation.
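
    A minimal sketch of the two ingredients named above, matched filtering followed by constant fraction discrimination, on a simulated return (assumed Gaussian pulse shape, sampling rate and noise level; this is not the authors' FPGA implementation):

```python
import numpy as np

fs, c = 1e9, 3e8                              # assumed 1 GS/s sampling; speed of light
t = np.arange(0, 2e-6, 1 / fs)
pulse = np.exp(-0.5 * ((t - 50e-9) / 10e-9) ** 2)          # assumed emitted pulse
true_tof = 800e-9                                           # simulated round-trip time
echo = 0.2 * np.exp(-0.5 * ((t - 50e-9 - true_tof) / 10e-9) ** 2)
echo += np.random.default_rng(0).normal(0.0, 0.02, t.size)

# 1) Matched filter: cross-correlate the noisy return with the emitted pulse.
corr = np.correlate(echo, pulse, mode="full")
lag = np.argmax(corr) - (pulse.size - 1)                    # delay in samples
print(f"matched-filter range ≈ {0.5 * c * lag / fs:.1f} m (true {0.5 * c * true_tof:.1f} m)")

# 2) CFD on the matched-filter output: an attenuated copy minus a delayed copy
#    crosses zero at an amplitude-independent point near the pulse; interpolating
#    that zero crossing (after calibrating its fixed offset) refines the timing.
d = 5
cfd = 0.5 * corr[:-d] - corr[d:]
```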

  4. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  5. Advanced Transport Delay Compensation Algorithms: Results of Delay Measurement and Piloted Performance Tests

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    This report summarizes the results of delay measurement and piloted performance tests that were conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system in the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of two novel compensators in comparison to the McFarland predictor and the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is fairly superior to the McFarland compensator for short delay and significantly superior for long delay.

  6. The remote sensing of ocean primary productivity - Use of a new data compilation to test satellite algorithms

    NASA Technical Reports Server (NTRS)

    Balch, William; Evans, Robert; Brown, Jim; Feldman, Gene; Mcclain, Charles; Esaias, Wayne

    1992-01-01

    Global pigment and primary productivity algorithms based on a new data compilation of over 12,000 stations occupied mostly in the Northern Hemisphere, from the late 1950s to 1988, were tested. The results showed high variability of the fraction of total pigment contributed by chlorophyll, which is required for subsequent predictions of primary productivity. Two models, which predict pigment concentration normalized to an attenuation length of euphotic depth, were checked against 2,800 vertical profiles of pigments. Phaeopigments consistently showed maxima at about one optical depth below the chlorophyll maxima. CZCS data coincident with the sea truth data were also checked. A regression of satellite-derived pigment vs ship-derived pigment had a coefficient of determination. The satellite underestimated the true pigment concentration in mesotrophic and oligotrophic waters and overestimated the pigment concentration in eutrophic waters. The error in the satellite estimate showed no trends with time between 1978 and 1986.

  7. Testing Nelder-Mead based repulsion algorithms for multiple roots of nonlinear systems via a two-level factorial design of experiments.

    PubMed

    Ramadas, Gisela C V; Rocha, Ana Maria A C; Fernandes, Edite M G P

    2015-01-01

    This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
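
    A simplified sketch of the idea (not the authors' implementation): wrap scipy's Nelder-Mead in a loop, and after each root is found add an erf-shaped repulsion term to the merit function so later runs are pushed toward other basins. The example system and all parameters are invented:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import erf

def F(x):                                    # example system with two roots
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])

def merit(x, roots, beta=10.0):
    base = np.sum(F(x)**2)                   # penalty-type merit of the residuals
    # erf-shaped repulsion: inflate the merit near roots already found
    penalty = sum(1.0 - erf(beta * np.linalg.norm(x - r)) for r in roots)
    return base + penalty

roots, rng = [], np.random.default_rng(0)
for _ in range(20):
    res = minimize(merit, rng.uniform(-2, 2, 2), args=(roots,), method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 5000})
    if np.linalg.norm(F(res.x)) < 1e-6 and all(np.linalg.norm(res.x - r) > 1e-3 for r in roots):
        roots.append(res.x)
print(np.round(roots, 4))                    # typically both roots ±(1/√2, 1/√2)
```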

  8. Traces of dissolved particles, including coccoliths, in the tests of agglutinated foraminifera from the Challenger Deep (10,897 m water depth, western equatorial Pacific)

    NASA Astrophysics Data System (ADS)

    Gooday, A. J.; Uematsu, K.; Kitazato, H.; Toyofuku, T.; Young, J. R.

    2010-02-01

    We examined four multilocular agglutinated foraminiferan tests from the Challenger Deep, the deepest point in the world's oceans and well below the depth at which biogenic and most detrital minerals disappear from the sediment. The specimens represent undescribed species. Three are trochamminaceans in which imprints and other traces of dissolved agglutinated particles are visible in the orange or yellowish organic test lining. In Trochamminacean sp. A, a delicate meshwork of organic cement forms ridges between the grain impressions. The remnants of test particles include organic structures identifiable as moulds of coccoliths produced by the genus Helicosphaera. Their random alignment suggests that they were agglutinated individually rather than as fragments of a coccosphere. Trochamminacean sp. C incorporates discoidal structures with a central hole; these probably represent the proximal sides of isolated distal shields of another coccolith species, possibly Hayaster perplexus. Imprints of planktonic foraminiferan test fragments are also present in both these trochamminaceans. In Trochamminacean sp. B, the test surface is densely pitted with deep, often angular imprints ranging from roughly equidimensional to rod-shaped. The surfaces are either smooth, or have prominent longitudinal striations, probably made by cleavage traces. We presume these imprints represent mineral grains of various types that subsequently dissolved. X-ray microanalyses reveal strong peaks for Ca associated with grain impressions and coccolith remains in Trochamminacean sp. C. Minor peaks for this element are associated with coccolith remains and planktonic foraminiferan imprints in Trochamminacean sp. A. These Ca peaks possibly originate from traces of calcite remaining on the test surfaces. Agglutinated particles, presumably clay minerals, survive only in the fourth specimen (' Textularia' sp.). Here, the final 4-5 chambers comprise a pavement of small, irregularly shaped grains with flat

  9. Observables of a test mass along an inclined orbit in a post-Newtonian-approximated Kerr spacetime including the linear and quadratic spin terms.

    PubMed

    Hergt, Steven; Shah, Abhay; Schäfer, Gerhard

    2013-07-12

    The orbital motion is derived for a nonspinning test mass in the relativistic, gravitational field of a rotationally deformed body not restricted to the equatorial plane or spherical orbit. The gravitational field of the central body is represented by the Kerr metric, expanded to second post-Newtonian order including the linear and quadratic spin terms. The orbital period, the intrinsic periastron advance, and the precession of the orbital plane are derived with the aid of novel canonical variables and action-based methods.

  10. Comparative evaluation of the Minimally-Invasive Karyotyping (MINK) algorithm for non-invasive prenatal testing

    PubMed Central

    Chu, Tianjiao; Shaw, Patricia A.; Yeniterzi, Suveyda; Dunkel, Mary; Rajkovic, Aleksander; Hogge, W. Allen; Bunce, Kimberly D.; Peters, David G.

    2017-01-01

    Minimally Invasive Karyotyping (MINK) was communicated in 2009 as a novel method for the non-invasive detection of fetal copy number anomalies in maternal plasma DNA. The original manuscript illustrated the potential of MINK using a model system in which fragmented genomic DNA obtained from a trisomy 21 male individual was mixed with that of his karyotypically normal mother at dilutions representing fetal fractions found in maternal plasma. Although it has been previously shown that MINK is able to non-invasively detect fetal microdeletions, its utility for aneuploidy detection in maternal plasma has not previously been demonstrated. The current study illustrates the ability of MINK to detect common aneuploidy in early gestation, compares its performance to other published third-party methods (and related software packages) for prenatal aneuploidy detection and evaluates the performance of these methods across a range of sequencing read inputs. Plasma samples were obtained from 416 pregnant women between gestational weeks 8.1 and 34.4. Shotgun DNA sequencing was performed and data were analyzed using MINK, RAPIDR and WISECONDOR. MINK performed with greater accuracy than RAPIDR and WISECONDOR, correctly identifying 60 out of 61 true trisomy cases, and reporting only one false positive in 355 normal pregnancies. Significantly, MINK achieved accurate detection of trisomy 21 using just 2 million aligned input reads, whereas WISECONDOR required 6 million reads and RAPIDR did not achieve complete accuracy at any read input tested. In conclusion, we demonstrate that MINK provides an analysis pipeline for the detection of fetal aneuploidy in samples of maternal plasma DNA. PMID:28306738

  11. Running GCM physics and dynamics on different grids: Algorithm and tests

    NASA Astrophysics Data System (ADS)

    Molod, A.

    2006-12-01

    The major drawback in the use of sigma coordinates in atmospheric GCMs, namely the error in the pressure gradient term near sloping terrain, leaves the use of eta coordinates an important alternative. A central disadvantage of an eta coordinate, the inability to retain fine resolution in the vertical as the surface rises above sea level, is addressed here. An `alternate grid' technique is presented which allows the tendencies of state variables due to the physical parameterizations to be computed on a vertical grid (the `physics grid') which retains fine resolution near the surface, while the remaining terms in the equations of motion are computed using an eta coordinate (the `dynamics grid') with coarser vertical resolution. As a simple test of the technique a set of perpetual equinox experiments using a simplified lower boundary condition with no land and no topography were performed. The results show that for both low and high resolution alternate grid experiments, much of the benefit of increased vertical resolution for the near surface meridional wind (and mass streamfield) can be realized by enhancing the vertical resolution of the `physics grid' in the manner described here. In addition, approximately half of the increase in zonal jet strength seen with increased vertical resolution can be realized using the `alternate grid' technique. A pair of full GCM experiments with realistic lower boundary conditions and topography were also performed. It is concluded that the use of the `alternate grid' approach offers a promising way forward to alleviate a central problem associated with the use of the eta coordinate in atmospheric GCMs.

  12. A synthetic Earth Gravity Model Designed Specifically for Testing Regional Gravimetric Geoid Determination Algorithms

    NASA Astrophysics Data System (ADS)

    Baran, I.; Kuhn, M.; Claessens, S. J.; Featherstone, W. E.; Holmes, S. A.; Vaníček, P.

    2006-04-01

    A synthetic [simulated] Earth gravity model (SEGM) of the geoid, gravity and topography has been constructed over Australia specifically for validating regional gravimetric geoid determination theories, techniques and computer software. This regional high-resolution (1-arc-min by 1-arc-min) Australian SEGM (AusSEGM) is a combined source and effect model. The long-wavelength effect part (up to and including spherical harmonic degree and order 360) is taken from an assumed errorless EGM96 global geopotential model. Using forward modelling via numerical Newtonian integration, the short-wavelength source part is computed from a high-resolution (3-arc-sec by 3-arc-sec) synthetic digital elevation model (SDEM), which is a fractal surface based on the GLOBE v1 DEM. All topographic masses are modelled with a constant mass-density of 2,670 kg/m3. Based on these input data, gravity values on the synthetic topography (on a grid and at arbitrarily distributed discrete points) and consistent geoidal heights at regular 1-arc-min geographical grid nodes have been computed. The precision of the synthetic gravity and geoid data (after a first iteration) is estimated to be better than 30 μGal and 3 mm, respectively, which reduces to 1 μGal and 1 mm after a second iteration. The second iteration accounts for the changes in the geoid due to the superposed synthetic topographic mass distribution. The first iteration of AusSEGM is compared with Australian gravity and GPS-levelling data to verify that it gives a realistic representation of the Earth’s gravity field. As a by-product of this comparison, AusSEGM gives further evidence of the north-south-trending error in the Australian Height Datum. The freely available AusSEGM-derived gravity and SDEM data, included as Electronic Supplementary Material (ESM) with this paper, can be used to compute a geoid model that, if correct, will agree to within 3 mm with the AusSEGM geoidal heights, thus offering independent verification of theories

  13. Estimating Implementation and Operational Costs of an Integrated Tiered CD4 Service including Laboratory and Point of Care Testing in a Remote Health District in South Africa

    PubMed Central

    Cassim, Naseem; Coetzee, Lindi M.; Schnippel, Kathryn; Glencross, Deborah K.

    2014-01-01

    Background An integrated tiered service delivery model (ITSDM) has been proposed to provide ‘full-coverage’ of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and the number of referring health-facilities. These include: (1) Tier-1/decentralized point-of-care service (POC) in a single site; Tier-2/POC-hub processing 30–40 samples from 8–10 health-clinics; Tier-3/Community laboratories servicing ∼50 health-clinics, processing <150 samples/day; and high-volume centralized laboratories (Tier-4 and Tier-5) processing <300 or >600 samples/day and serving >100 or >200 health-clinics, respectively. The objective of this study was to establish costs of existing and ITSDM-tiers 1, 2 and 3 in a remote, under-serviced district in South Africa. Methods Historical health-facility workload volumes from the Pixley-ka-Seme district, and the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, linked to locations of all referring clinics and related laboratory-to-result turn-around time (LTR-TAT) data, were extracted from the NHLS Corporate-Data-Warehouse for the period April-2012 to March-2013. Tiers were costed separately (as a cost-per-result) including equipment, staffing, reagent and test consumable costs. A one-way sensitivity analysis provided for changes in reagent price, test volumes and personnel time. Results The lowest cost-per-result was noted for the existing laboratory-based Tiers 4 and 5 ($6.24 and $5.37, respectively), but with a related increased LTR-TAT of >24–48 hours. Full service coverage with TAT <6 hours could be achieved with placement of twenty-seven Tier-1/POC or eight Tier-2/POC-hubs, at a cost-per-result of $32.32 and $15.88, respectively. A single district Tier-3 laboratory also ensured ‘full service coverage’ and <24 hour LTR-TAT for the district at $7.42 per test. Conclusion Implementing a single Tier-3/community laboratory to extend and improve delivery
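
    The cost-per-result comparison rests on simple arithmetic: annualized fixed costs spread over test volume plus per-test reagents and consumables. A sketch with made-up figures (not the study's inputs) showing why low-volume POC sites carry a higher unit cost:

```python
def cost_per_result(equipment_annual, staff_annual, reagent_per_test,
                    consumable_per_test, tests_per_year):
    fixed = equipment_annual + staff_annual
    return fixed / tests_per_year + reagent_per_test + consumable_per_test

# Hypothetical low-volume POC site vs a hypothetical higher-volume community laboratory
print(round(cost_per_result(4000, 9000, 8.0, 1.0, 2500), 2))     # 14.2 per result
print(round(cost_per_result(25000, 60000, 4.0, 0.5, 30000), 2))  # 7.33 per result
```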

  14. Validation Test Report for a Genetic Algorithm in the Glider Observation STrategies (GOST 1.0) Project: Sensitivity Studies

    DTIC Science & Technology

    2012-08-15

    which also includes a set of 22 CCF in temperature (T), Sonic Layer Depth (SLD), and Below Layer Gradient (BLG). The summary maps identify relative... Below Layer Gradient (BLG), In-Layer Gradient (ILG), and Sonic Layer Depth (SLD). For this test only Temperature was used. The second set was

  15. Development and Field-Testing of a Study Protocol, including a Web-Based Occupant Survey Tool, for Use in Intervention Studies of Indoor Environmental Quality

    SciTech Connect

    Mendell, Mark; Eliseeva, Ekaterina; Spears, Michael; Fisk, William J.

    2009-06-01

    We developed and pilot-tested an overall protocol for intervention studies to evaluate the effects of indoor environmental changes in office buildings on the health symptoms and comfort of occupants. The protocol includes a web-based survey to assess the occupants' responses, as well as specific features of study design and analysis. The pilot study, carried out on two similar floors in a single building, compared two types of ventilation system filter media. With support from the building's Facilities staff, the implementation of the filter change intervention went well. While the web-based survey tool also worked well, low overall response rates (21-34 percent among the three work groups included) limited our ability to evaluate the filter intervention. The total number of questionnaires returned was low even though we extended the study from eight to ten weeks. Because another simultaneous study we conducted elsewhere using the same survey had a high response rate (>70 percent), we conclude that the low response here resulted from issues specific to this pilot, including unexpected restrictions by some employing agencies on communication with occupants.

  16. Modeling in the State Flow Environment to Support Launch Vehicle Verification Testing for Mission and Fault Management Algorithms in the NASA Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Berg, Peter; England, Dwight; Johnson, Stephen B.

    2016-01-01

    Analysis methods and testing processes are essential activities in the engineering development and verification of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS). Central to mission success is reliable verification of the Mission and Fault Management (M&FM) algorithms for the SLS launch vehicle (LV) flight software. This is particularly difficult because M&FM algorithms integrate and operate LV subsystems, which consist of diverse forms of hardware and software themselves, with equally diverse integration from the engineering disciplines of LV subsystems. M&FM operation of SLS requires a changing mix of LV automation. During pre-launch the LV is primarily operated by the Kennedy Space Center (KSC) Ground Systems Development and Operations (GSDO) organization with some LV automation of time-critical functions, and much more autonomous LV operations during ascent that have crucial interactions with the Orion crew capsule, its astronauts, and with mission controllers at the Johnson Space Center. M&FM algorithms must perform all nominal mission commanding via the flight computer to control LV states from pre-launch through disposal and also address failure conditions by initiating autonomous or commanded aborts (crew capsule escape from the failing LV), redundancy management of failing subsystems and components, and safing actions to reduce or prevent threats to ground systems and crew. To address the criticality of the verification testing of these algorithms, the NASA M&FM team has utilized the State Flow environment (SFE) with its existing Vehicle Management End-to-End Testbed (VMET) platform, which also hosts vendor-supplied physics-based LV subsystem models. The human-derived M&FM algorithms are designed and vetted in Integrated Development Teams composed of design and development disciplines such as Systems Engineering, Flight Software (FSW), Safety and Mission Assurance (S&MA) and major subsystems and vehicle elements

  17. Development and test results of a flight management algorithm for fuel conservative descents in a time-based metered traffic environment

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1980-01-01

    A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three dimensional path with terminal area time constraints (four dimensional) for an airplane to make an idle thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.
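
    The core of such a descent calculation can be illustrated with a toy top-of-descent computation. This sketch ignores the linear performance approximations and the weight, wind, and temperature corrections described above; all numbers are invented:

```python
def descent_plan(cruise_alt_ft, fix_alt_ft, descent_rate_fpm,
                 descent_gs_kt, cruise_gs_kt, dist_to_fix_nm):
    """Return (top-of-descent distance before the fix in NM, total time to the
    fix in minutes) for an idle-thrust, constant-speed descent."""
    descent_time_min = (cruise_alt_ft - fix_alt_ft) / descent_rate_fpm
    tod_nm = descent_gs_kt * descent_time_min / 60.0
    cruise_time_min = (dist_to_fix_nm - tod_nm) / cruise_gs_kt * 60.0
    return tod_nm, cruise_time_min + descent_time_min

tod, eta = descent_plan(35000, 10000, 2000, 380, 450, 150)
print(f"start descent {tod:.1f} NM before the fix; fix reached in {eta:.1f} min")
```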

  18. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1983-01-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  19. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: algorithm development and flight test results

    SciTech Connect

    Knox, C.E.

    1983-03-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  20. Corrective Action Investigation Plan for Corrective Action Unit 410: Waste Disposal Trenches, Tonopah Test Range, Nevada, Revision 0 (includes ROTCs 1, 2, and 3)

    SciTech Connect

    NNSA /NV

    2002-07-16

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 410 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 410 is located on the Tonopah Test Range (TTR), which is included in the Nevada Test and Training Range (formerly the Nellis Air Force Range) approximately 140 miles northwest of Las Vegas, Nevada. This CAU is comprised of five Corrective Action Sites (CASs): TA-19-002-TAB2, Debris Mound; TA-21-003-TANL, Disposal Trench; TA-21-002-TAAL, Disposal Trench; 09-21-001-TA09, Disposal Trenches; 03-19-001, Waste Disposal Site. This CAU is being investigated because contaminants may be present in concentrations that could potentially pose a threat to human health and/or the environment, and waste may have been disposed of without appropriate controls. Four out of five of these CASs are the result of weapons testing and disposal activities at the TTR, and they are grouped together for site closure based on the similarity of the sites (waste disposal sites and trenches). The fifth CAS, CAS 03-19-001, is a hydrocarbon spill related to activities in the area. This site is grouped with this CAU because of its location (TTR). Based on historical documentation and process knowledge, vertical and lateral migration routes are possible for all CASs. Migration of contaminants may have occurred through transport by infiltration of precipitation through surface soil, which serves as a driving force for downward migration of contaminants. Land-use scenarios limit future use of these CASs to industrial activities. The suspected contaminants of potential concern which have been identified are volatile organic compounds; semivolatile organic compounds; high explosives; radiological constituents including depleted uranium

  1. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing. CRESST Report 830

    ERIC Educational Resources Information Center

    Cai, Li

    2013-01-01

    Lord and Wingersky's (1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined…
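
    For dichotomous items the original recursion is only a few lines. A minimal sketch (the standard Lord-Wingersky recursion, not the multidimensional Version 2.0 extension described in the report), evaluated at a single quadrature point:

```python
import numpy as np

def summed_score_likelihoods(correct_probs):
    """correct_probs[i] = P(item i correct | theta). Returns L with
    L[s] = P(summed score = s | theta), built up one item at a time."""
    L = np.array([1.0])                       # before any item: score 0 with probability 1
    for p in correct_probs:
        new = np.zeros(L.size + 1)
        new[:-1] += L * (1.0 - p)             # item answered incorrectly
        new[1:] += L * p                      # item answered correctly
        L = new
    return L

# Three-item example with 2PL probabilities at theta = 0.5 (made-up a and b)
theta, a, b = 0.5, np.array([1.0, 1.5, 0.8]), np.array([-0.5, 0.0, 1.0])
probs = 1.0 / (1.0 + np.exp(-a * (theta - b)))
print(summed_score_likelihoods(probs))        # probabilities of scores 0..3, summing to 1
```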

  2. Using DFX for Algorithm Evaluation

    SciTech Connect

    Beiriger, J.I.; Funkhouser, D.R.; Young, C.J.

    1998-10-20

    Evaluating whether or not a new seismic processing algorithm can improve the performance of the operational system can be problematic: it may be difficult to isolate the comparable piece of the operational system; it may be necessary to duplicate ancillary functions; and comparing results to the tuned, full-featured operational system may be an unsatisfactory basis on which to draw conclusions. Algorithm development and evaluation in an environment that more closely resembles the operational system can be achieved by integrating the algorithm with the custom user library of the Detection and Feature Extraction (DFX) code, developed by Science Applications International Corporation. This integration gives the seismic researcher access to all of the functionality of DFX, such as database access, waveform quality control, and station-specific tuning, and provides a more meaningful basis for evaluation. The goal of this effort is to make the DFX environment more accessible to seismic researchers for algorithm evaluation. Typically, a new algorithm will be developed as a C-language program with an ASCII test parameter file. The integration process should allow the researcher to focus on the new algorithm development, with minimum attention to integration issues. Customizing DFX, however, requires software engineering expertise, knowledge of the Scheme and C programming languages, and familiarity with the DFX source code. We use a C-language spatial coherence processing algorithm with a parameter and recipe file to develop a general process for integrating and evaluating a new algorithm in the DFX environment. To aid in configuring and managing the DFX environment, we develop a simple parameter management tool. We also identify and examine capabilities that could simplify the process further, thus reducing the barriers facing researchers in using DFX. These capabilities include additional parameter management features, a Scheme-language template for algorithm testing, a

  3. Corrective Action Investigation Plan for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada (December 2002, Revision No.: 0), Including Record of Technical Change No. 1

    SciTech Connect

    NNSA /NSO

    2002-12-12

    The Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 204 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 204 is located on the Nevada Test Site approximately 65 miles northwest of Las Vegas, Nevada. This CAU is comprised of six Corrective Action Sites (CASs) which include: 01-34-01, Underground Instrument House Bunker; 02-34-01, Instrument Bunker; 03-34-01, Underground Bunker; 05-18-02, Chemical Explosives Storage; 05-33-01, Kay Blockhouse; 05-99-02, Explosive Storage Bunker. Based on site history, process knowledge, and previous field efforts, contaminants of potential concern for Corrective Action Unit 204 collectively include radionuclides, beryllium, high explosives, lead, polychlorinated biphenyls, total petroleum hydrocarbons, silver, warfarin, and zinc phosphide. The primary question for the investigation is: "Are existing data sufficient to evaluate appropriate corrective actions?" To address this question, resolution of two decision statements is required. Decision I is to "Define the nature of contamination" by identifying any contamination above preliminary action levels (PALs); Decision II is to "Determine the extent of contamination identified above PALs." If PALs are not exceeded, the investigation is completed. If PALs are exceeded, then Decision II must be resolved. In addition, data will be obtained to support waste management decisions. Field activities will include radiological land area surveys, geophysical surveys to identify any subsurface metallic and nonmetallic debris, field screening for applicable contaminants of potential concern, collection and analysis of surface and subsurface soil samples from biased locations, and step-out sampling to define the extent of

  4. Testing the next generation of algorithms for geomorphic feature extraction from LiDAR: a case study in the Rio Cordon Basin, Italy

    NASA Astrophysics Data System (ADS)

    Tarolli, P.; Passalacqua, P.; Foufoula-Georgiou, E.; Dietrich, W. E.

    2008-12-01

    The next generation of digital elevation data (sub-meter resolution LiDAR) calls for the development of the next generation of algorithms for objective extraction of geomorphic features, such as channel heads, channel networks, bank geometry, landslide scars, service roads, etc. In this work, we test the performance of a newly developed algorithm for the extraction of channel networks based on wavelets (Lashermes, Foufoula-Georgiou and Dietrich, GRL, 2007) and highlight future challenges. The basin we use is the Rio Cordon basin, a 5 km2 alpine catchment located in the Dolomites, a mountain region in the Eastern Italian Alps. Elevation ranges between 1763 and 2748 m a.s.l., with an average value of 2200 m a.s.l. The basin is highly dissected with hillslope lengths on the order of 40-60 m. Average slope is 27°, slopes of 30-40° are common, and there is a substantial number of slopes that locally exceed 45°, including subvertical cliffs in the upper part of the basin. The basin is morphologically divided into three parts: the upper part consists of dolomite outcrops and talus slopes bordering the cliffs; the middle part consists of a low-slope belt dominated by colluvial channels; the lower part displays steep slopes and a narrow valley where alluvial channels and shallow landslides are present. Several field surveys were conducted over the study area during the past few years including a LiDAR survey (data acquired during snow free conditions in October 2006). A recent campaign effort has also provided new detailed data of field-mapped alluvial and colluvial channels, and channel heads. The LiDAR bare ground points were used for the DTM interpolation at 1, 2, 3, 4, and 5 m grid cell resolution. These DTMs were taken into consideration in order to test the influence of DTM cell size on the channel network extraction methodologies. The results of our analysis highlight the opportunities but also challenges in fully automated methodologies of geomorphic feature

  5. Political violence and child adjustment in Northern Ireland: Testing pathways in a social-ecological model including single-and two-parent families.

    PubMed

    Cummings, E Mark; Schermerhorn, Alice C; Merrilees, Christine E; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed

    2010-07-01

    Moving beyond simply documenting that political violence negatively impacts children, we tested a social-ecological hypothesis for relations between political violence and child outcomes. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including single- and two-parent families. Sectarian community violence was associated with elevated family conflict and children's reduced security about multiple aspects of their social environment (i.e., family, parent-child relations, and community), with links to child adjustment problems and reductions in prosocial behavior. By comparison, and consistent with expectations, links with negative family processes, child regulatory problems, and child outcomes were less consistent for nonsectarian community violence. Support was found for a social-ecological model for relations between political violence and child outcomes among both single- and two-parent families, with evidence that emotional security and adjustment problems were more negatively affected in single-parent families. The implications for understanding social ecologies of political violence and children's functioning are discussed.

  6. The Doylestown Algorithm: A Test to Improve the Performance of AFP in the Detection of Hepatocellular Carcinoma.

    PubMed

    Wang, Mengjun; Devarajan, Karthik; Singal, Amit G; Marrero, Jorge A; Dai, Jianliang; Feng, Ziding; Rinaudo, Jo Ann S; Srivastava, Sudhir; Evans, Alison; Hann, Hie-Won; Lai, Yinzhi; Yang, Hushan; Block, Timothy M; Mehta, Anand

    2016-02-01

    Biomarkers for the early diagnosis of hepatocellular carcinoma (HCC) are needed to decrease mortality from this cancer. However, as new biomarkers have been slow to be brought to clinical practice, we have developed a diagnostic algorithm that utilizes commonly used clinical measurements in those at risk of developing HCC. Briefly, as α-fetoprotein (AFP) is routinely used, an algorithm that incorporated AFP values along with four other clinical factors was developed. Discovery analysis was performed on electronic data from patients who had liver disease (cirrhosis) alone or HCC in the background of cirrhosis. The discovery set consisted of 360 patients from two independent locations. A logistic regression algorithm was developed that incorporated log-transformed AFP values with age, gender, alkaline phosphatase, and alanine aminotransferase levels. We define this as the Doylestown algorithm. In the discovery set, the Doylestown algorithm improved the overall performance of AFP by 10%. In subsequent external validation in over 2,700 patients from three independent sites, the Doylestown algorithm improved detection of HCC as compared with AFP alone by 4% to 20%. In addition, at a fixed specificity of 95%, the Doylestown algorithm improved the detection of HCC as compared with AFP alone by 2% to 20%. In conclusion, the Doylestown algorithm consolidates clinical laboratory values, with age and gender, which are each individually associated with HCC risk, into a single value that can be used for HCC risk assessment. As such, it should be applicable and useful to the medical community that manages those at risk for developing HCC.
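
    The model class described, a logistic regression on log-transformed AFP plus age, gender, alkaline phosphatase and ALT, is easy to sketch. The coefficients below are fitted to synthetic data and are not the published Doylestown coefficients:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    np.log(rng.lognormal(2.0, 1.0, n)),      # log AFP (ng/mL)
    rng.normal(60, 10, n),                   # age (years)
    rng.integers(0, 2, n),                   # gender (0/1)
    rng.normal(120, 40, n),                  # alkaline phosphatase (U/L)
    rng.normal(45, 20, n),                   # ALT (U/L)
])
y = rng.integers(0, 2, n)                    # synthetic HCC vs cirrhosis-only labels

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X[:5])[:, 1]      # a single risk score per patient
print(np.round(risk, 3))
```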

  7. An innovative thinking-based intelligent information fusion algorithm.

    PubMed

    Lu, Huimin; Hu, Liang; Liu, Gang; Zhou, Jin

    2013-01-01

    This study proposes an intelligent algorithm that realizes information fusion by drawing on research in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and views information fusion as a knowledge-based innovative thinking process. Its five key parts, namely information sensing and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully exploits innovative thinking over knowledge during information fusion and represents an attempt to translate the abstract concepts of brain cognitive science into specific, operable research routes and strategies. The influence of each parameter on algorithm performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can reach the optimal solution with fewer objective evaluations, improve optimization effectiveness, and achieve effective fusion of information.

  8. An interactive ontology-driven information system for simulating background radiation and generating scenarios for testing special nuclear materials detection algorithms

    DOE PAGES

    Sorokine, Alexandre; Schlicher, Bob G.; Ward, Richard C.; ...

    2015-05-22

    This paper describes an original approach to generating scenarios for the purpose of testing the algorithms used to detect special nuclear materials (SNM) that incorporates the use of ontologies. Separating the signal of SNM from the background requires sophisticated algorithms. To assist in developing such algorithms, there is a need for scenarios that capture a very wide range of variables affecting the detection process, depending on the type of detector being used. To provide such a capability, we developed an ontology-driven information system (ODIS) for generating scenarios that can be used in testing algorithms for SNM detection. The ontology-driven scenario generator (ODSG) is an ODIS based on information supplied by subject matter experts and other documentation. The details of the creation of the ontology, the development of the ontology-driven information system, and the design of the web user interface (UI) are presented along with specific examples of scenarios generated using the ODSG. We demonstrate that the paradigm behind the ODSG is capable of addressing the problem of semantic complexity at both the user and developer levels. Compared to traditional approaches, an ODIS provides benefits such as faithful representation of the users' domain conceptualization, simplified management of very large and semantically diverse datasets, and the ability to handle frequent changes to the application and the UI. Furthermore, the approach makes possible the generation of a much larger number of specific scenarios based on limited user-supplied information.

  9. An interactive ontology-driven information system for simulating background radiation and generating scenarios for testing special nuclear materials detection algorithms

    SciTech Connect

    Sorokine, Alexandre; Schlicher, Bob G.; Ward, Richard C.; Wright, Michael C.; Kruse, Kara L.; Bhaduri, Budhendra; Slepoy, Alexander

    2015-05-22

    This paper describes an original approach to generating scenarios for the purpose of testing the algorithms used to detect special nuclear materials (SNM) that incorporates the use of ontologies. Separating the signal of SNM from the background requires sophisticated algorithms. To assist in developing such algorithms, there is a need for scenarios that capture a very wide range of variables affecting the detection process, depending on the type of detector being used. To provide such a capability, we developed an ontology-driven information system (ODIS) for generating scenarios that can be used in testing algorithms for SNM detection. The ontology-driven scenario generator (ODSG) is an ODIS based on information supplied by subject matter experts and other documentation. The details of the creation of the ontology, the development of the ontology-driven information system, and the design of the web user interface (UI) are presented along with specific examples of scenarios generated using the ODSG. We demonstrate that the paradigm behind the ODSG is capable of addressing the problem of semantic complexity at both the user and developer levels. Compared to traditional approaches, an ODIS provides benefits such as faithful representation of the users' domain conceptualization, simplified management of very large and semantically diverse datasets, and the ability to handle frequent changes to the application and the UI. Furthermore, the approach makes possible the generation of a much larger number of specific scenarios based on limited user-supplied information.

  10. Quantum algorithms: an overview

    NASA Astrophysics Data System (ADS)

    Montanaro, Ashley

    2016-01-01

    Quantum computers are designed to outperform standard computers by running quantum algorithms. Areas in which quantum algorithms can be applied include cryptography, search and optimisation, simulation of quantum systems and solving large systems of linear equations. Here we briefly survey some known quantum algorithms, with an emphasis on a broad overview of their applications rather than their technical details. We include a discussion of recent developments and near-term applications of quantum algorithms.

  11. Uses of clinical algorithms.

    PubMed

    Margolis, C Z

    1983-02-04

    The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared as to their clinical usefulness with decision analysis. Three objections to clinical algorithms are answered, including the one that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.

  12. Flight tests of three-dimensional path-redefinition algorithms for transition from Radio Navigation (RNAV) to Microwave Landing System (MLS) navigation when flying an aircraft on autopilot

    NASA Technical Reports Server (NTRS)

    Hueschen, Richard M.

    1988-01-01

    This report contains results of flight tests for three path update algorithms designed to provide smooth transition for an aircraft guidance system from DME, VORTAC, and barometric navaids to the more precise MLS by modifying the desired 3-D flight path. The first algorithm, called Zero Cross Track, eliminates the discontinuity in cross-track and altitude error at transition by designating the first valid MLS aircraft position as the desired first waypoint, while retaining all subsequent waypoints. The discontinuity in track angle is left unaltered. The second, called Tangent Path, also eliminates the discontinuity in cross-track and altitude errors and chooses a new desired heading to be tangent to the next oncoming circular arc turn. The third, called Continued Track, eliminates the discontinuity in cross-track, altitude, and track angle errors by accepting the current MLS position and track angle as the desired ones and recomputes the location of the next waypoint. The flight tests were conducted on the Transportation Systems Research Vehicle, a small twin-jet transport aircraft modified for research under the Advanced Transport Operating Systems program at Langley Research Center. The flight tests showed that the algorithms provided a smooth transition to MLS.
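
    As a toy illustration of the first strategy, the sketch below applies the Zero Cross Track idea to a list of 3-D waypoints: the first waypoint is replaced by the first valid MLS-derived position so cross-track and altitude errors vanish at transition, while later waypoints are retained. The tuple representation and function name are assumptions for illustration, not the flight software.

    ```python
    def zero_cross_track_update(waypoints, mls_position):
        """Replace the first waypoint with the first valid MLS-derived aircraft
        position so cross-track and altitude errors vanish at transition; all
        subsequent waypoints are retained (the track-angle jump is left as is).
        Waypoints are plain (x, y, altitude) tuples for illustration."""
        return [tuple(mls_position)] + list(waypoints[1:])

    path = [(0.0, 0.0, 900.0), (5.0, 2.0, 700.0), (9.0, 4.0, 450.0)]
    print(zero_cross_track_update(path, mls_position=(0.3, -0.4, 912.0)))
    ```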

  13. SEBAL-A: A remote sensing ET algorithm that accounts for advection with limited data. Part II: Test for transferability

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Because the Surface Energy Balance Algorithm for Land (SEBAL) tends to underestimate ET under conditions of advection, the model was modified by incorporating an advection component as part of the energy usable for crop evapotranspiration (ET). The modification involved the estimation of advected en...

  14. An exact accelerated stochastic simulation algorithm.

    PubMed

    Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros

    2009-04-14

    An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present "ER-leap" algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton-Watson process.
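
    For orientation, a minimal direct-method SSA in the style that ER-leap accelerates is sketched below; it omits the paper's multireaction probability bounds, rejection sampling, and adaptive multiplicity, and the example reaction network is made up.

    ```python
    import math
    import random

    def gillespie_ssa(x, reactions, t_end, seed=0):
        """Minimal direct-method SSA. `reactions` is a list of
        (propensity_fn, state_change) pairs; the ER-leap bounds, rejection
        sampling, and adaptive multiplicity are not implemented here."""
        rng = random.Random(seed)
        t = 0.0
        while t < t_end:
            props = [a(x) for a, _ in reactions]
            a0 = sum(props)
            if a0 <= 0:
                break
            t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
            r = rng.random() * a0                     # pick a reaction proportionally
            acc = 0.0
            for (_, change), p in zip(reactions, props):
                acc += p
                if r <= acc:
                    x = [xi + d for xi, d in zip(x, change)]
                    break
        return x

    # Example: irreversible isomerization A -> B with rate k*[A]
    k = 0.5
    print(gillespie_ssa([100, 0], [(lambda s: k * s[0], (-1, +1))], t_end=10.0))
    ```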

  15. A boundary finding algorithm and its applications

    NASA Technical Reports Server (NTRS)

    Gupta, J. N.; Wintz, P. A.

    1975-01-01

    An algorithm for locating gray-level and/or texture edges in digitized pictures is presented. The algorithm is based on the concept of hypothesis testing. The digitized picture is first subdivided into subsets of picture elements, e.g., 2 x 2 arrays. The algorithm then compares the first- and second-order statistics of adjacent subsets; adjacent subsets having similar first- and/or second-order statistics are merged into blobs. By continuing this process, the entire picture is segmented into blobs such that the picture elements within each blob have similar characteristics. The boundaries between the blobs constitute the detected edges. The algorithm always generates closed boundaries. The algorithm was developed for multispectral imagery of the earth's surface. Applications of this algorithm to various image processing techniques such as efficient coding, information extraction (terrain classification), and pattern recognition (feature selection) are included.
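
    A toy version of the merging step described above is sketched below: the image is cut into 2 x 2 cells, adjacent cells with similar first- and second-order statistics (here, mean and variance within fixed tolerances) are merged with a simple union-find, and boundaries fall between cells that end up with different labels. The tolerances and data structures are illustrative choices, not the paper's hypothesis tests.

    ```python
    import numpy as np

    def blob_merge(img, block=2, mean_tol=10.0, var_tol=50.0):
        """Split the image into block x block cells, then merge 4-adjacent cells
        whose first- and second-order statistics (mean, variance) agree within
        fixed tolerances. A union-find tracks the blobs; boundaries run between
        cells that receive different labels."""
        h, w = img.shape[0] // block, img.shape[1] // block
        means = np.empty((h, w))
        vars_ = np.empty((h, w))
        for i in range(h):
            for j in range(w):
                cell = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
                means[i, j], vars_[i, j] = cell.mean(), cell.var()

        parent = {(i, j): (i, j) for i in range(h) for j in range(w)}
        def find(p):
            while parent[p] != p:
                parent[p] = parent[parent[p]]
                p = parent[p]
            return p

        for i in range(h):
            for j in range(w):
                for ni, nj in ((i + 1, j), (i, j + 1)):   # right and down neighbours
                    if ni < h and nj < w and \
                       abs(means[i, j] - means[ni, nj]) < mean_tol and \
                       abs(vars_[i, j] - vars_[ni, nj]) < var_tol:
                        parent[find((ni, nj))] = find((i, j))

        roots = {r: k for k, r in enumerate({find(p) for p in parent})}
        return np.array([[roots[find((i, j))] for j in range(w)] for i in range(h)])

    rng = np.random.default_rng(0)
    demo = np.hstack([rng.normal(50, 4, (8, 8)), rng.normal(150, 4, (8, 8))])
    print(blob_merge(demo))   # two dominant labels, split down the middle
    ```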

  16. A General Tank Test of NACA Model 11-C Flying-boat Hull, Including the Effect of Changing the Plan Form of the Step

    NASA Technical Reports Server (NTRS)

    Dawson, John R

    1935-01-01

    The results of a general tank test of model 11-C, a conventional pointed afterbody type of flying-boat hull, are given in tables and curves. These results are compared with the results of tests on model 11-A, from which model 11-C was derived, and it is found that the resistance of model 11-C is somewhat greater. The effect of changing the plan form of the step on model 11-C is shown from the results of tests made with three swallow-tail and three pointed steps formed by altering the original step of the model. These results show only minor differences from the results obtained with the original model.

  17. Comparison of options for reduction of noise in the test section of the NASA Langley 4x7m wind tunnel, including reduction of nozzle area

    NASA Technical Reports Server (NTRS)

    Hayden, R. E.

    1984-01-01

    The acoustically significant features of the NASA 4X7m wind tunnel and the Dutch-German DNW low speed tunnel are compared to illustrate the reasons for large differences in background noise in the open jet test sections of the two tunnels. Also introduced is the concept of reducing test section noise levels through fan and turning vane source reductions which can be brought about by reducing the nozzle cross sectional area, and thus the circuit mass flow for a particular exit velocity. The costs and benefits of treating sources, paths, and changing nozzle geometry are reviewed.

  18. Design in nonlinear mixed effects models: optimization using the Fedorov-Wynn algorithm and power of the Wald test for binary covariates.

    PubMed

    Retout, Sylvie; Comets, Emmanuelle; Samson, Adeline; Mentré, France

    2007-12-10

    We extend the methodology for design evaluation and optimization in nonlinear mixed effects models with an illustration of the decrease of human immunodeficiency virus viral load after antiretroviral treatment initiation described by a bi-exponential model. We first show the relevance of the predicted standard errors (SEs) given by the computation of the population Fisher information matrix using the R function PFIM, in comparison to those computed with the stochastic approximation expectation-maximization algorithm implemented in the Monolix software. We then highlight the usefulness of the Fedorov-Wynn (FW) algorithm for design optimization compared to the Simplex algorithm. From the SEs predicted by PFIM, we compute the predicted power of the Wald test to detect a treatment effect, as well as the number of subjects needed to achieve a given power. Using the FW algorithm, we investigate the influence of the design on the power and show that, for optimized designs with the same total number of samples, the power increases when the number of subjects increases and the number of samples per subject decreases. A simulation study is also performed with the nlme function of R to confirm this result and show the relevance of the predicted powers compared to those observed by simulation.
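
    A hedged sketch of the power calculation described above: under a normal approximation, the predicted SE of the treatment effect yields the power of the Wald test, and assuming the SE shrinks like 1/sqrt(N) gives the number of subjects needed for a target power. This is a generic approximation, not the PFIM code; all numbers are made up.

    ```python
    from scipy.stats import norm

    def wald_power(beta, se, alpha=0.05):
        """Approximate power of the Wald test for an effect of size `beta` whose
        estimator has predicted standard error `se` (normal approximation)."""
        z = norm.ppf(1 - alpha / 2)
        ncp = beta / se                          # standardized effect under H1
        return norm.cdf(-z - ncp) + 1 - norm.cdf(z - ncp)

    def subjects_for_power(beta, se_ref, n_ref, target=0.9, alpha=0.05):
        """Assume the SE shrinks like 1/sqrt(N); find the smallest N reaching target."""
        n = n_ref
        while wald_power(beta, se_ref * (n_ref / n) ** 0.5, alpha) < target:
            n += 1
        return n

    print(wald_power(beta=0.3, se=0.12))                      # power at the predicted SE
    print(subjects_for_power(beta=0.3, se_ref=0.12, n_ref=50))
    ```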

  19. Steering Organoids Toward Discovery: Self-Driving Stem Cells Are Opening a World of Possibilities, Including Drug Testing and Tissue Sourcing.

    PubMed

    Solis, Michele

    2016-01-01

    Since the 1980s, stem cells' shape-shifting abilities have wowed scientists. With proper handling, a few growth factors, and some time, stem cells can be cooked up into specific cell types, including neurons, muscle, and skin.

  20. Use of an Aptitude Test in University Entrance--A Validity Study: Updated Analyses of Higher Education Destinations, Including 2007 Entrants

    ERIC Educational Resources Information Center

    Kirkup, Catherine; Wheater, Rebecca; Morrison, Jo; Durbin, Ben

    2010-01-01

    In 2005, the National Foundation for Educational Research (NFER) was commissioned to evaluate the potential value of using an aptitude test as an additional tool in the selection of candidates for admission to higher education (HE). This five-year study is co-funded by the National Foundation for Educational Research (NFER), the Department for…

  1. Political Violence and Child Adjustment in Northern Ireland: Testing Pathways in a Social-Ecological Model Including Single- and Two-Parent Families

    ERIC Educational Resources Information Center

    Cummings, E. Mark; Schermerhorn, Alice C.; Merrilees, Christine E.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed

    2010-01-01

    Moving beyond simply documenting that political violence negatively impacts children, we tested a social-ecological hypothesis for relations between political violence and child outcomes. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including…

  2. Item Selection in Computerized Adaptive Testing: Improving the a-Stratified Design with the Sympson-Hetter Algorithm

    ERIC Educational Resources Information Center

    Leung, Chi-Keung; Chang, Hua-Hua; Hau, Kit-Tai

    2002-01-01

    Item exposure control, test-overlap minimization, and the efficient use of item pool are some of the important issues in computerized adaptive testing (CAT) designs. The overexposure of some items and high test-overlap rate may cause both item and test security problems. Previously these problems associated with the maximum information (Max-I)…

  3. The evaluation of the OSGLR algorithm for restructurable controls

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.

    1986-01-01

    The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
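
    The age-weighting idea can be pictured as an exponentially discounted accumulation of residual energy, so that old modeling-error contributions decay over time. The sketch below shows only that generic mechanism, with an arbitrary forgetting factor; it is not the OSGLR statistic itself.

    ```python
    def age_weighted_statistic(residuals, forget=0.98):
        """Exponentially discounted accumulation of squared residuals: older
        contributions are progressively forgotten, which limits the build-up of
        modeling-error effects. Generic illustration only, not the OSGLR test."""
        stat = 0.0
        for r in residuals:          # oldest sample first
            stat = forget * stat + r * r
        return stat

    print(age_weighted_statistic([0.1, 0.2, 0.1, 1.5, 1.6]))   # recent jump dominates
    ```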

  4. A general tank test of a model of the hull of the Pem-1 flying boat including a special working chart for the determination of hull performance

    NASA Technical Reports Server (NTRS)

    Dawson, John R

    1938-01-01

    The results of a general tank test of a 1/6 full-size model of the hull of the Pem-1 flying boat (N.A.C.A. model 18) are given in non-dimensional form. In addition to the usual curves, the results are presented in a new form that makes it possible to apply them more conveniently than in the forms previously used. The resistance was compared with that of N.A.C.A. models 11-C and 26 (Sikorsky S-40) and was found to be generally less than the resistance of either.

  5. Sourcebook of locations of geophysical surveys in tunnels and horizontal holes, including results of seismic refraction surveys, Rainier Mesa, Aqueduct Mesa, and Area 16, Nevada Test Site

    USGS Publications Warehouse

    Carroll, R.D.; Kibler, J.E.

    1983-01-01

    Seismic refraction surveys have been obtained sporadically in tunnels in zeolitized tuff at the Nevada Test Site since the late 1950's. Commencing in 1967 and continuing to date (1982), extensive measurements of shear- and compressional-wave velocities have been made in five tunnel complexes in Rainier and Aqueduct Mesas and in one tunnel complex in Shoshone Mountain. The results of these surveys to 1980 are compiled in this report. In addition, extensive horizontal drilling was initiated in 1967 in connection with geologic exploration in these tunnel complexes for sites for nuclear weapons tests. Seismic and electrical surveys were conducted in the majority of these holes. The type and location of these tunnel and borehole surveys are indexed in this report. Synthesis of the seismic refraction data indicates a mean compressional-wave velocity near the nuclear device point (WP) for 23 tunnel events of 2,430 m/s (7,970 f/s) with a range of 1,846-2,753 m/s (6,060-9,030 f/s). The mean shear-wave velocity of 17 tunnel events is 1,276 m/s (4,190 f/s) with a range of 1,140-1,392 m/s (3,740-4,570 f/s). Experience indicates that these velocity variations are due chiefly to the extent of fracturing and (or) the presence of partially saturated rock in the region of the survey.

  6. An aerial radiological survey of the Tonopah Test Range including Clean Slate 1,2,3, Roller Coaster, decontamination area, Cactus Springs Ranch target areas. Central Nevada

    SciTech Connect

    Proctor, A.E.; Hendricks, T.J.

    1995-08-01

    An aerial radiological survey was conducted of major sections of the Tonopah Test Range (TTR) in central Nevada from August through October 1993. The survey consisted of aerial measurements of both natural and man-made gamma radiation emanating from the terrestrial surface. The initial purpose of the survey was to locate depleted uranium (detecting 238U) from projectiles which had impacted on the TTR. The examination of areas near Cactus Springs Ranch (located near the western boundary of the TTR) and of an animal burial area near the Double Track site was a secondary objective. When more widespread than expected 241Am contamination was found around the Clean Slate sites, the survey was expanded to cover the area surrounding the Clean Slates and also the Double Track site. Results are reported as radiation isopleths superimposed on aerial photographs of the area.

  7. Corrective Action Investigation Plan for Corrective Action Unit 529: Area 25 Contaminated Materials, Nevada Test Site, Nevada, Rev. 0, Including Record of Technical Change No. 1

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2003-02-26

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 529, Area 25 Contaminated Materials, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. CAU 529 consists of one Corrective Action Site (25-23-17). For the purpose of this investigation, the Corrective Action Site has been divided into nine parcels based on the separate and distinct releases. A conceptual site model was developed for each parcel to address the translocation of contaminants from each release. The results of this investigation will be used to support a defensible evaluation of corrective action alternatives in the corrective action decision document.

  8. A depth-averaged debris-flow model that includes the effects of evolving dilatancy: II. Numerical predictions and experimental tests.

    USGS Publications Warehouse

    George, David L.; Iverson, Richard M.

    2014-01-01

    We evaluate a new depth-averaged mathematical model that is designed to simulate all stages of debris-flow motion, from initiation to deposition. A companion paper shows how the model’s five governing equations describe simultaneous evolution of flow thickness, solid volume fraction, basal pore-fluid pressure, and two components of flow momentum. Each equation contains a source term that represents the influence of state-dependent granular dilatancy. Here we recapitulate the equations and analyze their eigenstructure to show that they form a hyperbolic system with desirable stability properties. To solve the equations we use a shock-capturing numerical scheme with adaptive mesh refinement, implemented in an open-source software package we call D-Claw. As tests of D-Claw, we compare model output with results from two sets of large-scale debris-flow experiments. One set focuses on flow initiation from landslides triggered by rising pore-water pressures, and the other focuses on downstream flow dynamics, runout, and deposition. D-Claw performs well in predicting evolution of flow speeds, thicknesses, and basal pore-fluid pressures measured in each type of experiment. Computational results illustrate the critical role of dilatancy in linking coevolution of the solid volume fraction and pore-fluid pressure, which mediates basal Coulomb friction and thereby regulates debris-flow dynamics.

  9. Corrective Action Investigation Plan for Corrective Action Unit 536: Area 3 Release Site, Nevada Test Site, Nevada (Rev. 0 / June 2003), Including Record of Technical Change No. 1

    SciTech Connect

    2003-06-27

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's approach to collect the data necessary to evaluate corrective action alternatives (CAAs) appropriate for the closure of Corrective Action Unit (CAU) 536: Area 3 Release Site, Nevada Test Site, Nevada, under the Federal Facility Agreement and Consent Order. Corrective Action Unit 536 consists of a single Corrective Action Site (CAS): 03-44-02, Steam Jenny Discharge. The CAU 536 site is being investigated because existing information on the nature and extent of possible contamination is insufficient to evaluate and recommend corrective action alternatives for CAS 03-44-02. The additional information will be obtained by conducting a corrective action investigation (CAI) prior to evaluating CAAs and selecting the appropriate corrective action for this CAS. The results of this field investigation are to be used to support a defensible evaluation of corrective action alternatives in the corrective action decision document. Record of Technical Change No. 1 is dated 3-2004.

  10. Corrective Action Investigation Plan for Corrective Action Unit 516: Septic Systems and Discharge Points, Nevada Test Site, Nevada, Rev. 0, Including Record of Technical Change No. 1

    SciTech Connect

    2003-04-28

    This Corrective Action Investigation Plan (CAIP) contains the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office's (NNSA/NSO's) approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 516, Septic Systems and Discharge Points, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. CAU 516 consists of six Corrective Action Sites: 03-59-01, Building 3C-36 Septic System; 03-59-02, Building 3C-45 Septic System; 06-51-01, Sump Piping; 06-51-02, Clay Pipe and Debris; 06-51-03, Clean Out Box and Piping; and 22-19-04, Vehicle Decontamination Area. Located in Areas 3, 6, and 22 of the NTS, CAU 516 is being investigated because disposed waste may be present without appropriate controls, and hazardous and/or radioactive constituents may be present or migrating at concentrations and locations that could potentially pose a threat to human health and the environment. Existing information and process knowledge on the expected nature and extent of contamination of CAU 516 are insufficient to select preferred corrective action alternatives; therefore, additional information will be obtained by conducting a corrective action investigation. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document. Record of Technical Change No. 1 is dated 3/2004.

  11. Design and performance testing of an avalanche photodiode receiver with multiplication gain control algorithm for intersatellite laser communication

    NASA Astrophysics Data System (ADS)

    Yu, Xiaonan; Tong, Shoufeng; Dong, Yan; Song, Yansong; Hao, Shicong; Lu, Jing

    2016-06-01

    An avalanche photodiode (APD) receiver for intersatellite laser communication links is proposed and its performance is experimentally demonstrated. In the proposed system, a series of analog circuits are used not only to adjust the temperature and control the bias voltage but also to monitor the current and recover the clock from the communication data. In addition, the temperature compensation and multiplication gain control algorithm are embedded in the microcontroller to improve the performance of the receiver. As shown in the experiment, with the change of communication rate from 10 to 2000 Mbps, the detection sensitivity of the APD receiver varies from -47 to -34 dBm. Moreover, due to the existence of the multiplication gain control algorithm, the dynamic range of the APD receiver is effectively improved, while the dynamic range at 10, 100, and 1000 Mbps is 38.7, 37.7, and 32.8 dB, respectively. As a result, the experimental results agree well with the theoretical predictions, and the receiver will improve the flexibility of the intersatellite links without increasing the cost.

  12. Implementation and testing of a real-time 3-component phase picking program for Earthworm using the CECM algorithm

    NASA Astrophysics Data System (ADS)

    Baker, B. I.; Friberg, P. A.

    2014-12-01

    Modern seismic networks typically deploy three-component (3C) sensors, but still fail to utilize all of the information available in the seismograms when performing automated phase picking for real-time event location. In most cases a variation on a short-term over long-term average threshold detector is used for picking, and then an association program is used to assign phase types to the picks. However, the 3C waveforms from an earthquake contain an abundance of information related to the P and S phases in both their polarization and energy partitioning. An approach that has been overlooked and has demonstrated encouraging results is the Component Energy Comparison Method (CECM) by Nagano et al., as published in Geophysics in 1989. CECM is well suited to being used in real time because the calculation is not computationally intensive. Furthermore, the CECM method has fewer tuning variables (3) than traditional pickers in Earthworm such as the Rex Allen algorithm (N=18) or even the Anthony Lomax Filter Picker module (N=5). In addition to computing the CECM detector, we study the detector sensitivity by rotating the signal into principal components as well as by estimating the P phase onset from a curvature function describing the CECM as opposed to the CECM itself. We present our results implementing this algorithm in a real-time module for Earthworm and show the improved phase picks as compared to the traditional single-component pickers used in Earthworm.
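
    As a rough picture of component-energy comparison, the sketch below contrasts short-window energy on the vertical component with the sum of the horizontals and picks the first sample where the ratio exceeds a noise-based threshold. The window length, threshold rule, and synthetic data are guesses for illustration, not the CECM of Nagano et al. or the Earthworm module.

    ```python
    import numpy as np

    def pick_p_onset(z, n, e, win=50, factor=3.0):
        """Short-window energy on the vertical component relative to the sum of
        the horizontals; the first sample where the ratio exceeds a noise-based
        threshold is returned as a candidate P onset."""
        def window_energy(x):
            c = np.cumsum(x * x)
            return c[win:] - c[:-win]
        ratio = window_energy(z) / (window_energy(n) + window_energy(e) + 1e-12)
        threshold = factor * np.median(ratio[:2 * win])   # baseline from early noise
        hits = np.where(ratio > threshold)[0]
        return int(hits[0]) + win if hits.size else None

    rng = np.random.default_rng(1)
    z = rng.normal(0, 1, 600); z[300:] += rng.normal(0, 4, 300)   # synthetic P on Z
    n_comp = rng.normal(0, 1, 600)
    e_comp = rng.normal(0, 1, 600)
    print(pick_p_onset(z, n_comp, e_comp))
    ```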

  13. Mapping of Schistosomiasis and Soil-Transmitted Helminths in Namibia: The First Large-Scale Protocol to Formally Include Rapid Diagnostic Tests

    PubMed Central

    Sousa-Figueiredo, José Carlos; Stanton, Michelle C.; Katokele, Stark; Arinaitwe, Moses; Adriko, Moses; Balfour, Lexi; Reiff, Mark; Lancaster, Warren; Noden, Bruce H.; Bock, Ronnie; Stothard, J. Russell

    2015-01-01

    Background: Namibia is now ready to begin mass drug administration of praziquantel and albendazole against schistosomiasis and soil-transmitted helminths, respectively. Although historical data identifies areas of transmission of these neglected tropical diseases (NTDs), there is a need to update epidemiological data. For this reason, Namibia adopted a new protocol for mapping of schistosomiasis and geohelminths, formally integrating rapid diagnostic tests (RDTs) for infections and morbidity. In this article, we explain the protocol in detail, and introduce the concept of ‘mapping resolution’, as well as present results and treatment recommendations for northern Namibia. Methods/Findings/Interpretation: This new protocol allowed a large sample to be surveyed (N = 17 896 children from 299 schools) at relatively low cost (7 USD per person mapped) and very quickly (28 working days). All children were analysed by RDTs, but only a sub-sample was also diagnosed by light microscopy. Overall prevalence of schistosomiasis in the surveyed areas was 9.0%, highly associated with poorer access to potable water (OR = 1.5, P<0.001) and defective (OR = 1.2, P<0.001) or absent sanitation infrastructure (OR = 2.0, P<0.001). Overall prevalence of geohelminths, more particularly hookworm infection, was 12.2%, highly associated with presence of faecal occult blood (OR = 1.9, P<0.001). Prevalence maps were produced and hot spots identified to better guide the national programme in drug administration, as well as targeted improvements in water, sanitation and hygiene. The RDTs employed (circulating cathodic antigen and microhaematuria for Schistosoma mansoni and S. haematobium, respectively) performed well, with sensitivities above 80% and specificities above 95%. Conclusion/Significance: This protocol is cost-effective and sensitive to budget limitations and the potential economic and logistical strains placed on the national Ministries of Health. Here we present a high resolution map

  14. License plate detection algorithm

    NASA Astrophysics Data System (ADS)

    Broitman, Michael; Klopovsky, Yuri; Silinskis, Normunds

    2013-12-01

    A novel algorithm for vehicle license plate localization is proposed. The algorithm is based on pixel intensity transition gradient analysis. Nearly 2500 natural-scene gray-level vehicle images with different backgrounds and ambient illumination were tested. The best set of algorithm parameters produces a detection rate of up to 0.94. Taking into account the abnormal camera placement during our tests, and the resulting geometric distortion and interference from trees, this result can be considered acceptable. Correlations between source data (such as license plate dimensions and texture, camera location, and other factors) and the parameters of the algorithm were also determined.
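
    A crude stand-in for the gradient-transition idea is sketched below: rows containing many sharp horizontal intensity transitions (character strokes) are flagged as candidate plate rows. The thresholds and the row-wise heuristic are arbitrary choices, not the authors' detector.

    ```python
    import numpy as np

    def candidate_plate_rows(gray, grad_thresh=40, min_transitions=12):
        """Count above-threshold horizontal intensity gradients per row and flag
        rows dense in transitions, the signature of license-plate characters."""
        grad = np.abs(np.diff(gray.astype(int), axis=1))
        counts = (grad > grad_thresh).sum(axis=1)
        return np.where(counts >= min_transitions)[0]

    demo = np.full((8, 64), 120, dtype=np.uint8)
    demo[3:5, ::4] = 20                    # a stripe of alternating dark strokes
    print(candidate_plate_rows(demo))      # rows 3 and 4 are flagged
    ```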

  15. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
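
    One generic way to compute a POSE solution from flash-LIDAR-style 3-D feature measurements is a Kabsch/Procrustes fit between the known target geometry and the sensed points, sketched below with made-up feature coordinates; it is not either of the specific algorithms evaluated in the paper.

    ```python
    import numpy as np

    def estimate_pose(model_pts, measured_pts):
        """Kabsch/Procrustes fit of the rotation and translation that map known
        target feature locations onto their sensed 3-D positions."""
        mc, sc = model_pts.mean(axis=0), measured_pts.mean(axis=0)
        H = (model_pts - mc).T @ (measured_pts - sc)      # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
        R = Vt.T @ np.diag([1, 1, d]) @ U.T               # rotation: model -> sensor frame
        t = sc - R @ mc                                   # relative position
        return R, t

    model = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
    true_R = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])      # 90 deg about z
    meas = model @ true_R.T + np.array([0.1, 1.22, 0.0])         # rotate then translate
    R, t = estimate_pose(model, meas)
    print(np.round(R, 3), np.round(t, 3))
    ```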

  16. Overview of an Algorithm Plugin Package (APP)

    NASA Astrophysics Data System (ADS)

    Linda, M.; Tilmes, C.; Fleig, A. J.

    2004-12-01

    Science software that runs operationally is fundamentally different than software that runs on a scientist's desktop. There are complexities in hosting software for automated production that are necessary and significant. Identifying common aspects of these complexities can simplify algorithm integration. We use NASA's MODIS and OMI data production systems as examples. An Algorithm Plugin Package (APP) is science software that is combined with algorithm-unique elements that permit the algorithm to interface with, and function within, the framework of a data processing system. The framework runs algorithms operationally against large quantities of data. The extra algorithm-unique items are constrained by the design of the data processing system. APPs often include infrastructure that is vastly similar. When the common elements in APPs are identified and abstracted, the cost of APP development, testing, and maintenance will be reduced. This paper is an overview of the extra algorithm-unique pieces that are shared between MODAPS and OMIDAPS APPs. Our exploration of APP structure will help builders of other production systems identify their common elements and reduce algorithm integration costs. Our goal is to complete the development of a library of functions and a menu of implementation choices that reflect common needs of APPs. The library and menu will reduce the time and energy required for science developers to integrate algorithms into production systems.

  17. Development of automated test procedures and techniques for LSI circuits

    NASA Technical Reports Server (NTRS)

    Carroll, B. D.

    1975-01-01

    Testing of large scale integrated (LSI) logic circuits was considered from the point of view of automatic test pattern generation. A system for automatic test pattern generation is described. A test generation algorithm is presented that can be applied to both combinational and sequential logic circuits. Also included is a programmed implementation of the algorithm and sample results from the program.

  18. Dynamic Analyses Including Joints Of Truss Structures

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith

    1991-01-01

    Method for mathematically modeling joints to assess influences of joints on dynamic response of truss structures developed in study. Only structures with low-frequency oscillations considered; only Coulomb friction and viscous damping included in analysis. Focus of effort to obtain finite-element mathematical models of joints exhibiting load-vs.-deflection behavior similar to measured load-vs.-deflection behavior of real joints. Experiments performed to determine stiffness and damping nonlinearities typical of joint hardware. Algorithm for computing coefficients of analytical joint models based on test data developed to enable study of linear and nonlinear effects of joints on global structural response. Besides intended application to large space structures, applications in nonaerospace community include ground-based antennas and earthquake-resistant steel-framed buildings.

  19. Control Algorithms and Simulated Environment Developed and Tested for Multiagent Robotics for Autonomous Inspection of Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Wong, Edmond

    2005-01-01

    The NASA Glenn Research Center and academic partners are developing advanced multiagent robotic control algorithms that will enable the autonomous inspection and repair of future propulsion systems. In this application, on-wing engine inspections will be performed autonomously by large groups of cooperative miniature robots that will traverse the surfaces of engine components to search for damage. The eventual goal is to replace manual engine inspections that require expensive and time-consuming full engine teardowns and allow the early detection of problems that would otherwise result in catastrophic component failures. As a preliminary step toward the long-term realization of a practical working system, researchers are developing the technology to implement a proof-of-concept testbed demonstration. In a multiagent system, the individual agents are generally programmed with relatively simple controllers that define a limited set of behaviors. However, these behaviors are designed in such a way that, through the localized interaction among individual agents and between the agents and the environment, they result in self-organized, emergent group behavior that can solve a given complex problem, such as cooperative inspection. One advantage to the multiagent approach is that it allows for robustness and fault tolerance through redundancy in task handling. In addition, the relatively simple agent controllers demand minimal computational capability, which in turn allows for greater miniaturization of the robotic agents.

  20. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
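
    A bare-bones genetic algorithm illustrating the concepts above (bit-string encoding, tournament selection, one-point crossover, bit-flip mutation) is sketched below; the parameter values are arbitrary and the code is unrelated to the software tool described in the record.

    ```python
    import random

    def genetic_algorithm(fitness, n_bits=20, pop_size=30, gens=100,
                          p_cross=0.9, p_mut=0.02, seed=0):
        """Minimal GA: evolve bit strings toward higher fitness using tournament
        selection, one-point crossover, and bit-flip mutation."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(gens):
            scored = [(fitness(ind), ind) for ind in pop]
            def select():
                return max(rng.sample(scored, 3))[1]       # tournament of 3
            nxt = []
            while len(nxt) < pop_size:
                a, b = select()[:], select()[:]
                if rng.random() < p_cross:
                    cut = rng.randrange(1, n_bits)
                    a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
                for child in (a, b):
                    nxt.append([bit ^ (rng.random() < p_mut) for bit in child])
            pop = nxt[:pop_size]
        return max(pop, key=fitness)

    # Example: maximize the number of ones ("one-max")
    best = genetic_algorithm(fitness=sum)
    print(sum(best), best)
    ```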

  1. Design Science Research toward Designing/Prototyping a Repeatable Model for Testing Location Management (LM) Algorithms for Wireless Networking

    ERIC Educational Resources Information Center

    Peacock, Christopher

    2012-01-01

    The purpose of this research effort was to develop a model that provides repeatable Location Management (LM) testing using a network simulation tool, QualNet version 5.1 (2011). The model will provide current and future protocol developers a framework to simulate stable protocol environments for development. This study used the Design Science…

  2. Algorithms and analysis for underwater vehicle plume tracing.

    SciTech Connect

    Byrne, Raymond Harry; Savage, Elizabeth L.; Hurtado, John Edward; Eskridge, Steven E.

    2003-07-01

    The goal of this research was to develop and demonstrate cooperative 3-D plume tracing algorithms for miniature autonomous underwater vehicles. Applications for this technology include Lost Asset and Survivor Location Systems (L-SALS) and Ship-in-Port Patrol and Protection (SP3). This research was a joint effort that included Nekton Research, LLC, Sandia National Laboratories, and Texas A&M University. Nekton Research developed the miniature autonomous underwater vehicles while Sandia and Texas A&M developed the 3-D plume tracing algorithms. This report describes the plume tracing algorithm and presents test results from successful underwater testing with pseudo-plume sources.

  3. Quadrupole Alignment and Trajectory Correction for Future Linear Colliders: SLC Tests of a Dispersion-Free Steering Algorithm

    SciTech Connect

    Assmann, R

    2004-06-08

    The feasibility of future linear colliders depends on achieving very tight alignment and steering tolerances. All proposals (NLC, JLC, CLIC, TESLA and S-BAND) currently require a total emittance growth in the main linac of less than 30-100% [1]. This should be compared with a 100% emittance growth in the much smaller SLC linac [2]. Major advances in alignment and beam steering techniques beyond those used in the SLC are necessary for the next generation of linear colliders. In this paper, we present an experimental study of quadrupole alignment with a dispersion-free steering algorithm. A closely related method (wakefield-free steering) takes into account wakefield effects [3]. However, this method cannot be studied at the SLC. The requirements for future linear colliders lead to new and unconventional ideas about alignment and beam steering. For example, no dipole correctors are foreseen for the standard trajectory correction in the NLC [4]; beam steering will be done by moving the quadrupole positions with magnet movers. This illustrates the close symbiosis between alignment, beam steering and beam dynamics that will emerge. It is no longer possible to consider the accelerator alignment as static with only a few surveys and realignments per year. The alignment in future linear colliders will be a dynamic process in which the whole linac, with thousands of beam-line elements, is aligned in a few hours or minutes, while the required accuracy of about 5 µm for the NLC quadrupole alignment [4] is a factor of 20 higher than in existing accelerators. The major task in alignment and steering is the accurate determination of the optimum beam-line position. Ideally one would like all elements to be aligned along a straight line. However, this is not practical. Instead a "smooth curve" is acceptable as long as its wavelength is much longer than the betatron wavelength of the accelerated beam. Conventional alignment methods are limited in accuracy by errors in the survey

  4. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
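
    For reference, a minimal conventional SA loop matching the description above is sketched below; the recursive-branching, parallel search structure and the shrinking sampling region that distinguish RBSA are not reproduced, and the objective and schedule are arbitrary.

    ```python
    import math
    import random

    def simulated_annealing(objective, start, neighbor, t0=1.0, cooling=0.995,
                            steps=5000, seed=0):
        """Conventional SA: accept better configurations always, worse ones with
        a temperature-dependent probability, and cool the temperature over time."""
        rng = random.Random(seed)
        x, fx, t = start, objective(start), t0
        best, fbest = x, fx
        for _ in range(steps):
            y = neighbor(x, rng)
            fy = objective(y)
            if fy < fx or rng.random() < math.exp((fx - fy) / max(t, 1e-12)):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
            t *= cooling                      # annealing schedule
        return best, fbest

    # Example: minimize a 1-D multimodal function
    f = lambda x: math.sin(3 * x) + 0.1 * x * x
    step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
    print(simulated_annealing(f, start=4.0, neighbor=step))
    ```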

  5. Comparative in vitro evaluation of dirithromycin tested against recent clinical isolates of Haemophilus influenzae, Moraxella catarrhalis, and Streptococcus pneumoniae, including effects of medium supplements and test conditions on MIC results.

    PubMed

    Biedenbach, D J; Jones, R N; Lewis, M T; Croco, M A; Barrett, M S

    1999-04-01

    The use of macrolides for treatment of respiratory complaints has been complicated by susceptibility test conditions that adversely affect the in vitro test results and perceived potencies of these compounds. Dirithromycin was studied for its in vitro activity compared to other macrolides, as well as for the effects that environmental incubation variations and inoculum concentrations may have on susceptibility results. Dirithromycin was less active than the other macrolides tested (azithromycin, clarithromycin, erythromycin) against Streptococcus pneumoniae, Haemophilus influenzae, and Moraxella catarrhalis, with MIC90 values of 16, 32, and 1 microgram/ml, respectively; an activity most similar to that of roxithromycin. This reduced activity may be compensated for by the superior pharmacokinetic properties that dirithromycin possesses compared to other members of its class. Method variation studies show that incubation in CO2 environments increases the MIC values for all macrolide compounds, and dirithromycin was most affected by pH changes in the three in vitro methods tested (Etest [AB BIODISK, Solna, Sweden], broth microdilution, and disk diffusion). Variations in inoculum concentration had minimal effect on dirithromycin potency. In addition, the variability (lack of reproducibility) of the test results with dirithromycin was not significant. Dirithromycin is an alternative therapeutic choice among macrolide compounds for treatment of community-acquired respiratory infections caused by various streptococci, Legionella pneumophila, Mycoplasma pneumoniae, and M. catarrhalis, and it also possesses modest in vitro potency versus H. influenzae coupled with excellent pharmacokinetic properties. In vitro tests with dirithromycin will continue to be problematic for H. influenzae because of the adverse effects of the recommended CO2 incubation for some standardized methods or commercial products (Etest).

  6. Description of nuclear systems with a self-consistent configuration-mixing approach: Theory, algorithm, and application to the 12C test nucleus

    NASA Astrophysics Data System (ADS)

    Robin, C.; Pillet, N.; Peña Arteaga, D.; Berger, J.-F.

    2016-02-01

    Background: Although self-consistent multiconfiguration methods have been used for decades to address the description of atomic and molecular many-body systems, only a few trials have been made in the context of nuclear structure. Purpose: This work aims at the development of such an approach to describe in a unified way various types of correlations in nuclei in a self-consistent manner where the mean-field is improved as correlations are introduced. The goal is to reconcile the usually set-apart shell-model and self-consistent mean-field methods. Method: This approach is referred to as "variational multiparticle-multihole configuration mixing method." It is based on a double variational principle which yields a set of two coupled equations that determine at the same time the expansion coefficients of the many-body wave function and the single-particle states. The solution of this problem is obtained by building a doubly iterative numerical algorithm. Results: The formalism is derived and discussed in a general context, starting from a three-body Hamiltonian. Links to existing many-body techniques such as the formalism of Green's functions are established. First applications are done using the two-body D1S Gogny effective force. The numerical procedure is tested on the 12C nucleus to study the convergence features of the algorithm in different contexts. Ground-state properties as well as single-particle quantities are analyzed, and the description of the first 2+ state is examined. Conclusions: The self-consistent multiparticle-multihole configuration mixing method is fully applied for the first time to the description of a test nucleus. This study makes it possible to validate our numerical algorithm and leads to encouraging results. To test the method further, we will realize in the second article of this series a systematic description of more nuclei and observables obtained by applying the newly developed numerical procedure with the same Gogny force. As

  7. Atmospheric Correction of Ocean Color Imagery: Test of the Spectral Optimization Algorithm with the Sea-Viewing Wide Field-of-View Sensor.

    PubMed

    Chomko, R M; Gordon, H R

    2001-06-20

    We implemented the spectral optimization algorithm [SOA; Appl. Opt. 37, 5560 (1998)] in an image-processing environment and tested it with Sea-viewing Wide Field-of-View Sensor (SeaWiFS) imagery from the Middle Atlantic Bight and the Sargasso Sea. We compared the SOA and the standard SeaWiFS algorithm on two days that had significantly different atmospheric turbidities but, because of the location and time of the year, nearly the same water properties. The SOA-derived pigment concentration showed excellent continuity over the two days, with the relative difference in pigments exceeding 10% only in regions that are characteristic of high advection. The continuity in the derived water-leaving radiances at 443 and 555 nm was also within ~10%. There was no obvious correlation between the relative differences in pigments and the aerosol concentration. In contrast, standard processing showed poor continuity in derived pigments over the two days, with the relative differences correlating strongly with atmospheric turbidity. SOA-derived atmospheric parameters suggested that the retrieved ocean and atmospheric reflectances were decoupled on the more turbid day. On the clearer day, for which the aerosol concentration was so low that relatively large changes in aerosol properties resulted in only small changes in aerosol reflectance, water patterns were evident in the aerosol properties. This result implies that SOA-derived atmospheric parameters cannot be accurate in extremely clear atmospheres.

  8. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    The use of material processing numerical simulation allows a strategy of trial and error to improve virtual processes without incurring material costs or interrupting production, and can therefore save a lot of money, but it requires user time to analyze the results, adjust the operating conditions and restart the simulation. Automatic optimization is the perfect complement to simulation. Evolutionary Algorithms coupled with metamodelling make it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners have been selected to cover the different areas of the mechanical forging industry and provide different examples of forming simulation tools. The large computational time is handled by a metamodel approach. It allows interpolating the objective function on the entire parameter space by only knowing the exact function values at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization. The set of solutions, which corresponds to the best possible compromises between the different objectives, is then computed in the same way. The population-based approach allows using the parallel capabilities of the utilized computer with high efficiency. An optimization module, fully embedded within the Forge2009 IHM, makes it possible to cover all the defined examples, and the use of new multi-core hardware to compute several simulations at the same time reduces the needed time dramatically. The presented examples
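
    A skeleton of the metamodel-assisted idea is sketched below, under the assumption that a Gaussian-process (Kriging-style) surrogate screens cheap candidate points so the expensive objective is evaluated exactly only at a few master points. The evolutionary operators and the Forge2009 integration are not reproduced; scikit-learn's GP and the Rosenbrock stand-in objective are choices made here for illustration.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def metamodel_assisted_search(f, bounds, n_init=8, n_iter=15, seed=0):
        """Evaluate the expensive objective at a few master points, fit a GP
        surrogate, and add the surrogate's best-looking candidate each round."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        X = rng.uniform(lo, hi, size=(n_init, len(bounds)))
        y = np.array([f(x) for x in X])                  # expensive evaluations
        for _ in range(n_iter):
            gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
            cand = rng.uniform(lo, hi, size=(500, len(bounds)))
            best = cand[np.argmin(gp.predict(cand))]     # surrogate-screened candidate
            X = np.vstack([X, best])
            y = np.append(y, f(best))
        return X[np.argmin(y)], y.min()

    # Cheap stand-in for an expensive forming simulation
    rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
    print(metamodel_assisted_search(rosen, bounds=[(-2, 2), (-1, 3)]))
    ```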

  9. The Soil Moisture Active Passive Mission (SMAP) Science Data Products: Results of Testing with Field Experiment and Algorithm Testbed Simulation Environment Data

    NASA Technical Reports Server (NTRS)

    Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.

    2010-01-01

    Talk outline: 1. Derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications; 2. Data products and latencies; 3. Algorithm highlights; 4. SMAP Algorithm Testbed; 5. SMAP Working Groups and community engagement.

  10. Testing a Variety of Encryption Technologies

    SciTech Connect

    Henson, T J

    2001-04-09

    Review and test the speeds of various encryption technologies using Entrust software. Multiple encryption algorithms are included in the product; the algorithms tested were IDEA, CAST, DES, and RC2. The test consisted of taking a 7.7 MB Word document file, which included complex graphics, and timing encryption, decryption, and signing. Encryption is discussed in the GIAC Kickstart section: Information Security: The Big Picture--Part VI.
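
    A rough re-creation of the timing setup is sketched below, with the caveat that IDEA, CAST, and RC2 via Entrust are not available here, so an AES-based cipher from the third-party `cryptography` package stands in and signing is omitted; absolute timings are therefore not comparable to the report's.

    ```python
    import os
    import time
    from cryptography.fernet import Fernet   # AES-based stand-in cipher

    def time_cipher(payload_mb=7.7):
        """Time encryption and decryption of a ~7.7 MB random blob, mimicking
        the document-sized payload used in the test."""
        data = os.urandom(int(payload_mb * 1024 * 1024))
        f = Fernet(Fernet.generate_key())
        t0 = time.perf_counter()
        token = f.encrypt(data)
        t1 = time.perf_counter()
        f.decrypt(token)
        t2 = time.perf_counter()
        return {"encrypt_s": t1 - t0, "decrypt_s": t2 - t1}

    print(time_cipher())
    ```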

  11. Exact Algorithms for Coloring Graphs While Avoiding Monochromatic Cycles

    NASA Astrophysics Data System (ADS)

    Talla Nobibon, Fabrice; Hurkens, Cor; Leus, Roel; Spieksma, Frits C. R.

    We consider the problem of deciding whether a given directed graph can be vertex-partitioned into two acyclic subgraphs. Applications of this problem include testing rationality of collective consumption behavior, a subject in micro-economics. We identify classes of directed graphs for which the problem is easy and prove that the existence of a constant-factor approximation algorithm is unlikely for an optimization version which maximizes the number of vertices that can be colored using two colors while avoiding monochromatic cycles. We present three exact algorithms, namely an integer-programming algorithm based on cycle identification, a backtracking algorithm, and a branch-and-check algorithm. We compare these three algorithms both on real-life instances and on randomly generated graphs. We find that for the latter set of graphs, every algorithm solves instances of considerable size within a few seconds; however, the CPU time of the integer-programming algorithm increases with the number of vertices in the graph while that of the two other procedures does not. For every algorithm, we also study empirically the transition from a high to a low probability of a YES answer as a function of a parameter of the problem. For real-life instances, the integer-programming algorithm fails to solve the largest instance after one hour while the other two algorithms solve it in about ten minutes.
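
    To make the decision problem concrete, the sketch below enumerates 2-colorings and checks each color class for acyclicity with a Kahn-style test. This brute-force flavour only illustrates the problem statement; it is not the paper's integer-programming, backtracking, or branch-and-check algorithms.

    ```python
    import itertools

    def acyclic(vertices, edges):
        """Kahn-style check that the subgraph induced by `vertices` has no cycle."""
        vs = set(vertices)
        indeg = {v: 0 for v in vs}
        for u, v in edges:
            if u in vs and v in vs:
                indeg[v] += 1
        queue = [v for v in vs if indeg[v] == 0]
        seen = 0
        while queue:
            u = queue.pop()
            seen += 1
            for a, b in edges:
                if a == u and b in vs:
                    indeg[b] -= 1
                    if indeg[b] == 0:
                        queue.append(b)
        return seen == len(vs)

    def two_color_no_mono_cycle(n, edges):
        """Can vertices 0..n-1 be split into two classes whose induced subgraphs
        are both acyclic?  Exponential enumeration, for illustration only."""
        for colors in itertools.product((0, 1), repeat=n):
            parts = [[v for v in range(n) if colors[v] == c] for c in (0, 1)]
            if all(acyclic(p, edges) for p in parts):
                return colors
        return None

    # A directed 4-cycle can be split so that neither class contains a cycle
    print(two_color_no_mono_cycle(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
    ```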

  12. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  13. Space-Based Near-Infrared CO2 Measurements: Testing the Orbiting Carbon Observatory Retrieval Algorithm and Validation Concept Using SCIAMACHY Observations over Park Falls, Wisconsin

    NASA Technical Reports Server (NTRS)

    Bosch, H.; Toon, G. C.; Sen, B.; Washenfelder, R. A.; Wennberg, P. O.; Buchwitz, M.; deBeek, R.; Burrows, J. P.; Crisp, D.; Christi, M.; Connor, B. J.; Natraj, V.; Yung, Y. L.

    2006-01-01

    test of the OCO retrieval algorithm and validation concept using NIR spectra measured from space. Finally, we argue that significant improvements in precision and accuracy could be obtained from a dedicated CO2 instrument such as OCO, which has much higher spectral and spatial resolutions than SCIAMACHY. These measurements would then provide critical data for improving our understanding of the carbon cycle and carbon sources and sinks.

  14. A flight management algorithm and guidance for fuel-conservative descents in a time-based metered air traffic environment: Development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1984-01-01

    A simple airborne flight management descent algorithm designed to define a flight profile subject to the constraints of using idle thrust, a clean airplane configuration (landing gear up, flaps zero, and speed brakes retracted), and fixed-time end conditions was developed and flight tested in the NASA TSRV B-737 research airplane. The research test flights, conducted in the Denver ARTCC automated time-based metering LFM/PD ATC environment, demonstrated that time guidance and control in the cockpit was acceptable to the pilots and ATC controllers and resulted in arrival of the airplane over the metering fix with standard deviations in airspeed error of 6.5 knots, in altitude error of 23.7 m (77.8 ft), and in arrival time accuracy of 12 sec. These accuracies indicated a good representation of airplane performance and wind modeling. Fuel savings will be obtained on a fleet-wide basis through a reduction of the time error dispersions at the metering fix and on a single-airplane basis by presenting the pilot with guidance for a fuel-efficient descent.

  15. Sampling Within k-Means Algorithm to Cluster Large Datasets

    SciTech Connect

    Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George

    2011-08-01

    Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study on both more varied test datasets and real weather datasets. This is especially important considering that this preliminary study was performed on rather tame datasets. Such studies should also analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. We could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data becomes more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while requiring remarkably less time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
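
    A minimal sketch of the sampling idea follows: fit k-means on a random subset, then assign the full dataset to the learned centers. The sample size and synthetic data are illustrative assumptions, not the study's settings.

    ```python
    # Sampling-based k-means: cluster a subset, then label everything with its centers.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100_000, 2)) for c in (0, 5, 10)])

    sample = X[rng.choice(len(X), size=5_000, replace=False)]
    model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(sample)  # cluster the sample only
    labels = model.predict(X)                                            # assign the full dataset

    print("cluster sizes:", np.bincount(labels))
    ```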

  16. Adaptive reference update (ARU) algorithm. A stochastic search algorithm for efficient optimization of multi-drug cocktails

    PubMed Central

    2012-01-01

    Background Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way for optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance based on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations and it typically spends fewer iterations for finding effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well-suited for practical drug optimization applications. PMID:23134742

  17. Limited-data computed tomography algorithms for the physical sciences

    NASA Astrophysics Data System (ADS)

    Verhoeven, Dean

    1993-07-01

    Results are presented from a comparison of implementations of five computed tomography algorithms which were either designed expressly to work with, or have been shown to work with, limited data and which may be applied to a wide variety of objects. These include adapted versions of the algebraic reconstruction technique, the multiplicative algebraic reconstruction technique (MART), the Gerchberg-Papoulis algorithm, a spectral extrapolation algorithm derived from that of Harris (1964), and an algorithm based on the singular value decomposition technique. The algorithms were used to reconstruct phantom data with realistic levels of noise from a number of different imaging geometries. It was found that the MART algorithm has a combination of advantages that makes it superior to the other algorithms tested.
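
    For illustration, a minimal MART iteration on a tiny synthetic system Ax = b is sketched below; the geometry, relaxation factor and stopping rule are assumptions, not those of the paper's implementation.

    ```python
    # Multiplicative algebraic reconstruction technique (MART) on a toy system.
    import numpy as np

    def mart(A, b, iterations=50, relaxation=1.0):
        """Row-action update x_j <- x_j * (b_i / (A_i . x))**(relaxation * A_ij)."""
        x = np.ones(A.shape[1])                      # MART requires a strictly positive start
        for _ in range(iterations):
            for i in range(A.shape[0]):
                proj = A[i] @ x
                if proj > 0 and b[i] > 0:
                    x *= (b[i] / proj) ** (relaxation * A[i])
        return x

    # Tiny phantom: 4 unknowns, 4 "ray sums" with non-negative weights.
    A = np.array([[1., 1., 0., 0.],
                  [0., 0., 1., 1.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.]])
    x_true = np.array([1.0, 2.0, 3.0, 4.0])
    b = A @ x_true
    print("reconstruction:", np.round(mart(A, b), 3))
    ```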

  18. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms play an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence: the fish swarm algorithm, artificial bee colony, the bacteria foraging algorithm and particle swarm optimization. Several benchmark images are then tested to show how the four algorithms differ in segmentation accuracy, time consumption, convergence, and robustness to Salt & Pepper and Gaussian noise. Through these comparisons, this paper gives a qualitative analysis of the performance differences among the four algorithms. The conclusions should provide useful guidance for practical image segmentation.
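
    As a hedged illustration of swarm-based thresholding in general, the sketch below uses particle swarm optimization to search for the threshold that maximizes Otsu's between-class variance. The PSO constants and the synthetic intensity data are assumptions, not any of the surveyed methods.

    ```python
    # PSO searching for an Otsu-style threshold on synthetic 8-bit intensities.
    import numpy as np

    def between_class_variance(image, t):
        """Otsu objective for threshold t."""
        fg, bg = image[image > t], image[image <= t]
        if fg.size == 0 or bg.size == 0:
            return 0.0
        w1, w0 = fg.size / image.size, bg.size / image.size
        return w0 * w1 * (fg.mean() - bg.mean()) ** 2

    rng = np.random.default_rng(0)
    image = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)]).clip(0, 255)

    n_particles, iters = 20, 40
    pos = rng.uniform(0, 255, n_particles)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_val = np.array([between_class_variance(image, t) for t in pos])
    gbest = pbest[pbest_val.argmax()]

    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 255)
        vals = np.array([between_class_variance(image, t) for t in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()]

    print("PSO threshold:", round(float(gbest), 1))
    ```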

  19. Challenges of Diagnosing Acute HIV-1 Subtype C Infection in African Women: Performance of a Clinical Algorithm and the Need for Point-of-Care Nucleic-Acid Based Testing

    PubMed Central

    Mlisana, Koleka; Sobieszczyk, Magdalena; Werner, Lise; Feinstein, Addi; van Loggerenberg, Francois; Naicker, Nivashnee; Williamson, Carolyn; Garrett, Nigel

    2013-01-01

    Background Prompt diagnosis of acute HIV infection (AHI) benefits the individual and provides opportunities for public health intervention. The aim of this study was to describe the most common signs and symptoms of AHI, correlate these with early disease progression and develop a clinical algorithm to identify acute HIV cases in a resource-limited setting. Methods 245 South African women at high risk of HIV-1 were assessed for AHI and received monthly HIV-1 antibody and RNA testing. Signs and symptoms at the first HIV-positive visit were compared to HIV-negative visits. Logistic regression identified clinical predictors of AHI. A model-based score was assigned to each predictor to create a risk score for every woman. Results Twenty-eight women seroconverted after a total of 390 person-years of follow-up with an HIV incidence of 7.2/100 person-years (95%CI 4.5–9.8). Fifty-seven percent reported ≥1 sign or symptom at the AHI visit. Factors predictive of AHI included age <25 years (OR = 3.2; 1.4–7.1), rash (OR = 6.1; 2.4–15.4), sore throat (OR = 2.7; 1.0–7.6), weight loss (OR = 4.4; 1.5–13.4), genital ulcers (OR = 8.0; 1.6–39.5) and vaginal discharge (OR = 5.4; 1.6–18.4). A risk score of 2 correctly predicted AHI in 50.0% of cases. The number of signs and symptoms correlated with higher HIV-1 RNA at diagnosis (r = 0.63; p<0.001). Conclusions Accurate recognition of signs and symptoms of AHI is critical for early diagnosis of HIV infection. Our algorithm may assist in risk-stratifying individuals for AHI, especially in resource-limited settings where there is no routine testing for AHI. Independent validation of the algorithm on another cohort is needed to assess its utility further. Point-of-care antigen or viral load technology is required, however, to detect asymptomatic, antibody-negative cases, enabling early interventions and prevention of transmission. PMID:23646162

  20. Evolution of catalysts directed by genetic algorithms in a plug-based microfluidic device tested with oxidation of methane by oxygen.

    PubMed

    Kreutz, Jason E; Shukhaev, Anton; Du, Wenbin; Druskin, Sasha; Daugulis, Olafs; Ismagilov, Rustem F

    2010-03-10

    This paper uses microfluidics to implement genetic algorithms (GA) to discover new homogeneous catalysts using the oxidation of methane by molecular oxygen as a model system. The parameters of the GA were the catalyst, a cocatalyst capable of using molecular oxygen as the terminal oxidant, and ligands that could tune the catalytic system. The GA required running hundreds of reactions to discover and optimize catalyst systems of high fitness, and microfluidics enabled these numerous reactions to be run in parallel. The small scale and volumes of microfluidics offer significant safety benefits. The microfluidic system included methods to form diverse arrays of plugs containing catalysts, introduce gaseous reagents at high pressure, run reactions in parallel, and detect catalyst activity using an in situ indicator system. Platinum(II) was identified as an active catalyst, and iron(II) and the polyoxometalate H(5)PMo(10)V(2)O(40) (POM-V2) were identified as active cocatalysts. The Pt/Fe system was further optimized and characterized using NMR experiments. After optimization, turnover numbers of approximately 50 were achieved with approximately equal production of methanol and formic acid. The Pt/Fe system demonstrated the compatibility of iron with the entire catalytic cycle. This approach of GA-guided evolution has the potential to accelerate discovery in catalysis and other areas where exploration of chemical space is essential, including optimization of materials for hydrogen storage and CO(2) capture and modifications.

  1. Algorithms for intravenous insulin delivery.

    PubMed

    Braithwaite, Susan S; Clement, Stephen

    2008-08-01

    This review aims to classify algorithms for intravenous insulin infusion according to design. Essential input data include the current blood glucose (BG(current)), the previous blood glucose (BG(previous)), the test time of BG(current) (test time(current)), the test time of BG(previous) (test time(previous)), and the previous insulin infusion rate (IR(previous)). Output data consist of the next insulin infusion rate (IR(next)) and next test time. The classification differentiates between "IR" and "MR" algorithm types, both defined as a rule for assigning an insulin infusion rate (IR), having a glycemic target. Both types are capable of assigning the IR for the next iteration of the algorithm (IR(next)) as an increasing function of BG(current), IR(previous), and rate-of-change of BG with respect to time, each treated as an independent variable. Algorithms of the IR type directly seek to define IR(next) as an incremental adjustment to IR(previous). At test time(current), under an IR algorithm the differences in values of IR(next) that might be assigned depending upon the value of BG(current) are not necessarily continuously dependent upon, proportionate to, or commensurate with either the IR(previous) or the rate-of-change of BG. Algorithms of the MR type create a family of IR functions of BG differing according to maintenance rate (MR), each being an iso-MR curve. The change of IR(next) with respect to BG(current) is a strictly increasing function of MR. At test time(current), algorithms of the MR type use IR(previous) and the rate-of-change of BG to define the MR, multiplier, or column assignment, which will be used for patient assignment to the right iso-MR curve and as precedent for IR(next). Bolus insulin therapy is especially effective when used in proportion to carbohydrate load to cover anticipated incremental transitory enteral or parenteral carbohydrate exposure. Specific distinguishing algorithm design features and choice of parameters may be important to

  2. Evaluation of the expected moments algorithm and a multiple low-outlier test for flood frequency analysis at streamgaging stations in Arizona

    USGS Publications Warehouse

    Paretti, Nicholas V.; Kennedy, Jeffrey R.; Cohn, Timothy A.

    2014-01-01

    Flooding is among the costliest natural disasters in terms of loss of life and property in Arizona, which is why the accurate estimation of flood frequency and magnitude is crucial for proper structural design and accurate floodplain mapping. Current guidelines for flood frequency analysis in the United States are described in Bulletin 17B (B17B), yet since B17B’s publication in 1982 (Interagency Advisory Committee on Water Data, 1982), several improvements have been proposed as updates for future guidelines. Two proposed updates are the Expected Moments Algorithm (EMA) to accommodate historical and censored data, and a generalized multiple Grubbs-Beck (MGB) low-outlier test. The current guidelines use a standard Grubbs-Beck (GB) method to identify low outliers, changing the determination of the moment estimators because B17B uses a conditional probability adjustment to handle low outliers while EMA censors the low outliers. B17B and EMA estimates are identical if no historical information or censored or low outliers are present in the peak-flow data. EMA with MGB (EMA-MGB) test was compared to the standard B17B (B17B-GB) method for flood frequency analysis at 328 streamgaging stations in Arizona. The methods were compared using the relative percent difference (RPD) between annual exceedance probabilities (AEPs), goodness-of-fit assessments, random resampling procedures, and Monte Carlo simulations. The AEPs were calculated and compared using both station skew and weighted skew. Streamgaging stations were classified by U.S. Geological Survey (USGS) National Water Information System (NWIS) qualification codes, used to denote historical and censored peak-flow data, to better understand the effect that nonstandard flood information has on the flood frequency analysis for each method. Streamgaging stations were also grouped according to geographic flood regions and analyzed separately to better understand regional differences caused by physiography and climate. The B

  3. An exact accelerated stochastic simulation algorithm

    NASA Astrophysics Data System (ADS)

    Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros

    2009-04-01

    An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present "ER-leap" algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton-Watson process.
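
    For context, a minimal sketch of the baseline Gillespie stochastic simulation algorithm (SSA) that ER-leap accelerates is given below. The toy reaction network and rate constants are assumptions, and the ER-leap bounds and rejection machinery are not implemented here.

    ```python
    # Baseline Gillespie SSA on a toy two-species network (not ER-leap).
    import numpy as np

    def gillespie_ssa(x0, stoich, rates, propensity, t_end, rng):
        t, x, history = 0.0, np.array(x0, dtype=float), []
        while t < t_end:
            a = propensity(x, rates)
            a0 = a.sum()
            if a0 == 0:
                break
            t += rng.exponential(1.0 / a0)             # time to the next reaction
            j = rng.choice(len(a), p=a / a0)            # which reaction fires
            x += stoich[j]
            history.append((t, x.copy()))
        return history

    # Toy network: A -> B (rate k1 * A), B -> A (rate k2 * B).
    stoich = np.array([[-1, 1], [1, -1]])
    propensity = lambda x, k: np.array([k[0] * x[0], k[1] * x[1]])
    trace = gillespie_ssa([100, 0], stoich, [1.0, 0.5], propensity, 5.0, np.random.default_rng(0))
    print("final state:", trace[-1][1] if trace else "no events")
    ```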

  4. An exact accelerated stochastic simulation algorithm

    PubMed Central

    Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros

    2009-01-01

    An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present “ER-leap” algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2∕3 power of the number of reaction events in a Galton–Watson process. PMID:19368432

  5. A comparative study of staff removal algorithms.

    PubMed

    Dalitz, Christoph; Droettboom, Michael; Pranzas, Bastian; Fujinaga, Ichiro

    2008-05-01

    This paper presents a quantitative comparison of different algorithms for the removal of stafflines from music images. It contains a survey of previously proposed algorithms and suggests a new skeletonization based approach. We define three different error metrics, compare the algorithms with respect to these metrics and measure their robustness with respect to certain image defects. Our test images are computer-generated scores on which we apply various image deformations typically found in real-world data. In addition to modern western music notation our test set also includes historic music notation such as mensural notation and lute tablature. Our general approach and evaluation methodology is not specific to staff removal, but applicable to other segmentation problems as well.

  6. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
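
    A small half-interval (bisection) search of the kind the article describes is sketched below; the example equation is arbitrary, not the article's program.

    ```python
    # Half-interval (bisection) search for a root of f on a bracketing interval.
    def half_interval_search(f, lo, hi, tol=1e-10):
        """Find a root of f in [lo, hi], assuming f(lo) and f(hi) have opposite signs."""
        if f(lo) * f(hi) > 0:
            raise ValueError("f(lo) and f(hi) must bracket a root")
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if f(lo) * f(mid) <= 0:
                hi = mid        # root lies in the left half
            else:
                lo = mid        # root lies in the right half
        return (lo + hi) / 2.0

    # Example: x^3 - x - 2 has a root near 1.521.
    print(half_interval_search(lambda x: x**3 - x - 2, 1.0, 2.0))
    ```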

  7. The Use of Genetic Algorithms as an Inverse Technique to Guide the Design and Implementation of Research at a Test Site in Shelby County, Tennessee

    NASA Astrophysics Data System (ADS)

    Gentry, R. W.

    2002-12-01

    The Shelby Farms test site in Shelby County, Tennessee is being developed to better understand recharge hydraulics to the Memphis aquifer in areas where leakage through an overlying aquitard occurs. The site is unique in that it demonstrates many opportunities for interdisciplinary research regarding environmental tracers, anthropogenic impacts and inverse modeling. The objective of the research funding the development of the test site is to better understand the groundwater hydrology and hydraulics between a shallow alluvial aquifer and the Memphis aquifer given an area of leakage, defined as an aquitard window. The site is situated in an area on the boundary of a highly developed urban area and is currently being used by an agricultural research agency and a local recreational park authority. Also, an abandoned landfill is situated to the immediate south of the window location. Previous research by the USGS determined the location of the aquitard window subsequent to the landfill closure. Inverse modeling using a genetic algorithm approach has identified the likely extents of the area of the window given an interaquifer accretion rate. These results, coupled with additional fieldwork, have been used to guide the direction of the field studies and the overall design of the research project. This additional work has encompassed the drilling of additional monitoring wells in nested groups by rotasonic drilling methods. The core collected during the drilling will provide additional constraints to the physics of the problem that may provide additional help in redefining the conceptual model. The problem is non-unique with respect to the leakage area and accretion rate and further research is being performed to provide some idea of the advective flow paths using a combination of tritium and 3He analyses and geochemistry. The outcomes of the research will result in a set of benchmark data and physical infrastructure that can be used to evaluate other environmental

  8. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and diamond-square algorithms, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and simulate planetary landscapes. Hence, they can be used as tools to assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only contribute to generating the terrains themselves, but are also capable of simulating weather patterns.
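
    A hedged sketch of the diamond-square algorithm for fractal terrain follows; the grid size, roughness and seed corner values are illustrative assumptions.

    ```python
    # Diamond-square heightmap generation on a (2**n + 1) square grid.
    import numpy as np

    def diamond_square(n, roughness=0.6, rng=None):
        rng = rng or np.random.default_rng()
        size = 2 ** n + 1
        h = np.zeros((size, size))
        h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.uniform(-1, 1, 4)  # seed the corners

        step, scale = size - 1, 1.0
        while step > 1:
            half = step // 2
            # Diamond step: centre of each square = mean of its 4 corners + noise.
            for y in range(half, size, step):
                for x in range(half, size, step):
                    avg = (h[y - half, x - half] + h[y - half, x + half] +
                           h[y + half, x - half] + h[y + half, x + half]) / 4.0
                    h[y, x] = avg + rng.uniform(-scale, scale)
            # Square step: each edge midpoint = mean of its in-grid neighbours + noise.
            for y in range(0, size, half):
                for x in range((y + half) % step, size, step):
                    neighbours = [h[y + dy, x + dx]
                                  for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half))
                                  if 0 <= y + dy < size and 0 <= x + dx < size]
                    h[y, x] = sum(neighbours) / len(neighbours) + rng.uniform(-scale, scale)
            step, scale = half, scale * roughness

        return h

    terrain = diamond_square(5, rng=np.random.default_rng(42))
    print(terrain.shape, terrain.min().round(2), terrain.max().round(2))
    ```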

  9. Measuring surface wave phase velocities beneath small broad-band arrays: tests of an improved algorithm and application to the French Alps

    NASA Astrophysics Data System (ADS)

    Pedersen, Helle A.; Coutant, Olivier; Deschamps, A.; Soulage, M.; Cotte, N.

    2003-09-01

    The local measurement of dispersion curves of intermediate-period surface waves is particularly difficult because of the long wavelengths involved. We suggest an improved procedure for measuring dispersion curves using small-aperture broad-band arrays. The method is based on the hypotheses of plane incoming waves and that averaging over a set of events with a good backazimuth distribution will suppress the effects of diffraction outside the array. None of the elements of the processing are new in themselves, but each step is optimized so we can obtain a reliable dispersion curve with a well-defined uncertainty. The method is based on the inversion for the slowness vector at each event and frequency using time delays between pairs of stations, where the time delays Δt are obtained by frequency-domain Wiener filtering. The interstation distance projected on to the slowness vector (D) is then calculated. The final dispersion curve is found by, at each frequency, calculating the inverse of the slope of the best-fitting line of all (D, Δt) points. To test the algorithm, it is applied to synthetic seismograms of fundamental mode Rayleigh waves in different configurations: (1) the sum of several incident waves; (2) an array located next to or above a crustal thickening; and (3) added white noise, using regular and irregular backazimuth distributions. In each case, a circular array of 23 km diameter and composed of six stations is used. The algorithm is stable over a large range of wavelengths (between half and a tenth of the array size), depending on the configuration. The situations of several, simultaneously incoming waves or neighbouring heterogeneities are well handled and the inferred dispersion curve corresponds to that of the underlying medium. Above a strong lateral heterogeneity, the inferred dispersion curve corresponds to that of the underlying medium up to wavelengths of eight times the array size in the configuration considered, but further work is needed
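
    A minimal numerical sketch of the final step described above: at one frequency, the phase velocity is taken as the inverse of the slope of the best-fitting line through the (projected interstation distance D, time delay Δt) points. The data below are synthetic assumptions, not array measurements.

    ```python
    # Phase velocity from the slope of a least-squares fit through (D, dt) pairs.
    import numpy as np

    rng = np.random.default_rng(3)
    true_velocity = 3.4                                   # km/s, assumed
    D = rng.uniform(2.0, 23.0, size=30)                   # projected distances (km)
    dt = D / true_velocity + rng.normal(0, 0.05, D.size)  # delays (s) with noise

    slope, _ = np.polyfit(D, dt, 1)                       # best-fitting line through (D, dt)
    print(f"estimated phase velocity: {1.0 / slope:.2f} km/s")
    ```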

  10. Pump apparatus including deconsolidator

    DOEpatents

    Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew

    2014-10-07

    A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.

  11. Optical modulator including graphene

    DOEpatents

    Liu, Ming; Yin, Xiaobo; Zhang, Xiang

    2016-06-07

    The present invention provides for a one or more layer graphene optical modulator. In a first exemplary embodiment the optical modulator includes an optical waveguide, a nanoscale oxide spacer adjacent to a working region of the waveguide, and a monolayer graphene sheet adjacent to the spacer. In a second exemplary embodiment, the optical modulator includes at least one pair of active media, where the pair includes an oxide spacer, a first monolayer graphene sheet adjacent to a first side of the spacer, and a second monolayer graphene sheet adjacent to a second side of the spacer, and at least one optical waveguide adjacent to the pair.

  12. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while more sophisticated algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.

  13. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  14. LEED I/V determination of the structure of a MoO3 monolayer on Au(111): Testing the performance of the CMA-ES evolutionary strategy algorithm, differential evolution, a genetic algorithm and tensor LEED based structural optimization

    NASA Astrophysics Data System (ADS)

    Primorac, E.; Kuhlenbeck, H.; Freund, H.-J.

    2016-07-01

    The structure of a thin MoO3 layer on Au(111) with a c(4 × 2) superstructure was studied with LEED I/V analysis. As proposed previously (Quek et al., Surf. Sci. 577 (2005) L71), the atomic structure of the layer is similar to that of a MoO3 single layer as found in regular α-MoO3. The layer on Au(111) has a glide plane parallel to the short unit vector of the c(4 × 2) unit cell, and the molybdenum atoms are bridge-bonded to two surface gold atoms, with the structure of the gold surface being slightly distorted. The structural refinement was performed with the CMA-ES evolutionary strategy algorithm, which reached a Pendry R-factor of ∼ 0.044. In the second part, the performance of CMA-ES is compared with that of the differential evolution method, a genetic algorithm and the Powell optimization algorithm, employing I/V curves calculated with tensor LEED.
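
    The sketch below shows how a CMA-ES-driven refinement loop might look, assuming the third-party `cma` package is available; the objective is a placeholder for a Pendry R-factor, which in practice requires a full dynamical LEED calculation comparing computed and measured I/V curves.

    ```python
    # Driving a parameter refinement with CMA-ES (placeholder objective, not real LEED).
    import numpy as np
    import cma

    def pendry_r_factor(params):
        # Placeholder: distance of trial structural parameters from an assumed "true"
        # geometry. A real refinement would compare calculated and measured I/V curves.
        target = np.array([0.12, -0.05, 0.30, 0.00])
        return float(np.sum((np.asarray(params) - target) ** 2))

    x0 = [0.0, 0.0, 0.0, 0.0]            # initial displacements (arbitrary units)
    es = cma.CMAEvolutionStrategy(x0, 0.2, {"seed": 1, "verbose": -9})
    while not es.stop():
        candidates = es.ask()                         # sample a generation of trial structures
        es.tell(candidates, [pendry_r_factor(c) for c in candidates])
    print("refined parameters:", np.round(es.result.xbest, 3))
    ```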

  15. Screening for Human Immunodeficiency Virus, Hepatitis B Virus, Hepatitis C Virus, and Treponema pallidum by Blood Testing Using a Bio-Flash Technology-Based Algorithm before Gastrointestinal Endoscopy

    PubMed Central

    Zhen, Chen; QuiuLi, Zhang; YuanQi, An; Casado, Verónica Vocero; Fan, Yuan

    2016-01-01

    Currently, conventional enzyme immunoassays which use manual gold immunoassays and colloidal tests (GICTs) are used as screening tools to detect Treponema pallidum (syphilis), hepatitis B virus (HBV), hepatitis C virus (HCV), human immunodeficiency virus type 1 (HIV-1), and HIV-2 in patients undergoing surgery. The present observational, cross-sectional study compared the sensitivity, specificity, and work flow characteristics of the conventional algorithm with manual GICTs with those of a newly proposed algorithm that uses the automated Bio-Flash technology as a screening tool in patients undergoing gastrointestinal (GI) endoscopy. A total of 956 patients were examined for the presence of serological markers of infection with HIV-1/2, HCV, HBV, and T. pallidum. The proposed algorithm with the Bio-Flash technology was superior for the detection of all markers (100.0% sensitivity and specificity for detection of anti-HIV and anti-HCV antibodies, HBV surface antigen [HBsAg], and T. pallidum) compared with the conventional algorithm based on the manual method (80.0% sensitivity and 98.6% specificity for the detection of anti-HIV, 75.0% sensitivity for the detection of anti-HCV, 94.7% sensitivity for the detection of HBsAg, and 100% specificity for the detection of anti-HCV and HBsAg) in these patients. The automated Bio-Flash technology-based screening algorithm also reduced the operation time by 85.0% (205 min) per day, saving up to 24 h/week. In conclusion, the use of the newly proposed screening algorithm based on the automated Bio-Flash technology can provide an advantage over the use of conventional algorithms based on manual methods for screening for HIV, HBV, HCV, and syphilis before GI endoscopy. PMID:27707942

  16. Screening for Human Immunodeficiency Virus, Hepatitis B Virus, Hepatitis C Virus, and Treponema pallidum by Blood Testing Using a Bio-Flash Technology-Based Algorithm before Gastrointestinal Endoscopy.

    PubMed

    Jun, Zhou; Zhen, Chen; QuiuLi, Zhang; YuanQi, An; Casado, Verónica Vocero; Fan, Yuan

    2016-12-01

    Currently, conventional enzyme immunoassays which use manual gold immunoassays and colloidal tests (GICTs) are used as screening tools to detect Treponema pallidum (syphilis), hepatitis B virus (HBV), hepatitis C virus (HCV), human immunodeficiency virus type 1 (HIV-1), and HIV-2 in patients undergoing surgery. The present observational, cross-sectional study compared the sensitivity, specificity, and work flow characteristics of the conventional algorithm with manual GICTs with those of a newly proposed algorithm that uses the automated Bio-Flash technology as a screening tool in patients undergoing gastrointestinal (GI) endoscopy. A total of 956 patients were examined for the presence of serological markers of infection with HIV-1/2, HCV, HBV, and T. pallidum. The proposed algorithm with the Bio-Flash technology was superior for the detection of all markers (100.0% sensitivity and specificity for detection of anti-HIV and anti-HCV antibodies, HBV surface antigen [HBsAg], and T. pallidum) compared with the conventional algorithm based on the manual method (80.0% sensitivity and 98.6% specificity for the detection of anti-HIV, 75.0% sensitivity for the detection of anti-HCV, 94.7% sensitivity for the detection of HBsAg, and 100% specificity for the detection of anti-HCV and HBsAg) in these patients. The automated Bio-Flash technology-based screening algorithm also reduced the operation time by 85.0% (205 min) per day, saving up to 24 h/week. In conclusion, the use of the newly proposed screening algorithm based on the automated Bio-Flash technology can provide an advantage over the use of conventional algorithms based on manual methods for screening for HIV, HBV, HCV, and syphilis before GI endoscopy.

  17. Investigation of registration algorithms for the automatic tile processing system

    NASA Technical Reports Server (NTRS)

    Tamir, Dan E.

    1995-01-01

    The Robotic Tile Inspection System (RTPS), under development at NASA-KSC, is expected to automate the processes of post-flight re-waterproofing and inspection of the Shuttle heat-absorbing tiles. An important task of the robot vision sub-system is to register the 'real-world' coordinates with the coordinates of the robot model of the Shuttle tiles. The model coordinates relate to a tile data-base and pre-flight tile images. In the registration process, current (post-flight) images are aligned with pre-flight images to detect the rotation and translation displacement required for the coordinate systems rectification. The research activities performed this summer included study and evaluation of the registration algorithm that is currently implemented by the RTPS, as well as investigation of the utility of other registration algorithms. It has been found that the current algorithm is not robust enough. This algorithm has a success rate of less than 80% and is, therefore, not suitable for complying with the requirements of the RTPS. Modifications to the current algorithm have been developed and tested. These modifications can improve the performance of the registration algorithm in a significant way. However, this improvement is not sufficient to satisfy system requirements. A new algorithm for registration has been developed and tested. This algorithm presented a very high degree of robustness, with a success rate of 96%.
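
    As a general illustration of image registration (not the RTPS algorithm or its replacement), the sketch below recovers a pure translation between two images with FFT-based phase correlation; the synthetic images and shift are assumptions.

    ```python
    # Phase correlation: estimate the integer translation between two images.
    import numpy as np

    def phase_correlation_shift(ref, cur):
        """Estimate the integer (dy, dx) shift by which `cur` is displaced relative to `ref`."""
        F_ref, F_cur = np.fft.fft2(ref), np.fft.fft2(cur)
        cross_power = np.conj(F_ref) * F_cur
        cross_power /= np.abs(cross_power) + 1e-12
        correlation = np.fft.ifft2(cross_power).real
        dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
        # Map large positive indices back to negative shifts (wrap-around).
        dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
        dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
        return int(dy), int(dx)

    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))
    cur = np.roll(ref, shift=(5, -3), axis=(0, 1))      # synthetic "post-flight" image
    print(phase_correlation_shift(ref, cur))            # expect (5, -3)
    ```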

  18. HPTN 071 (PopART): Rationale and design of a cluster-randomised trial of the population impact of an HIV combination prevention intervention including universal testing and treatment – a study protocol for a cluster randomised trial

    PubMed Central

    2014-01-01

    Background Effective interventions to reduce HIV incidence in sub-Saharan Africa are urgently needed. Mathematical modelling and the HIV Prevention Trials Network (HPTN) 052 trial results suggest that universal HIV testing combined with immediate antiretroviral treatment (ART) should substantially reduce incidence and may eliminate HIV as a public health problem. We describe the rationale and design of a trial to evaluate this hypothesis. Methods/Design A rigorously-designed trial of universal testing and treatment (UTT) interventions is needed because: i) it is unknown whether these interventions can be delivered to scale with adequate uptake; ii) there are many uncertainties in the models such that the population-level impact of these interventions is unknown; and iii) there are potential adverse effects including sexual risk disinhibition, HIV-related stigma, over-burdening of health systems, poor adherence, toxicity, and drug resistance. In the HPTN 071 (PopART) trial, 21 communities in Zambia and South Africa (total population 1.2 m) will be randomly allocated to three arms. Arm A will receive the full PopART combination HIV prevention package including annual home-based HIV testing, promotion of medical male circumcision for HIV-negative men, and offer of immediate ART for those testing HIV-positive; Arm B will receive the full package except that ART initiation will follow current national guidelines; Arm C will receive standard of care. A Population Cohort of 2,500 adults will be randomly selected in each community and followed for 3 years to measure the primary outcome of HIV incidence. Based on model projections, the trial will be well-powered to detect predicted effects on HIV incidence and secondary outcomes. Discussion Trial results, combined with modelling and cost data, will provide short-term and long-term estimates of cost-effectiveness of UTT interventions. Importantly, the three-arm design will enable assessment of how much could be achieved by

  19. Including Jews in Multiculturalism.

    ERIC Educational Resources Information Center

    Langman, Peter F.

    1995-01-01

    Discusses reasons for the lack of attention to Jews as an ethnic minority within multiculturalism both by Jews and non-Jews; why Jews and Jewish issues need to be included; and addresses some of the issues involved in the ethical treatment of Jewish clients. (Author)

  20. The VITRO Score (Von Willebrand Factor Antigen/Thrombocyte Ratio) as a New Marker for Clinically Significant Portal Hypertension in Comparison to Other Non-Invasive Parameters of Fibrosis Including ELF Test

    PubMed Central

    Hametner, Stephanie; Ferlitsch, Arnulf; Ferlitsch, Monika; Etschmaier, Alexandra; Schöfl, Rainer; Ziachehabi, Alexander; Maieron, Andreas

    2016-01-01

    Background Clinically significant portal hypertension (CSPH), defined as hepatic venous pressure gradient (HVPG) ≥10 mmHg, causes major complications. HVPG is not always available, so a non-invasive tool to diagnose CSPH would be useful. VWF-Ag can be used to diagnose CSPH. Using the VITRO score (the VWF-Ag/platelet ratio) instead of VWF-Ag itself improves the diagnostic accuracy of detecting cirrhosis/fibrosis in HCV patients. Aim This study tested the diagnostic accuracy of the VITRO score for detecting CSPH compared to HVPG measurement. Methods All patients underwent HVPG testing and were categorised as CSPH or no CSPH. The following patient data were determined: CPS, D’Amico stage, VITRO score, APRI and transient elastography (TE). Results The analysis included 236 patients; 170 (72%) were male, and the median age was 57.9 (35.2–76.3; 95% CI). Disease aetiology included ALD (39.4%), HCV (23.4%), NASH (12.3%), other (8.1%) and unknown (11.9%). The CPS showed 140 patients (59.3%) with CPS A; 56 (23.7%) with CPS B; and 18 (7.6%) with CPS C. 136 patients (57.6%) had compensated and 100 (42.4%) had decompensated cirrhosis; 83.9% had HVPG ≥10 mmHg. The VWF-Ag and the VITRO score increased significantly with worsening HVPG categories (P<0.0001). ROC analysis was performed for the detection of CSPH and showed AUC values of 0.92 for TE, 0.86 for VITRO score, 0.79 for VWF-Ag, 0.68 for ELF and 0.62 for APRI. Conclusion The VITRO score is an easy way to diagnose CSPH independently of CPS in routine clinical work and may improve the management of patients with cirrhosis. PMID:26895398

  1. The challenges of implementing and testing two signal processing algorithms for high rep-rate Coherent Doppler Lidar for wind sensing

    NASA Astrophysics Data System (ADS)

    Abdelazim, S.; Santoro, D.; Arend, M.; Moshary, F.; Ahmed, S.

    2015-05-01

    In this paper, we present two signal processing algorithms implemented using an FPGA. The first algorithm involves explicit time gating of received signals corresponding to a desired spatial resolution, performing a Fast Fourier Transform (FFT) calculation on each individual time gate, taking the square modulus of the FFT to form a power spectrum, and then accumulating these power spectra over 10k return signals. The second algorithm involves calculating the autocorrelation of the backscattered signals and then accumulating the autocorrelation over 10k pulses. Efficient implementation of each of these two signal processing algorithms on an FPGA is challenging because it requires trade-offs between retaining the full data word width, managing the amount of on-chip memory used, and respecting the constraints imposed by the data width of the FPGA. A description of the approach used to manage these trade-offs for each of the two signal processing algorithms is presented and explained in this article. Results of atmospheric measurements obtained through these two embedded programming techniques are also presented.
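
    A software sketch (numpy) of the first algorithm described above follows: gate each lidar return, FFT each gate, take the squared modulus, and accumulate the power spectra over many pulses. The pulse count, gate length and synthetic signal are assumptions; the real system performs this on an FPGA.

    ```python
    # Gated FFT power-spectrum accumulation over many simulated lidar returns.
    import numpy as np

    n_pulses, gate_len, n_gates, fs = 1000, 64, 8, 100e6
    rng = np.random.default_rng(0)
    accumulated = np.zeros((n_gates, gate_len))

    for _ in range(n_pulses):
        # Synthetic return: noise plus a 5 MHz Doppler tone (fixed here for simplicity).
        t = np.arange(n_gates * gate_len) / fs
        signal = rng.normal(0, 1, t.size) + np.sin(2 * np.pi * 5e6 * t)
        gates = signal.reshape(n_gates, gate_len)          # explicit time gating
        spectra = np.abs(np.fft.fft(gates, axis=1)) ** 2   # power spectrum per gate
        accumulated += spectra                             # accumulate over pulses

    peak_bins = accumulated.argmax(axis=1)
    print("dominant frequency bin per range gate:", peak_bins)
    ```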

  2. [Clinical algorithms in the treatment of status epilepticus in children].

    PubMed

    Zubcević, S; Buljina, A; Gavranović, M; Uzicanin, S; Catibusić, F

    1999-01-01

    The clinical algorithm is a text format specially suited for presenting a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. Clinical algorithms are compared with decision analysis as to their clinical usefulness. We have tried to develop a clinical algorithm for managing status epilepticus in children that is applicable to our conditions, since most published algorithms on this subject include drugs and procedures that are not available at our hospital. We identified performance requirements, defined the set of problems to be solved as well as who would solve them, developed several draft versions and discussed them with experts in this field. The algorithm was tested and revised, and an acceptable graphical presentation was achieved. In the algorithm we tried to define clearly how the clinician should make each decision and receive appropriate feedback. Over one year of use we found this algorithm very useful in managing status epilepticus in children, as well as in teaching young doctors both the specifics of algorithms and this particular condition. Their feedback is that it provides a framework that facilitates thinking about clinical problems. We sometimes hear the objection that algorithms may not apply to a specific patient; this objection is based on a misunderstanding of how algorithms are used and should be corrected by a proper explanation of their use. We conclude that methods should be sought for writing clinical algorithms that represent expert consensus. Clinical algorithms can then be written for many areas of medical decision making that can be standardized, and medical practice would then be presented to students more effectively, more accurately, and with better understanding.

  3. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  4. Neural-Network-Biased Genetic Algorithms for Materials Design: Evolutionary Algorithms That Learn.

    PubMed

    Patra, Tarak K; Meenakshisundaram, Venkatesh; Hung, Jui-Hsiang; Simmons, David S

    2017-02-13

    Machine learning has the potential to dramatically accelerate high-throughput approaches to materials design, as demonstrated by successes in biomolecular design and hard materials design. However, in the search for new soft materials exhibiting properties and performance beyond those previously achieved, machine learning approaches are frequently limited by two shortcomings. First, because they are intrinsically interpolative, they are better suited to the optimization of properties within the known range of accessible behavior than to the discovery of new materials with extremal behavior. Second, they require large pre-existing data sets, which are frequently unavailable and prohibitively expensive to produce. Here we describe a new strategy, the neural-network-biased genetic algorithm (NBGA), for combining genetic algorithms, machine learning, and high-throughput computation or experiment to discover materials with extremal properties in the absence of pre-existing data. Within this strategy, predictions from a progressively constructed artificial neural network are employed to bias the evolution of a genetic algorithm, with fitness evaluations performed via direct simulation or experiment. In effect, this strategy gives the evolutionary algorithm the ability to "learn" and draw inferences from its experience to accelerate the evolutionary process. We test this algorithm against several standard optimization problems and polymer design problems and demonstrate that it matches and typically exceeds the efficiency and reproducibility of standard approaches including a direct-evaluation genetic algorithm and a neural-network-evaluated genetic algorithm. The success of this algorithm in a range of test problems indicates that the NBGA provides a robust strategy for employing informatics-accelerated high-throughput methods to accelerate materials design in the absence of pre-existing data.
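
    The sketch below is a hedged illustration in the spirit of a neural-network-biased genetic algorithm: a small neural surrogate, progressively retrained on all evaluations so far, pre-screens offspring so that only the most promising receive "expensive" direct evaluations. Population sizes, the test objective and the network settings are assumptions, not the NBGA's actual parameters.

    ```python
    # Surrogate-biased genetic algorithm with direct fitness evaluation of chosen offspring.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def expensive_fitness(x):
        # Stand-in for a direct simulation or experiment (assumption).
        return -np.sum((x - 0.7) ** 2)

    rng = np.random.default_rng(0)
    dim, pop_size, n_offspring, generations = 6, 12, 48, 15

    pop = rng.uniform(0, 1, (pop_size, dim))
    fit = np.array([expensive_fitness(x) for x in pop])
    archive_X, archive_y = pop.copy(), fit.copy()

    for g in range(generations):
        # Progressively retrain the surrogate on every evaluation made so far.
        surrogate = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        surrogate.fit(archive_X, archive_y)

        # Crossover + mutation to produce a large batch of candidate offspring.
        parents = pop[rng.integers(0, pop_size, (n_offspring, 2))]
        alpha = rng.random((n_offspring, 1))
        offspring = np.clip(alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
                            + 0.05 * rng.standard_normal((n_offspring, dim)), 0, 1)

        # The network biases evolution: only its top-ranked offspring get real evaluations.
        chosen = offspring[np.argsort(surrogate.predict(offspring))[-pop_size:]]
        chosen_fit = np.array([expensive_fitness(x) for x in chosen])

        archive_X = np.vstack([archive_X, chosen])
        archive_y = np.concatenate([archive_y, chosen_fit])
        keep = np.argsort(np.concatenate([fit, chosen_fit]))[-pop_size:]
        pop = np.vstack([pop, chosen])[keep]
        fit = np.concatenate([fit, chosen_fit])[keep]

    print("best fitness found:", fit.max().round(4))
    ```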

  5. Algorithm for reaction classification.

    PubMed

    Kraut, Hans; Eiblmaier, Josef; Grethe, Guenter; Löw, Peter; Matuszczyk, Heinz; Saller, Heinz

    2013-11-25

    Reaction classification has important applications, and many approaches to classification have been applied. Our own algorithm tests all maximum common substructures (MCS) between all reactant and product molecules in order to find an atom mapping containing the minimum chemical distance (MCD). Recent publications have concluded that new MCS algorithms need to be compared with existing methods in a reproducible environment, preferably on a generalized test set, yet the number of test sets available is small, and they are not truly representative of the range of reactions that occur in real reaction databases. We have designed a challenging test set of reactions and are making it publicly available and usable with InfoChem's software or other classification algorithms. We supply a representative set of example reactions, grouped into different levels of difficulty, from a large number of reaction databases that chemists actually encounter in practice, in order to demonstrate the basic requirements for a mapping algorithm to detect the reaction centers in a consistent way. We invite the scientific community to contribute to the future extension and improvement of this data set, to achieve the goal of a common standard.

  6. An Introduction to the Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Tian, Jian-quan; Miao, Dan-min; Zhu, Xia; Gong, Jing-jing

    2007-01-01

    Computerized adaptive testing (CAT) has unsurpassable advantages over traditional testing. It has become the mainstream in large scale examinations in modern society. This paper gives a brief introduction to CAT including differences between traditional testing and CAT, the principles of CAT, psychometric theory and computer algorithms of CAT, the…

  7. Nutritional therapies (including fosteum).

    PubMed

    Nieves, Jeri W

    2009-03-01

    Nutrition is important in promoting bone health and in managing an individual with low bone mass or osteoporosis. In adult women and men, known losses of bone mass and microarchitecture occur, and nutrition can help minimize these losses. In every patient, a healthy diet with adequate protein, fruits, vegetables, calcium, and vitamin D is required to maintain bone health. Recent reports on nutritional remedies for osteoporosis have highlighted the importance of calcium in youth and continued importance in conjunction with vitamin D as the population ages. It is likely that a calcium intake of 1200 mg/d is ideal, and there are some concerns about excessive calcium intakes. However, vitamin D intake needs to be increased in most populations. The ability of soy products, particularly genistein aglycone, to provide skeletal benefit has been recently studied, including some data that support a new medical food marketed as Fosteum (Primus Pharmaceuticals, Scottsdale, AZ).

  8. How Are Mate Preferences Linked with Actual Mate Selection? Tests of Mate Preference Integration Algorithms Using Computer Simulations and Actual Mating Couples

    PubMed Central

    Conroy-Beam, Daniel; Buss, David M.

    2016-01-01

    Prior mate preference research has focused on the content of mate preferences. Yet in real life, people must select mates among potentials who vary along myriad dimensions. How do people incorporate information on many different mate preferences in order to choose which partner to pursue? Here, in Study 1, we compare seven candidate algorithms for integrating multiple mate preferences in a competitive agent-based model of human mate choice evolution. This model shows that a Euclidean algorithm is the most evolvable solution to the problem of selecting fitness-beneficial mates. Next, across three studies of actual couples (Study 2: n = 214; Study 3: n = 259; Study 4: n = 294) we apply the Euclidean algorithm toward predicting mate preference fulfillment overall and preference fulfillment as a function of mate value. Consistent with the hypothesis that mate preferences are integrated according to a Euclidean algorithm, we find that actual mates lie close in multidimensional preference space to the preferences of their partners. Moreover, this Euclidean preference fulfillment is greater for people who are higher in mate value, highlighting theoretically-predictable individual differences in who gets what they want. These new Euclidean tools have important implications for understanding real-world dynamics of mate selection. PMID:27276030
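
    A minimal sketch of the Euclidean integration idea follows: a candidate's overall match is summarized by the Euclidean distance between a chooser's preference vector and the candidate's trait vector. The preference dimensions and values are made up for illustration.

    ```python
    # Euclidean distance in multidimensional preference space as a preference-integration rule.
    import numpy as np

    preferences = np.array([8.0, 6.5, 7.0, 5.0])        # chooser's ideals on 4 dimensions
    candidates = np.array([[7.5, 6.0, 6.8, 5.2],        # candidate A's traits
                           [9.0, 3.0, 4.0, 8.0],        # candidate B's traits
                           [6.0, 6.5, 7.2, 4.8]])       # candidate C's traits

    distances = np.linalg.norm(candidates - preferences, axis=1)
    print("preference fulfillment (smaller distance = better):", distances.round(2))
    print("best-matching candidate index:", int(distances.argmin()))
    ```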

  9. Test of the Semi-Analytical Case 1 and Gelbstoff Case 2 SeaWiFS Algorithm with a Global Data Set

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1997-01-01

    The algorithm-development activities at USF during the second half of 1997 have concentrated on data collection and theoretical modeling. Six abstracts were submitted for presentation at the AGU conference in San Diego, California during February 9-13, 1998. Four papers were submitted to JGR and Applied Optics for publication.

  10. Refraction, including prisms.

    PubMed

    Hiatt, R L

    1991-02-01

    The literature in the past year on refraction is replete with several isolated but very important topics that have been of interest to strabismologists and refractionists for many decades. The refractive changes in scleral buckling procedures include an increase in axial length as well as an increase in myopia, as would be expected. Tinted lenses in dyslexia show little positive effect in the nonasthmatic patients in one study. The use of spectacles or bifocals as a way to control increase in myopia is refuted in another report. It has been shown that in accommodative esotropia not all patients will be able to escape the use of bifocals in the teenage years, even though surgery might be performed. The hope that disposable contact lenses would cut down on the instance of giant papillary conjunctivitis and keratitis has been given some credence, and the conventional theory that sclerosis alone is the cause of presbyopia is attacked. Also, gas permeable bifocal contact lenses are reviewed and the difficulties of correcting presbyopia by this method outlined. The practice of giving an aphakic less bifocal addition instead of a nonaphakic, based on the presumption of increased effective power, is challenged. In the review of prisms, the majority of articles concern prism adaption. The most significant report is that of the Prism Adaptation Study Research Group (Arch Ophthalmol 1990, 108:1248-1256), showing that acquired esotropia in particular has an increased incidence of stable and full corrections surgically in the prism adaptation group versus the control group.(ABSTRACT TRUNCATED AT 250 WORDS)

  11. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
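
    As a generic illustration of maximum-entropy spectral estimation (equivalent to fitting an autoregressive model), the sketch below solves the Yule-Walker equations and evaluates the resulting AR spectrum. The AR order and test signal are assumptions, and this is not the FORTRAN 77 code described above.

    ```python
    # Maximum-entropy (autoregressive) power spectral estimate via Yule-Walker.
    import numpy as np

    def max_entropy_spectrum(x, order, n_freq=512):
        """Return frequencies (cycles/sample) and an AR(order) power spectral estimate."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        # Biased autocorrelation estimates r[0..order].
        r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(order + 1)])
        # Solve the Yule-Walker system R a = r[1:] for the AR coefficients.
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        a = np.linalg.solve(R, r[1:])
        sigma2 = r[0] - np.dot(a, r[1:])                 # prediction-error variance
        freqs = np.linspace(0, 0.5, n_freq)
        k = np.arange(1, order + 1)
        denom = np.abs(1 - np.exp(-2j * np.pi * freqs[:, None] * k) @ a) ** 2
        return freqs, sigma2 / denom

    # Two closely spaced sinusoids in noise: a classic test of resolution.
    rng = np.random.default_rng(0)
    n = np.arange(256)
    signal = (np.sin(2 * np.pi * 0.20 * n) + np.sin(2 * np.pi * 0.22 * n)
              + 0.5 * rng.standard_normal(n.size))
    freqs, psd = max_entropy_spectrum(signal, order=20)
    print("frequencies of the largest spectral values:", freqs[np.argsort(psd)[-3:]].round(3))
    ```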

  12. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
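
    A minimal sketch of the shift-and-mask idea described above: search for a constant right shift and a small mask under which every key in a static set maps to a distinct value, so membership can later be checked without secondary hashing or collision searches. The key values and search ranges are hypothetical, and this is an illustration of the technique rather than the NASA implementation.

```python
def synthesize_shift_mask(keys, max_shift=32, mask_bits=8):
    """Search for a (shift, mask) pair under which every key maps to a unique value.

    Shift each key right by a constant, apply a mask that isolates a small
    field of bits, and accept the first combination with no collisions.
    """
    for shift in range(max_shift):
        for width in range(1, mask_bits + 1):
            mask = (1 << width) - 1
            mapped = [(k >> shift) & mask for k in keys]
            if len(set(mapped)) == len(keys):
                return shift, mask, mapped
    return None  # no collision-free combination found in the search range

# Hypothetical static key set.
keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
print(synthesize_shift_mask(keys))
```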

  13. DETECTION OF SUBSURFACE FACILITIES INCLUDING NON-METALLIC PIPE

    SciTech Connect

    Mr. Herb Duvoisin

    2003-05-26

    CyTerra has leveraged our unique shallow-buried plastic target detection technology, developed under US Army contracts, toward the detection of deeper buried subsurface facilities, including nonmetallic pipe. This Final Report describes a portable, low-cost, real-time, and user-friendly subsurface plastic pipe detector (LULU - Low Cost Utility Location Unit) that supports the goal of maintaining the integrity and reliability of the nation's natural gas transmission and distribution network by preventing third-party damage through the detection of potential infringements. Except for frequency band and antenna size, the LULU unit is almost identical to those developed for the US Army. CyTerra designed, fabricated, and tested two frequency-stepped GPR systems spanning the frequencies of importance (200 to 1600 MHz), one low- and one high-frequency system. Data collection and testing were done at a variety of locations (selected for soil type variations) on both targets of opportunity and selected buried targets. We developed algorithms and signal processing techniques that provide for the automatic detection of buried utility lines. The real-time output produces a sound as the radar passes over the utility line, alerting the operator to the presence of a buried object. Our unique, low-noise/high-performance RF hardware, combined with our field-tested detection algorithms, represents an important advancement toward achieving the DOE potential infringement goal.

  14. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  15. Approximation algorithms

    PubMed Central

    Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.

    1997-01-01

    Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525

  16. Parallel Clustering Algorithms for Structured AMR

    SciTech Connect

    Gunney, B T; Wissink, A M; Hysom, D A

    2005-10-26

    We compare several different parallel implementation approaches for the clustering operations performed during adaptive gridding operations in patch-based structured adaptive mesh refinement (SAMR) applications. Specifically, we target the clustering algorithm of Berger and Rigoutsos (BR91), which is commonly used in many SAMR applications. The baseline for comparison is a simplistic parallel extension of the original algorithm that works well for up to O(10^2) processors. Our goal is a clustering algorithm for machines of up to O(10^5) processors, such as the 64K-processor IBM BlueGene/Light system. We first present an algorithm that avoids the unneeded communications of the simplistic approach to improve the clustering speed by up to an order of magnitude. We then present a new task-parallel implementation to further reduce communication wait time, adding another order of magnitude of improvement. The new algorithms also exhibit more favorable scaling behavior for our test problems. Performance is evaluated on a number of large scale parallel computer systems, including a 16K-processor BlueGene/Light system.

  17. Lightning detection and exposure algorithms for smartphones

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Shao, Xiaopeng; Wang, Lin; Su, Laili; Huang, Yining

    2015-05-01

    This study focuses on the key theory of lightning detection and exposure, together with experiments. Firstly, an algorithm based on the differential operation between two adjacent frames is selected to remove the background information and extract the lightning signal, and a threshold detection algorithm is applied to achieve precise detection of lightning. Secondly, an algorithm is proposed to obtain the scene exposure value, which can automatically detect the external illumination status. Subsequently, a look-up table can be built from the relationship between exposure value and average image brightness to achieve rapid automatic exposure. Finally, based on a USB 3.0 industrial camera with a CMOS imaging sensor, a hardware test platform is established and experiments are carried out on this platform to verify the performance of the proposed algorithms. The algorithms can effectively and quickly capture clear lightning pictures, even in difficult nighttime scenes, which should provide useful support to the smartphone industry, since current exposure methods in smartphones often miss the capture or produce overexposed or underexposed pictures.
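
    A minimal sketch of the frame-differencing and threshold step described above, using synthetic frames; the threshold value and frame sizes are arbitrary, and the exposure look-up-table stage is not shown.

```python
import numpy as np

def detect_lightning(prev_frame, curr_frame, threshold=40):
    """Flag a lightning event by differencing two adjacent grayscale frames.

    The absolute difference removes the (approximately static) background,
    and a simple threshold test decides whether a bright transient is present.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    return bool(mask.any()), mask

# Synthetic 8-bit frames: the second frame contains a bright patch.
prev_frame = np.full((120, 160), 30, dtype=np.uint8)
curr_frame = prev_frame.copy()
curr_frame[40:60, 70:90] = 220
hit, mask = detect_lightning(prev_frame, curr_frame)
print(hit, int(mask.sum()))  # True, 400
```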

  18. Implementing a self-structuring data learning algorithm

    NASA Astrophysics Data System (ADS)

    Graham, James; Carson, Daniel; Ternovskiy, Igor

    2016-05-01

    In this paper, we elaborate on what we did to implement our self-structuring data learning algorithm. To recap, we are working to develop a data learning algorithm that will eventually be capable of goal-driven pattern learning and extrapolation of more complex patterns from less complex ones. At this point we have developed a conceptual framework for the algorithm, but have yet to discuss our actual implementation and the considerations and shortcuts we needed to take to create said implementation. We will elaborate on our initial setup of the algorithm and the scenarios we used to test our early-stage algorithm. While we want this to be a general algorithm, it is necessary to start with a simple scenario or two to provide a viable development and testing environment. To that end, our discussion will be geared toward what we included in our initial implementation and why, as well as what concerns we may have. In the future, we expect to be able to apply our algorithm to a more general approach, but to do so within a reasonable time, we needed to pick a place to start.

  19. Fast voxel and polygon ray-tracing algorithms in intensity modulated radiation therapy treatment planning.

    PubMed

    Fox, Christopher; Romeijn, H Edwin; Dempsey, James F

    2006-05-01

    We present work on combining three algorithms to improve ray-tracing efficiency in radiation therapy dose computation. The three algorithms include: an improved point-in-polygon algorithm, an incremental voxel ray-tracing algorithm, and stereographic projection of beamlets for voxel truncation. The point-in-polygon and incremental voxel ray-tracing algorithms have been used in computer graphics and nuclear medicine applications, while the stereographic projection algorithm was developed by our group. These algorithms demonstrate significant improvements over the current standard algorithms in the peer-reviewed literature, i.e., the polygon and voxel ray-tracing algorithms of Siddon for voxel classification (point-in-polygon testing) and dose computation, respectively, and radius testing for voxel truncation. The presented polygon ray-tracing technique was tested on 10 intensity modulated radiation therapy (IMRT) treatment planning cases that required the classification of between 0.58 and 2.0 million voxels on a 2.5 mm isotropic dose grid into 1-4 targets and 5-14 structures represented as extruded polygons (a.k.a. Siddon prisms). Incremental voxel ray tracing and voxel truncation employing virtual stereographic projection were tested on the same IMRT treatment planning cases, where voxel dose was required for 230-2400 beamlets using a finite-size pencil-beam algorithm. A 100- to 360-fold CPU time improvement over Siddon's method was observed for the polygon ray-tracing algorithm in classifying voxels for target and structure membership. A 2.6- to 3.1-fold reduction in CPU time over current algorithms was found for the implementation of incremental ray tracing. Additionally, voxel truncation via stereographic projection was observed to be 11-25 times faster than the radial-testing beamlet-extent approach and was further improved 1.7- to 2.0-fold through point classification using the method of translation over the cross-product technique.
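
    The incremental voxel ray-tracing idea can be illustrated with an Amanatides-Woo-style traversal, which steps from voxel to voxel by tracking the parametric distance to the next grid boundary on each axis instead of intersecting the ray with every voxel plane independently. This is a generic sketch of that class of algorithm, not the authors' dose-computation code; the grid, origin, and direction are arbitrary.

```python
import numpy as np

def traverse_voxels(origin, direction, grid_shape, voxel_size=1.0):
    """Incremental (Amanatides-Woo style) voxel traversal along a ray."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)

    voxel = np.floor(origin / voxel_size).astype(int)
    step = np.where(direction >= 0, 1, -1)
    # Parametric distance along the ray to the first boundary on each axis.
    next_boundary = (voxel + (step > 0)) * voxel_size
    with np.errstate(divide="ignore", invalid="ignore"):
        t_max = np.where(direction != 0, (next_boundary - origin) / direction, np.inf)
        t_delta = np.where(direction != 0, voxel_size / np.abs(direction), np.inf)

    visited = []
    while np.all(voxel >= 0) and np.all(voxel < grid_shape):
        visited.append(tuple(int(v) for v in voxel))
        axis = int(np.argmin(t_max))      # axis whose boundary is crossed next
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return visited

print(traverse_voxels([0.5, 0.5, 0.5], [1.0, 0.7, 0.2], (8, 8, 8)))
```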

  20. A Frequency-Domain Substructure System Identification Algorithm

    NASA Technical Reports Server (NTRS)

    Blades, Eric L.; Craig, Roy R., Jr.

    1996-01-01

    A new frequency-domain system identification algorithm is presented for system identification of substructures, such as payloads to be flown aboard the Space Shuttle. In the vibration test, all interface degrees of freedom where the substructure is connected to the carrier structure are either subjected to active excitation or are supported by a test stand with the reaction forces measured. The measured frequency-response data is used to obtain a linear, viscous-damped model with all interface-degree of freedom entries included. This model can then be used to validate analytical substructure models. This procedure makes it possible to obtain not only the fixed-interface modal data associated with a Craig-Bampton substructure model, but also the data associated with constraint modes. With this proposed algorithm, multiple-boundary-condition tests are not required, and test-stand dynamics is accounted for without requiring a separate modal test or finite element modeling of the test stand. Numerical simulations are used in examining the algorithm's ability to estimate valid reduced-order structural models. The algorithm's performance when frequency-response data covering narrow and broad frequency bandwidths is used as input is explored. Its performance when noise is added to the frequency-response data and the use of different least squares solution techniques are also examined. The identified reduced-order models are also compared for accuracy with other test-analysis models and a formulation for a Craig-Bampton test-analysis model is also presented.

  1. Algorithms and Libraries

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our inquiry into algorithms and applications that would benefit from a latency-tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks a long and unpredictable latency due to remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, latency, etc. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault-tolerance and computation migration of numerical algorithms and software. During the initial phase we tried to increase our understanding of the bottlenecks in single processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results we achieved in this study we are planning to study other architectures of interest, including development of cost models, and developing code generators appropriate to these architectures.

  2. Approximate string matching algorithms for limited-vocabulary OCR output correction

    NASA Astrophysics Data System (ADS)

    Lasko, Thomas A.; Hauser, Susan E.

    2000-12-01

    Five methods for matching words mistranslated by optical character recognition to their most likely match in a reference dictionary were tested on data from the archives of the National Library of Medicine. The methods, including an adaptation of the cross correlation algorithm, the generic edit distance algorithm, the edit distance algorithm with a probabilistic substitution matrix, Bayesian analysis, and Bayesian analysis on an actively thinned reference dictionary were implemented and their accuracy rates compared. Of the five, the Bayesian algorithm produced the most correct matches (87%), and had the advantage of producing scores that have a useful and practical interpretation.
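
    As a concrete illustration of the edit-distance family of matchers evaluated above, the sketch below implements the classic dynamic-programming edit distance and uses it to pick the closest dictionary entry for an OCR-garbled word; the dictionary, the example word, and the pluggable substitution-cost hook are hypothetical.

```python
def edit_distance(a, b, sub_cost=lambda x, y: 0 if x == y else 1):
    """Classic dynamic-programming edit distance.

    A probabilistic substitution matrix (as described above) can be plugged
    in by passing a different sub_cost function.
    """
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                           # deletion
                          d[i][j - 1] + 1,                           # insertion
                          d[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]))
    return d[m][n]

# Match an OCR-garbled word against a small reference dictionary.
dictionary = ["medicine", "medline", "medical"]
word = "rnedicine"  # 'm' misread as 'rn'
print(min(dictionary, key=lambda w: edit_distance(word, w)))  # 'medicine'
```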

  3. A Rotor Tip Vortex Tracing Algorithm for Image Post-Processing

    NASA Technical Reports Server (NTRS)

    Overmeyer, Austin D.

    2015-01-01

    A neurite tracing algorithm, originally developed for medical image processing, was used to trace the location of the rotor tip vortex in density gradient flow visualization images. The tracing algorithm was applied to several representative test images to form case studies. The accuracy of the tracing algorithm was compared to two current methods including a manual point and click method and a cross-correlation template method. It is shown that the neurite tracing algorithm can reduce the post-processing time to trace the vortex by a factor of 10 to 15 without compromising the accuracy of the tip vortex location compared to other methods presented in literature.

  4. Corrective Action Investigation Plan for Corrective Action Unit 165: Areas 25 and 26 Dry Well and Washdown Areas, Nevada Test Site, Nevada (including Record of Technical Change Nos. 1, 2, and 3) (January 2002, Rev. 0)

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office

    2002-01-09

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 165 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 165 consists of eight Corrective Action Sites (CASs): CAS 25-20-01, Lab Drain Dry Well; CAS 25-51-02, Dry Well; CAS 25-59-01, Septic System; CAS 26-59-01, Septic System; CAS 25-07-06, Train Decontamination Area; CAS 25-07-07, Vehicle Washdown; CAS 26-07-01, Vehicle Washdown Station; and CAS 25-47-01, Reservoir and French Drain. All eight CASs are located in the Nevada Test Site, Nevada. Six of these CASs are located in Area 25 facilities and two CASs are located in Area 26 facilities. The eight CASs at CAU 165 consist of dry wells, septic systems, decontamination pads, and a reservoir. The six CASs in Area 25 are associated with the Nuclear Rocket Development Station that operated from 1958 to 1973. The two CASs in Area 26 are associated with facilities constructed for Project Pluto, a series of nuclear reactor tests conducted between 1961 and 1964 to develop a nuclear-powered ramjet engine. Based on site history, the scope of this plan will be a two-phased approach to investigate the possible presence of hazardous and/or radioactive constituents at concentrations that could potentially pose a threat to human health and the environment. The Phase I analytical program for most CASs will include volatile organic compounds, semivolatile organic compounds, Resource Conservation and Recovery Act metals, total petroleum hydrocarbons, polychlorinated biphenyls, and radionuclides. If laboratory data obtained from the Phase I investigation indicate the presence of contaminants of concern, the process will continue with a Phase II investigation to define the extent of contamination. Based on the results of

  5. Corrective Action Investigation Plan for Corrective Action Unit 5: Landfills, Nevada Test Site, Nevada (Rev. No.: 0) includes Record of Technical Change No. 1 (dated 9/17/2002)

    SciTech Connect

    IT Corporation, Las Vegas, NV

    2002-05-28

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 5 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 5 consists of eight Corrective Action Sites (CASs): 05-15-01, Sanitary Landfill; 05-16-01, Landfill; 06-08-01, Landfill; 06-15-02, Sanitary Landfill; 06-15-03, Sanitary Landfill; 12-15-01, Sanitary Landfill; 20-15-01, Landfill; 23-15-03, Disposal Site. Located between Areas 5, 6, 12, 20, and 23 of the Nevada Test Site (NTS), CAU 5 consists of unlined landfills used in support of disposal operations between 1952 and 1992. Large volumes of solid waste were produced from the projects which used the CAU 5 landfills. Waste disposed in these landfills may be present without appropriate controls (i.e., use restrictions, adequate cover) and hazardous and/or radioactive constituents may be present at concentrations and locations that could potentially pose a threat to human health and/or the environment. During the 1992 to 1995 time frame, the NTS was used for various research and development projects including nuclear weapons testing. Instead of managing solid waste at one or two disposal sites, the practice on the NTS was to dispose of solid waste in the vicinity of the project. A review of historical documentation, process knowledge, personal interviews, and inferred activities associated with this CAU identified the following as potential contaminants of concern: volatile organic compounds, semivolatile organic compounds, polychlorinated biphenyls, pesticides, petroleum hydrocarbons (diesel- and gasoline-range organics), Resource Conservation and Recovery Act Metals, plus nickel and zinc. A two-phase approach has been selected to collect information and generate data to satisfy needed resolution criteria

  6. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
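
    A minimal sketch of a wavelet-based fusion rule of the kind described above: average the approximation coefficients and keep the larger-magnitude detail coefficients from either source. It assumes the PyWavelets (pywt) package and uses synthetic images; the report's exact fusion rule and wavelet choice may differ.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_fuse(img_a, img_b, wavelet="db2"):
    """Fuse two co-registered images with a single-level 2-D DWT.

    Approximation coefficients are averaged (preserving overall intensity);
    detail coefficients take the larger magnitude from either source
    (preserving edges and spatial detail).
    """
    cA_a, details_a = pywt.dwt2(img_a.astype(float), wavelet)
    cA_b, details_b = pywt.dwt2(img_b.astype(float), wavelet)
    fused_cA = 0.5 * (cA_a + cA_b)
    fused_details = tuple(
        np.where(np.abs(da) >= np.abs(db), da, db)
        for da, db in zip(details_a, details_b)
    )
    return pywt.idwt2((fused_cA, fused_details), wavelet)

# Synthetic stand-ins for a panchromatic band and a multispectral band.
rng = np.random.default_rng(0)
pan = rng.random((128, 128))
ms_band = rng.random((128, 128))
fused = wavelet_fuse(pan, ms_band)
```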

  7. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  8. An innovative localisation algorithm for railway vehicles

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Malvezzi, M.; Pugi, L.; Ridolfi, A.; Rindi, A.; Vettori, G.

    2014-11-01

    The estimation strategy performs well even under degraded adhesion conditions and could be put on board high-speed railway vehicles; it represents an accurate and reliable solution. The IMU board is tested via a dedicated Hardware in the Loop (HIL) test rig, which includes an industrial robot able to replicate the motion of the railway vehicle. The performance of the innovative localisation algorithm was evaluated through the generated experimental outputs: the HIL test rig made it possible to test the proposed algorithm while avoiding expensive (in terms of time and cost) on-track tests, with encouraging results. In fact, the preliminary results show a significant improvement in position and speed estimation performance compared to that obtained with SCMT algorithms currently in use on the Italian railway network.

  9. Tests on a CAST 7 two-dimensional airfoil in a streamlining test section

    NASA Technical Reports Server (NTRS)

    Goodyear, M. J.

    1984-01-01

    A unique opportunity has arisen to test one and the same airfoil model of CAST-7 section in two wind tunnels having adaptive walled test sections. The tunnels are very similar in terms of size and the available range of test conditions, but differ principally in their wall setting algorithms. Detailed data from the tests of the model in the Southampton tunnel are included, with comparisons between various sources of data indicating that both adaptive walled test sections provide low-interference test conditions.

  10. Corrective Action Investigation Plan for Corrective Action Unit 214: Bunkers and Storage Areas Nevada Test Site, Nevada: Revision 0, Including Record of Technical Change No. 1 and No. 2

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2003-05-16

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 214 under the Federal Facility Agreement and Consent Order. Located in Areas 5, 11, and 25 of the Nevada Test Site, CAU 214 consists of nine Corrective Action Sites (CASs): 05-99-01, Fallout Shelters; 11-22-03, Drum; 25-99-12, Fly Ash Storage; 25-23-01, Contaminated Materials; 25-23-19, Radioactive Material Storage; 25-99-18, Storage Area; 25-34-03, Motor Dr/Gr Assembly (Bunker); 25-34-04, Motor Dr/Gr Assembly (Bunker); and 25-34-05, Motor Dr/Gr Assembly (Bunker). These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). The suspected contaminants and critical analytes for CAU 214 include oil (total petroleum hydrocarbons-diesel-range organics [TPH-DRO], polychlorinated biphenyls [PCBs]), pesticides (chlordane, heptachlor, 4,4-DDT), barium, cadmium, chromium, lubricants (TPH-DRO, TPH-gasoline-range organics [GRO]), and fly ash (arsenic). The land-use zones where CAU 214 CASs are located dictate that future land uses will be limited to nonresidential (i.e., industrial) activities. The results of this field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the corrective action decision document.

  11. The systems biology simulation core algorithm

    PubMed Central

    2013-01-01

    Background With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941

  12. Parallel Algorithms and Patterns

    SciTech Connect

    Robey, Robert W.

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
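
    As a concrete example of one of the listed patterns, the sketch below writes an inclusive prefix sum in the log-step (Hillis-Steele) form, where each pass updates all elements independently and could therefore run in parallel; here the passes are simply emulated with NumPy.

```python
import numpy as np

def inclusive_scan(values):
    """Inclusive prefix sum written as the log-step (Hillis-Steele) pattern.

    Each of the ~log2(n) passes can be executed in parallel across all
    elements, which is what makes the scan a useful parallel pattern.
    """
    x = np.asarray(values, dtype=float).copy()
    offset = 1
    while offset < len(x):
        shifted = np.concatenate([np.zeros(offset), x[:-offset]])
        x = x + shifted          # simultaneous update of every element
        offset *= 2
    return x

print(inclusive_scan([3, 1, 4, 1, 5, 9, 2, 6]))  # [ 3.  4.  8.  9. 14. 23. 25. 31.]
```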

  13. Improved Chaff Solution Algorithm

    DTIC Science & Technology

    2009-03-01

    Under the Technology Demonstration Program (TDP) on the integration of shipboard sensors and weapon systems (SISWS), an algorithm was developed to automatically determine...

  14. Object-oriented algorithmic laboratory for ordering sparse matrices

    SciTech Connect

    Kumfert, Gary Karl

    2000-05-01

    generate the global ordering. Our software laboratory, "Spindle", implements state-of-the-art ordering algorithms for sparse matrices and graphs. We have used it to examine and augment the behavior of existing algorithms and test new ones. Its 40,000+ lines of C++ code include a base library, test drivers, sample applications, and interfaces to C, C++, Matlab, and PETSc. Spindle is freely available and can be built on a variety of UNIX platforms as well as WindowsNT.

  15. Investigation into the efficiency of different bionic algorithm combinations for a COBRA meta-heuristic

    NASA Astrophysics Data System (ADS)

    Akhmedova, Sh; Semenkin, E.

    2017-02-01

    Previously, a meta-heuristic approach, called Co-Operation of Biology-Related Algorithms or COBRA, for solving real-parameter optimization problems was introduced and described. COBRA’s basic idea consists of a cooperative work of five well-known bionic algorithms such as Particle Swarm Optimization, the Wolf Pack Search, the Firefly Algorithm, the Cuckoo Search Algorithm and the Bat Algorithm, which were chosen due to the similarity of their schemes. The performance of this meta-heuristic was evaluated on a set of test functions and its workability was demonstrated. Thus it was established that the idea of the algorithms’ cooperative work is useful. However, it is unclear which bionic algorithms should be included in this cooperation and how many of them. Therefore, the five above-listed algorithms and additionally the Fish School Search algorithm were used for the development of five different modifications of COBRA by varying the number of component-algorithms. These modifications were tested on the same set of functions and the best of them was found. Ways of further improving the COBRA algorithm are then discussed.

  16. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  17. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
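
    A minimal sketch of the activity-selection example mentioned above: the greedy choice is licensed by a dominance argument (an activity finishing earliest dominates any alternative first choice, so the alternatives can be pruned). The interval data are made up.

```python
def select_activities(activities):
    """Greedy activity selection: sort by finish time, keep compatible activities.

    Dominance view: among all activities that could open the schedule, the one
    finishing earliest is never worse than any alternative, so the alternatives
    can be pruned -- which is what justifies the greedy choice.
    """
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)]
```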

  18. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  19. Kernel Affine Projection Algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Weifeng; Príncipe, José C.

    2008-12-01

    The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
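
    Since KAPA builds directly on KLMS, the sketch below shows a minimal KLMS filter with a Gaussian kernel: every sample becomes a center whose coefficient is the step size times the prediction error, and KAPA generalizes this by updating over a window of recent samples. The parameter values and the toy learning task are arbitrary.

```python
import numpy as np

class KLMS:
    """Minimal kernel least-mean-square filter with a Gaussian kernel."""

    def __init__(self, step_size=0.2, kernel_width=1.0):
        self.eta = step_size
        self.gamma = 1.0 / (2.0 * kernel_width ** 2)
        self.centers, self.alphas = [], []

    def predict(self, x):
        if not self.centers:
            return 0.0
        k = np.exp(-self.gamma * np.sum((np.array(self.centers) - x) ** 2, axis=1))
        return float(np.dot(self.alphas, k))

    def update(self, x, y):
        error = y - self.predict(x)
        self.centers.append(np.asarray(x, dtype=float))   # new sample becomes a center
        self.alphas.append(self.eta * error)               # coefficient = step * error
        return error

# Online learning of a simple nonlinear mapping y = sin(x0) + 0.5*x1.
rng = np.random.default_rng(1)
filt = KLMS()
for _ in range(500):
    x = rng.uniform(-2, 2, size=2)
    filt.update(x, np.sin(x[0]) + 0.5 * x[1])
```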

  20. A Nonhomogeneous Cuckoo Search Algorithm Based on Quantum Mechanism for Real Parameter Optimization.

    PubMed

    Cheung, Ngaam J; Ding, Xue-Ming; Shen, Hong-Bin

    2017-02-01

    Cuckoo search (CS) algorithm is a nature-inspired search algorithm, in which all the individuals have identical search behaviors. However, this simple homogeneous search behavior is not always optimal to find the potential solution to a special problem, and it may trap the individuals into local regions leading to premature convergence. To overcome the drawback, this paper presents a new variant of CS algorithm with nonhomogeneous search strategies based on quantum mechanism to enhance search ability of the classical CS algorithm. Featured contributions in this paper include: 1) quantum-based strategy is developed for nonhomogeneous update laws and 2) we, for the first time, present a set of theoretical analyses on CS algorithm as well as the proposed algorithm, respectively, and conclude a set of parameter boundaries guaranteeing the convergence of the CS algorithm and the proposed algorithm. On 24 benchmark functions, we compare our method with five existing CS-based methods and other ten state-of-the-art algorithms. The numerical results demonstrate that the proposed algorithm is significantly better than the original CS algorithm and the rest of compared methods according to two nonparametric tests.
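
    For orientation, the sketch below implements the baseline, homogeneous cuckoo search (Levy-flight moves plus random abandonment of a fraction of nests); it does not reproduce the paper's quantum-based nonhomogeneous update laws. The search bounds, population size, and other parameters are arbitrary.

```python
import math
import numpy as np

def cuckoo_search(objective, dim, n_nests=15, iters=200, pa=0.25, alpha=0.01, seed=0):
    """Minimal classical cuckoo search with homogeneous Levy-flight updates."""
    rng = np.random.default_rng(seed)
    beta = 1.5
    # Mantegna's method for Levy-distributed step lengths.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

    nests = rng.uniform(-5.0, 5.0, size=(n_nests, dim))
    fitness = np.array([objective(n) for n in nests])
    for _ in range(iters):
        best = nests[np.argmin(fitness)]
        u = rng.normal(0.0, sigma, size=(n_nests, dim))
        v = rng.normal(0.0, 1.0, size=(n_nests, dim))
        candidates = nests + alpha * (u / np.abs(v) ** (1 / beta)) * (nests - best)
        cand_fit = np.array([objective(c) for c in candidates])
        improved = cand_fit < fitness
        nests[improved], fitness[improved] = candidates[improved], cand_fit[improved]
        # Abandon a fraction pa of nests and rebuild them at random positions.
        abandon = rng.random(n_nests) < pa
        if abandon.any():
            nests[abandon] = rng.uniform(-5.0, 5.0, size=(int(abandon.sum()), dim))
            fitness[abandon] = np.array([objective(n) for n in nests[abandon]])
    i_best = int(np.argmin(fitness))
    return nests[i_best], float(fitness[i_best])

best_x, best_f = cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=5)
```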

  1. A Test of Empirical and Semi-Analytical Algorithms for Euphotic Zone Depth with SeaWiFs Data Off Southeastern China

    DTIC Science & Technology

    2008-02-04

    0.219). The new algorithm is thus found to work well not only with waters of the Gulf of Mexico, Monterey Bay, and the Arabian Sea, but also with waters of the China Sea. It was compared with ship-borne measurements made over three different regions (the Arabian Sea, Monterey Bay, and the Gulf of Mexico) at different seasons.

  2. Water flow algorithm decision support tool for travelling salesman problem

    NASA Astrophysics Data System (ADS)

    Kamarudin, Anis Aklima; Othman, Zulaiha Ali; Sarim, Hafiz Mohd

    2016-08-01

    This paper discusses the role of a Decision Support Tool (DST) for the Travelling Salesman Problem (TSP) in helping researchers working in the same area obtain better results from a proposed algorithm. A study was conducted using the Rapid Application Development (RAD) model as the methodology, which includes requirements planning, user design, construction, and cutover. A Water Flow Algorithm (WFA) with an improved initialization technique is used as the proposed algorithm in this study and is evaluated for effectiveness on TSP cases. The DST evaluation consists of usability testing covering system use, quality of information, quality of interface, and overall satisfaction. Evaluation is needed to determine whether the tool can assist users in deciding whether to solve TSP problems with the proposed algorithm. Statistical results show the tool's ability to help researchers conduct experiments on the WFA with the improved TSP initialization.

  3. Genetic Bee Colony (GBC) algorithm: A new gene selection method for microarray cancer classification.

    PubMed

    Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A

    2015-06-01

    Naturally inspired evolutionary algorithms prove effective when used for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, namely the Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm. The goal is to integrate the advantages of both algorithms. The proposed algorithm is applied to a microarray gene expression profile in order to select the most predictive and informative genes for cancer classification. In order to test the accuracy performance of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are used: colon, leukemia, and lung. In addition, three multi-class microarray datasets are used: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique: mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and Particle Swarm Optimization (mRMR-PSO) algorithms. In addition, we compared the GBC algorithm with other related algorithms that have been recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance, as it achieved the highest classification accuracy along with the lowest average number of selected genes. This proves that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification.

  4. The Economic Benefits of Personnel Selection Using Ability Tests: A State of the Art Review Including a Detailed Analysis of the Dollar Benefit of U.S. Employment Service Placements and a Critique of the Low-Cutoff Method of Test Use. USES Test Research Report No. 47.

    ERIC Educational Resources Information Center

    Hunter, John E.

    The economic impact of optimal selection using ability tests is far higher than is commonly known. For small organizations, dollar savings from higher productivity can run into millions of dollars a year. This report estimates the potential savings to the Federal Government as an employer as being 15.61 billion dollars per year if tests were given…

  5. Subsurface Residence Times as an Algorithm for Aquifer Sensitivity Mapping: testing the concept with analytic element ground water models in the Contentnea Creek Basin, North Carolina, USA

    NASA Astrophysics Data System (ADS)

    Kraemer, S. R.

    2002-05-01

    The objective of this research is to test the utility of simple functions of spatially integrated and temporally averaged ground water residence times in shallow groundwatersheds with field observations and detailed computer simulations. The residence time of water in the subsurface is arguably a surrogate of aquifer sensitivity to contamination --- short contact time in subsurface media may result in reduced contaminant assimilation prior to discharge to a well or stream. Residence time is an established criterion for the delineation of wellhead protection areas. The residence time of water may also have application in assessing the connection between landscape and fair weather loadings of non-point source pollution to streams, such as the drainage of nitrogen-nitrate from agricultural fields as base flow. The field setting of this study includes a hierarchy of catchments in the Contentnea Creek basin (2,600 km^2) of North Carolina, USA, centered on the intensive coastal plain field study site at Lizzie, NC (1.2+ km^2), run by the US Geological Survey and the NC Department of Environment and Natural Resources of Raleigh, NC. Analytic element models are used to define the advective flow field and regional boundary conditions. The issues of conceptual model complexity are explored using the multi-layer object oriented analytic element model Tim, and by embedding the finite difference model MODFLOW within the analytic element model GFLOW. The models are compared to observations of hydraulic head, base flow separations, and aquifer geochemistry and age dating evidence. The resulting insights are captured and mapped across the basin as zones of average aquifer residence time using ArcView GIS tools. Preliminary results and conclusions will be presented. Mention of commercial software does not constitute endorsement or recommendation for use.

  6. A human papilloma virus testing algorithm comprising a combination of the L1 broad-spectrum SPF10 PCR assay and a novel E6 high-risk multiplex type-specific genotyping PCR assay.

    PubMed

    van Alewijk, Dirk; Kleter, Bernhard; Vent, Maarten; Delroisse, Jean-Marc; de Koning, Maurits; van Doorn, Leen-Jan; Quint, Wim; Colau, Brigitte

    2013-04-01

    Human papillomavirus (HPV) epidemiological and vaccine studies require highly sensitive HPV detection and genotyping systems. To improve HPV detection by PCR, the broad-spectrum L1-based SPF10 PCR DNA enzyme immunoassay (DEIA) LiPA system and a novel E6-based multiplex type-specific system (MPTS123) that uses Luminex xMAP technology were combined into a new testing algorithm. To evaluate this algorithm, cervical swabs (n = 860) and cervical biopsy specimens (n = 355) were tested, with a focus on HPV types detected by the MPTS123 assay (types 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 66, 68, 6, and 11). Among the HPV-positive samples, identifications of individual HPV genotypes were compared. When all MPTS123 targeted genotypes were considered together, good overall agreement was found (κ = 0.801, 95% confidence interval [CI], 0.784 to 0.818) with identification by SPF10 LiPA, but significantly more genotypes (P < 0.0001) were identified by the MPTS123 PCR Luminex assay, especially for HPV types 16, 35, 39, 45, 58, and 59. An alternative type-specific assay was evaluated that is based on detection of a limited number of HPV genotypes by type-specific PCR and a reverse hybridization assay (MPTS12 RHA). This assay showed results similar to those of the expanded MPTS123 Luminex assay. These results confirm the fact that broad-spectrum PCRs are hampered by type competition when multiple HPV genotypes are present in the same sample. Therefore, a testing algorithm combining the broad-spectrum PCR and a range of type-specific PCRs can offer a highly accurate method for the analysis of HPV infections and diminish the rate of false-negative results and may be particularly useful for epidemiological and vaccine studies.

  7. New perspectives in the use of ink evidence in forensic science Part II. Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC.

    PubMed

    Neumann, Cedric; Margot, Pierre

    2009-03-10

    In the first part of this research, three stages were stated for a program to increase the information extracted from ink evidence and maximise its usefulness to the criminal and civil justice system. These stages are (a) develop a standard methodology for analysing ink samples by high-performance thin layer chromatography (HPTLC) in a reproducible way, when ink samples are analysed at different times, in different locations, and by different examiners; (b) automatically and objectively compare ink samples; and (c) define and evaluate a theoretical framework for the use of ink evidence in a forensic context. This report focuses on the second of the three stages. Using the calibration and acquisition process described in the previous report, mathematical algorithms are proposed to automatically and objectively compare ink samples. The performances of these algorithms are systematically studied for various chemical and forensic conditions using standard performance tests commonly used in biometrics studies. The results show that different algorithms are best suited for different tasks. Finally, this report demonstrates how modern analytical and computer technology can be used in the field of ink examination and how tools developed and successfully applied in other fields of forensic science can help maximise its impact within the field of questioned documents.

  8. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
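
    The optimization step can be pictured as a small linear program over control trims, as sketched below with SciPy's linprog. All sensitivity coefficients, constraint limits, and trim bounds are hypothetical stand-ins for the quantities the compact propulsion models would supply; the real PSC algorithm is considerably more elaborate.

```python
import numpy as np
from scipy.optimize import linprog

# Toy version of the optimization: choose small control trims that minimize a
# predicted fuel-flow change, subject to a linearized stall-margin constraint
# from the compact propulsion model. All numbers below are hypothetical.
fuel_flow_sens = np.array([0.8, -0.3, 0.5])      # d(fuel flow)/d(trim_i)
stall_margin_sens = np.array([-0.2, 0.6, -0.1])  # d(stall margin)/d(trim_i)
min_stall_margin_change = -0.5                    # do not erode margin by more than 0.5

result = linprog(
    c=fuel_flow_sens,               # minimize predicted fuel-flow change
    A_ub=[-stall_margin_sens],      # -dSM . x <= 0.5  <=>  dSM . x >= -0.5
    b_ub=[-min_stall_margin_change],
    bounds=[(-1.0, 1.0)] * 3,       # trim authority limits
    method="highs",
)
print(result.x, result.fun)         # optimal trims and predicted fuel-flow change
```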

  9. PCA-LBG-based algorithms for VQ codebook generation

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Yang, Po-Yuan

    2015-04-01

    Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector of each group. The LBG algorithm finds a codebook based on the better vectors sent to an initial codebook by the PCA. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithm is expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results compared to existing methods reported in the literature.
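
    A minimal sketch of the PCA-LBG-Centroid variant described above: group the training vectors by their projection onto the first principal component, seed the codebook with the group centroids, and refine with LBG iterations. The training data are synthetic, and implementation details may differ from the paper.

```python
import numpy as np

def pca_lbg_centroid(training, n_codewords, iters=20):
    """PCA-seeded LBG codebook design (the 'Centroid' variant)."""
    X = np.asarray(training, dtype=float)
    Xc = X - X.mean(axis=0)
    # First principal component via SVD.
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ vt[0]
    # Split the projected values into equal-size groups and take group centroids.
    order = np.argsort(proj)
    groups = np.array_split(order, n_codewords)
    codebook = np.array([X[g].mean(axis=0) for g in groups])
    # LBG refinement: assign each vector to its nearest codeword, recompute centroids.
    for _ in range(iters):
        d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_codewords):
            if np.any(labels == k):
                codebook[k] = X[labels == k].mean(axis=0)
    return codebook

rng = np.random.default_rng(2)
vectors = rng.normal(size=(500, 8))   # e.g., 8-dimensional image blocks
codebook = pca_lbg_centroid(vectors, n_codewords=16)
```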

  10. Short Time Exposure (STE) test in conjunction with Bovine Corneal Opacity and Permeability (BCOP) assay including histopathology to evaluate correspondence with the Globally Harmonized System (GHS) eye irritation classification of textile dyes.

    PubMed

    Oliveira, Gisele Augusto Rodrigues; Ducas, Rafael do Nascimento; Teixeira, Gabriel Campos; Batista, Aline Carvalho; Oliveira, Danielle Palma; Valadares, Marize Campos

    2015-09-01

    Eye irritation evaluation is mandatory for predicting health risks in consumers exposed to textile dyes. The two dyes, Reactive Orange 16 (RO16) and Reactive Green 19 (RG19) are classified as Category 2A (irritating to eyes) based on the UN Globally Harmonized System for classification (UN GHS), according to the Draize test. On the other hand, animal welfare considerations and the enforcement of a new regulation in the EU are drawing much attention in reducing or replacing animal experiments with alternative methods. This study evaluated the eye irritation of the two dyes RO16 and RG19 by combining the Short Time Exposure (STE) and the Bovine Corneal Opacity and Permeability (BCOP) assays and then comparing them with in vivo data from the GHS classification. The STE test (first level screening) categorized both dyes as GHS Category 1 (severe irritant). In the BCOP, dye RG19 was also classified as GHS Category 1 while dye RO16 was classified as GHS no prediction can be made. Both dyes caused damage to the corneal tissue as confirmed by histopathological analysis. Our findings demonstrated that the STE test did not contribute to arriving at a better conclusion about the eye irritation potential of the dyes when used in conjunction with the BCOP test. Adding the histopathology to the BCOP test could be an appropriate tool for a more meaningful prediction of the eye irritation potential of dyes.

  11. Greedy heuristic algorithm for solving series of EEE components classification problems*

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, A. L.; Antamoshkin, A. N.; Fedosov, V. V.

    2016-04-01

    Algorithms based on the agglomerative greedy heuristics demonstrate precise and stable results for clustering problems based on k-means and p-median models. Such algorithms are successfully implemented in the production of specialized EEE components for use in space systems, which includes testing each EEE device and detecting homogeneous production batches of the EEE components from the test results using p-median models. In this paper, the authors propose a new version of the genetic algorithm with the greedy agglomerative heuristic which allows solving a series of problems. Such an algorithm is useful for solving the k-means and p-median clustering problems when the number of clusters is unknown. Computational experiments on real data show that the preciseness of the result decreases insignificantly in comparison with the initial genetic algorithm for solving a single problem.

  12. Accuracy and efficiency of algorithms for the demarcation of bacterial ecotypes from DNA sequence data.

    PubMed

    Francisco, Juan Carlos; Cohan, Frederick M; Krizanc, Danny

    2014-01-01

    Identification of closely related, ecologically distinct populations of bacteria would benefit microbiologists working in many fields including systematics, epidemiology and biotechnology. Several laboratories have recently developed algorithms aimed at demarcating such 'ecotypes'. We examine the ability of four of these algorithms to correctly identify ecotypes from sequence data. We tested the algorithms on synthetic sequences, with known history and habitat associations, generated under the stable ecotype model and on data from Bacillus strains isolated from Death Valley where previous work has confirmed the existence of multiple ecotypes. We found that one of the algorithms (ecotype simulation) performs significantly better than the others (AdaptML, GMYC, BAPS) in both instances. Unfortunately, it was also shown to be the least efficient of the four. While ecotype simulation is the most accurate, it is by a large margin the slowest of the algorithms tested. Attempts at improving its efficiency are underway.

  13. A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester

    2010-01-01

    A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides A x = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
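
    A greatly simplified illustration of the reuse idea: before each GMRES solve in a sequence A x = b^(i), project the new right-hand side onto the span of previously computed solutions (a Galerkin step) to form the initial guess. This sketch uses SciPy's gmres on a small dense test matrix and is only a stand-in for the eigenvector-enrichment and subspace-recycling strategies of the paper.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def solve_sequence(A, rhs_list):
    """Solve A x = b_i for a sequence of right-hand sides, reusing prior solutions."""
    solutions = []
    for b in rhs_list:
        if solutions:
            U = np.linalg.qr(np.column_stack(solutions))[0]   # orthonormal basis of prior solutions
            y = np.linalg.solve(U.T @ (A @ U), U.T @ b)       # Galerkin projection
            x0 = U @ y                                        # warm-start initial guess
        else:
            x0 = None
        x, info = gmres(A, b, x0=x0)
        solutions.append(x)
    return solutions

# Small well-conditioned test matrix and three related right-hand sides.
rng = np.random.default_rng(3)
n = 50
A = np.eye(n) * 4 + rng.normal(scale=0.1, size=(n, n))
rhs_list = [rng.normal(size=n) for _ in range(3)]
xs = solve_sequence(A, rhs_list)
```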

  14. A universal symmetry detection algorithm.

    PubMed

    Maurer, Peter M

    2015-01-01

    Research on symmetry detection focuses on identifying and detecting new types of symmetry. The paper presents an algorithm that is capable of detecting any type of permutation-based symmetry, including many types for which there are no existing algorithms. General symmetry detection is library-based, but symmetries that can be parameterized, (i.e. total, partial, rotational, and dihedral symmetry), can be detected without using libraries. In many cases it is faster than existing techniques. Furthermore, it is simpler than most existing techniques, and can easily be incorporated into existing software. The algorithm can also be used with virtually any type of matrix-based symmetry, including conjugate symmetry.

  15. EDSP Tier 2 test (T2T) guidances and protocols are delivered, including web-based guidance for diagnosing and scoring, and evaluating EDC-induced pathology in fish and amphibian

    EPA Science Inventory

    The Agency’s Endocrine Disruptor Screening Program (EDSP) consists of two tiers. The first tier provides information regarding whether a chemical may have endocrine disruption properties. Tier 2 tests provide confirmation of ED effects and dose-response information to be us...

  16. Sampling protein conformations using segment libraries and a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Gunn, John R.

    1997-03-01

    We present a new simulation algorithm for minimizing empirical contact potentials for a simplified model of protein structure. The model consists of backbone atoms only (including Cβ) with the φ and ψ dihedral angles as the only degrees of freedom. In addition, φ and ψ are restricted to a finite set of 532 discrete pairs of values, and the secondary structural elements are held fixed in ideal geometries. The potential function consists of a look-up table based on discretized inter-residue atomic distances. The minimization consists of two principal elements: the use of preselected lists of trial moves and the use of a genetic algorithm. The trial moves consist of substitutions of one or two complete loop regions, and the lists are in turn built up using preselected lists of randomly-generated three-residue segments. The genetic algorithm consists of mutation steps (namely, the loop replacements), as well as a hybridization step in which new structures are created by combining parts of two "parents" and a selection step in which hybrid structures are introduced into the population. These methods are combined into a Monte Carlo simulated annealing algorithm which has the overall structure of a random walk on a restricted set of preselected conformations. The algorithm is tested using two types of simple model potential. The first uses global information derived from the radius of gyration and the rms deviation to drive the folding, whereas the second is based exclusively on distance-geometry constraints. The hierarchical algorithm significantly outperforms conventional Monte Carlo simulation for a set of test proteins in both cases, with the greatest advantage being for the largest molecule having 193 residues. When tested on a realistic potential function, the method consistently generates structures ranked lower than the crystal structure. The results also show that the improved efficiency of the hierarchical algorithm exceeds that which would be anticipated
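
    The abstract's combination of segment-library mutation, two-parent hybridization, and selection can be pictured with a deliberately simplified genetic-algorithm skeleton. Everything below (the segment representation, the placeholder energy function, the parameter values) is an assumption made for illustration and does not reproduce the authors' contact potential or move lists.

      import random

      def evolve(score, segment_library, n_residues, pop_size=50, generations=200, seed=0):
          """Toy GA with segment-library mutation and two-parent hybridization.
          `score` maps a conformation (list of segments) to a lower-is-better energy."""
          rng = random.Random(seed)
          new_conf = lambda: [rng.choice(segment_library) for _ in range(n_residues // 3)]
          pop = [new_conf() for _ in range(pop_size)]

          def mutate(conf):
              child = conf[:]
              child[rng.randrange(len(child))] = rng.choice(segment_library)  # segment replacement
              return child

          def hybridize(a, b):
              cut = rng.randrange(1, len(a))
              return a[:cut] + b[cut:]          # combine parts of two parents

          for _ in range(generations):
              a, b = rng.sample(pop, 2)
              child = mutate(hybridize(a, b))
              worst = max(range(pop_size), key=lambda i: score(pop[i]))
              if score(child) < score(pop[worst]):   # selection: replace the worst member
                  pop[worst] = child
          return min(pop, key=score)

      # Minimal demo with a toy 3-residue segment library and a placeholder "energy".
      library = [(i, i, i) for i in range(10)]
      energy = lambda conf: sum(abs(x - 5) for seg in conf for x in seg)
      best = evolve(energy, library, n_residues=30)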

  17. Normative data for the "Sniffin' Sticks" including tests of odor identification, odor discrimination, and olfactory thresholds: an upgrade based on a group of more than 3,000 subjects.

    PubMed

    Hummel, T; Kobal, G; Gudziol, H; Mackay-Sim, A

    2007-03-01

    "Sniffin' Sticks" is a test of nasal chemosensory function that is based on pen-like odor dispensing devices, introduced some 10 years ago by Kobal and co-workers. It consists of tests for odor threshold, discrimination, and identification. Previous work established its test-retest reliability and validity. Results of the test are presented as "TDI score", the sum of results obtained for threshold, discrimination, and identification measures. While normative data have been established they are based on a relatively small number of subjects, especially with regard to subjects older than 55 years where data from only 30 healthy subjects have been used. The present study aimed to remedy this situation. Now data are available from 3,282 subjects as compared to data from 738 subjects published previously. Disregarding sex-related differences, the TDI score at the tenth percentile was 24.9 in subjects younger than 15 years, 30.3 for ages from 16 to 35 years, 27.3 for ages from 36 to 55 years, and 19.6 for subjects older than 55 years. Because the tenth percentile has been defined to separate hyposmia from normosmia, these data can be used as a guide to estimate individual olfactory ability in relation to subject's age. Absolute hyposmia was defined as the tenth percentile score of 16-35 year old subjects. Other than previous reports the present norms are also sex-differentiated with women outperforming men in the three olfactory tests. Further, the present data suggest specific changes of individual olfactory functions in relation to age, with odor thresholds declining most dramatically compared to odor discrimination and odor identification.

  18. A New Proton Dose Algorithm for Radiotherapy

    NASA Astrophysics Data System (ADS)

    Lee, Chungchi (Chris).

    This algorithm recursively propagates the proton distribution in energy, angle and space at one level in an absorbing medium to another, at slightly greater depth, until all the protons are stopped. The angular transition density describing the proton trajectory is based on Moliere's multiple scattering theory and Vavilov's theory of energy loss along the proton's path increment. These multiple scattering and energy loss distributions are sampled using equal probability spacing to optimize computational speed while maintaining calculational accuracy. Nuclear interactions are accounted for by using a simple exponential expression to describe the loss of protons along a given path increment and the fraction of the original energy retained by the proton is deposited locally. Two levels of testing for the algorithm are provided: (1) Absolute dose comparisons with PTRAN Monte Carlo simulations in homogeneous water media. (2) Modeling of a fixed beam line including the scattering system and range modulator and comparisons with measured data in a homogeneous water phantom. The dose accuracy of this algorithm is shown to be within +/-5% throughout the range of a 200-MeV proton when compared to measurements except in the shoulder region of the lateral profile at the Bragg peak where a dose difference as large as 11% can be found. The numerical algorithm has an adequate spatial accuracy of 3 mm. Measured data as input is not required.

  19. Hybrid Bearing Prognostic Test Rig

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Certo, Joseph M.; Handschuh, Robert F.; Dimofte, Florin

    2005-01-01

    The NASA Glenn Research Center has developed a new Hybrid Bearing Prognostic Test Rig to evaluate the performance of sensors and algorithms in predicting failures of rolling element bearings for aeronautics and space applications. The failure progression of both conventional and hybrid (ceramic rolling elements, metal races) bearings can be tested from fault initiation to total failure. The effects of different lubricants on bearing life can also be evaluated. Test conditions monitored and recorded during the test include load, oil temperature, vibration, and oil debris. New diagnostic research instrumentation will also be evaluated for hybrid bearing damage detection. This paper summarizes the capabilities of this new test rig.

  20. Committee Meeting of Assembly Education Committee "To Receive Testimony from the Commissioner of Education, Mary Lee Fitzgerald, Department Staff, and Others Concerning the Department's Skills Testing Program, Including the Early Warning Test and High School Proficiency Test, Pursuant to Assembly Resolution No. 113."

    ERIC Educational Resources Information Center

    New Jersey State Office of Legislative Services, Trenton. Assembly Education Committee.

    The Assembly Education Committee of the New Jersey Office of Legislative Services held a hearing pursuant to Assembly Resolution 113, a proposal directing the Committee to investigate the skills testing program developed and administered to New Jersey children by the State Department of Education. The Committee was interested in the eighth-grade…

  1. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm" and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm," is introduced that combines features from both approaches. This algorithm is formulated using optimal control and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real-time requirement without degrading the quality of the motion cues.

  2. New knowledge-based genetic algorithm for excavator boom structural optimization

    NASA Astrophysics Data System (ADS)

    Hua, Haiyan; Lin, Shuwen

    2014-03-01

    Because existing genetic algorithms make insufficient use of knowledge to guide the complex optimal search, they fail to effectively solve the excavator boom structural optimization problem. To improve the optimization efficiency and quality, a new knowledge-based real-coded genetic algorithm is proposed. A dual evolution mechanism combining knowledge evolution with the genetic algorithm is established to extract, handle and utilize the shallow and deep implicit constraint knowledge to guide the optimal searching of the genetic algorithm circularly. Based on this dual evolution mechanism, knowledge evolution and population evolution can be connected by knowledge influence operators to improve the configurability of knowledge and genetic operators. Then, new knowledge-based selection, crossover and mutation operators are proposed to integrate the optimal process knowledge and domain culture to guide the excavator boom structural optimization. Eight testing algorithms, which include different genetic operators, are taken as examples to solve the structural optimization of a medium-sized excavator boom. A comparison of the optimization results shows that the algorithm including all the new knowledge-based genetic operators improves the evolutionary rate and searching ability more markedly than the other testing algorithms, which demonstrates the effectiveness of knowledge for guiding the optimal search. The proposed knowledge-based genetic algorithm, by combining multi-level knowledge evolution with numerical optimization, provides a new effective method for solving complex engineering optimization problems.

  3. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  4. An algorithm to assess intestinal iron availability for use in dietary surveys.

    PubMed

    Rickard, Anna P; Chatfield, Mark D; Conway, Rana E; Stephen, Alison M; Powell, Jonathan J

    2009-12-01

    In nutritional epidemiology, it is often assumed that nutrient absorption is proportional to nutrient intake. For several nutrients, including non-haem Fe, this assumption may not hold. Depending on the nutrients ingested with non-haem Fe, its availability for absorption varies greatly. Therefore, using Fe intake to examine associations between Fe and health can impact upon the validity of findings. Previous algorithms that adjust Fe intakes for dietary factors known to affect absorption have been found to underestimate Fe absorption and, in the present study, perform poorly on independent dietary data. We have designed a new algorithm to adjust Fe intakes for the effects of ascorbic acid, meat, fish and poultry, phytate, polyphenols and Ca, incorporating not only absorption data from test meals but also current understanding of Fe absorption. In so doing, we have created a robust and universal Fe algorithm with potential for use in large cohorts. The algorithm described aims not to predict Fe absorption but available Fe in the gut, a measure we believe to be of greater use in epidemiological research. Available Fe is Fe available for absorption from the gastrointestinal tract, taking into account enhancing or inhibiting effects of dietary modifiers. Our algorithm successfully estimated average Fe availability in test meal data used to construct the algorithm and, unlike other algorithms tested, also provided plausible predictions when applied to independent dietary data. Future research is needed to evaluate the extent to which this algorithm is useful in epidemiological research to relate Fe to health outcomes.

  5. Understanding Algorithms in Different Presentations

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János

    2015-01-01

    Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…

  6. LAHS: A novel harmony search algorithm based on learning automata

    NASA Astrophysics Data System (ADS)

    Enayatifar, Rasul; Yousefi, Moslem; Abdullah, Abdul Hanan; Darus, Amer Nordin

    2013-12-01

    This study presents a learning automata-based harmony search (LAHS) for unconstrained optimization of continuous problems. The harmony search (HS) algorithm performance strongly depends on the fine tuning of its parameters, including the harmony consideration rate (HMCR), pitch adjustment rate (PAR) and bandwidth (bw). Inspired by the spur-in-time responses in the musical improvisation process, learning capabilities are employed in the HS to select these parameters based on spontaneous reactions. An extensive numerical investigation is conducted on several well-known test functions, and the results are compared with the HS algorithm and its prominent variants, including the improved harmony search (IHS), global-best harmony search (GHS) and self-adaptive global-best harmony search (SGHS). The numerical results indicate that the LAHS is more efficient in finding optimum solutions and outperforms the existing HS algorithm variants.
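
    For readers unfamiliar with the underlying method, a minimal sketch of the basic harmony search loop is shown below, making the roles of HMCR, PAR, and bw explicit on a simple sphere test function; the learning-automata parameter adaptation that distinguishes LAHS is not reproduced, and all parameter values are illustrative assumptions.

      import random

      def harmony_search(f, dim, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000, seed=0):
          """Basic harmony search (not the LAHS variant): HMCR controls memory use,
          PAR the pitch-adjustment probability, bw the adjustment bandwidth."""
          rng = random.Random(seed)
          lo, hi = bounds
          memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
          costs = [f(h) for h in memory]
          for _ in range(iters):
              new = []
              for d in range(dim):
                  if rng.random() < hmcr:                  # pick from harmony memory
                      x = rng.choice(memory)[d]
                      if rng.random() < par:               # pitch adjustment
                          x += rng.uniform(-bw, bw) * (hi - lo)
                  else:                                    # random improvisation
                      x = rng.uniform(lo, hi)
                  new.append(min(hi, max(lo, x)))
              c = f(new)
              worst = max(range(hms), key=lambda i: costs[i])
              if c < costs[worst]:                         # replace the worst harmony
                  memory[worst], costs[worst] = new, c
          best = min(range(hms), key=lambda i: costs[i])
          return memory[best], costs[best]

      # Example on the sphere test function.
      x, fx = harmony_search(lambda v: sum(t * t for t in v), dim=5, bounds=(-10.0, 10.0))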

  7. SeaWiFS Science Algorithm Flow Chart

    NASA Technical Reports Server (NTRS)

    Darzi, Michael

    1998-01-01

    This flow chart describes the baseline science algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Data Processing System (SDPS). As such, it includes only processing steps used in the generation of the operational products that are archived by NASA's Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC). It is meant to provide the reader with a basic understanding of the scientific algorithm steps applied to SeaWiFS data. It does not include non-science steps, such as format conversions, and places the greatest emphasis on the geophysical calculations of the level-2 processing. Finally, the flow chart reflects the logic sequences and the conditional tests of the software so that it may be used to evaluate the fidelity of the implementation of the scientific algorithm. In many cases however, the chart may deviate from the details of the software implementation so as to simplify the presentation.

  8. Frequency of colour vision deficiencies in melanoma patients: results of a prospective comparative screening study with the Farnsworth panel D 15 test including 300 melanoma patients and 100 healthy controls.

    PubMed

    Pföhler, Claudia; Tschöp, Sabine; König, Jochem; Rass, Knuth; Tilgen, Wolfgang

    2006-10-01

    Patients with melanoma may experience a variety of different vision symptoms, in part associated with melanoma-associated retinopathy. For several melanoma patients with or without melanoma-associated retinopathy, colour vision deficiencies, especially involving the tritan system, have been reported. The frequency of colour vision deficiencies in a larger cohort of melanoma patients has not yet been investigated. The aim of this study was to investigate the frequency of colour vision deficiencies in melanoma patients subject to stage of disease, prognostic factors such as tumour thickness or Clark level, S100-beta and predisposing diseases that may have an impact on colour vision (hypertension, diabetes mellitus, glaucoma or cataract). Three hundred melanoma patients in different tumour stages and 100 healthy age-matched and sex-matched controls were examined with the saturated Farnsworth panel D 15 test. Seventy out of 300 (23.3%) melanoma patients and 12/100 (12%) controls showed pathologic results in colour testing. This discrepancy was significant (P < 0.016; odds ratio = 2.23, 95% confidence interval 1.15-4.32). Increasing age was identified as a highly significant (P = 0.0005) risk factor for blue vision deficiency. Adjusting for the age and predisposing diseases, we could show that melanoma was associated with the risk of blue vision deficiency. The frequency of blue vision deficiency in 52/260 melanoma patients without predisposing diseases (20%) compared with 4/78 controls without predisposing diseases (5.1%) differed significantly (odds ratio 4.441; confidence interval 1.54-12.62; P < 0.004). In 260 melanoma patients without predisposing diseases, blue vision deficiency, as graded on a 6-point scale, showed a weak positive correlation (Spearman) with tumour stage (r = 0.147; P < 0.01), tumour thickness (r = 0.10; P = 0.0035), Clark level (r = 0.12; P = 0.04) and a weak negative correlation with time since initial diagnosis (r = -0.11; P = 0.0455). Blue

  9. Streamlined Approach for Environmental Restoration (SAFER) Plan for Corrective Action Unit 357: Mud Pits and Waste Dump, Nevada Test Site, Nevada: Revision 0, Including Record of Technical Change No. 1

    SciTech Connect

    2003-06-25

    This Streamlined Approach for Environmental Restoration (SAFER) plan was prepared as a characterization and closure report for Corrective Action Unit (CAU) 357, Mud Pits and Waste Dump, in accordance with the Federal Facility Agreement and Consent Order. The CAU consists of 14 Corrective Action Sites (CASs) located in Areas 1, 4, 7, 8, 10, and 25 of the Nevada Test Site (NTS). All of the CASs are found within Yucca Flat except CAS 25-15-01 (Waste Dump). Corrective Action Site 25-15-01 is found in Area 25 in Jackass Flat. Of the 14 CASs in CAU 357, 11 are mud pits, suspected mud pits, or mud processing-related sites, which are by-products of drilling activities in support of the underground nuclear weapons testing done on the NTS. Of the remaining CASs, one CAS is a waste dump, one CAS contains scattered lead bricks, and one CAS has a building associated with Project 31.2. All 14 of the CASs are inactive and abandoned. Clean closure with no further action of CAU 357 will be completed if no contaminants are detected above preliminary action levels. A closure report will be prepared and submitted to the Nevada Division of Environmental Protection for review and approval upon completion of the field activities. Record of Technical Change No. 1 is dated 3/2004.

  10. Field Testing of LIDAR-Assisted Feedforward Control Algorithms for Improved Speed Control and Fatigue Load Reduction on a 600-kW Wind Turbine: Preprint

    SciTech Connect

    Kumar, Avishek A.; Bossanyi, Ervin A.; Scholbrock, Andrew K.; Fleming, Paul; Boquet, Mathieu; Krishnamurthy, Raghu

    2015-12-14

    A severe challenge in controlling wind turbines is ensuring controller performance in the presence of a stochastic and unknown wind field, relying on the response of the turbine to generate control actions. Recent technologies, such as LIDAR, allow sensing of the wind field before it reaches the rotor. In this work, a field-testing campaign to test LIDAR-Assisted Control (LAC) has been undertaken on a 600-kW turbine using a fixed, five-beam LIDAR system. The campaign compared the performance of a baseline controller to that of four LACs with progressively lower levels of feedback, using 35 hours of collected data.

  11. Algorithm development for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Rosario, Dalton S.

    2008-10-01

    This dissertation proposes and evaluates a novel anomaly detection algorithm suite for ground-to-ground, or air-to-ground, applications requiring automatic target detection using hyperspectral (HS) data. Targets are manmade objects in natural background clutter under unknown illumination and atmospheric conditions. The use of statistical models herein is purely for motivation of particular formulas for calculating anomaly output surfaces. In particular, formulas from semiparametrics are utilized to obtain novel forms for output surfaces, and alternative scoring algorithms are proposed to calculate output surfaces that are comparable to those of semiparametrics. Evaluation uses both simulated data and real HS data from a joint data collection effort between the Army Research Laboratory and the Army Armament Research Development & Engineering Center. A data transformation method is presented for use by the two-sample data structure univariate semiparametric and nonparametric scoring algorithms, such that the two-sample data are mapped from their original multivariate space to a univariate domain, where the statistical power of the univariate scoring algorithms is shown to be improved relative to existing multivariate scoring algorithms testing the same two-sample data. An exhaustive simulation experimental study is conducted to assess the performance of different HS anomaly detection techniques, where the null and alternative hypotheses are completely specified, including all parameters, using multivariate normal and mixtures of multivariate normal distributions. Finally, for ground-to-ground anomaly detection applications, where the unknown scales of targets add to the problem complexity, a novel global anomaly detection algorithm suite is introduced, featuring autonomous partial random sampling (PRS) of the data cube. The PRS method is proposed to automatically sample the unknown background clutter in the test HS imagery, and by repeating multiple times this

  12. Experimental validation of clock synchronization algorithms

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Graham, R. Lynn

    1992-01-01

    The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.

  13. Thyroid Tests

    MedlinePlus

    ... calories and how fast your heart beats. Thyroid tests check how well your thyroid is working. They ... thyroid diseases such as hyperthyroidism and hypothyroidism. Thyroid tests include blood tests and imaging tests. Blood tests ...

  14. The footprint of old syphilis: using a reverse screening algorithm for syphilis testing in a U.S. Geographic Information Systems-Based Community Outreach Program.

    PubMed

    Goswami, Neela D; Stout, Jason E; Miller, William C; Hecker, Emily J; Cox, Gary M; Norton, Brianna L; Sena, Arlene C

    2013-11-01

    The impact of syphilis reverse sequence screening has not been evaluated in community outreach. Using reverse sequence screening in neighborhoods identified with geographic information systems, we found that among 239 participants, 45 (19%) were seropositive. Of these, 3 (7%) had untreated syphilis, 33 (73%) had previously treated syphilis infection, and 9 (20%) had negative nontreponemal test results.

  15. SUBSURFACE RESIDENCE TIMES AS AN ALGORITHM FOR AQUIFER SENSITIVITY MAPPING: TESTING THE CONCEPT WITH ANALYTIC ELEMENT GROUND WATER MODELS IN THE CONTENTNEA CREEK BASIN, NORTH CAROLINA, USA

    EPA Science Inventory

    The objective of this research is to test the utility of simple functions of spatially integrated and temporally averaged ground water residence times in shallow "groundwatersheds" with field observations and more detailed computer simulations. The residence time of water in the...

  16. Enhancing Orthographic Competencies and Reducing Domain-Specific Test Anxiety: The Systematic Use of Algorithmic and Self-Instructional Task Formats in Remedial Spelling Training

    ERIC Educational Resources Information Center

    Faber, Gunter

    2010-01-01

    In this study the effects of a remedial spelling training approach were evaluated, which systematically combines certain visualization and verbalization methods to foster students' spelling knowledge and strategy use. Several achievement and test anxiety data from three measurement times were analyzed. All students displayed severe spelling…

  17. [The algorithm for the determination of the sufficient number of dynamic electroneurostimulation procedures based on the magnitude of individual testing voltage at the reference point].

    PubMed

    Chernysh, I M; Zilov, V G; Vasilenko, A M; Frolkov, V K

    2016-01-01

    This article presents evidence of the advantages of a personalized approach to the treatment of patients presenting with arterial hypertension (AH), lumbar spinal dorsopathy (LSD), chronic obstructive pulmonary disease (COPD), and duodenal ulcer (DU) at the stage of exacerbation, based on measurements of the testing voltage at the reference point (Utest).

  18. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  19. A preliminary test of the application of the Lightning Detection and Ranging System (LDAR) as a thunderstorm warning and location device for the FHA including a correlation with updrafts, turbulence, and radar precipitation echoes

    NASA Technical Reports Server (NTRS)

    Poehler, H. A.

    1978-01-01

    Results of a test of the use of a Lightning Detection and Ranging (LDAR) remote display in the Patrick AFB RAPCON facility are presented. Agreement between LDAR and radar precipitation echoes of the RAPCON radar was observed, as well as agreement between LDAR and pilot's visual observations of lightning flashes. A more precise comparison between LDAR and KSC based radars is achieved by the superposition of LDAR precipitation echoes. Airborne measurements of updrafts and turbulence by an armored T-28 aircraft flying through the thunderclouds are correlated with LDAR along the flight path. Calibration and measurements of the accuracy of the LDAR System are discussed, and the extended range of the system is illustrated.

  20. Computerized Classification Testing under Practical Constraints with a Polytomous Model.

    ERIC Educational Resources Information Center

    Lau, C. Allen; Wang, Tianyou

    A study was conducted to extend the sequential probability ratio testing (SPRT) procedure with the polytomous model under some practical constraints in computerized classification testing (CCT), such as methods to control item exposure rate, and to study the effects of other variables, including item information algorithms, test difficulties, item…

  1. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
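
    As a small illustration of the first performance metric mentioned above, a centered root mean square error (with each series' mean removed before comparison) could be computed as in the sketch below; this is an assumed, simplified form, not the benchmark's actual scoring code.

      import numpy as np

      def centered_rmse(homogenized, truth):
          """Centered RMSE: remove each series' mean before comparing, so that
          constant offsets do not dominate the error."""
          h = np.asarray(homogenized, dtype=float)
          t = np.asarray(truth, dtype=float)
          return float(np.sqrt(np.mean(((h - h.mean()) - (t - t.mean())) ** 2)))

      # e.g. centered_rmse(contribution_series, true_homogeneous_series)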

  2. Evaluation of caspofungin susceptibility testing by the new Vitek 2 AST-YS06 yeast card using a unique collection of FKS wild-type and hot spot mutant isolates, including the five most common candida species.

    PubMed

    Astvad, Karen M; Perlin, David S; Johansen, Helle K; Jensen, Rasmus H; Arendrup, Maiken C

    2013-01-01

    FKS mutant isolates associated with breakthrough or failure cases are emerging in clinical settings. Discrimination of these from wild-type (wt) isolates in a routine laboratory setting is complicated. We evaluated the ability of caspofungin MIC determination using the new Vitek 2 AST-Y06 yeast susceptibility card to correctly identify the fks mutants from wt isolates and compared the performance to those of the CLSI and EUCAST reference methods. A collection of 98 Candida isolates, including 31 fks hot spot mutants, were included. Performance was evaluated using the FKS genotype as the "gold standard" and compared to those of the CLSI and EUCAST methodologies. The categorical agreement for Vitek 2 was 93.9%, compared to 88.4% for the CLSI method and 98.7% for the EUCAST method. Vitek 2 misclassified 19.4% (6/31) of the fks mutant isolates as susceptible, in contrast to <4% for each of the reference methods. The overall essential agreement between the CLSI method and Vitek 2 MICs was 92.6% (88/95) but was substantially lower for fks mutant isolates (78.6% [22/28]). Correct discrimination between susceptible and intermediate Candida glabrata isolates was not possible, as the revised species-specific susceptibility breakpoint was not included in the Vitek 2 detection range (MIC of ≤0.250 to ≥4 mg/liter). In conclusion, the Vitek 2 allowed correct categorization of all wt isolates as susceptible. However, despite an acceptable categorical agreement, it failed to reliably classify isolates harboring fks hot spot mutations as intermediate or resistant, which was in part due to the fact that the detection range did not span the susceptibility breakpoint for C. glabrata.

  3. Modular algorithm concept evaluation tool (MACET) sensor fusion algorithm testbed

    NASA Astrophysics Data System (ADS)

    Watson, John S.; Williams, Bradford D.; Talele, Sunjay E.; Amphay, Sengvieng A.

    1995-07-01

    Target acquisition in a high clutter environment in all-weather at any time of day represents a much needed capability for the air-to-surface strike mission. A considerable amount of the research at the Armament Directorate at Wright Laboratory, Advanced Guidance Division WL/MNG, has been devoted to exploring various seeker technologies, including multi-spectral sensor fusion, that may yield a cost efficient system with these capabilities. Critical elements of any such seekers are the autonomous target acquisition and tracking algorithms. These algorithms allow the weapon system to operate independently and accurately in realistic battlefield scenarios. In order to assess the performance of the multi-spectral sensor fusion algorithms being produced as part of the seeker technology development programs, the Munition Processing Technology Branch of WL/MN is developing an algorithm testbed. This testbed consists of the Irma signature prediction model, data analysis workstations, such as the TABILS Analysis and Management System (TAMS), and the Modular Algorithm Concept Evaluation Tool (MACET) algorithm workstation. All three of these components are being enhanced to accommodate multi-spectral sensor fusion systems. MACET is being developed to provide a graphical interface driven simulation by which to quickly configure algorithm components and conduct performance evaluations. MACET is being developed incrementally with each release providing an additional channel of operation. To date MACET 1.0, a passive IR algorithm environment, has been delivered. The second release, MACET 1.1 is presented in this paper using the MMW/IR data from the Advanced Autonomous Dual Mode Seeker (AADMS) captive flight demonstration. Once completed, the delivered software from past algorithm development efforts will be converted to the MACET library format, thereby providing an on-line database of the algorithm research conducted to date.

  4. Intra-and-Inter Species Biomass Prediction in a Plantation Forest: Testing the Utility of High Spatial Resolution Spaceborne Multispectral RapidEye Sensor and Advanced Machine Learning Algorithms

    PubMed Central

    Dube, Timothy; Mutanga, Onisimo; Adam, Elhadi; Ismail, Riyad

    2014-01-01

    The quantification of aboveground biomass using remote sensing is critical for better understanding the role of forests in carbon sequestration and for informed sustainable management. Although remote sensing techniques have been proven useful in assessing forest biomass in general, more is required to investigate their capabilities in predicting intra-and-inter species biomass which are mainly characterised by non-linear relationships. In this study, we tested two machine learning algorithms, Stochastic Gradient Boosting (SGB) and Random Forest (RF) regression trees, to predict intra-and-inter species biomass using high resolution RapidEye reflectance bands as well as the derived vegetation indices in a commercial plantation. The results showed that the SGB algorithm yielded the best performance for intra-and-inter species biomass prediction, both when using all the predictor variables and when based on the most important selected variables. For example, using the most important variables the algorithm produced an R2 of 0.80 and RMSE of 16.93 t·ha−1 for E. grandis; an R2 of 0.79 and RMSE of 17.27 t·ha−1 for P. taeda; and an R2 of 0.61 and RMSE of 43.39 t·ha−1 for the combined species data sets. Comparatively, RF yielded plausible results only for E. dunii (R2 of 0.79; RMSE of 7.18 t·ha−1). We demonstrated that although the two statistical methods were able to predict biomass accurately, RF produced weaker results than SGB when applied to the combined species dataset. The result underscores the relevance of stochastic models in predicting biomass drawn from different species and genera using the new generation high resolution RapidEye sensor with strategically positioned bands. PMID:25140631
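
    The plantation data and RapidEye predictors are not available here, but the comparison of the two regression-tree ensembles with R2 and RMSE can be sketched on synthetic data, assuming scikit-learn is installed; variable names and settings are illustrative only, not the authors' model configuration.

      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
      from sklearn.metrics import r2_score, mean_squared_error
      from sklearn.model_selection import train_test_split

      # Synthetic stand-in for per-plot spectral bands / vegetation indices vs. biomass.
      X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      for name, model in [("SGB", GradientBoostingRegressor(random_state=0)),
                          ("RF", RandomForestRegressor(n_estimators=500, random_state=0))]:
          model.fit(X_tr, y_tr)
          pred = model.predict(X_te)
          rmse = mean_squared_error(y_te, pred) ** 0.5
          print(f"{name}: R2={r2_score(y_te, pred):.2f}  RMSE={rmse:.1f}")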

  5. Towards General Algorithms for Grammatical Inference

    NASA Astrophysics Data System (ADS)

    Clark, Alexander

    Many algorithms for grammatical inference can be viewed as instances of a more general algorithm which maintains a set of primitive elements, which distributionally define sets of strings, and a set of features or tests that constrain various inference rules. Using this general framework, which we cast as a process of logical inference, we re-analyse Angluin's famous L* algorithm and several recent algorithms for the inference of context-free grammars and multiple context-free grammars. Finally, to illustrate the advantages of this approach, we extend it to the inference of functional transductions from positive data only, and we present a new algorithm for the inference of finite state transducers.

  6. Corrective Action Decision Document for Corrective Action Unit 168: Areas 25 and 26 Contaminated Materials and Waste Dumps, Nevada Test Site, Nevada: Revision 0, Including Record of Technical Change No. 1

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2003-08-08

    This Corrective Action Decision Document identifies and rationalizes the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's selection of recommended corrective action alternatives (CAAs) to facilitate the closure of Corrective Action Unit (CAU)168: Areas 25 and 26 Contaminated Materials and Waste Dumps, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. Located in Areas 25 and 26 at the NTS in Nevada, CAU 168 is comprised of twelve Corrective Action Sites (CASs). Review of data collected during the corrective action investigation, as well as consideration of current and future operations in Areas 25 and 26 of the NTS, led the way to the development of three CAAs for consideration: Alternative 1 - No Further Action; Alternative 2 - Clean Closure; and Alternative 3 - Close in Place with Administrative Controls. As a result of this evaluation, a combination of all three CAAs is recommended for this CAU. Alternative 1 was the preferred CAA for three CASs, Alternative 2 was the preferred CAA for six CASs (and nearly all of one other CAS), and Alternative 3 was the preferred CAA for two CASs (and a portion of one other CAS) to complete the closure at the CAU 168 sites. These alternatives were judged to meet all requirements for the technical components evaluated as well as all applicable state and federal regulations for closure of the sites and elimination of potential future exposure pathways to the contaminated soils at CAU 168.

  7. Corrective Action Investigation Plan for Corrective Action Unit 527: Horn Silver Mine, Nevada Test Site, Nevada: Revision 1 (Including Records of Technical Change No.1, 2, 3, and 4)

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office

    2002-12-06

    This Corrective Action Investigation Plan contains the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 527, Horn Silver Mine, Nevada Test Site, Nevada, under the Federal Facility Agreement and Consent Order. Corrective Action Unit 527 consists of one Corrective Action Site (CAS): 26-20-01, Contaminated Waste Dump No.1. The site is located in an abandoned mine site in Area 26 (which is the most arid part of the NTS) approximately 65 miles northwest of Las Vegas. Historical documents may refer to this site as CAU 168, CWD-1, the Wingfield mine (or shaft), and the Wahmonie mine (or shaft). Historical documentation indicates that between 1959 and the 1970s, nonliquid classified material and unclassified waste was placed in the Horn Silver Mine's shaft. Some of the waste is known to be radioactive. Documentation indicates that the waste is present from 150 feet to the bottom of the mine (500 ft below ground surface). This CAU is being investigated because hazardous constituents migrating from materials and/or wastes disposed of in the Horn Silver Mine may pose a threat to human health and the environment as well as to assess the potential impacts associated with any potential releases from the waste. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document.

  8. Corrective Action Investigation Plan for Corrective Action Unit 322: Areas 1 and 3 Release Sites and Injection Wells, Nevada Test Site, Nevada: Revision 0, Including Record of Technical Change No. 1

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2003-07-16

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's approach to collect the data necessary to evaluate corrective action alternatives (CAAs) appropriate for the closure of Corrective Action Unit (CAU) 322, Areas 1 and 3 Release Sites and Injection Wells, Nevada Test Site, Nevada, under the Federal Facility Agreement and Consent Order. Corrective Action Unit 322 consists of three Corrective Action Sites (CASs): 01-25-01, AST Release (Area 1); 03-25-03, Mud Plant AST Diesel Release (Area 3); 03-20-05, Injection Wells (Area 3). Corrective Action Unit 322 is being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. The investigation of three CASs in CAU 322 will determine if hazardous and/or radioactive constituents are present at concentrations and locations that could potentially pose a threat to human health and the environment. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document.

  9. A retrodictive stochastic simulation algorithm

    SciTech Connect

    Vaughan, T.G. Drummond, P.D.; Drummond, A.J.

    2010-05-20

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
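
    The retrodictive SSA itself operates on continuous-time master equations; as a much simpler, assumed discrete-time analogue of the same idea, the sketch below infers a posterior over initial states of a toy Markov chain from an observed final state using Bayes' rule. The states, transition matrix, and prior are invented for illustration and are not taken from the paper.

      import numpy as np

      # Toy 3-state Markov chain standing in for a master-equation model.
      P = np.array([[0.80, 0.15, 0.05],     # P[i, j] = probability of moving from state i to j
                    [0.10, 0.80, 0.10],
                    [0.05, 0.15, 0.80]])
      steps = 10
      prior = np.array([1/3, 1/3, 1/3])      # assumed prior over initial states
      final_state = 2                        # the observed final state

      # Likelihood of reaching the observed final state from each initial state.
      likelihood = np.linalg.matrix_power(P, steps)[:, final_state]

      # Retrodiction: posterior over initial states given the final observation.
      posterior = prior * likelihood
      posterior /= posterior.sum()
      print(posterior)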

  10. A High Precision Terahertz Wave Image Reconstruction Algorithm

    PubMed Central

    Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang

    2016-01-01

    With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always supposed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as the Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices the quality of images for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, the features of which combine both of the classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performances of PMA are studied in detail in contrast to BPA and RMA. It is demonstrated in our simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269

  11. Comparative analysis of PSO algorithms for PID controller tuning

    NASA Astrophysics Data System (ADS)

    Štimac, Goranka; Braut, Sanjin; Žigulić, Roberto

    2014-09-01

    The active magnetic bearing (AMB) suspends the rotating shaft and maintains it in a levitated position by applying controlled electromagnetic forces on the rotor in the radial and axial directions. Although the development of various control methods is rapid, the PID control strategy is still the most widely used in many applications, including AMBs. In order to tune the PID controller, a particle swarm optimization (PSO) method is applied. Therefore, a comparative analysis of particle swarm optimization (PSO) algorithms is carried out, where two PSO algorithms, namely (1) PSO with linearly decreasing inertia weight (LDW-PSO) and (2) PSO with a constriction factor approach (CFA-PSO), are independently tested for different PID structures. The computer simulations are carried out with the aim of minimizing the objective function defined as the integral of time multiplied by the absolute value of error (ITAE). In order to validate the performance of the analyzed PSO algorithms, one-axis and two-axis radial rotor/active magnetic bearing systems are examined. The results show that the PSO algorithms are effective and easily implemented methods, providing stable convergence and good computational efficiency for different PID structures for the rotor/AMB systems. Moreover, the PSO algorithms prove to be easily used for controller tuning in the case of both SISO and MIMO systems, which consider the system delay and the interference among the horizontal and vertical rotor axes.
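
    A compact sketch of the LDW-PSO variant compared in the paper is given below. The objective used here is a simple placeholder, whereas the paper minimizes the ITAE of the simulated rotor/AMB closed-loop response as a function of the PID gains; all parameter values are assumptions made for the example.

      import numpy as np

      def ldw_pso(f, dim, bounds, n_particles=30, iters=200, w_max=0.9, w_min=0.4,
                  c1=2.0, c2=2.0, seed=0):
          """PSO with linearly decreasing inertia weight (LDW-PSO), minimizing f."""
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          x = rng.uniform(lo, hi, size=(n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
          g = pbest[np.argmin(pbest_f)].copy()
          for t in range(iters):
              w = w_max - (w_max - w_min) * t / iters    # inertia weight decreases linearly
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              fx = np.array([f(p) for p in x])
              better = fx < pbest_f
              pbest[better], pbest_f[better] = x[better], fx[better]
              g = pbest[np.argmin(pbest_f)].copy()
          return g, float(pbest_f.min())

      # Placeholder objective; in the paper this would be the ITAE of the closed-loop
      # response as a function of the PID gains (Kp, Ki, Kd).
      gains, cost = ldw_pso(lambda p: float(np.sum((p - 1.0) ** 2)), dim=3, bounds=(0.0, 10.0))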

  12. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  13. Parachute Line Hook Includes Integral Loop Expander

    NASA Technical Reports Server (NTRS)

    Bayless, G. B.

    1983-01-01

    Parachute packing is simplified with a modified line hook. One person can pack parachutes for test recovery vehicles faster than the previous two-person team. The new line hook includes an expander that opens up the two locking loops so that the parachute lines can be pulled through them. Parachutes are packed at high pressure so that they can be compressed into the limited space available in the test vehicles.

  14. On the new GPCC gridded reference data sets of observed (daily) monthly land-surface precipitation since (1988) 1901 published in 2014 including an all seasons open source test product

    NASA Astrophysics Data System (ADS)

    Ziese, Markus; Andreas, Becker; Peter, Finger; Anja, Meyer-Christoffer; Kirstin, Schamm; Udo, Schneider

    2014-05-01

    compared to other data sets like CRU or GHCN is based on the fact that GPCC does not claim copyrights for its supplied data. Therefore, GPCC cannot make public the original data underlying its analysis products. Still, to allow users to check GPCC's methods of re-processing and interpolation, a new Interpolation Test Dataset (ITD) will be released. The ITD will be based on a subset of publicly available station data and will cover only one year. The gridded data, as well as the underlying copyright-free station data, will be provided with the ITD, addressing open-source demands.

  15. Reproducibility of Research Algorithms in GOES-R Operational Software

    NASA Astrophysics Data System (ADS)

    Kennelly, E.; Botos, C.; Snell, H. E.; Steinfelt, E.; Khanna, R.; Zaccheo, T.

    2012-12-01

    The research to operations transition for satellite observations is an area of active interest as identified by The National Research Council Committee on NASA-NOAA Transition from Research to Operations. Their report recommends improved transitional processes for bridging technology from research to operations. Assuring the accuracy of operational algorithm results as compared to research baselines, called reproducibility in this paper, is a critical step in the GOES-R transition process. This paper defines reproducibility methods and measurements for verifying that operationally implemented algorithms conform to research baselines, demonstrated with examples from GOES-R software development. The approach defines reproducibility for implemented algorithms that produce continuous data in terms of a traditional goodness-of-fit measure (i.e., correlation coefficient), while the reproducibility for discrete categorical data is measured using a classification matrix. These reproducibility metrics have been incorporated in a set of Test Tools developed for GOES-R and the software processes have been developed to include these metrics to validate both the scientific and numerical implementation of the GOES-R algorithms. In this work, we outline the test and validation processes and summarize the current results for GOES-R Level 2+ algorithms.
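
    The two reproducibility measures described (a goodness-of-fit correlation for continuous products and a classification matrix for categorical products) can be illustrated with the minimal sketch below; this is not the GOES-R Test Tools code, and the function names are invented for the example.

      import numpy as np

      def continuous_reproducibility(operational, research):
          """Goodness of fit of operational output against the research baseline."""
          return float(np.corrcoef(np.ravel(operational), np.ravel(research))[0, 1])

      def categorical_reproducibility(operational, research, n_classes):
          """Classification (confusion) matrix: rows = research class, cols = operational class."""
          m = np.zeros((n_classes, n_classes), dtype=int)
          for r, o in zip(np.ravel(research), np.ravel(operational)):
              m[r, o] += 1
          return m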

  16. Harmony search algorithm: application to the redundancy optimization problem

    NASA Astrophysics Data System (ADS)

    Nahas, Nabil; Thien-My, Dao

    2010-09-01

    The redundancy optimization problem is a well-known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a new nature-inspired algorithm which mimics the improvisation process of music players. Two kinds of problems are considered in testing the proposed algorithm. The first is limited to the binary series-parallel system, where the problem consists of selecting elements and redundancy levels to maximize the system reliability given various system-level constraints; the second concerns multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, in which identical redundant elements are included in order to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared. The results show that the HSA can provide very good solutions when compared to those obtained through other approaches.

  17. Variational Algorithms for Drift and Collisional Guiding Center Dynamics

    NASA Astrophysics Data System (ADS)

    Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.

    2014-10-01

    The simulation of guiding center test particle dynamics in the upcoming generation of magnetic confinement devices requires novel numerical methods to obtain the necessary long-term numerical fidelity. Geometric algorithms, which retain conserved quantities in the numerical time advances, are well-known to exhibit excellent long simulation time behavior. Due to the non-canonical Hamiltonian structure of the guiding center equations of motion, it is only recently that geometric algorithms have been developed for guiding center dynamics. This poster will discuss and compare several families of variational algorithms for application to 3-D guiding center test particle studies, while benchmarking the methods against standard Runge-Kutta techniques. Time-to-solution improvements using GPGPU hardware will be presented. Additionally, collisional dynamics will be incorporated into the structure-preserving guiding center algorithms for the first time. Non-Hamiltonian effects, such as polarization drag and simplified stochastic operators, can be incorporated using a Lagrange-d'Alembert variational principle. The long-time behavior of variational algorithms which include dissipative dynamics will be compared against standard techniques. This work was supported by DOE Contract DE-AC02-09CH11466.

  18. Reliability of old and new ventricular fibrillation detection algorithms for automated external defibrillators

    PubMed Central

    Amann, Anton; Tratnig, Robert; Unterkofler, Karl

    2005-01-01

    Background A pivotal component in automated external defibrillators (AEDs) is the detection of ventricular fibrillation by means of appropriate detection algorithms. In the scientific literature there exists a wide variety of methods and ideas for handling this task. These algorithms should have high detection quality, be easily implementable, and work in real time in an AED. Testing of these algorithms should be done using a large amount of annotated data under equal conditions. Methods For our investigation we simulated a continuous analysis by selecting the data in steps of one second without any preselection. We used the complete MIT-BIH arrhythmia database, the CU database, and the files 7001 – 8210 of the AHA database. All algorithms were tested under equal conditions. Results For 5 well-known standard and 5 new ventricular fibrillation detection algorithms we calculated the sensitivity, specificity, and the area under their receiver operating characteristic curves. In addition, two QRS detection algorithms were included. These results are based on approximately 330 000 decisions (per algorithm). Conclusion Our values for sensitivity and specificity differ from earlier investigations since we used no preselection. The best algorithm is a new one, presented here for the first time. PMID:16253134
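
    The evaluation metrics used in this study can be computed with a few lines of code. The sketch below calculates sensitivity, specificity, and ROC area (via the rank-sum identity, ignoring ties) for hypothetical per-second detector decisions and scores; the label and score arrays are made up for illustration and do not come from the cited databases.

        import numpy as np

        def sensitivity_specificity(labels, decisions):
            """labels, decisions: boolean arrays (True = ventricular fibrillation)."""
            tp = np.sum(decisions & labels)
            tn = np.sum(~decisions & ~labels)
            fp = np.sum(decisions & ~labels)
            fn = np.sum(~decisions & labels)
            return tp / (tp + fn), tn / (tn + fp)

        def roc_area(labels, scores):
            """Area under the ROC curve via the rank-sum (Mann-Whitney) identity, assuming no tied scores."""
            order = np.argsort(scores)
            ranks = np.empty_like(order, dtype=float)
            ranks[order] = np.arange(1, len(scores) + 1)
            n_pos = labels.sum()
            n_neg = len(labels) - n_pos
            return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

        # Hypothetical per-second annotations and detector scores
        labels = np.array([True, True, False, False, True, False, False, True])
        scores = np.array([0.9, 0.8, 0.4, 0.3, 0.7, 0.6, 0.2, 0.85])
        decisions = scores > 0.5
        print(sensitivity_specificity(labels, decisions))
        print(roc_area(labels, scores))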

  19. Optimized selection of benchmark test parameters for image watermark algorithms based on Taguchi methods and corresponding influence on design decisions for real-world applications

    NASA Astrophysics Data System (ADS)

    Rodriguez, Tony F.; Cushman, David A.

    2003-06-01

    With the growing commercialization of watermarking techniques in various application scenarios it has become increasingly important to quantify the performance of watermarking products. The quantification of the relative merits of various products is not only essential in enabling further adoption of the technology by society as a whole, but will also drive the industry to develop testing plans and methodologies that ensure quality and minimize cost (to both vendors and customers). While the research community understands the theoretical need for a publicly available benchmarking system to quantify performance, there has been less discussion of the practical application of these systems. By providing a standard set of acceptance criteria, benchmarking systems can dramatically increase the quality of a particular watermarking solution, validating product performance when they are used efficiently and frequently during the design process. In this paper we describe how to leverage specific design-of-experiments techniques to increase the quality of a watermarking scheme, to be used with the benchmark tools being developed by the Ad-Hoc Watermark Verification Group. A Taguchi Loss Function is proposed for an application, and orthogonal arrays are used to isolate optimal levels in a multi-factor experimental situation. Finally, the results are generalized to a population of cover works and validated through an exhaustive test.
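
    The combination of a quadratic Taguchi loss function and an orthogonal-array main-effects analysis can be sketched briefly. The factor names, measured detection rates, and target value below are hypothetical placeholders, not the paper's experiment; the sketch only shows the mechanics of scoring runs with a loss function and picking preferred levels per factor.

        import numpy as np

        def taguchi_loss(y, target, k=1.0):
            """Nominal-is-best quadratic loss L(y) = k * (y - target)^2."""
            return k * (y - target) ** 2

        # L4(2^3) orthogonal array: 4 runs, 3 two-level factors (hypothetical
        # watermark parameters such as embedding strength, block size, redundancy).
        L4 = np.array([[0, 0, 0],
                       [0, 1, 1],
                       [1, 0, 1],
                       [1, 1, 0]])

        # Hypothetical measured detection rates for each run and the target rate.
        measured = np.array([0.91, 0.95, 0.88, 0.97])
        losses = taguchi_loss(measured, target=1.0)

        # Main-effects analysis: average loss at each level of each factor.
        for factor in range(L4.shape[1]):
            means = [losses[L4[:, factor] == level].mean() for level in (0, 1)]
            print(f"factor {factor}: mean loss level0={means[0]:.4f}, level1={means[1]:.4f},"
                  f" prefer level {int(np.argmin(means))}")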

  20. Using gaming engines and editors to construct simulations of fusion algorithms for situation management

    NASA Astrophysics Data System (ADS)

    Lewis, Lundy M.; DiStasio, Nolan; Wright, Christopher

    2010-04-01

    In this paper we discuss issues in testing various cognitive fusion algorithms for situation management. We provide a proof-of-principle discussion and demo showing how gaming technologies and platforms could be used to devise and test various fusion algorithms, including input, processing, and output, and we look at how the proof-of-principle could lead to more advanced test beds and methods for high-level fusion in support of situation management. We develop four simple fusion scenarios and one more complex scenario in which a simple rule-based system is scripted to govern the behavior of battlespace entities.

  1. Algorithm Animation with Galant.

    PubMed

    Stallmann, Matthias F

    2017-01-01

    Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.

  2. A new algorithm for constrained nonlinear least-squares problems, part 1

    NASA Technical Reports Server (NTRS)

    Hanson, R. J.; Krogh, F. T.

    1983-01-01

    A Gauss-Newton algorithm is presented for solving nonlinear least squares problems. The problem statement may include simple bounds or more general constraints on the unknowns. The algorithm uses a trust region that allows the objective function to increase, with logic for retreating to the best values found. The computations for the linear problem are done using a least squares system solver that allows for simple bounds and linear constraints. The trust region limits are defined by a box around the current point. In its current form the algorithm is effective only for problems with small residuals, linear constraints, and dense Jacobian matrices. Results on a set of test problems are encouraging.
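
    For reference, a bare unconstrained Gauss-Newton iteration looks like the sketch below; it omits the bounds, trust region, and retreat logic of the algorithm described above, and the exponential-fit test problem is a hypothetical small-residual example.

        import numpy as np

        def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
            """Minimal unconstrained Gauss-Newton: solve the linearized least squares
            problem J*step = -r at each iterate and update x."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                r = residual(x)
                J = jacobian(x)
                step, *_ = np.linalg.lstsq(J, -r, rcond=None)
                x = x + step
                if np.linalg.norm(step) < tol:
                    break
            return x

        # Hypothetical small-residual test problem: fit y = a * exp(b * t).
        t = np.linspace(0.0, 1.0, 20)
        y = 2.0 * np.exp(-1.5 * t)
        residual = lambda x: x[0] * np.exp(x[1] * t) - y
        jacobian = lambda x: np.column_stack([np.exp(x[1] * t),
                                              x[0] * t * np.exp(x[1] * t)])
        print(gauss_newton(residual, jacobian, x0=[1.0, -1.0]))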

  3. A Linear Quadratic Regulator Weight Selection Algorithm for Robust Pole Assignment

    DTIC Science & Technology

    1990-12-01

    MATLAB. Five test cases are run with the algorithm. First and second order systems that can be solved in closed form are compared with the algorithm poles... The MATLAB "m"-file used to solve the second order SISO system is included in appendix A. Run time on the Compaq 286 was 4 minutes. Table II Second... poles but it should get close. When this system was run in MATLAB with the algorithm m-files, the achievable poles were found to be -2.096 ± 2.389j
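
    The underlying workflow, computing an LQR gain for a candidate weight matrix and checking how close the resulting closed-loop poles come to a desired set, can be sketched in a few lines. The double-integrator plant and the weight sweep below are hypothetical and are not the thesis's MATLAB m-files; the Riccati solve uses scipy.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def lqr_poles(A, B, Q, R):
            """LQR gain from the continuous algebraic Riccati equation and the
            resulting closed-loop poles eig(A - B K)."""
            P = solve_continuous_are(A, B, Q, R)
            K = np.linalg.solve(R, B.T @ P)
            return K, np.linalg.eigvals(A - B @ K)

        # Hypothetical second-order SISO plant (double integrator).
        A = np.array([[0.0, 1.0], [0.0, 0.0]])
        B = np.array([[0.0], [1.0]])
        R = np.array([[1.0]])

        # Sweep a diagonal state weighting and watch the closed-loop poles move;
        # a weight-selection algorithm would iterate on Q until the achievable
        # poles are close enough to the desired ones.
        for q1 in (1.0, 10.0, 100.0):
            Q = np.diag([q1, 1.0])
            K, poles = lqr_poles(A, B, Q, R)
            print(f"q1={q1:6.1f}  K={K.ravel()}  poles={np.round(poles, 3)}")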

  4. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  5. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  6. A distributed Canny edge detector: algorithm and FPGA implementation.

    PubMed

    Xu, Qian; Varadarajan, Srenivas; Chakrabarti, Chaitali; Karam, Lina J

    2014-07-01

    The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it computationally more intensive as compared with other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block level leads to excessive edges in smooth regions and to loss of significant edges in high-detailed regions, since the original Canny computes the high and low thresholds based on frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD, since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than the original frame-based algorithm, especially when noise is present in the images. Finally, this algorithm is implemented using a 32 computing engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM READ/WRITE time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100
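
    The general idea of running Canny per block with locally adapted thresholds can be sketched with OpenCV as below. Note that this uses a simple median-based threshold heuristic and per-block hysteresis, not the paper's block-type classification or nonuniform gradient histogram, so it will show blocking artifacts that the published algorithm is designed to avoid; the input file name is a hypothetical placeholder.

        import cv2
        import numpy as np

        def block_canny(image, block=64):
            """Apply Canny tile by tile with thresholds derived from each tile's
            median intensity (a crude stand-in for the paper's adaptive scheme)."""
            edges = np.zeros_like(image)
            for y in range(0, image.shape[0], block):
                for x in range(0, image.shape[1], block):
                    tile = image[y:y + block, x:x + block]
                    med = float(np.median(tile))
                    low = int(max(0, 0.66 * med))
                    high = int(min(255, 1.33 * med))
                    edges[y:y + block, x:x + block] = cv2.Canny(tile, low, high)
            return edges

        if __name__ == "__main__":
            img = cv2.imread("test_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
            if img is not None:
                cv2.imwrite("edges.png", block_canny(img))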

  7. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).
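
    The three evaluation figures quoted above (scene cloud-fraction bias, its root mean square error, and pixel-level agreement) are straightforward to compute once reference and algorithm masks are available. The sketch below uses randomly generated boolean masks as hypothetical stand-ins for the Landsat scenes and USGS manual masks.

        import numpy as np

        def cloud_mask_scores(manual_masks, algo_masks):
            """Scene cloud-fraction bias, RMS error, and pixel-level agreement
            between reference masks and algorithm masks (lists of boolean 2-D arrays)."""
            manual_cf = np.array([m.mean() for m in manual_masks])
            algo_cf = np.array([a.mean() for a in algo_masks])
            diff = algo_cf - manual_cf
            bias = diff.mean()
            rmse = np.sqrt((diff ** 2).mean())
            agreement = np.mean([np.mean(a == m) for a, m in zip(algo_masks, manual_masks)])
            return bias, rmse, agreement

        # Hypothetical masks standing in for Landsat scenes and their manual truth.
        rng = np.random.default_rng(0)
        manual = [rng.random((100, 100)) > 0.6 for _ in range(5)]
        algo = [m ^ (rng.random(m.shape) > 0.97) for m in manual]   # flip ~3% of pixels
        print(cloud_mask_scores(manual, algo))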

  8. A single TLD dose algorithm to satisfy federal standards and typical field conditions

    SciTech Connect

    Stanford, N.; McCurdy, D.E. )

    1990-06-01

    Modern whole-body dosimeters are often required to accurately measure the absorbed dose in a wide range of radiation fields. While programs are commonly developed around the fields tested as part of the National Voluntary Accreditation Program (NVLAP), the actual fields of application may be significantly different. Dose algorithms designed to meet the NVLAP standard, which emphasizes photons and high-energy beta radiation, may not be capable of the beta-energy discrimination necessary for accurate assessment of absorbed dose in the work environment. To address this problem, some processors use one algorithm for NVLAP testing and one or more different algorithms for the work environments. After several years of experience with a multiple algorithm approach, the Dosimetry Services Group of Yankee Atomic Electric Company (YAEC) developed a one-algorithm system for use with a four-element TLD badge using Li2B4O7 and CaSO4 phosphors. The design of the dosimeter allows the measurement of the effective energies of both photon and beta components of the radiation field, resulting in excellent mixed-field capability. The algorithm was successfully tested in all of the NVLAP photon and beta fields, as well as several non-NVLAP fields representative of the work environment. The work environment fields, including low- and medium-energy beta radiation and mixed fields of low-energy photons and beta particles, are often more demanding than the NVLAP fields. This paper discusses the development of the algorithm as well as some results of the system testing including: mixed-field irradiations, angular response, and a unique test to demonstrate the stability of the algorithm. An analysis of the uncertainty of the reported doses under various irradiation conditions is also presented.

  9. A single TLD dose algorithm to satisfy federal standards and typical field conditions.

    PubMed

    Stanford, N; McCurdy, D E

    1990-06-01

    Modern whole-body dosimeters are often required to accurately measure the absorbed dose in a wide range of radiation fields. While programs are commonly developed around the fields tested as part of the National Voluntary Accreditation Program (NVLAP), the actual fields of application may be significantly different. Dose algorithms designed to meet the NVLAP standard, which emphasizes photons and high-energy beta radiation, may not be capable of the beta-energy discrimination necessary for accurate assessment of absorbed dose in the work environment. To address this problem, some processors use one algorithm for NVLAP testing and one or more different algorithms for the work environments. After several years of experience with a multiple algorithm approach, the Dosimetry Services Group of Yankee Atomic Electric Company (YAEC) developed a one-algorithm system for use with a four-element TLD badge using Li2B4O7 and CaSO4 phosphors. The design of the dosimeter allows the measurement of the effective energies of both photon and beta components of the radiation field, resulting in excellent mixed-field capability. The algorithm was successfully tested in all of the NVLAP photon and beta fields, as well as several non-NVLAP fields representative of the work environment. The work environment fields, including low- and medium-energy beta radiation and mixed fields of low-energy photons and beta particles, are often more demanding than the NVLAP fields. This paper discusses the development of the algorithm as well as some results of the system testing including: mixed-field irradiations, angular response, and a unique test to demonstrate the stability of the algorithm. An analysis of the uncertainty of the reported doses under various irradiation conditions is also presented.

  10. The Rational Hybrid Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Clark, Michael

    2006-12-01

    The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, in which Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.
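
    The central ingredient, a partial-fraction rational approximation of an inverse square root, can be illustrated with scalar arguments. The weights and shifts below come from a simple quadrature of the identity x^(-1/2) = (2/pi) * integral_0^inf dt/(t^2 + x), purely for illustration; production RHMC codes use Remez-optimized coefficients, and with a matrix argument each partial-fraction term corresponds to one multi-shift linear solve.

        import numpy as np

        def rational_inv_sqrt_terms(n_terms=40):
            """Weights and shifts for r(x) = sum_i w_i / (x + s_i) ~ x^(-1/2),
            obtained from a midpoint rule after the substitution t = tan(theta)."""
            theta = (np.arange(n_terms) + 0.5) * (np.pi / 2) / n_terms
            weights = (2 / np.pi) * (np.pi / 2 / n_terms) / np.cos(theta) ** 2
            shifts = np.tan(theta) ** 2
            return weights, shifts

        def rational_inv_sqrt(x, weights, shifts):
            """Evaluate the partial-fraction rational approximation at x."""
            return sum(w / (x + s) for w, s in zip(weights, shifts))

        weights, shifts = rational_inv_sqrt_terms()
        for x in (0.5, 1.0, 4.0, 25.0):
            print(x, rational_inv_sqrt(x, weights, shifts), x ** -0.5)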

  11. JPSS CGS Tools For Rapid Algorithm Updates

    NASA Astrophysics Data System (ADS)

    Smith, D. C.; Grant, K. D.

    2011-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS will contribute the afternoon orbit component and ground processing system of the restructured National Polar-orbiting Operational Environmental Satellite System (NPOESS). As such, JPSS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the ground processing component of both POES and the Defense Meteorological Satellite Program (DMSP) replacement known as the Defense Weather Satellite System (DWSS), managed by the Department of Defense (DoD). The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS), and consists of a Command, Control, and Communications Segment (C3S) and the Interface Data Processing Segment (IDPS). Both are developed by Raytheon Intelligence and Information Systems (IIS). The Interface Data Processing Segment will process NPOESS Preparatory Project, Joint Polar Satellite System and Defense Weather Satellite System satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. Under NPOESS, Northrop Grumman Aerospace Systems Algorithms and Data Products (A&DP) organization was responsible for the algorithms that produce the EDRs, including their quality aspects. For JPSS, that responsibility has transferred to NOAA's Center for Satellite Applications & Research (STAR). As the Calibration and Validation (Cal/Val) activities move forward following both the NPP launch and subsequent JPSS and DWSS launches, rapid algorithm updates may be required. Raytheon and

  12. Performance analysis of freeware filtering algorithms for determining ground surface from airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Julge, Kalev; Ellmann, Artu; Gruno, Anti

    2014-01-01

    Numerous filtering algorithms have been developed in order to distinguish the ground surface from nonground points acquired by airborne laser scanning. These algorithms attempt to determine the ground points automatically using various features such as predefined parameters and statistical analysis. Their efficiency also depends on landscape characteristics. The aim of this contribution is to test the performance of six common filtering algorithms embedded in three freeware programs. The algorithms tested (adaptive TIN, elevation threshold with expand window, maximum local slope, progressive morphology, multiscale curvature, and linear prediction) were applied to four relatively large (4 to 8 km2) and diverse landscape areas, which included steep-sloped hills, urban areas, ridge-like eskers, and a river valley. The results show that in the diverse test areas each algorithm yields various commission and omission errors. It appears that adaptive TIN is suitable in urban areas, while the multiscale curvature algorithm is best suited to wooded areas. The multiscale curvature algorithm yielded the overall best results, with average root-mean-square error values of 0.35 m.

  13. Comparison of various contact algorithms for poroelastic tissues.

    PubMed

    Galbusera, Fabio; Bashkuev, Maxim; Wilke, Hans-Joachim; Shirazi-Adl, Aboulfazl; Schmidt, Hendrik

    2014-01-01

    Capabilities of the commercial finite element package ABAQUS in simulating frictionless contact between two saturated porous structures were evaluated and compared with those of an open source code, FEBio. In ABAQUS, both the default contact implementation and another algorithm based on an iterative approach requiring script programming were considered. Test simulations included a patch test of two cylindrical slabs in a gapless contact and confined compression conditions; a confined compression test of a porous cylindrical slab with a spherical porous indenter; and finally two unconfined compression tests of soft tissues mimicking diarthrodial joints. The patch test showed almost identical results for all algorithms. On the contrary, the confined and unconfined compression tests demonstrated large differences related to distinct physical and boundary conditions considered in each of the three contact algorithms investigated in this study. In general, contact with non-uniform gaps between fluid-filled porous structures could be effectively simulated with either ABAQUS or FEBio. The user should be aware of the parameter definitions, assumptions and limitations in each case, and take into consideration the physics and boundary conditions of the problem of interest when searching for the most appropriate model.

  14. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND flash and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application for various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
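
    The image correlation kernel used for benchmarking is essentially a DFT-based cross-correlation, the kind of computation that is a natural fit for the DSP core. The sketch below demonstrates the idea with NumPy on a hypothetical random image containing an embedded template; it is a generic illustration, not the paper's optimized OMAP implementation.

        import numpy as np

        def correlate_fft(image, template):
            """Cross-correlation of a template with an image via the DFT."""
            H = np.fft.rfft2(image)
            T = np.fft.rfft2(template, s=image.shape)      # zero-pad template to image size
            return np.fft.irfft2(H * np.conj(T), s=image.shape)

        # Hypothetical test: extract a patch from a random image and locate it again.
        rng = np.random.default_rng(1)
        image = rng.random((256, 256))
        template = image[100:116, 50:66].copy()
        scores = correlate_fft(image - image.mean(), template - template.mean())
        peak = np.unravel_index(np.argmax(scores), scores.shape)
        print("template located at", peak)                 # expected near (100, 50)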

  15. Algorithm for navigated ESS.

    PubMed

    Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L

    2013-12-01

    ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving the surgical outcome of patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus is not used on a daily basis. This paper presents an algorithm for use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has the shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically.

  16. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  17. Development, Comparisons and Evaluation of Aerosol Retrieval Algorithms

    NASA Astrophysics Data System (ADS)

    de Leeuw, G.; Holzer-Popp, T.; Aerosol-cci Team

    2011-12-01

    The Climate Change Initiative (cci) of the European Space Agency (ESA) has brought together a team of European aerosol retrieval groups working on the development and improvement of aerosol retrieval algorithms. The goal of this cooperation is the development of methods to provide the best possible information on climate and climate change based on satellite observations. To achieve this, algorithms are characterized in detail as regards the retrieval approaches, the aerosol models used in each algorithm, cloud detection and surface treatment. A round-robin intercomparison of results from the various participating algorithms serves to identify the best modules or combinations of modules for each sensor. Annual global datasets including their uncertainties will then be produced and validated. The project builds on 9 existing algorithms to produce spectral aerosol optical depth (AOD) and Ångström exponent as well as other aerosol information; two instruments are included to provide the absorbing aerosol index (AAI) and stratospheric aerosol information. The algorithms included are: - 3 for ATSR (ORAC developed by RAL / Oxford University, ADV developed by FMI, and the SU algorithm developed by Swansea University) - 2 for MERIS (BAER by Bremen University and the ESA standard handled by HYGEOS) - 1 for POLDER over ocean (LOA) - 1 for synergetic retrieval (SYNAER by DLR) - 1 for OMI retrieval of the absorbing aerosol index with averaging kernel information (KNMI) - 1 for GOMOS stratospheric extinction profile retrieval (BIRA). The first seven algorithms aim at the retrieval of the AOD. However, the algorithms differ in their approach, even those working with the same instrument such as ATSR or MERIS. To analyse the strengths and weaknesses of each algorithm several tests are made. The starting point for comparison and measurement of improvements is a retrieval run for 1 month, September 2008. The data from the same month are subsequently used for

  18. Revisiting the method of characteristics via a convex hull algorithm

    NASA Astrophysics Data System (ADS)

    LeFloch, Philippe G.; Mercier, Jean-Marc

    2015-10-01

    We revisit the method of characteristics for shock wave solutions to nonlinear hyperbolic problems and propose a novel numerical algorithm, the convex hull algorithm (CHA), which allows us to compute both entropy dissipative solutions (satisfying all entropy inequalities) and entropy conservative (or multi-valued) solutions. From the multi-valued solutions determined by the method of characteristics, our algorithm "extracts" the entropy dissipative solutions, even after the formation of shocks. It applies to both convex and non-convex fluxes/Hamiltonians. We demonstrate the relevance of the proposed method with a variety of numerical tests, including conservation laws in one or two spatial dimensions and problems arising in fluid dynamics.
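
    A core building block of such convexification approaches is the lower convex envelope of a sampled flux. The sketch below computes it with a monotone (Graham-scan style) pass over points sorted in the independent variable; the cubic flux is a hypothetical example, and this is only the envelope step, not the full CHA of the paper.

        import numpy as np

        def lower_convex_envelope(u, f):
            """Lower convex envelope of sampled points (u_i, f(u_i)); the kept
            vertices define a convexified flux."""
            hull = []
            for point in zip(u, f):
                while len(hull) >= 2:
                    (x1, y1), (x2, y2) = hull[-2], hull[-1]
                    # drop the middle point if the turn is not convex (clockwise)
                    if (x2 - x1) * (point[1] - y1) - (y2 - y1) * (point[0] - x1) < 0:
                        hull.pop()
                    else:
                        break
                hull.append(point)
            return np.array(hull)

        # Hypothetical non-convex flux, e.g. the cubic f(u) = u^3 - u.
        u = np.linspace(-1.5, 1.5, 301)
        f = u ** 3 - u
        env = lower_convex_envelope(u, f)
        print(f"{len(env)} of {len(u)} sampled points lie on the convex envelope")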

  19. A Short Survey of Document Structure Similarity Algorithms

    SciTech Connect

    Buttler, D

    2004-02-27

    This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
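
    To show how cheap a shingle-based structural comparison can be, here is a minimal sketch that shingles a document's tag sequence and takes a Jaccard similarity over the shingle sets. The tag sequences are hypothetical, and the exact shingling used in the survey may differ.

        def tag_shingles(tags, k=4):
            """k-shingles over a document's tag sequence (e.g. from an HTML parse)."""
            return {tuple(tags[i:i + k]) for i in range(len(tags) - k + 1)}

        def structural_similarity(tags_a, tags_b, k=4):
            """Jaccard similarity of tag shingles, a cheap proxy for tree edit distance."""
            a, b = tag_shingles(tags_a, k), tag_shingles(tags_b, k)
            return len(a & b) / len(a | b) if a | b else 1.0

        # Hypothetical tag sequences from two pages built on the same template.
        page1 = ["html", "body", "div", "h1", "div", "p", "p", "div", "ul", "li", "li"]
        page2 = ["html", "body", "div", "h1", "div", "p", "div", "ul", "li", "li", "li"]
        print(structural_similarity(page1, page2))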

  20. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state-of-the-art expert system environments contain a single, often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but must instead include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
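
    The dependency assumptions listed in the abstract correspond to standard combination rules for two assertion probabilities. The sketch below gives textbook formulas matching those categories as an illustration; it is not the authors' implementation, and the numeric inputs are arbitrary.

        def combine(p, q, mode):
            """Conjunction and disjunction of two assertion probabilities under
            different dependency assumptions (illustrative textbook formulas)."""
            if mode == "independent":          # P(A and B) = p*q
                return p * q, p + q - p * q
            if mode == "mutually_exclusive":   # assertions cannot both hold
                return 0.0, min(1.0, p + q)
            if mode == "minimum_overlap":      # Lukasiewicz-style bounds
                return max(0.0, p + q - 1.0), min(1.0, p + q)
            if mode == "maximum_overlap":      # fuzzy-logic min/max
                return min(p, q), max(p, q)
            raise ValueError(mode)

        for mode in ("independent", "mutually_exclusive", "minimum_overlap", "maximum_overlap"):
            print(mode, combine(0.7, 0.2, mode))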