Science.gov

Sample records for algorithms tested include

  1. Effective detection of toxigenic Clostridium difficile by a two-step algorithm including tests for antigen and cytotoxin.

    PubMed

    Ticehurst, John R; Aird, Deborah Z; Dam, Lisa M; Borek, Anita P; Hargrove, John T; Carroll, Karen C

    2006-03-01

    We evaluated a two-step algorithm for detecting toxigenic Clostridium difficile: an enzyme immunoassay for glutamate dehydrogenase antigen (Ag-EIA) and then, for antigen-positive specimens, a concurrent cell culture cytotoxicity neutralization assay (CCNA). Antigen-negative results were ≥99% predictive of CCNA negativity. Because the Ag-EIA reduced cell culture workload by approximately 75 to 80% and two-step testing was complete in ≤3 days, we decided that this algorithm would be effective. Over 6 months, our laboratories' expenses were US $143,000 less than if CCNA alone had been performed on all 5,887 specimens. PMID:16517916
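
    As an illustration only, the two-step decision flow described above can be sketched in a few lines of Python; the assay functions below are hypothetical stand-ins for the Ag-EIA and CCNA, not part of the study.

      # Minimal sketch of the two-step testing flow described above (illustrative only).
      # The predicate functions are hypothetical stand-ins for the laboratory assays.

      def two_step_c_difficile(specimen, antigen_eia, cytotoxin_ccna):
          """Return a result string for one stool specimen.

          antigen_eia and cytotoxin_ccna are callables returning True/False,
          standing in for the Ag-EIA and the cell culture cytotoxicity
          neutralization assay (CCNA), respectively.
          """
          if not antigen_eia(specimen):
              # Antigen-negative specimens are reported without CCNA, which is
              # where the ~75-80% reduction in cell culture workload comes from.
              return "negative (antigen-negative, no CCNA performed)"
          if cytotoxin_ccna(specimen):
              return "positive (antigen-positive, cytotoxin-positive)"
          return "negative (antigen-positive, cytotoxin-negative)"

      if __name__ == "__main__":
          # Toy demonstration with dictionary-backed "assays".
          fake_ag = {"A": True, "B": False}
          fake_tox = {"A": True, "B": False}
          for s in ("A", "B"):
              print(s, two_step_c_difficile(s, fake_ag.get, fake_tox.get))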

  2. Locating critical points on multi-dimensional surfaces by genetic algorithm: test cases including normal and perturbed argon clusters

    NASA Astrophysics Data System (ADS)

    Chaudhury, Pinaki; Bhattacharyya, S. P.

    1999-03-01

    It is demonstrated that a Genetic Algorithm in a floating-point realisation can be a viable tool for locating critical points on a multi-dimensional potential energy surface (PES). For small clusters, the standard algorithm works well. For bigger ones, the search for the global minimum becomes more efficient when used in conjunction with coordinate stretching and partitioning of the strings into a core part and an outer part which are alternately optimized. The method works with equal facility for locating minima, local as well as global, and saddle points (SP) of arbitrary orders. The search for minima requires computation of the gradient vector, but not the Hessian, while that for SPs requires information on the gradient vector and the Hessian, the latter only at some specific points on the path. The proposed method is tested on (i) a model 2-d PES, (ii) argon clusters (Ar_4-Ar_30) in which argon atoms interact via a Lennard-Jones potential, and (iii) Ar_mX (m=12) clusters where X may be a neutral atom or a cation. We also explore whether the method could be used to construct what may be called a stochastic representation of the reaction path on a given PES with reference to conformational changes in Ar_n clusters.

  3. Yield of stool culture with isolate toxin testing versus a two-step algorithm including stool toxin testing for detection of toxigenic Clostridium difficile.

    PubMed

    Reller, Megan E; Lema, Clara A; Perl, Trish M; Cai, Mian; Ross, Tracy L; Speck, Kathleen A; Carroll, Karen C

    2007-11-01

    We examined the incremental yield of stool culture (with toxin testing on isolates) versus our two-step algorithm for optimal detection of toxigenic Clostridium difficile. Per the two-step algorithm, stools were screened for C. difficile-associated glutamate dehydrogenase (GDH) antigen and, if positive, tested for toxin by a direct (stool) cell culture cytotoxicity neutralization assay (CCNA). In parallel, stools were cultured for C. difficile and tested for toxin by both indirect (isolate) CCNA and conventional PCR if the direct CCNA was negative. The "gold standard" for toxigenic C. difficile was detection of C. difficile by the GDH screen or by culture and toxin production by direct or indirect CCNA. We tested 439 specimens from 439 patients. GDH screening detected all culture-positive specimens. The sensitivity of the two-step algorithm was 77% (95% confidence interval [CI], 70 to 84%), and that of culture was 87% (95% CI, 80 to 92%). PCR results correlated completely with those of CCNA testing on isolates (29/29 positive and 32/32 negative, respectively). We conclude that GDH is an excellent screening test and that culture with isolate CCNA testing detects an additional 23% of toxigenic C. difficile missed by direct CCNA. Since culture is tedious and also detects nontoxigenic C. difficile, we conclude that culture is most useful (i) when the direct CCNA is negative but a high clinical suspicion of toxigenic C. difficile remains, (ii) in the evaluation of new diagnostic tests for toxigenic C. difficile (where the best reference standard is essential), and (iii) in epidemiologic studies (where the availability of an isolate allows for strain typing and antimicrobial susceptibility testing). PMID:17804652

  4. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  5. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
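
    The null-hypothesis figure quoted above can be illustrated with a binomial calculation: if randomly assigned alarms cover some fraction of the space-time volume, the chance that eight or more of ten earthquakes fall inside them follows a binomial tail. The sketch below uses a made-up alarm fraction, not the coverage from the M8 study.

      # Sketch of the kind of null-hypothesis calculation described above: with
      # alarms assigned at random over a fraction `alarm_fraction` of the
      # space-time volume, the number of "hit" earthquakes is binomial.
      # The alarm fraction below is a placeholder, NOT the value from the study.

      from scipy.stats import binom

      def prob_at_least(k, n, alarm_fraction):
          """P(at least k hits out of n) under random assignment of alarms."""
          return 1.0 - binom.cdf(k - 1, n, alarm_fraction)

      if __name__ == "__main__":
          alarm_fraction = 0.4      # hypothetical space-time alarm coverage
          print("P(>= 8 of 10 hits) =", prob_at_least(8, 10, alarm_fraction))
          print("P(>= 5 of 9 hits)  =", prob_at_least(5, 9, alarm_fraction))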

  6. A dynamic programming algorithm for RNA structure prediction including pseudoknots.

    PubMed

    Rivas, E; Eddy, S R

    1999-02-01

    We describe a dynamic programming algorithm for predicting optimal RNA secondary structure, including pseudoknots. The algorithm has a worst case complexity of O(N^6) in time and O(N^4) in storage. The description of the algorithm is complex, which led us to adopt a useful graphical representation (Feynman diagrams) borrowed from quantum field theory. We present an implementation of the algorithm that generates the optimal minimum energy structure for a single RNA sequence, using standard RNA folding thermodynamic parameters augmented by a few parameters describing the thermodynamic stability of pseudoknots. We demonstrate the properties of the algorithm by using it to predict structures for several small pseudoknotted and non-pseudoknotted RNAs. Although the time and memory demands of the algorithm are steep, we believe this is the first algorithm to be able to fold optimal (minimum energy) pseudoknotted RNAs with the accepted RNA thermodynamic model. PMID:9925784
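
    The Rivas-Eddy algorithm itself is too involved to reproduce here; as a hedged illustration of the dynamic-programming paradigm it extends, the sketch below implements the much simpler Nussinov base-pair-maximization recursion, which handles only nested (pseudoknot-free) structures.

      # Nussinov-style base-pair maximization: a much simpler dynamic program for
      # nested (pseudoknot-free) RNA secondary structure, shown only to illustrate
      # the DP paradigm that the Rivas-Eddy algorithm extends to pseudoknots.

      def can_pair(a, b):
          return {a, b} in ({"A", "U"}, {"C", "G"}, {"G", "U"})

      def nussinov(seq, min_loop=3):
          n = len(seq)
          dp = [[0] * n for _ in range(n)]
          for span in range(min_loop + 1, n):          # span = j - i
              for i in range(n - span):
                  j = i + span
                  best = max(dp[i + 1][j], dp[i][j - 1])
                  if can_pair(seq[i], seq[j]):
                      best = max(best, dp[i + 1][j - 1] + 1)
                  for k in range(i + 1, j):            # bifurcation
                      best = max(best, dp[i][k] + dp[k + 1][j])
                  dp[i][j] = best
          return dp[0][n - 1]

      if __name__ == "__main__":
          print(nussinov("GGGAAAUCC"))   # maximum number of nested base pairs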

  7. A generalized memory test algorithm

    NASA Technical Reports Server (NTRS)

    Milner, E. J.

    1982-01-01

    A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
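
    The report's cycle-efficient generalized procedure is not reproduced here; the sketch below is a plain walking-ones check over a simulated memory block that verifies the same two properties (each bit can be set and cleared, and no other word is disturbed), at the cost of far more passes through memory.

      # Simple walking-ones check over a simulated memory block.  It verifies the
      # two properties named in the abstract, but it is NOT the cycle-efficient
      # generalized algorithm of the report -- it makes many more passes.

      def walking_ones_test(memory, word_bits=16):
          background = 0
          memory[:] = [background] * len(memory)
          for addr in range(len(memory)):
              for bit in range(word_bits):
                  pattern = 1 << bit
                  memory[addr] = pattern                    # set a single bit
                  if memory[addr] != pattern:
                      return False, f"bit {bit} stuck at word {addr}"
                  for other in range(len(memory)):          # nothing else disturbed?
                      if other != addr and memory[other] != background:
                          return False, f"word {other} disturbed by word {addr}"
                  memory[addr] = background                 # clear it again
                  if memory[addr] != background:
                      return False, f"bit {bit} cannot be cleared at word {addr}"
          return True, "ok"

      if __name__ == "__main__":
          ram = [0] * 64          # stand-in for a real memory block
          print(walking_ones_test(ram))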

  8. Benchmark graphs for testing community detection algorithms

    NASA Astrophysics Data System (ADS)

    Lancichinetti, Andrea; Fortunato, Santo; Radicchi, Filippo

    2008-10-01

    Community structure is one of the most important features of real networks and reveals the internal organization of the nodes. Many algorithms have been proposed, but the crucial issue of testing, i.e., the question of how good an algorithm is with respect to others, is still open. Standard tests include the analysis of simple artificial graphs with a built-in community structure that the algorithm has to recover. However, the special graphs adopted in actual tests have a structure that does not reflect the real properties of nodes and communities found in real networks. Here we introduce a class of benchmark graphs that account for the heterogeneity in the distributions of node degrees and of community sizes. We use this benchmark to test two popular methods of community detection: modularity optimization and Potts model clustering. The results show that the benchmark poses a much more severe test to algorithms than standard benchmarks, revealing limits that may not be apparent at a first analysis.
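
    An implementation of this LFR benchmark is available in networkx, so a test graph of the kind described can be generated as sketched below; the parameter values are arbitrary illustrative choices, not those of the paper.

      # Generate one LFR benchmark graph with power-law degree and community-size
      # distributions and read back the planted communities.  Parameter values
      # are arbitrary illustrative choices.

      import networkx as nx

      n, tau1, tau2, mu = 250, 3.0, 1.5, 0.1   # size, degree exponent, community-size exponent, mixing
      G = nx.LFR_benchmark_graph(n, tau1, tau2, mu,
                                 average_degree=5, min_community=20, seed=10)

      # Planted communities are stored as a node attribute.
      communities = {frozenset(G.nodes[v]["community"]) for v in G}
      print(f"{G.number_of_nodes()} nodes, {G.number_of_edges()} edges, "
            f"{len(communities)} planted communities")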

  9. Component evaluation testing and analysis algorithms.

    SciTech Connect

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  10. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  11. Quantum Statistical Testing of a QRNG Algorithm

    SciTech Connect

    Humble, Travis S; Pooser, Raphael C; Britt, Keith A

    2013-01-01

    We present the algorithmic design of a quantum random number generator, the subsequent synthesis of a physical design and its verification using quantum statistical testing. We also describe how quantum statistical testing can be used to diagnose channel noise in QKD protocols.

  12. A blind test of monthly homogenisation algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.

    2012-04-01

    metrics: the root mean square error, the error in (linear and nonlinear) trend estimates and contingency scores. The metrics are computed on the station data and the network average regional climate signal, as well as on monthly data and yearly data, for both temperature and precipitation. Because the test was blind, we can state with confidence that relative homogenisation improves the quality of climate station data. The performance of the contributions depends significantly on the error metric considered. Still, a group of better algorithms can be found that includes Craddock, PRODIGE, MASH, ACMANT and USHCN. Clearly, algorithms developed for solving the multiple breakpoint problem with an inhomogeneous reference perform best. The results suggest that the correction algorithms are currently an important weakness of many methods. For more information on the COST Action on homogenisation see: http://www.homogenisation.org/
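
    Two of the error metrics named above, the root mean square error against the benchmark truth and the error in a fitted linear trend, can be sketched with numpy as follows; the series are random placeholders, not benchmark data.

      # Sketch of two of the error metrics named above: RMSE against the benchmark
      # "truth", and the error in a fitted linear trend.  Placeholder series only.

      import numpy as np

      def rmse(homogenised, truth):
          return float(np.sqrt(np.mean((homogenised - truth) ** 2)))

      def linear_trend(series, years):
          slope, _intercept = np.polyfit(years, series, 1)
          return slope

      rng = np.random.default_rng(0)
      years = np.arange(1900, 2000)
      truth = 0.01 * (years - years[0]) + rng.normal(0, 0.3, years.size)   # synthetic signal
      homog = truth + rng.normal(0, 0.1, years.size)                       # an imperfect homogenised series

      print("RMSE:", rmse(homog, truth))
      print("trend error (deg/yr):", linear_trend(homog, years) - linear_trend(truth, years))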

  13. Sequential Testing Algorithms for Multiple Fault Diagnosis

    NASA Technical Reports Server (NTRS)

    Shakeri, Mojdeh; Raghavan, Vijaya; Pattipati, Krishna R.; Patterson-Hine, Ann

    1997-01-01

    In this paper, we consider the problem of constructing optimal and near-optimal test sequencing algorithms for multiple fault diagnosis. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and AND/OR graph search, we present several test sequencing algorithms for the multiple fault isolation problem. These algorithms provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a diagnostic directed graph (digraph), instead of a diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. The algorithms developed herein have been successfully applied to several real-world systems. Computational results indicate that the size of a multiple fault strategy is strictly related to the structure of the system.
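
    The paper's AND/OR-graph algorithms for multiple-fault isolation are beyond a short sketch, but the underlying information-theoretic idea can be illustrated for the simpler single-fault case: greedily pick the test whose outcome minimizes the expected posterior entropy over fault hypotheses. The test signatures and priors below are invented.

      # Greedy information-theoretic test selection for the simpler single-fault
      # case, shown only to illustrate the idea behind test sequencing; this is
      # NOT the AND/OR-graph multiple-fault algorithm of the paper.

      import numpy as np

      def entropy(p):
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      def expected_entropy(prior, detects):
          """Expected posterior entropy after running the test whose deterministic
          signature is `detects` (detects[f] = 1 if the test fails under fault f)."""
          p_fail = float(prior[detects == 1].sum())
          exp_h = 0.0
          for mask, p_out in ((detects == 1, p_fail), (detects == 0, 1.0 - p_fail)):
              if p_out > 0:
                  post = prior * mask
                  exp_h += p_out * entropy(post / post.sum())
          return exp_h

      def best_next_test(prior, D):
          scores = [expected_entropy(prior, D[t]) for t in range(D.shape[0])]
          return int(np.argmin(scores)), scores

      # 3 candidate tests x 4 fault hypotheses (invented signatures and priors).
      D = np.array([[1, 1, 0, 0],
                    [1, 0, 1, 0],
                    [1, 1, 1, 0]])
      prior = np.array([0.4, 0.3, 0.2, 0.1])
      t, scores = best_next_test(prior, D)
      print("expected posterior entropies:", np.round(scores, 3), "-> run test", t)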

  14. Laboratory test interpretations and algorithms in utilization management.

    PubMed

    Van Cott, Elizabeth M

    2014-01-01

    Appropriate assimilation of laboratory test results into patient care is enhanced when pathologist interpretations of the laboratory tests are provided for clinicians, and when reflex algorithm testing is utilized. Benefits of algorithms and interpretations include avoidance of misdiagnoses, reducing the number of laboratory tests needed, reducing the number of procedures, transfusions and admissions, shortening the amount of time needed to reach a diagnosis, reducing errors in test ordering, and providing additional information about how the laboratory results might affect other aspects of a patient's care. Providing interpretations can be challenging for pathologists; therefore, mechanisms to facilitate the successful implementation of an interpretation service are described. These include algorithm-based testing and interpretation, optimizing laboratory requisitions and/or order-entry systems, proficiency testing programs that assess interpretations and provide constructive feedback, utilization of a collection of interpretive sentences or paragraphs that can be building blocks ("coded comments") for constructing preliminary interpretations, middleware, and pathology resident participation and education. In conclusion, the combination of algorithms and interpretations for laboratory testing has multiple benefits for the medical care of the patient. PMID:24080245

  15. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows that pilot-induced oscillations on a straight-in approach were less prevalent with the nonlinear algorithm than with the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
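
    The power-spectral-density analysis of pilot control inputs mentioned above can be illustrated with scipy's Welch estimator; the control signal below is synthetic, not simulator data.

      # Power-spectral-density analysis of a pilot control input using scipy's
      # Welch estimator.  The "control input" below is a synthetic signal.

      import numpy as np
      from scipy.signal import welch

      fs = 50.0                                   # sample rate, Hz (assumed)
      t = np.arange(0, 60, 1 / fs)
      control = (0.5 * np.sin(2 * np.pi * 0.4 * t)
                 + 0.1 * np.random.default_rng(1).normal(size=t.size))

      freqs, psd = welch(control, fs=fs, nperseg=512)
      peak = freqs[np.argmax(psd)]
      print(f"dominant control-activity frequency ~ {peak:.2f} Hz")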

  16. 8. VIEW OF RADIOGRAPHY EQUIPMENT, TEST METHODS INCLUDED RADIOGRAPHY AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. VIEW OF RADIOGRAPHY EQUIPMENT, TEST METHODS INCLUDED RADIOGRAPHY AND BETA BACKSCATTERING. (7/13/56) - Rocky Flats Plant, Non-Nuclear Production Facility, South of Cottonwood Avenue, west of Seventh Avenue & east of Building 460, Golden, Jefferson County, CO

  17. 13. Historic drawing of rocket engine test facility layout, including ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. Historic drawing of rocket engine test facility layout, including Buildings 202, 205, 206, and 206A, February 3, 1984. NASA GRC drawing number CF-101539. On file at NASA Glenn Research Center. - Rocket Engine Testing Facility, NASA Glenn Research Center, Cleveland, Cuyahoga County, OH

  18. Testing Intelligently Includes Double-Checking Wechsler IQ Scores

    ERIC Educational Resources Information Center

    Kuentzel, Jeffrey G.; Hetterscheidt, Lesley A.; Barnett, Douglas

    2011-01-01

    The rigors of standardized testing make for numerous opportunities for examiner error, including simple computational mistakes in scoring. Although experts recommend that test scoring be double-checked, the extent to which independent double-checking would reduce scoring errors is not known. A double-checking procedure was established at a…

  19. A Test Scheduling Algorithm Based on Two-Stage GA

    NASA Astrophysics Data System (ADS)

    Yu, Y.; Peng, X. Y.; Peng, Y.

    2006-10-01

    In this paper, we present a new algorithm to co-optimize the core wrapper design and the SOC test scheduling. The SOC test scheduling problem is first formulated as a two-dimensional floorplan problem, and a sequence pair architecture is used to represent it. Then we propose a two-stage GA (Genetic Algorithm) to solve the SOC test scheduling problem. Experiments on the ITC'02 benchmark show that our algorithm can effectively reduce test time so as to decrease SOC test cost.

  20. Full motion video geopositioning algorithm integrated test bed

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Braun, Aaron; Theiss, Henry; Gurson, Adam

    2015-05-01

    In order to better understand the issues associated with Full Motion Video (FMV) geopositioning and to develop corresponding strategies and algorithms, an integrated test bed is required. It is used to evaluate the performance of various candidate algorithms associated with registration of the video frames and subsequent geopositioning using the registered frames. Major issues include reliable error propagation or predicted solution accuracy, optimal vs. suboptimal vs. divergent solutions, robust processing in the presence of poor or non-existent a priori estimates of sensor metadata, difficulty in the measurement of tie points between adjacent frames, poor imaging geometry including small field-of-view and little vertical relief, and no control (points). The test bed modules must be integrated with appropriate data flows between them. The test bed must also ingest/generate real and simulated data and support evaluation of corresponding performance based on module-internal metrics as well as comparisons to real or simulated "ground truth". Selection of the appropriate modules and algorithms must be both operator specifiable and specifiable as automatic. An FMV test bed has been developed and continues to be improved with the above characteristics. The paper describes its overall design as well as key underlying algorithms, including a recent update to "A matrix" generation, which allows for the computation of arbitrary inter-frame error cross-covariance matrices associated with Kalman filter (KF) registration in the presence of dynamic state vector definition, necessary for rigorous error propagation when the contents/definition of the KF state vector changes due to added/dropped tie points. Performance of a tested scenario is also presented.
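
    The registration filter described above, with its dynamic state vector and inter-frame cross-covariances, is not reproduced here; the sketch below shows only the generic linear Kalman filter predict/update step that such a filter builds on, applied to a toy constant-velocity problem with placeholder matrices.

      # Generic linear Kalman filter predict/update step (numpy), shown only as
      # the building block a frame-registration filter would extend; the dynamic
      # state-vector and cross-covariance bookkeeping of the paper is not
      # reproduced.  All matrices below are placeholders.

      import numpy as np

      def kf_predict(x, P, F, Q):
          return F @ x, F @ P @ F.T + Q

      def kf_update(x, P, z, H, R):
          S = H @ P @ H.T + R                    # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
          x_new = x + K @ (z - H @ x)
          P_new = (np.eye(len(x)) - K @ H) @ P
          return x_new, P_new

      # 1-D constant-velocity toy example.
      dt = 1.0
      F = np.array([[1.0, dt], [0.0, 1.0]])
      H = np.array([[1.0, 0.0]])
      Q = 1e-3 * np.eye(2)
      R = np.array([[0.25]])
      x, P = np.zeros(2), np.eye(2)
      for z in (1.1, 2.0, 2.9, 4.2):
          x, P = kf_predict(x, P, F, Q)
          x, P = kf_update(x, P, np.array([z]), H, R)
      print("estimated position/velocity:", np.round(x, 2))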

  1. Variational Algorithms for Test Particle Trajectories

    NASA Astrophysics Data System (ADS)

    Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.

    2015-11-01

    The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.

  2. Reliability based design including future tests and multiagent approaches

    NASA Astrophysics Data System (ADS)

    Villanueva, Diane

    The initial stages of reliability-based design optimization involve the formulation of objective functions and constraints, and building a model to estimate the reliability of the design with quantified uncertainties. However, even experienced hands often overlook important objective functions and constraints that affect the design. In addition, uncertainty reduction measures, such as tests and redesign, are often not considered in reliability calculations during the initial stages. This research considers two areas that concern the design of engineering systems: 1) the trade-off of the effect of a test and post-test redesign on reliability and cost and 2) the search for multiple candidate designs as insurance against unforeseen faults in some designs. In this research, a methodology was developed to estimate the effect of a single future test and post-test redesign on reliability and cost. The methodology uses assumed distributions of computational and experimental errors with re-design rules to simulate alternative future test and redesign outcomes to form a probabilistic estimate of the reliability and cost for a given design. Further, it was explored how modeling a future test and redesign provides a company an opportunity to balance development costs versus performance by simultaneously designing the design and the post-test redesign rules during the initial design stage. The second area of this research considers the use of dynamic local surrogates, or surrogate-based agents, to locate multiple candidate designs. Surrogate-based global optimization algorithms often require search in multiple candidate regions of design space, expending most of the computation needed to define multiple alternate designs. Thus, focusing on solely locating the best design may be wasteful. We extended adaptive sampling surrogate techniques to locate multiple optima by building local surrogates in sub-regions of the design space to identify optima. The efficiency of this method

  3. Development and Testing of Data Mining Algorithms for Earth Observation

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    The new algorithms developed under this project included a principled procedure for classification of objects, events or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables - called the Markov Blanket - sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD-style algorithms for the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer duration climate measurements of temperature teleconnections.

  4. A Study of a Network-Flow Algorithm and a Noncorrecting Algorithm for Test Assembly.

    ERIC Educational Resources Information Center

    Armstrong, R. D.; And Others

    1996-01-01

    When the network-flow algorithm (NFA) and the average growth approximation algorithm (AGAA) were used for automated test assembly with American College Test and Armed Services Vocational Aptitude Battery item banks, results indicate that reasonable error in item parameters is not harmful for test assembly using NFA or AGAA. (SLD)

  5. An Evaluation of a Flight Deck Interval Management Algorithm Including Delayed Target Trajectories

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Underwood, Matthew C.; Barmore, Bryan; Leonard, Robert D.

    2014-01-01

    NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature air traffic management technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise in-trail spacing. During high demand operations, TMA-TM may produce a schedule and corresponding aircraft trajectories that include delay to ensure that a particular aircraft will be properly spaced from other aircraft at each schedule waypoint. These delayed trajectories are not communicated to the automation onboard the aircraft, forcing the IM aircraft to use the published speeds to estimate the target aircraft's estimated time of arrival. As a result, the aircraft performing IM operations may follow an aircraft whose TMA-TM generated trajectories have substantial speed deviations from the speeds expected by the spacing algorithm. Previous spacing algorithms were not designed to handle this magnitude of uncertainty. A simulation was conducted to examine a modified spacing algorithm with the ability to follow aircraft flying delayed trajectories. The simulation investigated the use of the new spacing algorithm with various delayed speed profiles and wind conditions, as well as several other variables designed to simulate real-life variability. The results and conclusions of this study indicate that the new spacing algorithm generally exhibits good performance; however, some types of target aircraft speed profiles can cause the spacing algorithm to command less than optimal speed control behavior.

  6. Algorithms and Programs for Strong Gravitational Lensing In Kerr Space-time Including Polarization

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie; Maddumage, Prasad

    2015-05-01

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.

  7. A Boundary Condition Relaxation Algorithm for Strongly Coupled, Ablating Flows Including Shape Change

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Johnston, Christopher O.

    2011-01-01

    Implementations of a model for equilibrium, steady-state ablation boundary conditions are tested for the purpose of providing strong coupling with a hypersonic flow solver. The objective is to remove correction factors or film cooling approximations that are usually applied in coupled implementations of the flow solver and the ablation response. Three test cases are considered - the IRV-2, the Galileo probe, and a notional slender, blunted cone launched at 10 km/s from the Earth's surface. A successive substitution is employed and the order of succession is varied as a function of surface temperature to obtain converged solutions. The implementation is tested on a specified trajectory for the IRV-2 to compute shape change under the approximation of steady-state ablation. Issues associated with stability of the shape change algorithm caused by explicit time step limits are also discussed.
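
    A generic under-relaxed successive-substitution (fixed-point) iteration of the kind referred to above can be sketched as follows; the wall-temperature update function is a made-up scalar surrogate for the surface energy balance, and the temperature-dependent relaxation factor is only an example of varying the update with surface temperature.

      # Generic under-relaxed successive substitution (fixed-point iteration) of
      # the kind used to couple an ablating-wall boundary condition to a flow
      # solution.  g(T) is a made-up scalar surrogate for the surface energy
      # balance; the relaxation schedule is only an illustration.

      def g(T):
          # Hypothetical "updated wall temperature given current wall temperature".
          return 2000.0 + 500.0 * (1.0 - T / 4000.0)

      def relaxation(T):
          # Example schedule: take smaller steps at high surface temperature.
          return 0.8 if T < 2500.0 else 0.3

      def successive_substitution(T0, tol=1e-6, max_iter=200):
          T = T0
          for it in range(max_iter):
              w = relaxation(T)
              T_new = (1.0 - w) * T + w * g(T)
              if abs(T_new - T) < tol:
                  return T_new, it + 1
              T = T_new
          raise RuntimeError("did not converge")

      print(successive_substitution(1500.0))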

  8. Comparison of Automated Treponemal and Nontreponemal Test Algorithms as First-Line Syphilis Screening Assays

    PubMed Central

    Chung, Jae-Woo; Park, Seong Yeon; Chae, Seok Lae

    2016-01-01

    Background: Automated Mediace Treponema pallidum latex agglutination (TPLA) and Mediace rapid plasma reagin (RPR) assays are used by many laboratories for syphilis diagnosis. This study compared the results of the traditional syphilis screening algorithm and a reverse algorithm using automated Mediace RPR or Mediace TPLA as first-line screening assays in subjects undergoing a health checkup. Methods: Samples from 24,681 persons were included in this study. We routinely performed Mediace RPR and Mediace TPLA simultaneously. Results were analyzed according to both the traditional algorithm and reverse algorithm. Samples with discordant results on the reverse algorithm (e.g., positive Mediace TPLA, negative Mediace RPR) were tested with Treponema pallidum particle agglutination (TPPA). Results: Among the 24,681 samples, 30 (0.1%) were found positive by traditional screening, and 190 (0.8%) by reverse screening. The identified syphilis rate and overall false-positive rate according to the traditional algorithm were lower than those according to the reverse algorithm (0.07% and 0.05% vs. 0.64% and 0.13%, respectively). A total of 173 discordant samples were tested with TPPA by using the reverse algorithm, of which 140 (80.9%) were TPPA positive. Conclusions: Despite the increased false-positive results in populations with a low prevalence of syphilis, the reverse algorithm detected 140 samples with treponemal antibody that went undetected by the traditional algorithm. The reverse algorithm using Mediace TPLA as a screening test is more sensitive for the detection of syphilis. PMID:26522755

  9. Algorithms for Multiple Fault Diagnosis With Unreliable Tests

    NASA Technical Reports Server (NTRS)

    Shakeri, Mojdeh; Raghavan, Vijaya; Pattipati, Krishna R.; Patterson-Hine, Ann

    1997-01-01

    In this paper, we consider the problem of constructing optimal and near-optimal multiple fault diagnosis (MFD) in bipartite systems with unreliable (imperfect) tests. It is known that exact computation of conditional probabilities for multiple fault diagnosis is NP-hard. The novel feature of our diagnostic algorithms is the use of Lagrangian relaxation and subgradient optimization methods to provide: (1) near optimal solutions for the MFD problem, and (2) upper bounds for an optimal branch-and-bound algorithm. The proposed method is illustrated using several examples. Computational results indicate that: (1) our algorithm has superior computational performance to the existing algorithms (approximately three orders of magnitude improvement), (2) the near optimal algorithm generates the most likely candidates with a very high accuracy, and (3) our algorithm can find the most likely candidates in systems with as many as 1000 faults.

  10. Competency Testing: Will the LD Student Be Included?

    ERIC Educational Resources Information Center

    Amos, Katherine M.

    1980-01-01

    The discussion of minimum competency testing (MCT) for learning disabled students focuses on advantages and disadvantages, the need for test modifications, and the importance of coordinating the child's individualized education program with MCT considerations. (CL)

  11. A sparse reconstruction algorithm for ultrasonic images in nondestructive testing.

    PubMed

    Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Neves Junior, Flávio; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst

    2015-01-01

    Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, the quality of images depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares using an l1 regularization norm. The method is tested to reconstruct an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT and regularized least squares (RLS). The method demonstrates significant resolution improvement when compared with B-scan (about 91% using real data). The proposed scheme also outperforms traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700
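
    The ultrasound-specific forward model is not reproduced here; as a generic illustration of l1-regularized least squares of the kind the paper describes, the sketch below applies the iterative shrinkage-thresholding algorithm (ISTA) to a small random problem.

      # Generic l1-regularized least squares solved with the iterative
      # shrinkage-thresholding algorithm (ISTA); problem data are random and the
      # paper's ultrasound forward model is not reproduced.

      import numpy as np

      def soft_threshold(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def ista(A, y, lam, n_iter=500):
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad = A.T @ (A @ x - y)
              x = soft_threshold(x - grad / L, lam / L)
          return x

      rng = np.random.default_rng(0)
      A = rng.normal(size=(64, 256))
      x_true = np.zeros(256)
      x_true[rng.choice(256, 5, replace=False)] = rng.normal(size=5) * 3
      y = A @ x_true + 0.01 * rng.normal(size=64)

      x_hat = ista(A, y, lam=0.1)
      print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 0.05))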

  13. Testing and assessment strategies, including alternative and new approaches.

    PubMed

    Meyer, Otto

    2003-04-11

    The object of toxicological testing is to predict possible adverse effects in humans exposed to chemicals, whether used as industrial chemicals, pharmaceuticals or pesticides. Animal models are predominantly used in identifying potential hazards of chemicals. The use of laboratory animals raises ethical concern; however, irrespective of animal welfare, an important aspect of the discipline of toxicology is that its primary object is human health. The ideal testing and assessment strategy would simply be to apply all available test methods, and preferably more, in laboratory animal species, gathering as many data as possible in order to obtain the most extensive database for the toxicological evaluation of a chemical. Consequently, society has decided that certain groups of chemicals should be tested accordingly. This ideal is not attainable in practice, however, because more than 100,000 chemicals are candidates for human exposure, so alternative testing and assessment strategies have been developed in recent years. The toxicological evaluation should enable society to cope with the simultaneous requirements of many chemicals for different uses and of the absence of health problems associated with their use. Thus, regulatory toxicology is a cocktail of science and pragmatism, with a crucial added concern for animal welfare. Test methods are most often used in a testing sequence, as bricks in a testing strategy. The main driving forces for introducing assessment and testing strategies, e.g. using a limited number of tests and/or alternative test methods, are: (a) animal welfare considerations; (b) new scientific knowledge, i.e. introducing tests for new endpoints and tests for better understanding of mode of action; and (c) lack of testing capacity and the reduction of required resources, economically as well as time-wise. PMID:12676447

  14. Testing of infrared image enhancing algorithm in different spectral bands

    NASA Astrophysics Data System (ADS)

    Dulski, R.; Sosnowski, T.; Kastek, M.; Trzaskawka, P.

    2012-06-01

    The paper presents results of testing an infrared image quality enhancing algorithm based on histogram processing. Testing was performed on real images registered in the NIR, MWIR, and LWIR spectral bands. Infrared images are a very specific type of information. The perception and interpretation of such an image depend not only on the radiative properties of the observed objects and the surrounding scenery; probably most important are the skills and experience of the observer. In practice, the optimal settings of the camera as well as automatic temperature range or contrast control do not guarantee that the displayed images are optimal from the observer's point of view. The solution to this is image quality enhancing algorithms based on digital image processing methods. Such algorithms can be implemented inside the camera or applied later, after image registration. They must improve the visibility of low-contrast objects. They should also provide effective dynamic contrast control, not only across the entire image but also selectively in specific areas, in order to maintain optimal visualization of the observed scenery. In the paper, one histogram equalization algorithm was tested. The adaptive nature of the algorithm should assure significant improvement of the image quality and, at the same time, of the effectiveness of object detection. Another requirement, and difficulty, is that it should be effective for any given thermal image and should not cause visible image degradation in unpredictable situations. The tested algorithm is a promising alternative to very effective but complex algorithms due to its low complexity and real-time operation.
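
    One common histogram-processing approach for thermal imagery, plateau-limited histogram equalization, can be sketched with numpy as below; this is an illustration only, not the specific algorithm evaluated in the paper, and the plateau value and synthetic 14-bit image are arbitrary.

      # Minimal sketch of plateau-limited histogram equalization, one common
      # histogram-processing approach for thermal imagery; not the specific
      # algorithm evaluated in the paper.  Plateau value and image are arbitrary.

      import numpy as np

      def plateau_equalize(img, levels=2 ** 14, plateau=500, out_levels=256):
          hist = np.bincount(img.ravel(), minlength=levels)
          hist = np.minimum(hist, plateau)               # clip dominant bins (the "plateau")
          cdf = np.cumsum(hist).astype(float)
          cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)
          lut = np.round(cdf * (out_levels - 1)).astype(np.uint8)
          return lut[img]

      rng = np.random.default_rng(0)
      raw = rng.normal(8000, 150, size=(240, 320)).clip(0, 2 ** 14 - 1).astype(np.int32)
      raw[100:140, 150:200] += 600                       # a warmer "object"
      display = plateau_equalize(raw)
      print(display.dtype, display.min(), display.max())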

  15. An algorithm for genetic testing of frontotemporal lobar degeneration

    PubMed Central

    Rademakers, R.; Huey, E.D.; Boxer, A.L.; Mayeux, R.; Miller, B.L.; Boeve, B.F.

    2011-01-01

    Objective: To derive an algorithm for genetic testing of patients with frontotemporal lobar degeneration (FTLD). Methods: A literature search was performed to review the clinical and pathologic phenotypes and family history associated with each FTLD gene. Results: Based on the literature review, an algorithm was developed to allow clinicians to use the clinical and neuroimaging phenotypes of the patient and the family history and autopsy information to decide whether or not genetic testing is warranted, and if so, the order for appropriate tests. Conclusions: Recent findings in genetics, pathology, and imaging allow clinicians to use the clinical presentation of the patient with FTLD to inform genetic testing decisions. PMID:21282594

  16. A computer algorithm for testing potential prokaryotic terminators.

    PubMed Central

    Brendel, V; Trifonov, E N

    1984-01-01

    The nucleotide sequences of 30 factor-independent terminators of transcription with RNA polymerase from E. coli have been compiled and analyzed. The standard features - a stretch of thymine residues and a preceding dyad symmetry - are shared by most sequences, but there are striking exceptions which indicate that these features alone are not sufficient to describe these sites. In two thirds of the sequences the 3'-half of the dyad symmetry contains the pentanucleotide CGGG(G/C) or a close derivative; about one third have TCTG or a close derivative just downstream of the termination point. The TCTG-box might be implicated in termination of stringently controlled operons of E. coli. An algorithm to locate terminators in templates of known nucleotide sequence has been constructed on the basis of correlation to the distribution of dinucleotides along the aligned signal sequences. The algorithm has been tested on natural sequences of a total length of about 11,500 nucleotides. It finds all known independent terminators and only a few other sites, including some of the rho-dependent and putative terminators. PMID:6374619

  17. Implementation and testing of algorithms for data fitting

    NASA Astrophysics Data System (ADS)

    Monahan, Alison; Engelhardt, Larry

    2012-03-01

    This poster will describe an undergraduate senior research project involving the creation and testing of a Java class to implement the Nelder-Mead algorithm, which can be used for data fitting. The performance of the Nelder-Mead algorithm will be compared with that of the Levenberg-Marquardt algorithm using a variety of data. The new class will be made available at http://www.compadre.org/osp/items/detail.cfm?ID=11593. At the time of the presentation, this project will be nearing completion, and I will discuss my progress, successes, and challenges.
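
    Since scipy provides both optimizers, a comparison of the kind described can be sketched as follows; the model and data are synthetic, and this is unrelated to the poster's Java implementation.

      # Compare Nelder-Mead and Levenberg-Marquardt fits of a simple exponential
      # model using scipy; synthetic data only.

      import numpy as np
      from scipy.optimize import minimize, least_squares

      rng = np.random.default_rng(2)
      x = np.linspace(0, 10, 100)
      y = 2.5 * np.exp(-0.7 * x) + 0.02 * rng.normal(size=x.size)   # data from a known model

      def residuals(p):
          a, b = p
          return a * np.exp(-b * x) - y

      p0 = [1.0, 1.0]
      nm = minimize(lambda p: np.sum(residuals(p) ** 2), p0, method="Nelder-Mead")
      lm = least_squares(residuals, p0, method="lm")

      print("Nelder-Mead fit:", np.round(nm.x, 3), "evals:", nm.nfev)
      print("Levenberg-Marquardt fit:", np.round(lm.x, 3), "evals:", lm.nfev)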

  18. TS: a test-split algorithm for inductive learning

    NASA Astrophysics Data System (ADS)

    Wu, Xindong

    1993-09-01

    This paper presents a new attribute-based learning algorithm, TS. Different from ID3, AQ11, and HCV in strategy, this algorithm operates in cycles of test and split. It uses those attribute values which occur only in positives but not in negatives to straightforwardly discriminate positives against negatives, and chooses the attributes with the least number of distinct values to split example sets. TS is natural, easy to implement, and low-order polynomial in time complexity.
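
    The two heuristics named in the abstract can be sketched in a few lines; the toy examples below are invented, and this is not Wu's implementation of TS.

      # Minimal sketch of the two heuristics named in the abstract: (1) attribute
      # values occurring only in positive examples discriminate directly, and
      # (2) example sets are split on the attribute with the fewest distinct
      # values.  Toy data; not Wu's implementation of TS.

      def values_only_in_positives(pos, neg, attr):
          return {e[attr] for e in pos} - {e[attr] for e in neg}

      def attribute_to_split(examples, attributes):
          return min(attributes, key=lambda a: len({e[a] for e in examples}))

      pos = [{"shape": "round", "colour": "red"},
             {"shape": "round", "colour": "green"}]
      neg = [{"shape": "square", "colour": "red"},
             {"shape": "oblong", "colour": "red"}]

      for attr in ("shape", "colour"):
          print(attr, "values unique to positives:", values_only_in_positives(pos, neg, attr))
      print("split on:", attribute_to_split(pos + neg, ("shape", "colour")))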

  19. Automated segment matching algorithm-theory, test, and evaluation

    NASA Technical Reports Server (NTRS)

    Kalcic, M. T. (Principal Investigator)

    1982-01-01

    Results of automating the U.S. Department of Agriculture's process of segment shifting and obtaining results within one-half-pixel accuracy are presented. Given an initial registration, the digitized segment is shifted until a more precise fit to the LANDSAT data is found. The algorithm automates the shifting process and performs certain tests for matching and accepting the computed shift numbers. Results indicate the algorithm can obtain results within one-half pixel accuracy.
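
    The underlying idea of segment shifting, sliding a digitized segment window over the imagery and keeping the offset with the best match score, can be sketched as below with a brute-force integer-pixel search on synthetic arrays; the report's actual algorithm, including its half-pixel refinement and acceptance tests, is not reproduced.

      # Brute-force integer-pixel segment-shift search on synthetic arrays, shown
      # only to illustrate the idea of refining an initial registration; the
      # report's half-pixel algorithm and acceptance tests are not reproduced.

      import numpy as np

      def best_shift(image, segment, row0, col0, max_shift=5):
          """Slide `segment` around the nominal position (row0, col0) and return
          the offset (dy, dx) with the smallest sum of squared differences."""
          h, w = segment.shape
          best_offset, best_ssd = (0, 0), np.inf
          for dy in range(-max_shift, max_shift + 1):
              for dx in range(-max_shift, max_shift + 1):
                  window = image[row0 + dy:row0 + dy + h, col0 + dx:col0 + dx + w]
                  ssd = float(((window - segment) ** 2).sum())
                  if ssd < best_ssd:
                      best_offset, best_ssd = (dy, dx), ssd
          return best_offset

      rng = np.random.default_rng(3)
      scene = rng.normal(size=(80, 80))
      row0, col0 = 20, 20                      # initial (nominal) registration
      true_dy, true_dx = 2, -3                 # the shift we hope to recover
      segment = scene[row0 + true_dy:row0 + true_dy + 32,
                      col0 + true_dx:col0 + true_dx + 32].copy()

      print("recovered shift:", best_shift(scene, segment, row0, col0))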

  20. Experimental study on subaperture testing with iterative triangulation algorithm.

    PubMed

    Yan, Lisong; Wang, Xiaokun; Zheng, Ligong; Zeng, Xuefeng; Hu, Haixiang; Zhang, Xuejun

    2013-09-23

    Applying the iterative triangulation stitching algorithm, we provide an experimental demonstration by testing a Φ120 mm flat mirror, a Φ1450 mm off-axis parabolic mirror and a convex hyperboloid mirror. By comparing the stitching results with the self-examined subaperture, we show that the reconstruction results are consistent with those of the subaperture testing. As all the experiments were conducted with a 5-dof adjustment platform with large adjustment errors, this proves that, using the above-mentioned algorithm, subaperture stitching can be easily performed without a precise positioning system. In addition, with the algorithm we accomplish coordinate unification between testing and processing, which makes it possible to guide the processing by the stitching result. PMID:24104151

  1. Testing of hardware implementation of infrared image enhancing algorithm

    NASA Astrophysics Data System (ADS)

    Dulski, R.; Sosnowski, T.; Piątkowski, T.; Trzaskawka, P.; Kastek, M.; Kucharz, J.

    2012-10-01

    The interpretation of IR images depends on the radiative properties of the observed objects and the surrounding scenery. The skills and experience of the observer are also of great importance. The solution to improve the effectiveness of observation is the utilization of an image enhancing algorithm capable of improving the image quality and, at the same time, the effectiveness of object detection. The paper presents results of testing the hardware implementation of an IR image enhancing algorithm based on histogram processing. The main issue in hardware implementation of complex image enhancing procedures is their high computational cost. As a result, implementation of complex algorithms using general purpose processors and software usually does not bring satisfactory results. Because of high efficiency requirements and the need for parallel operation, the ALTERA EP2C35F672 FPGA device was used. It provides sufficient processing speed combined with relatively low power consumption. A digital image processing and control module was designed and constructed around two main integrated circuits: an FPGA device and a microcontroller. The programmable FPGA device performs image data processing operations which require considerable computing power. It also generates the control signals for array readout, performs NUC correction and bad pixel mapping, generates the control signals for the display module, and finally executes complex image processing algorithms. The implemented adaptive algorithm is based on plateau histogram equalization. Tests were performed on real IR images of different types of objects registered in different spectral bands. The simulations and laboratory experiments proved the correct operation of the designed system in executing the sophisticated image enhancement.

  2. Testing of Gyroless Estimation Algorithms for the Fuse Spacecraft

    NASA Technical Reports Server (NTRS)

    Harman, R.; Thienel, J.; Oshman, Yaakov

    2004-01-01

    This paper documents the testing and development of magnetometer-based gyroless attitude and rate estimation algorithms for the Far Ultraviolet Spectroscopic Explorer (FUSE). The results of two approaches are presented: one relies on a kinematic model for propagation, a method used in aircraft tracking, and the other is a pseudolinear Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Both algorithms are tested using flight data collected over a few months after the failure of two of the reaction wheels. The question of closed-loop stability is addressed. The ability of the controller to meet the science slew requirements, without the gyros, is analyzed.

  3. The Sys-Rem Detrending Algorithm: Implementation and Testing

    NASA Astrophysics Data System (ADS)

    Mazeh, T.; Tamuz, O.; Zucker, S.

    2007-07-01

    Sys-Rem (Tamuz, Mazeh & Zucker 2005) is a detrending algorithm designed to remove systematic effects in a large set of light curves obtained by a photometric survey. The algorithm works without any prior knowledge of the effects, as long as they appear in many stars of the sample. This paper presents the basic principles of Sys-Rem and discusses a parameterization used to determine the number of effects removed. We assess the performance of Sys-Rem on simulated transits injected into WHAT survey data. This test is proposed as a general scheme to assess the effectiveness of detrending algorithms. Application of Sys-Rem to the OGLE dataset demonstrates the power of the algorithm. We offer a coded implementation of Sys-Rem to the community.
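
    The core Sys-Rem step described by Tamuz, Mazeh & Zucker (2005) is an alternating, uncertainty-weighted least-squares fit of one systematic effect c_i * a_j to the residual light curves, with further effects removed by repeating on the residuals; the sketch below re-implements that single-effect iteration on synthetic data and is not the authors' code.

      # Minimal sketch of the core Sys-Rem iteration: alternately fit a single
      # systematic effect r_ij ~ c_i * a_j to the residual light curves, weighting
      # by per-point uncertainties.  Re-implemented from the published description
      # on synthetic data -- not the authors' code.

      import numpy as np

      def sysrem_one_effect(r, sigma, n_iter=20):
          """r: residuals (n_stars x n_epochs); sigma: uncertainties, same shape."""
          w = 1.0 / sigma ** 2
          a = np.ones(r.shape[1])                    # "airmass-like" effect per epoch
          c = np.zeros(r.shape[0])                   # per-star coefficient
          for _ in range(n_iter):
              c = (w * r * a).sum(axis=1) / (w * a ** 2).sum(axis=1)
              a = (w * r * c[:, None]).sum(axis=0) / (w * c[:, None] ** 2).sum(axis=0)
          return r - np.outer(c, a), c, a

      rng = np.random.default_rng(4)
      n_stars, n_epochs = 200, 100
      true_a = rng.normal(size=n_epochs)             # common systematic (e.g. airmass)
      true_c = rng.normal(size=n_stars)              # each star's sensitivity to it
      sigma = np.full((n_stars, n_epochs), 0.01)
      r = np.outer(true_c, true_a) * 0.02 + rng.normal(0, 0.01, (n_stars, n_epochs))

      cleaned, c, a = sysrem_one_effect(r, sigma)
      print("rms before:", r.std(), " rms after:", cleaned.std())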

  4. A test sheet generating algorithm based on intelligent genetic algorithm and hierarchical planning

    NASA Astrophysics Data System (ADS)

    Gu, Peipei; Niu, Zhendong; Chen, Xuting; Chen, Wei

    2013-03-01

    In recent years, computer-based testing has become an effective method to evaluate students' overall learning progress so that appropriate guiding strategies can be recommended. Research has been done to develop intelligent test assembling systems which can automatically generate test sheets based on given parameters of test items. A good multi-subject test sheet depends not only on the quality of the test items but also on the construction of the sheet. Effective and efficient construction of test sheets according to multiple subjects and criteria is a challenging problem. In this paper, a multi-subject test sheet generation problem is formulated and a test sheet generating approach based on intelligent genetic algorithm and hierarchical planning (GAHP) is proposed to tackle this problem. The proposed approach utilizes hierarchical planning to simplify the multi-subject testing problem and adopts a genetic algorithm to process the layered criteria, enabling the construction of good test sheets according to multiple test item requirements. Experiments are conducted and the results show that the proposed approach is capable of effectively generating multi-subject test sheets that meet specified requirements and achieve good performance.

  6. An enhanced bacterial foraging algorithm approach for optimal power flow problem including FACTS devices considering system loadability.

    PubMed

    Belwin Edward, J; Rajasekar, N; Sathiyasekar, K; Senthilnathan, N; Sarjila, R

    2013-09-01

    Obtaining an optimal power flow solution is a strenuous task for any power system engineer. The inclusion of FACTS devices in the power system network adds to its complexity. The dual objective of OPF, fuel cost minimization together with FACTS device placement, is considered for the IEEE 30-bus system and solved using the proposed Enhanced Bacterial Foraging Algorithm (EBFA). The conventional Bacterial Foraging Algorithm (BFA) has the difficulty of optimal parameter selection. Hence, in this paper, BFA is enhanced by including the Nelder-Mead (NM) algorithm for better performance. A MATLAB code for EBFA is developed and the problem of optimal power flow with inclusion of FACTS devices is solved. After several runs with different initial values, it is found that the inclusion of FACTS devices such as SVC and TCSC in the network reduces the generation cost while increasing voltage stability limits. It is also observed that the proposed algorithm requires less computational time than earlier algorithms. PMID:23759251
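
    The record couples bacterial foraging with a Nelder-Mead refinement; the full OPF formulation with FACTS devices is beyond a short example. The sketch below shows only the hybridization idea under stated assumptions: a crude random-walk "foraging" stage over a stand-in cost function (Rosenbrock, not a power-flow objective), followed by SciPy's Nelder-Mead polishing of the best bacterium.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def cost(x):
    """Stand-in for the OPF objective (generation cost); Rosenbrock used for illustration."""
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

# crude "foraging" stage: random chemotactic steps taken by a swarm of bacteria
swarm = rng.uniform(-2, 2, size=(30, 2))
for _ in range(50):
    trial = swarm + rng.normal(scale=0.1, size=swarm.shape)
    better = np.array([cost(t) < cost(s) for t, s in zip(trial, swarm)])
    swarm[better] = trial[better]

best = swarm[np.argmin([cost(b) for b in swarm])]

# Nelder-Mead refinement of the best bacterium, mirroring the EBFA hybridization idea
result = minimize(cost, best, method="Nelder-Mead", options={"xatol": 1e-8, "fatol": 1e-8})
print(result.x, result.fun)
```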

  7. Development and Application of a Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.

  8. JPSS Cryosphere Algorithms: Integration and Testing in Algorithm Development Library (ADL)

    NASA Astrophysics Data System (ADS)

    Tsidulko, M.; Mahoney, R. L.; Meade, P.; Baldwin, D.; Tschudi, M. A.; Das, B.; Mikles, V. J.; Chen, W.; Tang, Y.; Sprietzer, K.; Zhao, Y.; Wolf, W.; Key, J.

    2014-12-01

    JPSS is a next generation satellite system that is planned to be launched in 2017. The satellites will carry a suite of sensors that are already on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The NOAA/NESDIS/STAR Algorithm Integration Team (AIT) works within the Algorithm Development Library (ADL) framework, which mimics the operational JPSS Interface Data Processing Segment (IDPS). The AIT contributes to the development, integration and testing of scientific algorithms employed in the IDPS. This presentation discusses cryosphere related activities performed in ADL. The addition of a new ancillary data set - NOAA Global Multisensor Automated Snow/Ice data (GMASI) - with ADL code modifications is described. The preliminary GMASI impact on the gridded Snow/Ice product is estimated. Several modifications to the Ice Age algorithm, which mis-classifies ice type for certain areas and time periods, are tested in the ADL. Sensitivity runs for daytime, nighttime and terminator-zone conditions are performed and presented. Comparisons between the original and modified versions of the Ice Age algorithm are also presented.

  9. BROMOCEA Code: An Improved Grand Canonical Monte Carlo/Brownian Dynamics Algorithm Including Explicit Atoms.

    PubMed

    Solano, Carlos J F; Pothula, Karunakar R; Prajapati, Jigneshkumar D; De Biase, Pablo M; Noskov, Sergei Yu; Kleinekathöfer, Ulrich

    2016-05-10

    All-atom molecular dynamics simulations have a long history of applications studying ion and substrate permeation across biological and artificial pores. While offering unprecedented insights into the underpinning transport processes, MD simulations are limited in time-scales and ability to simulate physiological membrane potentials or asymmetric salt solutions and require substantial computational power. While several approaches to circumvent all of these limitations were developed, Brownian dynamics simulations remain an attractive option to the field. The main limitation, however, is an apparent lack of protein flexibility important for the accurate description of permeation events. In the present contribution, we report an extension of the Brownian dynamics scheme which includes conformational dynamics. To achieve this goal, the dynamics of amino-acid residues was incorporated into the many-body potential of mean force and into the Langevin equations of motion. The developed software solution, called BROMOCEA, was applied to ion transport through OmpC as a test case. Compared to fully atomistic simulations, the results show a clear improvement in the ratio of permeating anions and cations. The present tests strongly indicate that pore flexibility can enhance permeation properties which will become even more important in future applications to substrate translocation. PMID:27088446

  10. Improved ant algorithms for software testing cases generation.

    PubMed

    Yang, Shunkun; Man, Tianlong; Xu, Jiaqi

    2014-01-01

    Existing ant colony optimization (ACO) for software testing cases generation is a very popular domain in software testing engineering. However, the traditional ACO has flaws: early-search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO for software testing cases generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO), which is based on all three of the above methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391

  11. Improved Ant Algorithms for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi

    2014-01-01

    Existing ant colony optimization (ACO) for software testing cases generation is a very popular domain in software testing engineering. However, the traditional ACO has flaws: early-search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO for software testing cases generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO), which is based on all three of the above methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391
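
    Neither record's specific variants (IPVACO, IGPACO, ACIACO) are reproduced here. As a minimal illustration of the underlying loop they modify (probabilistic construction guided by pheromone, deposit proportional to newly covered branches, and evaporation), the following sketch searches a discretized input domain for test cases that cover the branches of a toy function. The unit under test, pheromone constants, and coverage target are all illustrative.

```python
import random

random.seed(3)

def branches_covered(x, y):
    """Toy unit under test: return the set of branch ids the inputs exercise."""
    covered = set()
    if x > 50:
        covered.add("b1")
    else:
        covered.add("b2")
    if y % 7 == 0:
        covered.add("b3")
    if x > 50 and y % 7 == 0:
        covered.add("b4")
    return covered

DOMAIN = list(range(100))                       # discretized input values
tau_x = {v: 1.0 for v in DOMAIN}                # pheromone per candidate value of x
tau_y = {v: 1.0 for v in DOMAIN}
RHO, Q, N_ANTS, N_ITER = 0.1, 1.0, 20, 50

def pick(tau):
    values, weights = zip(*tau.items())
    return random.choices(values, weights=weights, k=1)[0]

covered_total = set()
for _ in range(N_ITER):
    for _ in range(N_ANTS):
        x, y = pick(tau_x), pick(tau_y)
        new = branches_covered(x, y) - covered_total
        covered_total |= new
        tau_x[x] += Q * len(new)                # deposit for newly covered branches
        tau_y[y] += Q * len(new)
    for tau in (tau_x, tau_y):                  # evaporation keeps the colony exploring
        for v in tau:
            tau[v] *= (1.0 - RHO)

print(sorted(covered_total))
```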

  12. Genetic algorithm testbed for expert system testing. Final report

    SciTech Connect

    Roache, E.

    1996-01-01

    In recent years, the electric utility industry has developed advisory and control software that makes use of expert system technology. The validation of the underlying knowledge representation in these expert systems is critical to their success. Most expert systems currently deployed have been validated by certifying that the expert system provides appropriate conclusions for specific test cases. While this type of testing is important, it does not test cases where unexpected inputs are presented to the expert system and potential errors are exposed. Exhaustive testing is not typically an option due to the complexity of the knowledge representation and the combinatorial effects associated with checking all possible inputs through all possible execution paths. Genetic algorithms are general purpose search techniques modeled on natural adaptive systems and selective breeding methods. Genetic algorithms have been used successfully for parameter optimization and efficient search. The goal of this project was to confirm or reject the hypothesis that genetic algorithms (GAs) are useful in expert system validation. The GA system specifically targeted errors in the study's expert system that would be exposed by unexpected input cases. The GA system found errors in the expert system and the hypothesis was confirmed. This report describes the process and results of the project.

  13. An Algorithm for Testing the Efficient Market Hypothesis

    PubMed Central

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of the EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH). PMID:24205148

  14. An algorithm for testing the efficient market hypothesis.

    PubMed

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of the EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH). PMID:24205148
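
    The genetic algorithm in these two records searches over indicator parameters; reproducing it in full is out of scope here. The sketch below shows only the kind of training-period fitness such a search would maximize: the cumulative return of a simple long/flat EMA crossover strategy on a synthetic price series. The EMA spans, the synthetic series, and the long/flat position rule are illustrative assumptions.

```python
import numpy as np

def ema(prices, span):
    """Exponential moving average with smoothing factor 2/(span+1)."""
    alpha = 2.0 / (span + 1.0)
    out = np.empty_like(prices)
    out[0] = prices[0]
    for i in range(1, len(prices)):
        out[i] = alpha * prices[i] + (1.0 - alpha) * out[i - 1]
    return out

def crossover_return(prices, fast=12, slow=26):
    """Cumulative log return of a long/flat EMA crossover strategy."""
    fast_ema, slow_ema = ema(prices, fast), ema(prices, slow)
    position = (fast_ema > slow_ema).astype(float)        # long when the fast EMA is above the slow EMA
    log_ret = np.diff(np.log(prices))
    return float(np.sum(position[:-1] * log_ret))         # yesterday's signal applied to today's return

rng = np.random.default_rng(4)
prices = 1.10 * np.exp(np.cumsum(rng.normal(0, 0.003, 2000)))   # synthetic EUR/USD-like series
# a GA would search (fast, slow) pairs to maximize this training-period fitness
print(crossover_return(prices, fast=12, slow=26))
```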

  15. A New Computer Algorithm for Simultaneous Test Construction of Two-Stage and Multistage Testing.

    ERIC Educational Resources Information Center

    Wu, Ing-Long

    2001-01-01

    Presents two binary programming models with a special network structure that can be explored computationally for simultaneous test construction. Uses an efficient special purpose network algorithm to solve these models. An empirical study illustrates the approach. (SLD)

  16. Predictive Value of HIV-1 Genotypic Resistance Test Interpretation Algorithms

    PubMed Central

    Rhee, Soo-Yon; Fessel, W. Jeffrey; Liu, Tommy F.; Marlowe, Natalia M.; Rowland, Charles M.; Rode, Richard A.; Vandamme, Anne-Mieke; Laethem, Kristel Van; Brun-Vezinet, Françoise; Calvez, Vincent; Taylor, Jonathan; Hurley, Leo; Horberg, Michael; Shafer, Robert W.

    2016-01-01

    Background Interpreting human immunodeficiency virus type 1 (HIV-1) genotypic drug-resistance test results is challenging for clinicians treating HIV-1–infected patients. Multiple drug-resistance interpretation algorithms have been developed, but their predictive value has rarely been evaluated using contemporary clinical data sets. Methods We examined the predictive value of 4 algorithms at predicting virologic response (VR) during 734 treatment-change episodes (TCEs). VR was defined as attaining plasma HIV-1 RNA levels below the limit of quantification. Drug-specific genotypic susceptibility scores (GSSs) were calculated by applying each algorithm to the baseline genotype. Weighted GSSs were calculated by multiplying drug-specific GSSs by antiretroviral (ARV) potency factors. Regimen-specific GSSs (rGSSs) were calculated by adding unweighted or weighted drug-specific GSSs for each salvage therapy ARV. The predictive value of rGSSs was estimated by use of multivariate logistic regression. Results Of 734 TCEs, 475 (65%) were associated with VR. The rGSSs for the 4 algorithms were the variables most strongly predictive of VR. The adjusted rGSS odds ratios ranged from 1.6 to 2.2 (P < .001). Using 10-fold cross-validation, the averaged area under the receiver operating characteristic curve for all algorithms increased from 0.76 with unweighted rGSSs to 0.80 with weighted rGSSs. Conclusions Unweighted and weighted rGSSs of 4 genotypic resistance algorithms were the strongest independent predictors of VR. Optimizing ARV weighting may further improve VR predictions. PMID:19552527

  17. GPS 3-D cockpit displays: Sensors, algorithms, and flight testing

    NASA Astrophysics Data System (ADS)

    Barrows, Andrew Kevin

    Tunnel-in-the-Sky 3-D flight displays have been investigated for several decades as a means of enhancing aircraft safety and utility. However, high costs have prevented commercial development and seriously hindered research into their operational benefits. The rapid development of Differential Global Positioning Systems (DGPS), inexpensive computing power, and ruggedized displays is now changing this situation. A low-cost prototype system was built and flight tested to investigate implementation and operational issues. The display provided an "out the window" 3-D perspective view of the world, letting the pilot see the horizon, runway, and desired flight path even in instrument flight conditions. The flight path was depicted as a tunnel through which the pilot flew the airplane, while predictor symbology provided guidance to minimize path-following errors. Positioning data were supplied by various DGPS sources, including the Stanford Wide Area Augmentation System (WAAS) testbed. A combination of GPS and low-cost inertial sensors provided vehicle heading, pitch, and roll information. Architectural and sensor fusion tradeoffs made during system implementation are discussed. Computational algorithms used to provide guidance on curved paths over the earth geoid are outlined along with display system design issues. It was found that current technology enables low-cost Tunnel-in-the-Sky display systems with a target cost of $20,000 for large-scale commercialization. Extensive testing on Piper Dakota and Beechcraft Queen Air aircraft demonstrated enhanced accuracy and operational flexibility on a variety of complex flight trajectories. These included curved and segmented approaches, traffic patterns flown on instruments, and skywriting by instrument reference. Overlays to existing instrument approaches at airports in California and Alaska were flown and compared with current instrument procedures. These overlays demonstrated improved utility and situational awareness for

  18. Monte Carlo tests of the ELIPGRID-PC algorithm

    SciTech Connect

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
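
    The ELIPGRID algorithm itself is not reproduced here. The sketch below shows the kind of Monte Carlo check described in the record: estimate, by random placement of an elliptical hot spot relative to a square sampling grid, the probability that at least one grid node falls inside the ellipse. It assumes an axis-aligned ellipse and a square grid, which is a simplification of the general ELIPGRID geometry.

```python
import numpy as np

def detection_probability(grid_spacing, semi_major, semi_minor, n_trials=100_000, seed=5):
    """Monte Carlo probability that at least one node of a square sampling grid falls
    inside an axis-aligned elliptical hot spot whose center is uniform within one grid cell."""
    rng = np.random.default_rng(seed)
    # random hot-spot centers within a single grid cell (exploits grid periodicity)
    cx = rng.uniform(0.0, grid_spacing, n_trials)
    cy = rng.uniform(0.0, grid_spacing, n_trials)
    # nearby grid nodes that could possibly fall inside the ellipse
    reach_x = int(np.ceil(semi_major / grid_spacing)) + 1
    reach_y = int(np.ceil(semi_minor / grid_spacing)) + 1
    xs = np.arange(-reach_x, reach_x + 1) * grid_spacing
    ys = np.arange(-reach_y, reach_y + 1) * grid_spacing
    gx, gy = np.meshgrid(xs, ys)
    hit = (((gx[None] - cx[:, None, None]) / semi_major) ** 2
           + ((gy[None] - cy[:, None, None]) / semi_minor) ** 2) <= 1.0
    return hit.any(axis=(1, 2)).mean()

print(detection_probability(grid_spacing=10.0, semi_major=6.0, semi_minor=2.0))
```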

  19. Oscillation Detection Algorithm Development Summary Report and Test Plan

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang

    2009-10-03

    -based modal analysis algorithms have been developed. They include Prony analysis, the Regularized Robust Recursive Least Square (R3LS) algorithm, the Yule-Walker algorithm, the Yule-Walker Spectrum algorithm, and the N4SID algorithm. Each has been shown to be effective for certain situations, but not as effective for some other situations. For example, the traditional Prony analysis works well for disturbance data but not for ambient data, while Yule-Walker is designed for ambient data only. Even in an algorithm that works for both disturbance data and ambient data, such as R3LS, latency resulting from the time window used in the algorithm is an issue in timely estimation of oscillation modes. For ambient data, the time window needs to be longer to accumulate information for a reasonably accurate estimation; while for disturbance data, the time window can be significantly shorter so the latency in estimation can be much less. In addition, adding a known input signal such as noise probing signals can increase the knowledge of system oscillatory properties and thus improve the quality of mode estimation. System situations change over time. Disturbances can occur at any time, and probing signals can be added for a certain time period and then removed. All these observations point to the need to add intelligence to ModeMeter applications. That is, a ModeMeter needs to adaptively select different algorithms and adjust parameters for various situations. This project aims to develop systematic approaches for algorithm selection and parameter adjustment. The very first step is to detect the occurrence of oscillations so that the algorithm and parameters can be changed accordingly. The proposed oscillation detection approach is based on the signal-to-noise ratio of measurements.

  20. Testing of Gyroless Estimation Algorithms for the FUSE Spacecraft

    NASA Technical Reports Server (NTRS)

    Thienel, Julie; Harman, Rick; Oshman, Yaakov

    2003-01-01

    The Far Ultraviolet Spectroscopic Explorer (FUSE) is equipped with two ring laser gyros on each of the spacecraft body axes. In May 2001 one gyro failed. It is anticipated that all of the remaining gyros will also fail, based on intensity warnings. In addition to the gyro failure, two of four reaction wheels failed in late 2001. The spacecraft control now relies heavily on magnetic torque to perform the necessary science maneuvers and hold on target. The only sensor consistently available during slews is a magnetometer. This paper documents the testing and development of magnetometer-based gyroless attitude and rate estimation algorithms for FUSE. The results of two approaches are presented: one relies on a kinematic model for propagation, a method used in aircraft tracking, and the other is a pseudo-linear Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Both algorithms are tested using flight data collected over a few months before and after the reaction wheel failure. Finally, the question of closed-loop stability is addressed. The ability of the controller to meet the science slew requirements, without the gyros, is tested through simulations.

  1. Porting and Testing NPOESS CrIMSS EDR Algorithms

    NASA Technical Reports Server (NTRS)

    Kizer, Susan; Liu, Xu

    2010-01-01

    As a part of the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the NPOESS Preparatory Project (NPP), the instruments Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) make up the Cross-track Infrared and Microwave Sounder Suite (CrIMSS). CrIMSS will primarily provide global temperature, moisture, and pressure profiles and calibrated radiances [1]. In preparation for the NPOESS/NPP launch, porting and testing of the CrIMSS Environmental Data Record (EDR) algorithms need to be performed.

  2. Faith in the algorithm, part 1: beyond the turing test

    SciTech Connect

    Rodriguez, Marko A; Pepe, Alberto

    2009-01-01

    Since the Turing test was first proposed by Alan Turing in 1950, the goal of artificial intelligence has been predicated on the ability for computers to imitate human intelligence. However, the majority of uses for the computer can be said to fall outside the domain of human abilities and it is exactly outside of this domain where computers have demonstrated their greatest contribution. Another definition for artificial intelligence is one that is not predicated on human mimicry, but instead, on human amplification, where the algorithms that are best at accomplishing this are deemed the most intelligent. This article surveys various systems that augment human and social intelligence.

  3. Testing of Gyroless Estimation Algorithms for the FUSE Spacecraft

    NASA Technical Reports Server (NTRS)

    Harman, Rick; Thienel, Julie; Oshman, Yaakov

    2003-01-01

    The Far Ultraviolet Spectroscopic Explorer (FUSE) is equipped with two ring laser gyros on each of the spacecraft body axes. In May 2001 one gyro failed. It is anticipated that all of the remaining gyros will fail, based on intensity warnings. In addition to the gyro failure, two of four reaction wheels failed in late 2001. The spacecraft control now relies heavily on magnetic torque to perform the necessary science maneuvers and hold on target. The only sensor consistently available during slews is a magnetometer. This paper documents the testing and development of magnetometer-based gyroless attitude and rate estimation algorithms for FUSE. The results of two approaches are presented: one relies on a kinematic model for propagation, a method used in aircraft tracking. The other is a pseudo-linear Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Both algorithms are tested using flight data collected over a few months after the reaction wheel failure. Finally, the question of closed-loop stability is addressed. The ability of the controller to meet the science slew requirements, without the gyros, is analyzed.

  4. MTF testing algorithms for sampled thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Fantone, Stephen D.; Imrie, David A.; Orband, Daniel; Zhang, Jian

    2008-03-01

    The introduction of third-generation thermal imagers brings a new challenge to the laboratory evaluation of thermal imager resolution performance. Traditionally, the Modulation Transfer Function (MTF) is used to characterize the resolution performance of the thermal imager. These new third-generation thermal imagers can be categorized as sampled imaging systems due to the finite pixel size of the elements comprising the focal plane array. As such, they violate the requirement of shift invariance required in most linear systems analyses. We present a number of approaches to measuring the resolution performance of these systems and conclude that source scanning at the object plane is essential for proper MTF testing of these sampled thermal-imaging systems. Source scanning serves dual purposes. It over-samples the intensity distribution to form an appropriate LSF and also generates the necessary phases between the thermal target image and the corresponding sensor pixels for accurate MTF calculation. We developed five MTF measurement algorithms to test both analog and digital video outputs of sampled imaging systems. The five algorithms are the Min/Max, Full Scan, Point Scan, Combo Scan, and Sloping Slit methods, and they have all been implemented in a commercially available product.
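
    None of the five measurement methods named above is reproduced here; the sketch only shows the step they share once an oversampled line-spread function (LSF) has been assembled by source scanning: take the magnitude of its Fourier transform and normalize at zero frequency to obtain the MTF. The Gaussian LSF, pixel pitch, and oversampling factor are illustrative.

```python
import numpy as np

def mtf_from_lsf(lsf, sample_pitch):
    """MTF as the normalized magnitude of the Fourier transform of an oversampled LSF.

    `lsf` is the line-spread function sampled at `sample_pitch` (e.g. built by
    sub-pixel scanning of a slit target), finer than the detector pixel pitch.
    """
    lsf = lsf - lsf.min()                               # remove background pedestal
    lsf = lsf / lsf.sum()                               # normalize area so MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=sample_pitch)   # cycles per unit length
    return freqs, mtf / mtf[0]

# toy example: Gaussian-blur LSF oversampled 8x relative to a 25 micron pixel
pixel_pitch = 0.025          # mm
oversample = 8
x = np.arange(-256, 256) * pixel_pitch / oversample
lsf = np.exp(-0.5 * (x / 0.02) ** 2)
freqs, mtf = mtf_from_lsf(lsf, pixel_pitch / oversample)
```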

  5. An efficient algorithm for solving coupled Schroedinger type ODEs, whose potentials include δ-functions

    SciTech Connect

    Gousheh, S.S.

    1996-01-01

    I have used the shooting method to find the eigenvalues (bound state energies) of a set of strongly coupled Schroedinger type equations. I have discussed the advantages of the shooting method when the potentials include δ-functions. I have also discussed some points which are universal in these kinds of problems, whose use makes the algorithm much more efficient. These points include mapping the domain of the ODE into a finite one, using the asymptotic form of the solutions, making the best use of the normalization freedom, and converting the δ-functions into boundary conditions.
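
    The author's coupled-channel code is not reproduced here. The single-channel sketch below illustrates two of the points the abstract lists: bisecting on the energy so that the wavefunction satisfies the far boundary condition, and converting a δ-function potential into a derivative-jump condition at its location (in units where ħ²/2m = 1). The box size, δ strength, grid resolution, and integration scheme are illustrative assumptions.

```python
import numpy as np

L, X0, LAM, N = 10.0, 4.0, 5.0, 4000   # box size, delta position, delta strength, grid points

def shoot(E):
    """Integrate -psi'' + LAM*delta(x - X0)*psi = E*psi on [0, L] with psi(0)=0, psi'(0)=1.

    The delta function is handled as a jump condition: psi'(X0+) = psi'(X0-) + LAM*psi(X0).
    Returns psi(L); an eigenvalue makes this vanish.
    """
    h = L / N
    psi, dpsi = 0.0, 1.0
    jump_index = int(round(X0 / h))
    for i in range(N):
        # simple Euler step for psi'' = -E*psi (valid away from the delta)
        psi_new = psi + h * dpsi
        dpsi_new = dpsi + h * (-E * psi)
        psi, dpsi = psi_new, dpsi_new
        if i == jump_index:
            dpsi += LAM * psi              # derivative jump from the delta-function potential
    return psi

def bisect_eigenvalue(e_lo, e_hi, tol=1e-10):
    """Bisection on E between two energies where shoot() changes sign."""
    f_lo = shoot(e_lo)
    while e_hi - e_lo > tol:
        e_mid = 0.5 * (e_lo + e_hi)
        if shoot(e_mid) * f_lo > 0:
            e_lo, f_lo = e_mid, shoot(e_mid)
        else:
            e_hi = e_mid
    return 0.5 * (e_lo + e_hi)

# bracket the lowest eigenvalue by scanning for a sign change in psi(L)
energies = np.linspace(0.05, 2.0, 200)
vals = np.array([shoot(E) for E in energies])
idx = np.argmax(np.sign(vals[:-1]) != np.sign(vals[1:]))
print(bisect_eigenvalue(energies[idx], energies[idx + 1]))
```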

  6. A model for testing centerfinding algorithms for automated optical navigation

    NASA Technical Reports Server (NTRS)

    Griffin, M. D.; Breckenridge, W. G.

    1979-01-01

    An efficient software simulation of the imaging process for optical navigation is presented, illustrating results using simple examples. The problems of image definition and optical system modeling are examined, including an ideal image containing the features of interest and realistic models of the optical filtering performed by the entire camera system. A digital signal processing technique is applied to the problem of developing methods of automated optical navigation and the subsequent mathematical formulation is presented. Specific objectives such as an analysis of the effects of camera defocusing on centerfinding of planar targets, addition of noise filtering to the algorithm, and implementation of multiple frame capability were investigated.

  7. Test Generation Algorithm for Fault Detection of Analog Circuits Based on Extreme Learning Machine

    PubMed Central

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin; Ren, Xuelong

    2014-01-01

    This paper proposes a novel test generation algorithm based on extreme learning machine (ELM); the algorithm is cost-effective and low-risk for an analog device under test (DUT). The method uses test patterns derived from the test generation algorithm to stimulate the DUT, and then samples output responses of the DUT for fault classification and detection. The novel ELM-based test generation algorithm proposed in this paper contains three main innovations. Firstly, the algorithm saves time by classifying the response space with ELM. Secondly, the algorithm avoids a loss of test precision when the number of impulse-response samples is reduced. Thirdly, a new test signal generation process and a test structure for the test generation algorithm are presented, and both of them are very simple. Finally, the abovementioned improvements are confirmed in experiments. PMID:25610458

  8. Test generation algorithm for fault detection of analog circuits based on extreme learning machine.

    PubMed

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin; Ren, Xuelong

    2014-01-01

    This paper proposes a novel test generation algorithm based on extreme learning machine (ELM); the algorithm is cost-effective and low-risk for an analog device under test (DUT). The method uses test patterns derived from the test generation algorithm to stimulate the DUT, and then samples output responses of the DUT for fault classification and detection. The novel ELM-based test generation algorithm proposed in this paper contains three main innovations. Firstly, the algorithm saves time by classifying the response space with ELM. Secondly, the algorithm avoids a loss of test precision when the number of impulse-response samples is reduced. Thirdly, a new test signal generation process and a test structure for the test generation algorithm are presented, and both of them are very simple. Finally, the abovementioned improvements are confirmed in experiments. PMID:25610458
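
    The test-pattern generation and response-sampling pipeline of these two records is not reproduced here. The sketch below is only a minimal extreme learning machine classifier of the kind used to classify the response space: a random, untrained hidden layer followed by a least-squares fit of the output weights. The toy two-cluster data standing in for fault and fault-free responses is an assumption.

```python
import numpy as np

class ELMClassifier:
    """Minimal extreme learning machine: random hidden layer, least-squares readout."""

    def __init__(self, n_hidden=100, seed=6):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(n_features, self.n_hidden))   # random input weights
        self.b = self.rng.normal(size=self.n_hidden)                 # random biases
        H = np.tanh(X @ self.W + self.b)                             # hidden-layer activations
        T = np.eye(n_classes)[y]                                     # one-hot targets
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)            # output weights by least squares
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)

# toy usage: separate two clusters standing in for "fault-free" vs "faulty" responses
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (200, 5))])
y = np.array([0] * 200 + [1] * 200)
model = ELMClassifier().fit(X, y)
print((model.predict(X) == y).mean())
```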

  9. Construct Implications of Including Still Image or Video in Computer-Based Listening Tests

    ERIC Educational Resources Information Center

    Ockey, Gary J.

    2007-01-01

    Over the past decade, listening comprehension tests have been converting to computer-based tests that include visual input. However, little research is available to suggest how test takers engage with different types of visuals on such tests. The present study compared a series of still images to video in academic computer-based tests to determine…

  10. A Fano cavity test for Monte Carlo proton transport algorithms

    SciTech Connect

    Sterpin, Edmond; Sorriaux, Jefferson; Souris, Kevin; Vynckier, Stefaan; Bouchard, Hugo

    2014-01-15

    Purpose: In the scope of reference dosimetry of radiotherapy beams, Monte Carlo (MC) simulations are widely used to compute ionization chamber dose response accurately. Uncertainties related to the transport algorithm can be verified performing self-consistency tests, i.e., the so-called “Fano cavity test.” The Fano cavity test is based on the Fano theorem, which states that under charged particle equilibrium conditions, the charged particle fluence is independent of the mass density of the media as long as the cross-sections are uniform. Such tests have not been performed yet for MC codes simulating proton transport. The objectives of this study are to design a new Fano cavity test for proton MC and to implement the methodology in two MC codes: Geant4 and PENELOPE extended to protons (PENH). Methods: The new Fano test is designed to evaluate the accuracy of proton transport. Virtual particles with an energy of E₀ and a mass macroscopic cross section of Σ/ρ are transported, having the ability to generate protons with kinetic energy E₀ and to be restored after each interaction, thus providing proton equilibrium. To perform the test, the authors use a simplified simulation model and rigorously demonstrate that the computed cavity dose per incident fluence must equal ΣE₀/ρ, as expected in classic Fano tests. The implementation of the test is performed in Geant4 and PENH. The geometry used for testing is a 10 × 10 cm² parallel virtual field and a cavity (2 × 2 × 0.2 cm³ in size) in a water phantom with dimensions large enough to ensure proton equilibrium. Results: For conservative user-defined simulation parameters (leading to small step sizes), both Geant4 and PENH pass the Fano cavity test within 0.1%. However, differences of 0.6% and 0.7% were observed for PENH and Geant4, respectively, using larger step sizes. For PENH, the difference is attributed to the random-hinge method that introduces an artificial energy

  11. An efficient algorithm to perform multiple testing in epistasis screening

    PubMed Central

    2013-01-01

    Background Research in epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation efforts of statistical epistasis to biological epistasis and attempts to integrate different omics information sources into the epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the amount of hypothesis tests. Gene-gene interaction studies will require a memory proportional to the squared number of SNPs. A genome-wide epistasis search would therefore require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT, requiring an amount of memory independent from the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data. The software is illustrated on real-life data for Crohn’s disease. Results In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions with a dataset of 100,000 SNPs typed on 1000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, containing each four Quad-Core AMD Opteron(tm) Processor 2352 2.1 GHz. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value of less than 0.05 on real-life Crohn’s disease (CD) data. Conclusions Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP
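
    MBMDR-3.0.3's memory-efficient implementation and its SNP-SNP interaction statistics are not reproduced here. The sketch below shows the plain maxT principle the record builds on, applied to simple per-feature two-sample statistics: permute the trait, keep only the maximum statistic of each permutation, and adjust each observed statistic against that null distribution of maxima. The statistic, data, and permutation count are illustrative.

```python
import numpy as np

def maxt_adjusted_pvalues(X, y, n_perm=999, seed=8):
    """Westfall-Young maxT adjusted p-values for per-feature two-sample t-like statistics.

    X: (samples x features) feature matrix; y: binary trait (0/1).
    """
    rng = np.random.default_rng(seed)

    def stats(labels):
        g0, g1 = X[labels == 0], X[labels == 1]
        num = g1.mean(axis=0) - g0.mean(axis=0)
        den = np.sqrt(g0.var(axis=0, ddof=1) / len(g0) + g1.var(axis=0, ddof=1) / len(g1))
        return np.abs(num / den)

    observed = stats(y)
    max_null = np.empty(n_perm)
    for p in range(n_perm):
        max_null[p] = stats(rng.permutation(y)).max()   # only the permutation maximum is stored
    # adjusted p-value: how often the permutation maximum beats each observed statistic
    return (1 + (max_null[:, None] >= observed[None, :]).sum(axis=0)) / (n_perm + 1)

# toy usage: 500 samples, 50 features, one truly associated feature
rng = np.random.default_rng(9)
y = rng.integers(0, 2, 500)
X = rng.normal(size=(500, 50))
X[:, 0] += 0.5 * y
print(maxt_adjusted_pvalues(X, y)[:3])
```

    Only the running maximum per permutation needs to be kept, which is the memory observation the record's implementation exploits at much larger scale.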

  12. Development of a Smart Release Algorithm for Mid-Air Separation of Parachute Test Articles

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is currently developing an autonomous method to separate a capsule-shaped parachute test vehicle from an air-drop platform for use in the test program to develop and validate the parachute system for the Orion spacecraft. The CPAS project seeks to perform air-drop tests of an Orion-like boilerplate capsule. Delivery of the boilerplate capsule to the test condition has proven to be a critical and complicated task. In the current concept, the boilerplate vehicle is extracted from an aircraft on top of a Type V pallet and then separated from the pallet in mid-air. The attitude of the vehicles at separation is critical to avoiding re-contact and successfully deploying the boilerplate into a heatshield-down orientation. Neither the pallet nor the boilerplate has an active control system. However, the attitude of the mated vehicle as a function of time is somewhat predictable. CPAS engineers have designed an avionics system to monitor the attitude of the mated vehicle as it is extracted from the aircraft and command a release when the desired conditions are met. The algorithm includes contingency capabilities designed to release the test vehicle before undesirable orientations occur. The algorithm was verified with simulation and ground testing. The pre-flight development and testing is discussed and limitations of ground testing are noted. The CPAS project performed a series of three drop tests as a proof-of-concept of the release technique. These tests helped to refine the attitude instrumentation and software algorithm to be used on future tests. The drop tests are described in detail and the evolution of the release system with each test is described.

  13. Richardson-Lucy/maximum likelihood image restoration algorithm for fluorescence microscopy: further testing.

    PubMed

    Holmes, T J; Liu, Y H

    1989-11-15

    A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson "Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972) and L. B. Lucy "An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974). Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. Conclusions are in support of the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here. PMID:20555971
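
    Since the Richardson-Lucy iteration itself is standard, a one-dimensional sketch is given below: the estimate is repeatedly multiplied by the ratio of the observed data to the re-blurred estimate, correlated with the flipped PSF. The Gaussian PSF, Poisson noise level, and iteration count are illustrative; the 2-D and 3-D microscopy cases follow the same update with higher-dimensional convolutions.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """Richardson-Lucy / maximum-likelihood deconvolution for Poisson noise (1-D sketch).

    Update: estimate <- estimate * correlate(observed / convolve(estimate, psf), psf)
    where correlation is implemented as convolution with the flipped PSF.
    """
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_flipped = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, eps)
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# toy usage: blur a two-spike "specimen" with a Gaussian PSF, add Poisson noise, restore
rng = np.random.default_rng(10)
x = np.zeros(200)
x[60], x[90] = 500.0, 300.0
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
psf /= psf.sum()
observed = rng.poisson(np.convolve(x, psf, mode="same")).astype(float)
restored = richardson_lucy(observed, psf)
```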

  14. Testing of the on-board attitude determination and control algorithms for SAMPEX

    NASA Technical Reports Server (NTRS)

    Mccullough, Jon D.; Flatley, Thomas W.; Henretty, Debra A.; Markley, F. Landis; San, Josephine K.

    1993-01-01

    Algorithms for on-board attitude determination and control of the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) have been expanded to include a constant gain Kalman filter for the spacecraft angular momentum, pulse width modulation for the reaction wheel command, an algorithm to avoid pointing the Heavy Ion Large Telescope (HILT) instrument boresight along the spacecraft velocity vector, and the addition of digital sun sensor (DSS) failure detection logic. These improved algorithms were tested in a closed-loop environment for three orbit geometries, one with the sun perpendicular to the orbit plane, and two with the sun near the orbit plane - at Autumnal Equinox and at Winter Solstice. The closed-loop simulator was enhanced and used as a truth model for the control systems' performance evaluation and sensor/actuator contingency analysis. The simulations were performed on a VAX 8830 using a prototype version of the on-board software.

  15. Testing of Gyroless Estimation Algorithms for the FUSE Spacecraft

    NASA Technical Reports Server (NTRS)

    Thienel, Julie; Harman, Rick; Oshman, Yaakov

    2003-01-01

    The Far Ultraviolet Spectroscopic Explorer (FUSE) is equipped with two ring laser gyros on each of the spacecraft body axes. In May 2001 one gyro failed. It is anticipated that all of the remaining gyros will also fail, based on intensity warnings. In addition to the gyro failure, two of four reaction wheels failed in late 2001. The spacecraft control now relies heavily on magnetic torque to perform the necessary science maneuvers. The only sensor available during slews is a magnetometer. This paper documents the testing and development of gyroless attitude and rate estimation algorithms for FUSE. The results of two approaches are presented: one relies on a kinematics model for propagation, a method used in aircraft tracking, and the other is a traditional Extended Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Finally, the question of closed-loop stability is addressed. The ability of the controller to meet the science slew requirements, without the gyros, is tested through simulations.

  16. A new cardiopulmonary exercise testing prognosticating algorithm for heart failure patients treated with beta-blockers.

    PubMed

    Corrà, Ugo; Mezzani, Alessandro; Giordano, Andrea; Caruso, Roberto; Giannuzzi, Pantaleo

    2012-04-01

    In 2004, a cardiopulmonary exercise testing (CPET) prognosticating algorithm for heart failure (HF) patients was proposed. The algorithm employed a stepwise assessment of peak oxygen consumption (VO2), slope of regression relating minute ventilation to carbon dioxide output (VE/VCO2) and peak respiratory exchange ratio (RER), and was proposed as an alternative to the traditional strategy of using a single CPET parameter to describe prognosis. Since its initial proposal, the prognosticating algorithm has not been reassessed, although a re-evaluation is in order given the fact that new HF therapies, such as beta-blocker therapy, have significantly improved survival in HF. The present review, based on a critical examination of CPET outcome studies in HF patients regularly treated with beta-blockers, suggests a new prognosticating algorithm. The algorithm comprises four CPET parameters: peak RER, exertional oscillatory ventilation (EOV), peak VO2 and peak systolic blood pressure (SBP). Compared to previous proposals, the present preliminary attempt includes EOV instead of VE/VCO2 slope as ventilatory CPET parameter, and peak SBP as hemodynamic-derived index. PMID:21450608

  17. New algorithms for phase unwrapping: implementation and testing

    NASA Astrophysics Data System (ADS)

    Kotlicki, Krzysztof

    1998-11-01

    In this paper it is shown how regularization theory was used in new noise-immune algorithms for phase unwrapping. The algorithms were developed by M. Servin, J.L. Marroquin and F.J. Cuevas at Centro de Investigaciones en Optica A.C. and Centro de Investigacion en Matematicas A.C. in Mexico. The theory is presented. The objective of the work was to implement the algorithms in software able to perform off-line unwrapping of fringe patterns. The algorithms are presented, as well as the results and the software developed for the implementation.
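
    The regularization-based, noise-immune algorithms of Servin, Marroquin and Cuevas are not reproduced here. To make the underlying problem concrete, the sketch below implements only the elementary Itoh-style 1-D unwrapping that such algorithms improve upon: rewrap successive phase differences into (-π, π] and integrate them. The quadratic test phase is an illustrative example; this simple scheme fails under noise or undersampling, which motivates the regularized approach.

```python
import numpy as np

def unwrap_1d(wrapped):
    """Itoh-style 1-D phase unwrapping: integrate rewrapped phase differences.

    Each successive difference is rewrapped into (-pi, pi] before cumulative
    summation; this breaks down with noisy or undersampled fringe data.
    """
    diffs = np.diff(wrapped)
    diffs_wrapped = (diffs + np.pi) % (2 * np.pi) - np.pi
    return np.concatenate([[wrapped[0]], wrapped[0] + np.cumsum(diffs_wrapped)])

# toy usage: a quadratic true phase, wrapped into (-pi, pi], is recovered exactly
x = np.linspace(0, 1, 500)
true_phase = 40.0 * x**2
wrapped = (true_phase + np.pi) % (2 * np.pi) - np.pi
print(np.allclose(unwrap_1d(wrapped), true_phase, atol=1e-9))
```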

  18. Development of a computer algorithm for the analysis of variable-frequency AC drives: Case studies included

    NASA Technical Reports Server (NTRS)

    Kankam, M. David; Benjamin, Owen

    1991-01-01

    The development of computer software for performance prediction and analysis of voltage-fed, variable-frequency AC drives for space power applications is discussed. The AC drives discussed include the pulse width modulated inverter (PWMI), a six-step inverter and the pulse density modulated inverter (PDMI), each individually connected to a wound-rotor induction motor. Various d-q transformation models of the induction motor are incorporated for user-selection of the most applicable model for the intended purpose. Simulation results of selected AC drives correlate satisfactorily with published results. Future additions to the algorithm are indicated. These improvements should enhance the applicability of the computer program to the design and analysis of space power systems.

  19. A Test of Genetic Algorithms in Relevance Feedback.

    ERIC Educational Resources Information Center

    Lopez-Pujalte, Cristina; Guerrero Bote, Vicente P.; Moya Anegon, Felix de

    2002-01-01

    Discussion of information retrieval, query optimization techniques, and relevance feedback focuses on genetic algorithms, which are derived from artificial intelligence techniques. Describes an evaluation of different genetic algorithms using a residual collection method and compares results with the Ide dec-hi method (Salton and Buckley, 1990…

  20. Evaluating Knowledge Structure-Based Adaptive Testing Algorithms and System Development

    ERIC Educational Resources Information Center

    Wu, Huey-Min; Kuo, Bor-Chen; Yang, Jinn-Min

    2012-01-01

    In recent years, many computerized test systems have been developed for diagnosing students' learning profiles. Nevertheless, it remains a challenging issue to find an adaptive testing algorithm to both shorten testing time and precisely diagnose the knowledge status of students. In order to find a suitable algorithm, four adaptive testing…

  1. Statistical algorithms for a comprehensive test ban treaty discrimination framework

    SciTech Connect

    Foote, N.D.; Anderson, D.N.; Higbee, K.T.; Miller, N.E.; Redgate, T.; Rohay, A.C.; Hagedorn, D.N.

    1996-10-01

    Seismic discrimination is the process of identifying a candidate seismic event as an earthquake or explosion using information from seismic waveform features (seismic discriminants). In the CTBT setting, low energy seismic activity must be detected and identified. A defensible CTBT discrimination decision requires an understanding of false-negative (declaring an event to be an earthquake given it is an explosion) and false-positive (declaring an event to be an explosion given it is an earthquake) rates. These rates are derived from a statistical discrimination framework. A discrimination framework can be as simple as a single statistical algorithm or it can be a mathematical construct that integrates many different types of statistical algorithms and CTBT technologies. In either case, the result is the identification of an event and the numerical assessment of the accuracy of an identification, that is, false-negative and false-positive rates. In Anderson et al., eight statistical discrimination algorithms are evaluated relative to their ability to give results that effectively contribute to a decision process and to be interpretable with physical (seismic) theory. These algorithms can be discrimination frameworks individually or components of a larger framework. The eight algorithms are linear discrimination (LDA), quadratic discrimination (QDA), variably regularized discrimination (VRDA), flexible discrimination (FDA), logistic discrimination, K-th nearest neighbor (KNN), kernel discrimination, and classification and regression trees (CART). In this report, the performance of these eight algorithms, as applied to regional seismic data, is documented. Based on the findings in Anderson et al. and this analysis, CART is an appropriate algorithm for an automated CTBT setting.
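
    None of the eight algorithms' implementations or the regional seismic data are reproduced here. The sketch below only illustrates, with scikit-learn's linear discriminant analysis on two synthetic waveform features, how a discrimination framework yields the false-negative and false-positive rates discussed above. The feature distributions and class labels are invented for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(11)

# synthetic discriminants (e.g. magnitude measures, amplitude ratios); distributions are illustrative
earthquakes = rng.multivariate_normal([4.0, 1.0], [[0.3, 0.1], [0.1, 0.2]], 500)
explosions = rng.multivariate_normal([4.6, 0.4], [[0.3, 0.1], [0.1, 0.2]], 500)
X = np.vstack([earthquakes, explosions])
y = np.array([0] * 500 + [1] * 500)            # 0 = earthquake, 1 = explosion

# split into training and evaluation sets
idx = rng.permutation(len(y))
train, test = idx[:700], idx[700:]
clf = LinearDiscriminantAnalysis().fit(X[train], y[train])
pred = clf.predict(X[test])

# false negative: explosion declared an earthquake; false positive: earthquake declared an explosion
fn_rate = np.mean(pred[y[test] == 1] == 0)
fp_rate = np.mean(pred[y[test] == 0] == 1)
print(f"false-negative rate: {fn_rate:.3f}, false-positive rate: {fp_rate:.3f}")
```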

  2. Fast mode decision algorithm in MPEG-2 to H.264/AVC transcoding including group of picture structure conversion

    NASA Astrophysics Data System (ADS)

    Lee, Kangjun; Jeon, Gwanggil; Jeong, Jechang

    2009-05-01

    The H.264/AVC baseline profile is used in many applications, including digital multimedia broadcasting, Internet protocol television, and storage devices, while the MPEG-2 main profile is widely used in applications, such as high-definition television and digital versatile disks. The MPEG-2 main profile supports B pictures for bidirectional motion prediction. Therefore, transcoding the MPEG-2 main profile to the H.264/AVC baseline is necessary for universal multimedia access. In the cascaded pixel domain transcoder architecture, the calculation of the rate distortion cost as part of the mode decision process in the H.264/AVC encoder requires extremely complex computations. To reduce the complexity inherent in the implementation of a real-time transcoder, we propose a fast mode decision algorithm based on complexity information from the reference region that is used for motion compensation. In this study, an adaptive mode decision process was used based on the modes assigned to the reference regions. Simulation results indicated that a significant reduction in complexity was achieved without significant degradation of video quality.

  3. A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests

    NASA Astrophysics Data System (ADS)

    Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars

    2015-09-01

    The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is bigger than the viscous relaxation time. The small time step required for stability (< 2 Kyr) makes this type of model computationally intensive, so there remains a need to develop strategies that mitigate the stability problem by making larger (at least ∼10 Kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with both Eulerian or Lagrangian grids. It includes α and β parameters to respectively control both the vertical and the horizontal slope-dependent penalization terms, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate

  4. Development of Online Cognitive and Algorithm Tests as Assessment Tools in Introductory Computer Science Courses

    ERIC Educational Resources Information Center

    Avancena, Aimee Theresa; Nishihara, Akinori; Vergara, John Paul

    2012-01-01

    This paper presents the online cognitive and algorithm tests, which were developed in order to determine if certain cognitive factors and fundamental algorithms correlate with the performance of students in their introductory computer science course. The tests were implemented among Management Information Systems majors from the Philippines and…

  5. Testing Algorithmic Skills in Traditional and Non-Traditional Programming Environments

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Máth, János; Abari, Kálmán

    2015-01-01

    The Testing Algorithmic and Application Skills (TAaAS) project was launched in the 2011/2012 academic year to test first year students of Informatics, focusing on their algorithmic skills in traditional and non-traditional programming environments, and on the transference of their knowledge of Informatics from secondary to tertiary education. The…

  6. An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.

    PubMed

    Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D

    2016-05-01

    Over the past decades, many studies have been published on the extraction of non-invasive foetal electrocardiogram (NI-FECG) from abdominal recordings. Most of these contributions claim to obtain excellent results in detecting foetal QRS (FQRS) complexes in terms of location. A small subset of authors have investigated the extraction of morphological features from the NI-FECG. However, due to the shortage of available public databases, the large variety of performance measures employed and the lack of open-source reference algorithms, most contributions cannot be meaningfully assessed. This article attempts to address these issues by presenting a standardised methodology for stress testing NI-FECG algorithms, including absolute data, as well as extraction and evaluation routines. To that end, a large database of realistic artificial signals was created, totaling 145.8 h of multichannel data and over one million FQRS complexes. An important characteristic of this dataset is the inclusion of several non-stationary events (e.g. foetal movements, uterine contractions and heart rate fluctuations) that are critical for evaluating extraction routines. To demonstrate our testing methodology, three classes of NI-FECG extraction algorithms were evaluated: blind source separation (BSS), template subtraction (TS) and adaptive methods (AM). Experiments were conducted to benchmark the performance of eight NI-FECG extraction algorithms on the artificial database focusing on: FQRS detection and morphological analysis (foetal QT and T/QRS ratio). The overall median FQRS detection accuracies (i.e. considering all non-stationary events) for the best performing methods in each group were 99.9% for BSS, 97.9% for AM and 96.0% for TS. Both FQRS detections and morphological parameters were shown to heavily depend on the extraction techniques and signal-to-noise ratio. Particularly, it is shown that their evaluation in the source domain, obtained after using a BSS technique, should be

  7. Perceptual Tests of an Algorithm for Musical Key-Finding

    ERIC Educational Resources Information Center

    Schmuckler, Mark A.; Tomovski, Robert

    2005-01-01

    Perceiving the tonality of a musical passage is a fundamental aspect of the experience of hearing music. Models for determining tonality have thus occupied a central place in music cognition research. Three experiments investigated 1 well-known model of tonal determination: the Krumhansl-Schmuckler key-finding algorithm. In Experiment 1,…

  8. An Efficient Functional Test Generation Method For Processors Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Hudec, Ján; Gramatová, Elena

    2015-07-01

    The paper presents a new functional test generation method for processor testing based on genetic algorithms and evolutionary strategies. The tests are generated over an instruction set architecture and a processor description. Such functional tests belong to software-oriented testing. The quality of the tests is evaluated by code coverage of the processor description using simulation. The presented test generation method uses VHDL models of processors and the professional simulator ModelSim. The rules, parameters and fitness functions were defined for the various genetic algorithms used in automatic test generation. Functionality and effectiveness were evaluated using the RISC-type processor DP32.

  9. Small sample training and test selection method for optimized anomaly detection algorithms in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.

    2012-01-01

    There are numerous anomaly detection algorithms proposed for hyperspectral imagery. Robust parameter design (RPD) techniques provide an avenue to select robust settings capable of operating consistently across a large variety of image scenes. Many researchers in this area are faced with a paucity of data. Unfortunately, there are no data splitting methods for model validation of datasets with small sample sizes. Typically, training and test sets of hyperspectral images are chosen randomly. Previous research has developed a framework for optimizing anomaly detection in HSI by considering specific image characteristics as noise variables within the context of RPD; these characteristics include the Fisher score, the ratio of target pixels and the number of clusters. We have developed a method for selecting hyperspectral image training and test subsets that yields consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but still provide improvements over random training and test subset assignments by maximizing the volume and average distance between image noise characteristics. The small sample training and test selection method is contrasted with randomly selected training sets as well as training sets chosen from the CADEX and DUPLEX algorithms for the well-known Reed-Xiaoli anomaly detector.

  10. LPT. Plot plan and site layout. Includes shield test pool/EBOR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LPT. Plot plan and site layout. Includes shield test pool/EBOR facility. (TAN-645 and -646) low power test building (TAN-640 and -641), water storage tanks, guard house (TAN-642), pump house (TAN-644), driveways, well, chlorination building (TAN-643), septic system. Ralph M. Parsons 1229-12 ANP/GE-7-102. November 1956. Approved by INEEL Classification Office for public release. INEEL index code no. 038-0102-00-693-107261 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  11. Algorithms for Computerized Test Construction Using Classical Item Parameters.

    ERIC Educational Resources Information Center

    Adema, Jos J.; van der Linden, Wim J.

    1989-01-01

Two zero-one linear programming models for constructing tests using classical item and test parameters are given. These models are useful, for instance, when classical test theory must serve as an interface between an item response theory-based item banking system and a test constructor unfamiliar with the underlying theory. (TJH)

  12. Particle-In-Cell Multi-Algorithm Numerical Test-Bed

    NASA Astrophysics Data System (ADS)

    Meyers, M. D.; Yu, P.; Tableman, A.; Decyk, V. K.; Mori, W. B.

    2015-11-01

    We describe a numerical test-bed that allows for the direct comparison of different numerical simulation schemes using only a single code. It is built from the UPIC Framework, which is a set of codes and modules for constructing parallel PIC codes. In this test-bed code, Maxwell's equations are solved in Fourier space in two dimensions. One can readily examine the numerical properties of a real space finite difference scheme by including its operators' Fourier space representations in the Maxwell solver. The fields can be defined at the same location in a simulation cell or can be offset appropriately by half-cells, as in the Yee finite difference time domain scheme. This allows for the accurate comparison of numerical properties (dispersion relations, numerical stability, etc.) across finite difference schemes, or against the original spectral scheme. We have also included different options for the charge and current deposits, including a strict charge conserving current deposit. The test-bed also includes options for studying the analytic time domain scheme, which eliminates numerical dispersion errors in vacuum. We will show examples from the test-bed that illustrate how the properties of some numerical instabilities vary between different PIC algorithms. Work supported by the NSF grant ACI 1339893 and DOE grant DE-SC0008491.

  13. Implementation and testing of a simple data assimilation algorithm in the regional air pollution forecast model, DEOM

    NASA Astrophysics Data System (ADS)

    Frydendall, J.; Brandt, J.; Christensen, J. H.

    2009-08-01

A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM), applied for air pollution forecasting at the National Environmental Research Institute (NERI), Denmark. In this paper, the algorithm and the results from experiments designed to find the optimal setup of the algorithm are described. The algorithm has been developed and optimized via eight different experiments where the results from different model setups have been tested against measurements from the EMEP (European Monitoring and Evaluation Programme) network covering a half-year period, April-September 1999. The best-performing setup of the data assimilation algorithm for surface ozone concentrations has been found, including the combination of determining the covariances using the Hollingsworth method, varying the correlation length according to the number of adjacent observation stations and applying the assimilation routine at three successive hours during the morning. Improvements in the correlation coefficient in the range of 0.1 to 0.21 were found between the results from the reference configuration and those from the optimal configuration of the data assimilation algorithm. The data assimilation algorithm will in the future be used in the operational THOR integrated air pollution forecast system, which includes the DEOM.
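
    For readers unfamiliar with statistical interpolation, the minimal Python sketch below shows the standard optimal-interpolation update on which such schemes are based; the Gaussian covariance model, correlation length, error variances and nearest-grid-point observation operator are illustrative assumptions, not the DEOM/Hollingsworth settings described above.

      import numpy as np

      # Minimal statistical (optimal) interpolation update for a gridded field:
      #   analysis = background + K * (obs - H * background),  K = B H^T (H B H^T + R)^-1
      # All covariance settings below are illustrative assumptions.

      def oi_update(background, grid_xy, obs, obs_xy,
                    corr_length=150.0, sigma_b=10.0, sigma_o=5.0):
          def cov(a, b):
              d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
              return (sigma_b ** 2) * np.exp(-0.5 * (d / corr_length) ** 2)

          B_Ht = cov(grid_xy, obs_xy)                      # cov(grid point, obs point)
          H_B_Ht = cov(obs_xy, obs_xy)                     # cov between obs points
          R = (sigma_o ** 2) * np.eye(len(obs))            # obs error covariance
          # Simple observation operator H: nearest grid point to each station.
          nearest = np.argmin(np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :],
                                             axis=-1), axis=0)
          innovation = obs - background[nearest]
          K = B_Ht @ np.linalg.inv(H_B_Ht + R)             # gain matrix
          return background + K @ innovation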

  14. Implementation and testing of a simple data assimilation algorithm in the regional air pollution forecast model, DEOM

    NASA Astrophysics Data System (ADS)

    Frydendall, J.; Brandt, J.; Christensen, J. H.

    2009-03-01

A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM), applied for air pollution forecasting at the National Environmental Research Institute (NERI), Denmark. In this paper, the algorithm and the results from experiments designed to find the optimal setup of the algorithm are described. The algorithm has been developed and optimized via eight different experiments where the results from different model setups have been tested against measurements from the EMEP (European Monitoring and Evaluation Programme) network covering a half-year period, April-September 1999. The best-performing setup of the data assimilation algorithm for surface ozone concentrations has been found, including the combination of determining the covariances using the Hollingsworth method, varying the correlation length according to the number of adjacent observation stations and applying the assimilation routine at three successive hours during the morning. Improvements in the correlation coefficient in the range of 0.1 to 0.21 were found between the results from the reference configuration and those from the optimal configuration of the data assimilation algorithm. The data assimilation algorithm will in the future be used in the operational THOR integrated air pollution forecast system, which includes the DEOM.

  15. Computational Analysis of Arc-Jet Wedge Tests Including Ablation and Shape Change

    NASA Technical Reports Server (NTRS)

    Goekcen, Tahir; Chen, Yih-Kanq; Skokova, Kristina A.; Milos, Frank S.

    2010-01-01

    Coupled fluid-material response analyses of arc-jet wedge ablation tests conducted in a NASA Ames arc-jet facility are considered. These tests were conducted using blunt wedge models placed in a free jet downstream of the 6-inch diameter conical nozzle in the Ames 60-MW Interaction Heating Facility. The fluid analysis includes computational Navier-Stokes simulations of the nonequilibrium flowfield in the facility nozzle and test box as well as the flowfield over the models. The material response analysis includes simulation of two-dimensional surface ablation and internal heat conduction, thermal decomposition, and pyrolysis gas flow. For ablating test articles undergoing shape change, the material response and fluid analyses are coupled in order to calculate the time dependent surface heating and pressure distributions that result from shape change. The ablating material used in these arc-jet tests was Phenolic Impregnated Carbon Ablator. Effects of the test article shape change on fluid and material response simulations are demonstrated, and computational predictions of surface recession, shape change, and in-depth temperatures are compared with the experimental measurements.

  16. Small-scale rotor test rig capabilities for testing vibration alleviation algorithms

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.; Leyland, Jane Anne

    1987-01-01

A test was conducted to assess the capabilities of a small-scale rotor test rig for implementing higher harmonic control and stability augmentation algorithms. The test rig uses three high-speed actuators to excite the swashplate over a range of frequencies. The actuator position signals were monitored to measure the response amplitudes at several frequencies, and the ratio of response amplitude to excitation amplitude was plotted as a function of frequency. In addition to actuator performance, acceleration from six accelerometers placed on the test rig was monitored to determine whether a linear relationship exists between the harmonics of the N/Rev control input and the measured response. The least square error (LSE) identification technique was used to identify local and global transfer matrices for two rotor speeds at two batch sizes each. It was determined that the multicyclic control computer system interfaced very well with the rotor system and kept track of the input accelerometer signals and their phase angles. However, the current high-speed actuators were found to be incapable of providing sufficient control authority at the higher excitation frequencies.

  17. Comparison of two extractable nuclear antigen testing algorithms: ALBIA versus ELISA/line immunoassay.

    PubMed

    Chandratilleke, Dinusha; Silvestrini, Roger; Culican, Sue; Campbell, David; Byth-Wilson, Karen; Swaminathan, Sanjay; Lin, Ming-Wei

    2016-08-01

Extractable nuclear antigen (ENA) antibody testing is often requested in patients with suspected connective tissue diseases. Most laboratories in Australia use a two-step process involving a high-sensitivity screening assay followed by a high-specificity confirmation test. Multiplexing technology with an Addressable Laser Bead Immunoassay (e.g., FIDIS) offers simultaneous detection of multiple antibody specificities, allowing single-step screening and confirmation. We compared our current diagnostic laboratory testing algorithm [Organtec ELISA screen / Euroimmun line immunoassay (LIA) confirmation] and the FIDIS Connective Profile. A total of 529 samples (443 consecutive plus 86 with known autoantibody positivity) were run through both algorithms, and 479 samples (90.5%) were concordant. The same autoantibody profile was detected in 100 samples (18.9%), and 379 (71.6%) were concordant negative samples. The 50 discordant samples (9.5%) were subdivided into 'likely FIDIS or current method correct' or 'unresolved' based on ancillary data. 'Unresolved' samples (n = 25) were subclassified as 'potentially' versus 'potentially not' clinically significant based on the change to clinical interpretation. Only nine samples (1.7%) were deemed to be 'potentially clinically significant'. Overall, we found that the FIDIS Connective Profile ENA kit is non-inferior to the current ELISA screen/LIA characterisation. Reagent and capital costs may be limiting factors in adopting the FIDIS, but potential benefits include single-step analysis and simultaneous detection of dsDNA antibodies. PMID:27316331

  18. An Algorithm for Real-Time Optimal Photocurrent Estimation Including Transient Detection for Resource-Constrained Imaging Applications

    NASA Astrophysics Data System (ADS)

    Zemcov, Michael; Crill, Brendan; Ryan, Matthew; Staniszewski, Zak

    2016-06-01

    Mega-pixel charge-integrating detectors are common in near-IR imaging applications. Optimal signal-to-noise ratio estimates of the photocurrents, which are particularly important in the low-signal regime, are produced by fitting linear models to sequential reads of the charge on the detector. Algorithms that solve this problem have a long history, but can be computationally intensive. Furthermore, the cosmic ray background is appreciable for these detectors in Earth orbit, particularly above the Earth’s magnetic poles and the South Atlantic Anomaly, and on-board reduction routines must be capable of flagging affected pixels. In this paper, we present an algorithm that generates optimal photocurrent estimates and flags random transient charge generation from cosmic rays, and is specifically designed to fit on a computationally restricted platform. We take as a case study the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx), a NASA Small Explorer astrophysics experiment concept, and show that the algorithm can easily fit in the resource-constrained environment of such a restricted platform. Detailed simulations of the input astrophysical signals and detector array performance are used to characterize the fitting routines in the presence of complex noise properties and charge transients. We use both Hubble Space Telescope Wide Field Camera-3 and Wide-field Infrared Survey Explorer to develop an empirical understanding of the susceptibility of near-IR detectors in low earth orbit and build a model for realistic cosmic ray energy spectra and rates. We show that our algorithm generates an unbiased estimate of the true photocurrent that is identical to that from a standard line fitting package, and characterize the rate, energy, and timing of both detected and undetected transient events. This algorithm has significant potential for imaging with charge-integrating detectors in astrophysics, earth science, and remote
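
    A minimal Python sketch of the underlying idea (slope fitting of sequential reads plus a crude jump check) is shown below; the 5-sigma threshold and the unweighted least-squares fit are assumptions for illustration and do not reproduce the optimal, resource-constrained routine described in the paper.

      import numpy as np

      # Up-the-ramp photocurrent estimate with a crude transient (cosmic-ray) check:
      # fit a line to the sequential non-destructive reads and flag the pixel if any
      # read-to-read jump departs from the median difference by more than a threshold.
      # The 5-sigma threshold and the unweighted least-squares fit are assumptions.

      def fit_ramp(times, reads, jump_sigma=5.0):
          diffs = np.diff(reads)
          robust_sigma = 1.4826 * np.median(np.abs(diffs - np.median(diffs)))
          transient = bool(np.any(np.abs(diffs - np.median(diffs))
                                  > jump_sigma * max(robust_sigma, 1e-12)))
          slope, intercept = np.polyfit(times, reads, 1)   # slope = photocurrent
          return slope, transient

      times = np.arange(20) * 1.5                          # 20 reads, 1.5 s apart
      ramp = 40.0 * times + np.random.normal(0.0, 8.0, 20)
      ramp[12:] += 500.0                                   # injected charge jump
      print(fit_ramp(times, ramp))                         # transient is flagged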

  19. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…

  20. Photo Library of the Nevada Site Office (Includes historical archive of nuclear testing images)

    DOE Data Explorer

    The Nevada Site Office makes available publicly released photos from their archive that includes photos from both current programs and historical activities. The historical collections include atmospheric and underground nuclear testing photos and photos of other events and people related to the Nevada Test Site. Current collections are focused on homeland security, stockpile stewardship, and environmental management and restoration. See also the Historical Film Library at http://www.nv.doe.gov/library/films/testfilms.aspx and the Current Film Library at http://www.nv.doe.gov/library/films/current.aspx. Current films can be viewed online, but only short clips of the historical films are viewable. They can be ordered via an online request form for a very small shipping and handling fee.

  1. Anemia analyzer: algorithm and reflex testing in clinical practice leading to efficiency and cost savings.

    PubMed

    Haq, Samir M

    2009-01-01

Anemia is a common disease affecting about 3.5 million people in the United States. In present-day clinical practice, a clinician makes a diagnosis of anemia based on low hemoglobin levels discovered during a complete blood count (CBC) test. If the etiology of the anemia is not readily apparent, the clinician orders additional testing to discover the cause of the anemia. Which tests are ordered, in what order they are run, and how the information gathered from them is used depend primarily on the individual physician's knowledge and expertise. Using this system to determine the cause of anemia is not only labor- and resource-intensive but also carries a potential for morbidity and occasional mortality. Utilizing previously published data, we created an algorithmic approach that can identify the cause of anemia in the majority of cases. The algorithm accepts as input three parameters from a CBC test: (1) mean corpuscular volume, (2) red cell distribution width, and (3) reticulocyte count. With these three parameters, the algorithm generates a probable etiology of the anemia. Additionally, the algorithm will automatically order the reflex tests needed to confirm the diagnosis. These reflex tests can be modified depending on the policies of the institution using the algorithm, as different institutions may order different tests based on availability and costs. This is a simple algorithm that could be integrated into the CBC test output: when a low hemoglobin level is found, the algorithm suggests the probable etiology and orders reflex tests if they are desired. Such an approach would not only provide cost efficiency and time savings but would also elevate the level of every clinician ordering a CBC to that of an expert hematologist. PMID:19380908
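
    Purely as an illustration of what such a reflex algorithm might look like in code (not the published algorithm, and not clinical guidance), consider the toy decision tree below; all cutoffs, suggested etiologies, and reflex tests are hypothetical placeholders.

      # Illustrative toy only: a decision tree keyed on the three CBC inputs named in
      # the abstract (MCV, RDW, reticulocyte count). The cutoffs, etiologies and reflex
      # tests are hypothetical placeholders, not the published algorithm and not
      # clinical guidance.

      def classify_anemia(mcv_fl, rdw_pct, retic_pct):
          if mcv_fl < 80:
              if rdw_pct > 15:
                  return "possible iron deficiency", ["ferritin", "iron studies"]
              return "possible thalassemia trait", ["hemoglobin electrophoresis"]
          if mcv_fl > 100:
              return "possible B12/folate deficiency", ["vitamin B12", "folate"]
          if retic_pct > 2.5:
              return "possible hemolysis or blood loss", ["LDH", "haptoglobin"]
          return "possible anemia of chronic disease", ["ferritin", "CRP"]

      print(classify_anemia(mcv_fl=72, rdw_pct=17.5, retic_pct=1.1))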

  2. A Test Generation Framework for Distributed Fault-Tolerant Algorithms

    NASA Technical Reports Server (NTRS)

    Goodloe, Alwyn; Bushnell, David; Miner, Paul; Pasareanu, Corina S.

    2009-01-01

    Heavyweight formal methods such as theorem proving have been successfully applied to the analysis of safety critical fault-tolerant systems. Typically, the models and proofs performed during such analysis do not inform the testing process of actual implementations. We propose a framework for generating test vectors from specifications written in the Prototype Verification System (PVS). The methodology uses a translator to produce a Java prototype from a PVS specification. Symbolic (Java) PathFinder is then employed to generate a collection of test cases. A small example is employed to illustrate how the framework can be used in practice.

  3. ZEUS-2D: A radiation magnetohydrodynamics code for astrophysical flows in two space dimensions. I - The hydrodynamic algorithms and tests.

    NASA Astrophysics Data System (ADS)

    Stone, James M.; Norman, Michael L.

    1992-06-01

A detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows including a self-consistent treatment of the effects of magnetic fields and radiation transfer, is presented. Attention is given to the hydrodynamic (HD) algorithms which form the foundation for the more complex MHD and radiation HD algorithms. The effect of self-gravity on the flow dynamics is accounted for by an iterative solution of the sparse-banded matrix resulting from discretizing the Poisson equation in multidimensions. The results of an extensive series of HD test problems are presented. A detailed description of the MHD algorithms in ZEUS-2D is presented. A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-constrained transport method provides for the accurate evolution of all modes of MHD wave families.

  4. The Arzt Algorithm and other Divisibility Tests for 7

    ERIC Educational Resources Information Center

    Arzt, Joshua; Gaze, Eric

    2004-01-01

    Divisibility tests for digits other than 7 are well known and rely on the base 10 representation of numbers. For example, a natural number is divisible by 4 if the last 2 digits are divisible by 4 because 4 divides 10[sup k] for all k equal to or greater than 2. Divisibility tests for 7, while not nearly as well known, do exist and are also…
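
    One widely known divisibility test for 7 (not necessarily the Arzt algorithm described in the article) repeatedly strips the last digit and subtracts twice its value from the remaining number; a short Python implementation follows.

      # One standard divisibility test for 7: strip the last digit and subtract twice
      # its value from the remaining number; repeat, then check the small remainder.
      # Works because 10a + b is divisible by 7 exactly when a - 2b is.

      def divisible_by_7(n):
          n = abs(n)
          while n >= 70:
              n, last_digit = divmod(n, 10)
              n = abs(n - 2 * last_digit)
          return n % 7 == 0

      assert all(divisible_by_7(k) == (k % 7 == 0) for k in range(10000))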

  5. Flight test results of failure detection and isolation algorithms for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Motyka, P. R.; Bailey, M. L.

    1990-01-01

Flight test results for two sensor fault-tolerant algorithms developed for a redundant strapdown inertial measurement unit are presented. The inertial measurement unit (IMU) consists of four two-degrees-of-freedom gyros and accelerometers mounted on the faces of a semi-octahedron. Fault tolerance is provided by edge vector test and generalized likelihood test algorithms, each of which can provide dual fail-operational capability for the IMU. To detect the wide range of failure magnitudes in inertial sensors, which provide flight-crucial information for flight control and navigation, failure detection and isolation are developed in terms of a multilevel structure. Threshold compensation techniques, developed to enhance the sensitivity of the failure detection process to navigation-level failures, are presented. Four flight tests were conducted in a commercial transport-type environment to compare and determine the performance of the failure detection and isolation methods. Dual flight processors enabled concurrent tests for the algorithms. Failure signals such as hard-over, null, or bias shift were added to the sensor outputs as simple or multiple failures during the flights. Both algorithms provided timely detection and isolation of flight control level failures. The generalized likelihood test algorithm provided more timely detection of low-level sensor failures, but it produced one false isolation. Both algorithms demonstrated the capability to provide dual fail-operational performance for the skewed array of inertial sensors.

  6. Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

    2008-01-01

    In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…

  7. Genetic Algorithm-Based Test Data Generation for Multiple Paths via Individual Sharing

    PubMed Central

    Gong, Dunwei

    2014-01-01

The application of genetic algorithms to automatically generating test data has attracted broad interest and produced encouraging results in recent years. However, the efficiency of genetic algorithm-based test data generation for path testing needs to be further improved. In this paper, we establish a mathematical model of generating test data for multiple-path coverage. Then, a multipopulation genetic algorithm with individual sharing is presented to solve the established model. We not only analyzed the performance of the proposed method theoretically, but also applied it to various programs under test. The experimental results show that the proposed method can significantly improve the efficiency of generating test data for multiple-path coverage. PMID:25691894

  8. Testing the MCS Deconvolution Algorithm on Infrared Data

    NASA Astrophysics Data System (ADS)

    Egan, M.

Magain, Courbin and Sohy (MCS; 1998, ApJ, 494, 472) proposed a two-channel (separable point source and extended background) method for astronomical image deconvolution. Unlike the two-channel Richardson-Lucy algorithm, the MCS method does not require prior knowledge of the point source amplitudes and positions. MCS have claimed that their method produces accurate astrometry and photometry in crowded fields and in the presence of variable backgrounds. This paper compares MSX 8 micron Galactic plane images deconvolved via the MCS method with Spitzer Space Telescope IRAC 8 micron images of the same regions. The improved sampling and final image PSF for the deconvolved MSX image are chosen to match the Spitzer observation. In the parlance of MCS, this determines the light distribution for an 85 cm telescope (Spitzer) by deconvolving data taken with a 33 cm space telescope (MSX). Deconvolutions of both the Spitzer and MSX data are also presented that reconstruct the image at a resolution consistent with that expected from the 6.5 meter aperture James Webb Space Telescope. I will present results for varying degrees of background complexity and examine the limitations of the MCS method for use on infrared data in regions of high source density and bright, complex backgrounds.

  9. More Than Just Accuracy: A Novel Method to Incorporate Multiple Test Attributes in Evaluating Diagnostic Tests Including Point of Care Tests

    PubMed Central

    Weigl, Bernhard; Fitzpatrick, Annette; Ide, Nicole

    2016-01-01

    Current frameworks for evaluating diagnostic tests are constrained by a focus on diagnostic accuracy, and assume that all aspects of the testing process and test attributes are discrete and equally important. Determining the balance between the benefits and harms associated with new or existing tests has been overlooked. Yet, this is critically important information for stakeholders involved in developing, testing, and implementing tests. This is particularly important for point of care tests (POCTs) where tradeoffs exist between numerous aspects of the testing process and test attributes. We developed a new model that multiple stakeholders (e.g., clinicians, patients, researchers, test developers, industry, regulators, and health care funders) can use to visualize the multiple attributes of tests, the interactions that occur between these attributes, and their impacts on health outcomes. We use multiple examples to illustrate interactions between test attributes (test availability, test experience, and test results) and outcomes, including several POCTs. The model could be used to prioritize research and development efforts, and inform regulatory submissions for new diagnostics. It could potentially provide a way to incorporate the relative weights that various subgroups or clinical settings might place on different test attributes. Our model provides a novel way that multiple stakeholders can use to visualize test attributes, their interactions, and impacts on individual and population outcomes. We anticipate that this will facilitate more informed decision making around diagnostic tests. PMID:27574576

  10. More Than Just Accuracy: A Novel Method to Incorporate Multiple Test Attributes in Evaluating Diagnostic Tests Including Point of Care Tests.

    PubMed

    Thompson, Matthew; Weigl, Bernhard; Fitzpatrick, Annette; Ide, Nicole

    2016-01-01

    Current frameworks for evaluating diagnostic tests are constrained by a focus on diagnostic accuracy, and assume that all aspects of the testing process and test attributes are discrete and equally important. Determining the balance between the benefits and harms associated with new or existing tests has been overlooked. Yet, this is critically important information for stakeholders involved in developing, testing, and implementing tests. This is particularly important for point of care tests (POCTs) where tradeoffs exist between numerous aspects of the testing process and test attributes. We developed a new model that multiple stakeholders (e.g., clinicians, patients, researchers, test developers, industry, regulators, and health care funders) can use to visualize the multiple attributes of tests, the interactions that occur between these attributes, and their impacts on health outcomes. We use multiple examples to illustrate interactions between test attributes (test availability, test experience, and test results) and outcomes, including several POCTs. The model could be used to prioritize research and development efforts, and inform regulatory submissions for new diagnostics. It could potentially provide a way to incorporate the relative weights that various subgroups or clinical settings might place on different test attributes. Our model provides a novel way that multiple stakeholders can use to visualize test attributes, their interactions, and impacts on individual and population outcomes. We anticipate that this will facilitate more informed decision making around diagnostic tests. PMID:27574576

  11. Imagery test suites and their implication on the testability of computer vision algorithms

    NASA Astrophysics Data System (ADS)

    Segal, Andrew C.; Greene, Richard; Kero, Robert; Steuer, Daniel

    1992-04-01

A fundamental question in determining the effectiveness of any computer vision algorithm is the construction and application of proper test data suites. The purpose of this paper is to develop an understanding of the underlying requirements necessary in forming test suites, and the limitations that restricted sample sizes place on determining the testability of computer vision algorithms. With the relatively recent emergence of high performance computing, it is now highly desirable to perform statistically significant testing of algorithms using a test suite containing a full range of data, from simple binary images to textured images and multi-scale images. Additionally, a common database of test suites would enable direct comparisons of competing imagery exploitation algorithms. The initial step necessary in building a test suite is the selection of adequate measures to estimate the subjective attributes of images, similar to the quantitative measures used for speech quality. We will discuss image measures, their relation to the construction of test suites, and the use of real sensor data or computer-generated synthetic images. By using the latest technology in computer graphics, synthetically generated images with varying degrees of distortion, from both sensor models and other noise-source models, can be formed if ground-truth information for the images is known. Our eventual goal is to intelligently construct statistically significant test suites that would allow for A/B comparisons between various computer vision algorithms.

  12. Test and evaluation of the HIDEC engine uptrim algorithm

    NASA Technical Reports Server (NTRS)

    Ray, R. J.; Myers, L. P.

    1986-01-01

The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrated engine-airframe control systems. Performance improvements will result from an adaptive engine stall margin mode, a highly integrated mode that uses the airplane flight conditions and the resulting inlet distortion to continuously compute engine stall margin. When there is excessive stall margin, the engine is uptrimmed for more thrust by increasing the engine pressure ratio (EPR). The EPR uptrim logic has been evaluated and implemented in computer simulations. Thrust improvements of over 10 percent are predicted for subsonic flight conditions. The EPR uptrim was successfully demonstrated during engine ground tests. Test results verify model predictions at the conditions tested.

  13. Low voltage 30-cm ion thruster development. [including performance and structural integrity (vibration) tests

    NASA Technical Reports Server (NTRS)

    King, H. J.

    1974-01-01

The basic goal was to advance the development status of the 30-cm electron bombardment ion thruster from a laboratory model to a flight-type engineering model (EM) thruster. This advancement included the more conventional aspects of mechanical design and testing for launch loads, weight reduction, fabrication process development, reliability and quality assurance, and interface definition, as well as a relatively significant improvement in thruster total efficiency. The achievement of this goal was demonstrated by the successful completion of a series of performance and structural integrity (vibration) tests. In the course of the program, essentially every part and feature of the original 30-cm Thruster was critically evaluated. These evaluations led to new or improved designs for the ion optical system, discharge chamber, cathode isolator vaporizer assembly, main isolator vaporizer assembly, neutralizer assembly, packaging for thermal control, electrical terminations and structure.

  14. In vivo optic nerve head biomechanics: performance testing of a three-dimensional tracking algorithm

    PubMed Central

    Girard, Michaël J. A.; Strouthidis, Nicholas G.; Desjardins, Adrien; Mari, Jean Martial; Ethier, C. Ross

    2013-01-01

Measurement of optic nerve head (ONH) deformations could be useful in the clinical management of glaucoma. Here, we propose a novel three-dimensional tissue-tracking algorithm designed to be used in vivo. We carry out preliminary verification of the algorithm by testing its accuracy and its robustness. An algorithm based on digital volume correlation was developed to extract ONH tissue displacements from two optical coherence tomography (OCT) volumes of the ONH (undeformed and deformed). The algorithm was tested by applying artificial deformations to a baseline OCT scan while manipulating speckle noise, illumination and contrast enhancement. Tissue deformations determined by our algorithm were compared with the known (imposed) values. Errors in displacement magnitude, orientation and strain decreased with signal averaging and were 0.15 µm, 0.15° and 0.0019, respectively (for optimized algorithm parameters). Previous computational work suggests that these errors are acceptable to provide in vivo characterization of ONH biomechanics. Our algorithm is robust to OCT speckle noise as well as to changes in illumination conditions, and increasing signal averaging can produce better results. This algorithm has the potential to be used to quantify ONH three-dimensional strains in vivo, which would be of benefit in the diagnosis and identification of risk factors in glaucoma. PMID:23883953

  15. A Review of Scoring Algorithms for Ability and Aptitude Tests.

    ERIC Educational Resources Information Center

    Chevalier, Shirley A.

    In conventional practice, most educators and educational researchers score cognitive tests using a dichotomous right-wrong scoring system. Although simple and straightforward, this method does not take into consideration other factors, such as partial knowledge or guessing tendencies and abilities. This paper discusses alternative scoring models:…

  16. Improved zonal wavefront reconstruction algorithm for Hartmann type test with arbitrary grid patterns

    NASA Astrophysics Data System (ADS)

    Li, Mengyang; Li, Dahai; Zhang, Chen; E, Kewei; Hong, Zhihan; Li, Chengxu

    2015-08-01

Zonal wavefront reconstruction using the well-known Southwell algorithm with rectangular grid patterns has been considered in the literature. However, when the grid patterns are nonrectangular, modal wavefront reconstruction has been used extensively. We propose an improved zonal wavefront reconstruction algorithm for Hartmann-type tests with arbitrary grid patterns. We develop the mathematical expressions to show that the wavefront over arbitrary grid patterns, such as misaligned, partly obscured, and non-square mesh grids, can be estimated well. Both the iterative solution and the least-squares solution for the proposed algorithm are described and compared. Numerical calculation shows that zonal wavefront reconstruction over a nonrectangular profile with the proposed algorithm gives a significant improvement over the Southwell algorithm, and the underlying matrix manipulation is simpler.
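
    As background, a minimal Python sketch of Southwell-style zonal reconstruction on a rectangular grid is shown below; it solves the finite-difference slope equations in the least-squares sense and omits the arbitrary-grid handling that is the paper's contribution.

      import numpy as np

      # Minimal Southwell-style zonal reconstruction on a rectangular grid: build the
      # finite-difference equations linking neighboring wavefront values to measured
      # x/y slopes and solve them in the least-squares sense (piston removed).

      def southwell_reconstruct(sx, sy, spacing=1.0):
          ny, nx = sx.shape
          idx = np.arange(ny * nx).reshape(ny, nx)
          rows, cols, vals, rhs = [], [], [], []
          eq = 0
          for j in range(ny):
              for i in range(nx - 1):                      # x-direction equations
                  rows += [eq, eq]; cols += [idx[j, i + 1], idx[j, i]]; vals += [1.0, -1.0]
                  rhs.append(0.5 * spacing * (sx[j, i] + sx[j, i + 1])); eq += 1
          for j in range(ny - 1):
              for i in range(nx):                          # y-direction equations
                  rows += [eq, eq]; cols += [idx[j + 1, i], idx[j, i]]; vals += [1.0, -1.0]
                  rhs.append(0.5 * spacing * (sy[j, i] + sy[j + 1, i])); eq += 1
          A = np.zeros((eq, ny * nx))
          A[rows, cols] = vals
          w, *_ = np.linalg.lstsq(A, np.asarray(rhs), rcond=None)
          return (w - w.mean()).reshape(ny, nx)

      sx, sy = np.zeros((8, 8)), np.ones((8, 8))           # pure tilt in y
      print(southwell_reconstruct(sx, sy)[:, 0])           # linear ramp, as expected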

  17. Classifying performance impairment in response to sleep loss using pattern recognition algorithms on single session testing

    PubMed Central

    St. Hilaire, Melissa A.; Sullivan, Jason P.; Anderson, Clare; Cohen, Daniel A.; Barger, Laura K.; Lockley, Steven W.; Klerman, Elizabeth B.

    2012-01-01

There is currently no “gold standard” marker of cognitive performance impairment resulting from sleep loss. We utilized pattern recognition algorithms to determine which features of data collected under controlled laboratory conditions could most reliably identify cognitive performance impairment in response to sleep loss using data from only one testing session, such as would occur in the “real world” or field conditions. A training set for testing the pattern recognition algorithms was developed using objective Psychomotor Vigilance Task (PVT) and subjective Karolinska Sleepiness Scale (KSS) data collected from laboratory studies during which subjects were sleep deprived for 26 – 52 hours. The algorithm was then tested in data from both laboratory and field experiments. The pattern recognition algorithm was able to identify performance impairment with a single testing session in individuals studied under laboratory conditions using PVT, KSS, length of time awake and time of day information with sensitivity and specificity as high as 82%. When this algorithm was tested on data collected under real-world conditions from individuals whose data were not in the training set, the accuracy of predictions for individuals categorized with low performance impairment was as high as 98%. Predictions for medium and severe performance impairment were less accurate. We conclude that pattern recognition algorithms may be a promising method for identifying performance impairment in individuals using only current information about the individual’s behavior. Single testing features (e.g., number of PVT lapses) with high correlation with performance impairment in the laboratory setting may not be the best indicators of performance impairment under real-world conditions. Pattern recognition algorithms should be further tested for their ability to be used in conjunction with other assessments of sleepiness in real-world conditions to quantify performance impairment in

  18. Generalized wave-front reconstruction algorithm applied in a Shack-Hartmann test.

    NASA Astrophysics Data System (ADS)

    Weiyao, Zou; Zhang, Zhenchao

    2000-01-01

    A generalized numerical wave-front reconstruction method is proposed that is suitable for diversified irregular pupil shapes of optical systems to be measured. That is, to make a generalized and regular normal equation set, the test domain is extended to a regular square shape. The compatibility of this method is discussed in detail, and efficient algorithms (such as the Cholesky method) for solving this normal equation set are given. In addition, the authors give strict analyses of not only the error propagation in the wave-front estimate but also of the discretization errors of this domain extension algorithm. Finally, some application examples are given to demonstrate this algorithm.

  19. A Runs-Test Algorithm: Contingent Reinforcement and Response Run Structures

    ERIC Educational Resources Information Center

    Hachiga, Yosuke; Sakagami, Takayuki

    2010-01-01

    Four rats' choices between two levers were differentially reinforced using a runs-test algorithm. On each trial, a runs-test score was calculated based on the last 20 choices. In Experiment 1, the onset of stimulus lights cued when the runs score was smaller than criterion. Following cuing, the correct choice was occasionally reinforced with food,…
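
    For reference, the standard Wald-Wolfowitz runs statistic over a window of binary choices can be computed as in the Python sketch below; how the experiment converted this score into a reinforcement criterion is not reproduced here.

      import math

      # Wald-Wolfowitz runs statistic over a window of binary choices (e.g., the last
      # 20 left/right lever presses). This only computes the z score; the mapping onto
      # a reinforcement criterion is an experimental detail not reproduced here.

      def runs_z(choices):
          n1 = sum(choices)
          n2 = len(choices) - n1
          if n1 == 0 or n2 == 0:
              return 0.0
          runs = 1 + sum(a != b for a, b in zip(choices, choices[1:]))
          mean = 2.0 * n1 * n2 / (n1 + n2) + 1.0
          var = (2.0 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
          return (runs - mean) / math.sqrt(var)

      window = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1]
      print(round(runs_z(window), 2))    # near 0 is consistent with random switching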

  20. Test Driving ToxCast: Endocrine Profiling for 1858 Chemicals Included in Phase II

    PubMed Central

    Filer, Dayne; Patisaul, Heather B.; Schug, Thaddeus; Reif, David; Thayer, Kristina

    2014-01-01

Identifying chemicals, beyond those already implicated, to test for potential endocrine disruption is a challenge, and high throughput approaches have emerged as a potential tool for this type of screening. This review focused on the Environmental Protection Agency’s (EPA) ToxCast™ high throughput in vitro screening (HTS) program. Its utility for identifying compounds was assessed and reviewed by using it to run the recently expanded chemical library (from 309 compounds to 1858) through the ToxPi™ prioritization scheme for endocrine disruption. The analysis included metabolic and neuroendocrine targets. This investigative approach simultaneously assessed the utility of ToxCast and helped identify novel chemicals which may have endocrine activity. Results from this exercise suggest that the spectrum of environmental chemicals with potential endocrine activity is much broader than indicated, and that some aspects of endocrine disruption are not fully covered in ToxCast. PMID:25460227

  1. Test driving ToxCast: endocrine profiling for 1858 chemicals included in phase II.

    PubMed

    Filer, Dayne; Patisaul, Heather B; Schug, Thaddeus; Reif, David; Thayer, Kristina

    2014-12-01

Identifying chemicals, beyond those already implicated, to test for potential endocrine disruption is a challenge, and high throughput approaches have emerged as a potential tool for this type of screening. This review focused on the Environmental Protection Agency's (EPA) ToxCast(TM) high throughput in vitro screening (HTS) program. Its utility for identifying compounds was assessed and reviewed by using it to run the recently expanded chemical library (from 309 compounds to 1858) through the ToxPi(TM) prioritization scheme for endocrine disruption. The analysis included metabolic and neuroendocrine targets. This investigative approach simultaneously assessed the utility of ToxCast and helped identify novel chemicals which may have endocrine activity. Results from this exercise suggest that the spectrum of environmental chemicals with potential endocrine activity is much broader than indicated, and that some aspects of endocrine disruption are not fully covered in ToxCast. PMID:25460227

  2. Reducing the need for central dual-energy X-ray absorptiometry in postmenopausal women: efficacy of a clinical algorithm including peripheral densitometry.

    PubMed

    Jiménez-Núñez, Francisco Gabriel; Manrique-Arija, Sara; Ureña-Garnica, Inmaculada; Romero-Barco, Carmen María; Panero-Lamothe, Blanca; Descalzo, Miguel Angel; Carmona, Loreto; Rodríguez-Pérez, Manuel; Fernández-Nebro, Antonio

    2013-07-01

    We evaluated the efficacy of a triage approach based on a combination of osteoporosis risk-assessment tools plus peripheral densitometry to identify low bone density accurately enough to be useful for clinical decision making in postmenopausal women. We conducted a cross-sectional diagnostic study in postmenopausal Caucasian women from primary and tertiary care. All women underwent dual-energy X-ray absorptiometric (DXA) measurement at the hip and lumbar spine and were categorized as osteoporotic or not. Additionally, patients had a nondominant heel densitometry performed with a PIXI densitometer. Four osteoporosis risk scores were tested: SCORE, ORAI, OST, and OSIRIS. All measurements were cross-blinded. We estimated the area under the curve (AUC) to predict the DXA results of 16 combinations of PIXI plus risk scores. A formula including the best combination was derived from a regression model and its predictability estimated. We included 505 women, in whom the prevalence of osteoporosis was 20 %, similar in both settings. The best algorithm was a combination of PIXI + OST + SCORE with an AUC of 0.826 (95 % CI 0.782-0.869). The proposed formula is Risk = (-12) × [PIXI + (-5)] × [OST + (-2)] × SCORE and showed little bias in the estimation (0.0016). If the formula had been implemented and the intermediate risk cutoff set at -5 to 20, the system would have saved 4,606.34 in the study year. The formula proposed, derived from previously validated risk scores plus a peripheral bone density measurement, can be used reliably in primary care to avoid unnecessary central DXA measurements in postmenopausal women. PMID:23608922

  3. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  4. Operational feasibility of using whole blood in the rapid HIV testing algorithm of a resource-limited settings like Bangladesh

    PubMed Central

    Munshi, Saif U.; Oyewale, Tajudeen O.; Begum, Shahnaz; Uddin, Ziya; Tabassum, Shahina

    2016-01-01

Background The serum-based rapid HIV testing algorithm in Bangladesh poses an operational challenge to scaling up HIV testing and counselling (HTC) in the country. This study explored the operational feasibility of using whole blood as an alternative to serum for rapid HIV testing in Bangladesh. Methods Whole blood specimens were collected from two study groups. The groups included HIV-positive patients (n = 200) and HIV-negative individuals (n = 200) presenting at the reference laboratory in Dhaka, Bangladesh. The specimens were subjected to rapid HIV tests using the national algorithm with A1 = Alere Determine (United States), A2 = Uni-Gold (Ireland), and A3 = First Response (India). The sensitivity and specificity of the test results, and the operational cost, were compared with current serum-based testing. Results The sensitivities [95% confidence interval (CI)] for the A1, A2, and A3 tests using whole blood were 100% (CI: 99.1–100%), 100% (CI: 99.1–100%), and 97% (CI: 96.4–98.2%), respectively, and the specificities of all test kits were 100% (CI: 99.1–100%). Significant (P < 0.05) reductions in the cost of establishing an HTC centre and in the cost of consumables, by 94 and 61% respectively, were observed. The costs of administration and external quality assurance fell by 39 and 43%, respectively. Overall, there was a 36% reduction in the total operational cost of rapid HIV testing with whole blood compared with serum. Conclusion Considering the similar sensitivity and specificity of the two specimen types, and the significant cost reduction, rapid HIV testing with whole blood is feasible. A review of the national HIV rapid testing algorithm to incorporate whole blood will contribute toward improving HTC coverage in Bangladesh. PMID:26945143

  5. Economics of resynchronization strategies including chemical tests to identify nonpregnant cows.

    PubMed

    Giordano, J O; Fricke, P M; Cabrera, V E

    2013-02-01

    Our objectives were to assess (1) the economic value of decreasing the interval between timed artificial insemination (TAI) services when using a pregnancy test that allows earlier identification of nonpregnant cows; and (2) the effect of pregnancy loss and inaccuracy of a chemical test (CT) on the economic value of a pregnancy test for dairy farms. Simulation experiments were performed using a spreadsheet-based decision support tool. In experiment 1, we assessed the effect of changing the interbreeding interval (IBI) for cows receiving TAI on the value of reproductive programs by simulating a 1,000-cow dairy herd using a combination of detection of estrus (30 to 80% of cows detected in estrus) and TAI. The IBI was incremented by 7d from 28 to 56 d to reflect intervals either observed (35 to 56 d) or potentially observed (28 d) in dairy operations. In experiment 2, we evaluated the effect of accuracy of the CT and additional pregnancy loss due to earlier testing on the value of reproductive programs. The first scenario compared the use of a CT 31 ± 3 d after a previous AI with rectal palpation (RP) 39 ± 3 d after AI. The second scenario used a CT 24 ± 3 d after AI or transrectal ultrasound (TU) 32 d after AI. Parameters evaluated included sensitivity (Se), specificity (Sp), questionable diagnosis (Qd), cost of the CT, and expected pregnancy loss. Sensitivity analysis was performed for all possible combinations of parameter values to determine their relative importance on the value of the CT. In experiment 1, programs with a shorter IBI had greater economic net returns at all levels of detection of estrus, and use of chemical tests available on the market today might be beneficial compared with RP. In experiment 2, the economic value of programs using a CT could be either greater or less than that of RP and TU, depending on the value for each of the parameters related to the CT evaluated. The value of the program using the CT was affected (in order) by (1) Se, (2

  6. Considerations When Including Students with Disabilities in Test Security Policies. NCEO Policy Directions. Number 23

    ERIC Educational Resources Information Center

    Lazarus, Sheryl; Thurlow, Martha

    2015-01-01

    Sound test security policies and procedures are needed to ensure test security and confidentiality, and to help prevent cheating. In this era when cheating on tests draws regular media attention, there is a need for thoughtful consideration of the ways in which possible test security measures may affect accessibility for some students with…

  7. A new algorithm for generating highly accurate benchmark solutions to transport test problems

    SciTech Connect

    Azmy, Y.Y.

    1997-06-01

We present a new algorithm for solving the neutron transport equation in its discrete-variable form. The new algorithm is based on computing the full matrix relating the scalar flux spatial moments in all cells to the fixed neutron source spatial moments, foregoing the need to compute the angular flux spatial moments, and thereby eliminating the need for sweeping the spatial mesh in each discrete-angular direction. The matrix equation is solved exactly in test cases, producing a solution vector that is free from iteration convergence error, and subject only to truncation and roundoff errors. Our algorithm is designed to provide method developers with a quick and simple solution scheme to test their new methods on difficult test problems without the need to develop sophisticated solution techniques, e.g. acceleration, before establishing the worthiness of their innovation. We demonstrate the utility of the new algorithm by applying it to the Arbitrarily High Order Transport Nodal (AHOT-N) method, and using it to solve two problems from Burre's Suite of Test Problems (BSTP). Our results provide highly accurate benchmark solutions that can be distributed electronically and used to verify the pointwise accuracy of other solution methods and algorithms.

  8. Performance of humans vs. exploration algorithms on the Tower of London Test.

    PubMed

    Fimbel, Eric; Lauzon, Stéphane; Rainville, Constant

    2009-01-01

The Tower of London Test (TOL), used to assess executive functions, was inspired by Artificial Intelligence tasks used to test problem-solving algorithms. In this study, we compare the performance of humans and of exploration algorithms. Instead of absolute execution times, we focus on how the execution time varies with the tasks and/or the number of moves. This approach, used in Algorithmic Complexity, provides a fair comparison between humans and computers, although humans are several orders of magnitude slower. On easy tasks (1 to 5 moves), healthy elderly persons performed like exploration algorithms using bounded memory resources, i.e., the execution time grew exponentially with the number of moves. This result was replicated with a group of healthy young participants. However, for difficult tasks (5 to 8 moves) the execution time of young participants did not increase significantly, whereas for exploration algorithms the execution time keeps increasing exponentially. A pre- and post-test control task showed a 25% improvement in visuo-motor skills, but this was insufficient to explain this result. The findings suggest that naive participants used systematic exploration to solve the problem but, under the effect of practice, they developed markedly more efficient strategies using the information acquired during the test. PMID:19787066
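
    To make the comparison concrete, the Python sketch below runs a breadth-first search over a small Tower of London state space (3 balls; peg capacities 3, 2, 1) and reports how many states are expanded; the specific start and goal configurations are illustrative assumptions, not stimuli from the study.

      from collections import deque

      # BFS over a small Tower of London state space, reporting the minimum number of
      # moves and how many states were expanded along the way.

      CAPACITY = (3, 2, 1)

      def moves(state):
          for src in range(3):
              if not state[src]:
                  continue
              for dst in range(3):
                  if dst != src and len(state[dst]) < CAPACITY[dst]:
                      pegs = [list(p) for p in state]
                      pegs[dst].append(pegs[src].pop())
                      yield tuple(tuple(p) for p in pegs)

      def bfs(start, goal):
          frontier, seen, expanded = deque([(start, 0)]), {start}, 0
          while frontier:
              state, depth = frontier.popleft()
              if state == goal:
                  return depth, expanded
              expanded += 1
              for nxt in moves(state):
                  if nxt not in seen:
                      seen.add(nxt)
                      frontier.append((nxt, depth + 1))

      start = (("R", "G", "B"), (), ())                    # ball order is bottom-to-top
      goal = (("G",), ("B", "R"), ())
      print(bfs(start, goal))                              # (moves required, states expanded)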

  9. A modified stitching algorithm for testing rotationally symmetric aspherical surfaces with annular sub-apertures

    NASA Astrophysics Data System (ADS)

    Hou, Xi; Wu, Fan; Yang, Li; Wu, Shi-bin; Chen, Qiang

    2006-02-01

The annular sub-aperture stitching technique has been developed for low-cost and flexible testing of rotationally symmetric aspherical surfaces, for which accurately combining the sub-aperture measurement data, corrupted by misalignments, into a complete surface figure is the key problem. An existing stitching algorithm for annular sub-apertures can convert sub-aperture Zernike coefficients into full-aperture Zernike coefficients, in which Zernike circle polynomials are used to represent sub-aperture data over both circular and annular domains. Since Zernike circle polynomials are not orthogonal over an annular domain, the fitting may give wrong results. In this paper, the Zernike polynomials and the existing stitching algorithm are reviewed, and a modified stitching algorithm based on Zernike annular polynomials is provided. The performance of the modified algorithm with respect to reconstruction precision is studied by comparison with the existing algorithm. The results of computer simulation show that sub-aperture data reduction with the modified algorithm is more accurate than that obtained with the existing algorithm based on Zernike circle polynomials, and the required matrix manipulation is simpler.

  10. A parameter estimation algorithm for spatial sine testing - Theory and evaluation

    NASA Technical Reports Server (NTRS)

    Rost, R. W.; Deblauwe, F.

    1992-01-01

This paper presents the theory and an evaluation of a spatial sine testing parameter estimation algorithm that directly uses the measured forced mode of vibration and the measured force vector. The parameter estimation algorithm uses an ARMA model, and a recursive QR algorithm is applied for data reduction. In this first evaluation, the algorithm has been applied to a frequency response matrix (which is a particular set of forced modes of vibration) using a sliding frequency window. The objective of the sliding frequency window is to execute the analysis simultaneously with the data acquisition. Since the pole values and the modal density are obtained from this analysis during the acquisition, the analysis information can be used to help determine the forcing vectors during the experimental data acquisition.

  11. Vertical drop test of a transport fuselage center section including the wheel wells

    NASA Technical Reports Server (NTRS)

    Williams, M. S.; Hayduk, R. J.

    1983-01-01

    A Boeing 707 fuselage section was drop tested to measure structural, seat, and anthropomorphic dummy response to vertical crash loads. The specimen had nominally zero pitch, roll and yaw at impact with a sink speed of 20 ft/sec. Results from this drop test and other drop tests of different transport sections will be used to prepare for a full-scale crash test of a B-720.

  12. Reduction in Radiation Exposure through a Stress Test Algorithm in an Emergency Department Observation Unit

    PubMed Central

    Pena, Margarita E.; Jakob, Michael R.; Cohen, Gerald I.; Irvin, Charlene B.; Solano, Nastaran; Bowerman, Ashley R.; Szpunar, Susan M.; Dixon, Mason K.

    2016-01-01

Introduction Clinicians are urged to decrease radiation exposure from unnecessary medical procedures. Many emergency department (ED) patients placed in an observation unit (EDOU) do not require chest pain evaluation with a nuclear stress test (NucST). We sought to implement a simple stress test (ST) algorithm that favors non-nuclear stress test (Non-NucST) options and to evaluate the effect of the algorithm on the proportion of patients exposed to radiation by comparing use of NucST versus Non-NucST pre- and post-algorithm. Methods An ST algorithm was introduced favoring Non-NucST and limiting NucST to a subset of EDOU patients in October 2008. We analyzed aggregate data before (Jan-Sept 2008, period 1) and after (Jan-Sept 2009 and Jan-Sept 2010, periods 2 and 3 respectively) algorithm introduction. A random sample of 240 EDOU patients from each period was used to compare 30-day major adverse cardiac events (MACE). We calculated confidence intervals for proportions or the difference between two proportions. Results A total of 5,047 STs were performed from Jan-Sept 2008–2010. NucST in the EDOU decreased after algorithm introduction from period 1 to 2 (40.7%, 95% CI [38.3–43.1] vs. 22.1%, 95% CI [20.1–24.1]), and remained at 22.1%, 95% CI [20.3–24.0] in period 3. There was no difference in 30-day MACE rates before and after algorithm use (0.1% for periods 1 and 3, 0% for period 2). Conclusion Use of a simple ST algorithm that favors Non-NucST options decreases the proportion of EDOU chest pain patients exposed to radiation from ST by almost 50% by limiting NucST to a subset of patients, without a change in 30-day MACE. PMID:26973734

  13. Test Scheduling for Core-Based SOCs Using Genetic Algorithm Based Heuristic Approach

    NASA Astrophysics Data System (ADS)

    Giri, Chandan; Sarkar, Soumojit; Chattopadhyay, Santanu

This paper presents a Genetic Algorithm (GA) based solution to co-optimize test scheduling and wrapper design for core-based SOCs. Core testing solutions are generated as a set of wrapper configurations, represented as rectangles with width equal to the number of TAM (Test Access Mechanism) channels and height equal to the corresponding testing time. A locally optimal best-fit heuristic based bin-packing algorithm is used to determine the placement of rectangles, minimizing the overall test time, whereas the GA is used to generate the sequence of rectangles to be considered for placement. Experimental results on the ITC'02 benchmark SOCs show that the proposed method provides better solutions than recent works reported in the literature.
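
    A simplified Python sketch of the placement step is shown below: core tests are treated as rectangles (TAM width by test time) and placed, in a given order (the order a GA would evolve), onto the channels that free up earliest; this channel-availability heuristic is an editorial simplification of the best-fit bin packing described above.

      # Placement sketch: rectangles are (width = TAM channels required, height = test
      # time); each is assigned to the channels whose current finish times are smallest.
      # This is a simplification for illustration, not the paper's best-fit packer.

      def schedule(rects, total_channels):
          free_at = [0.0] * total_channels                 # finish time per TAM channel
          placements = []
          for core, (width, test_time) in enumerate(rects):
              channels = sorted(range(total_channels), key=lambda c: free_at[c])[:width]
              start = max(free_at[c] for c in channels)
              for c in channels:
                  free_at[c] = start + test_time
              placements.append((core, channels, start, start + test_time))
          return placements, max(free_at)                  # schedule and overall test time

      rects = [(3, 120.0), (2, 300.0), (4, 80.0), (1, 150.0)]   # (TAM width, test time)
      plan, makespan = schedule(rects, total_channels=6)
      print(makespan)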

  14. Adaptive and robust algorithms and tests for visual-based navigation of a space robotic manipulator

    NASA Astrophysics Data System (ADS)

    Sabatini, Marco; Monti, Riccardo; Gasbarri, Paolo; Palmerini, Giovanni B.

    2013-02-01

    Optical navigation for guidance and control of robotic systems is a well-established technique from both theoretical and practical points of view. According to the positioning of the camera, the problem can be approached in two ways: the first one, "hand-in-eye", deals with a fixed camera, external to the robot, which makes it possible to determine the position of the target object to be reached. The second one, "eye-in-hand", consists of a camera mounted on the end-effector of the manipulator. Here, the target object position is not determined in an absolute reference frame, but with respect to the image plane of the mobile camera. In this paper, the algorithms and the test campaign applied to the planar multibody manipulator developed in the Guidance and Navigation Lab at the University of Rome La Sapienza are reported for the eye-in-hand case. In fact, since the space environment is the target application for this research activity, it is quite difficult to imagine a fixed, non-floating camera in the case of an orbital grasping maneuver. The classic approach of Image Based Visual Servoing evaluates the control actions directly from the error between the current image of a feature and the image of the same feature in a final desired configuration. Both simulation and experimental tests show that such a classic approach can fail when navigation errors and actuation delays are included. Moreover, changing light conditions or the presence of unexpected obstacles can lead to a camera failure in target acquisition. In order to overcome these two problems, a Modified Image Based Visual Servoing algorithm and an Extended Kalman Filter for feature position estimation are developed and applied. In particular, the filter performs quite well if the target's depth information is supplied. A simple procedure for estimating the initial target depth is therefore developed and tested. As a result of the application of all the
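
    For context, here is a minimal sketch of the classic point-feature IBVS law that the abstract contrasts with the modified algorithm: the camera twist is computed from the pseudo-inverse of the stacked interaction matrix and the feature error. The feature coordinates, assumed depths, and gain are made-up values, and the Modified IBVS and the Extended Kalman Filter of the paper are not reproduced here.

      import numpy as np

      def interaction_matrix(x, y, Z):
          """Interaction (image Jacobian) matrix of a normalized image point (x, y)
          at estimated depth Z; it maps the camera twist [v, omega] to the
          image-point velocity."""
          return np.array([
              [-1.0 / Z, 0.0, x / Z, x * y, -(1 + x * x), y],
              [0.0, -1.0 / Z, y / Z, 1 + y * y, -x * y, -x],
          ])

      def ibvs_velocity(features, desired, depths, gain=0.5):
          """Classic IBVS law: twist = -gain * pinv(L) * (s - s*).
          `features` and `desired` are (N, 2) arrays of normalized coordinates."""
          L = np.vstack([interaction_matrix(x, y, Z)
                         for (x, y), Z in zip(features, depths)])
          error = (np.asarray(features) - np.asarray(desired)).ravel()
          return -gain * np.linalg.pinv(L) @ error

      # Hypothetical 4-point target, all points assumed at 1.5 m depth
      s = np.array([[0.10, 0.05], [-0.12, 0.06], [-0.11, -0.07], [0.09, -0.06]])
      s_star = np.array([[0.08, 0.08], [-0.08, 0.08], [-0.08, -0.08], [0.08, -0.08]])
      print(ibvs_velocity(s, s_star, depths=[1.5] * 4))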

  15. Interpretation of Colloid-Homologue Tracer Test 10-03, Including Comparisons to Test 10-01

    SciTech Connect

    Reimus, Paul W.

    2012-06-26

    This presentation covers the interpretation of colloid-homologue tracer test 10-03 conducted at the Grimsel Test Site, Switzerland, in 2010. It also provides a comparison of the interpreted test results with those of tracer test 10-01, which was conducted in the same fracture flow system and with the same tracers as test 10-03, but at a higher extraction flow rate. A method of correcting for apparent uranine degradation in test 10-03 is presented. Conclusions are: (1) Uranine degradation occurred in test 10-03, but not in 10-01; (2) The uranine correction based on the apparent degradation rate in the injection loop in test 11-02 seems reasonable when applied to data from test 10-03; (3) Colloid breakthrough curves were quite similar in the two tests, with similar recoveries relative to uranine (after correction); and (4) There was much slower apparent desorption of homologues in test 10-03 than in 10-01 (possibly an effect of residual homologues from test 10-01 in test 10-03?).

  16. Algorithms for Developing Test Questions from Sentences in Instructional Materials: An Extension of an Earlier Study.

    ERIC Educational Resources Information Center

    Roid, Gale H.; And Others

    An earlier study was extended and replicated to examine the feasibility of generating multiple-choice test questions by transforming sentences from prose instructional material. In the first study, a computer-based algorithm was used to analyze prose subject matter and to identify high-information words. Sentences containing selected words were…

  17. The Langley thermal protection system test facility: A description including design operating boundaries

    NASA Technical Reports Server (NTRS)

    Klich, G. F.

    1976-01-01

    A description of the Langley thermal protection system test facility is presented. This facility was designed to provide realistic environments and times for testing thermal protection systems proposed for use on high speed vehicles such as the space shuttle. Products from the combustion of methane-air-oxygen mixtures, having a maximum total enthalpy of 10.3 MJ/kg, are used as a test medium. Test panels with maximum dimensions of 61 cm x 91.4 cm are mounted in the side wall of the test region. Static pressures in the test region can range from 0.005 to 0.1 atm and calculated equilibrium temperatures of test panels range from 700 K to 1700 K. Test times can be as long as 1800 sec. Some experimental data obtained while using combustion products of methane-air mixtures are compared with theory, and calibration of the facility is being continued to verify calculated values of parameters which are within the design operating boundaries.

  18. Development, analysis, and testing of robust nonlinear guidance algorithms for space applications

    NASA Astrophysics Data System (ADS)

    Wibben, Daniel R.

    not identical. Finally, this work has a large focus on the application of these various algorithms to a large number of space based applications. These include applications to powered-terminal descent for landing on planetary bodies such as the moon and Mars and to proximity operations (landing, hovering, or maneuvering) about small bodies such as an asteroid or a comet. Further extensions of these algorithms have allowed for adaptation of a hybrid control strategy for planetary landing, and the combined modeling and simultaneous control of both the vehicle's position and orientation implemented within a full six degree-of-freedom spacecraft simulation.

  19. Evaluation of five simple rapid HIV assays for potential use in the Brazilian national HIV testing algorithm.

    PubMed

    da Motta, Leonardo Rapone; Vanni, Andréa Cristina; Kato, Sérgio Kakuta; Borges, Luiz Gustavo dos Anjos; Sperhacke, Rosa Dea; Ribeiro, Rosangela Maria M; Inocêncio, Lilian Amaral

    2013-12-01

    Since 2005, the Department of Sexually Transmitted Diseases (STDs), Acquired Immunodeficiency Syndrome (AIDS) and Viral Hepatitis under the Health Surveillance Secretariat in Brazil's Ministry of Health has approved a testing algorithm for using rapid human immunodeficiency virus (HIV) tests in the country. Given the constant emergence of new rapid HIV tests in the market, it is necessary to maintain an evaluation program for them. Conscious of this need, this multicenter study was conducted to evaluate five commercially available rapid HIV tests used to detect anti-HIV antibodies in Brazil. The five commercial rapid tests under assessment were the VIKIA HIV-1/2 (bioMérieux, Rio de Janeiro, Brazil), the Rapid Check HIV 1 & 2 (Center of Infectious Diseases, Federal University of Espírito Santo, Vitória, Brazil), the HIV-1/2 3.0 Strip Test Bioeasy (S.D., Kyonggi-do, South Korea), the Labtest HIV (Labtest Diagnóstica, Lagoa Santa, Brazil) and the HIV-1/2 Rapid Test Bio-Manguinhos (Oswaldo Cruz Foundation, Rio de Janeiro, Brazil). A total of 972 whole-blood samples were collected from HIV-infected patients, pregnant women and individuals seeking voluntary counselling and testing who were recruited from five centers in different regions of the country. Informed consent was obtained from the study participants. The results were compared with those obtained using the HIV algorithm used currently in Brazil, which includes two enzyme immunoassays and one Western blot test. The operational performance of each assay was also compared to the defined criteria. A total of 972 samples were tested using reference assays, and the results indicated 143 (14.7%) reactive samples and 829 (85.3%) nonreactive samples. Sensitivity values ranged from 99.3 to 100%, and specificity was 100% for all five rapid tests. All of the rapid tests performed well, were easy to perform and yielded high scores in the operational performance analysis. Three tests, however, fulfilled all of the

  20. Tests of Large Airfoils in the Propeller Research Tunnel, Including Two with Corrugated Surfaces

    NASA Technical Reports Server (NTRS)

    Wood, Donald H

    1930-01-01

    This report gives the results of the tests of seven 2 by 12 foot airfoils (Clark Y, smooth and corrugated, Gottingen 398, N.A.C.A. M-6, and N.A.C.A. 84). The tests were made in the propeller research tunnel of the National Advisory Committee for Aeronautics at Reynolds numbers up to 2,000,000. The Clark Y airfoil was tested with three degrees of surface smoothness. Corrugating the surface causes a flattening of the lift curve at the burble point and an increase in drag at small flying angles.

  1. Evolution of Testing Algorithms at a University Hospital for Detection of Clostridium difficile Infections

    PubMed Central

    Culbreath, Karissa; Ager, Edward; Nemeyer, Ronald J.; Kerr, Alan

    2012-01-01

    We present the evolution of testing algorithms at our institution in which the C. Diff Quik Chek Complete immunochromatographic cartridge assay determines the presence of both glutamate dehydrogenase and Clostridium difficile toxins A and B as a primary screen for C. difficile infection and indeterminate results (glutamate dehydrogenase positive, toxin A and B negative) are confirmed by the GeneXpert C. difficile PCR assay. This two-step algorithm is a cost-effective method for highly sensitive detection of toxigenic C. difficile. PMID:22718938

  2. Simple but novel test method for quantitatively comparing robot mapping algorithms using SLAM and dead reckoning

    NASA Astrophysics Data System (ADS)

    Davey, Neil S.; Godil, Haris

    2013-05-01

    This article presents a comparative study between a well-known SLAM (Simultaneous Localization and Mapping) algorithm, called Gmapping, and a standard dead-reckoning algorithm; the study is based on experimental results of both approaches using a commercial skid-steering robot, the P3DX. Five main base-case scenarios were conducted to evaluate and test the effectiveness of both algorithms. The results show that SLAM outperformed dead reckoning in terms of map-making accuracy in all scenarios but one, as SLAM did not work well in a rapidly changing environment. Although the main conclusion about the excellence of SLAM is not surprising, the presented test method is valuable to professionals working in this area of mobile robots, as it is highly practical and provides solid and valuable results. The novelty of this study lies in its simplicity. The simple but novel test method for quantitatively comparing robot mapping algorithms using SLAM and dead reckoning, and some applications using autonomous robots, are being patented by the authors in U.S. Patent Application Nos. 13/400,726 and 13/584,862.
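
    As a reference for the dead-reckoning side of the comparison, a minimal differential-drive odometry integrator is sketched below; the wheel-base value and the wheel-speed stream are hypothetical, and no attempt is made to reproduce Gmapping.

      import math

      def dead_reckon(pose, wheel_speeds, dt, wheel_base):
          """Differential-drive dead reckoning: integrate left/right wheel speeds (m/s)
          into (x, y, heading). Skid-steer platforms violate the no-slip assumption,
          which is one reason pure dead reckoning drifts."""
          x, y, th = pose
          for v_l, v_r in wheel_speeds:
              v = 0.5 * (v_l + v_r)                 # forward speed
              w = (v_r - v_l) / wheel_base          # yaw rate
              x += v * math.cos(th) * dt
              y += v * math.sin(th) * dt
              th += w * dt
          return x, y, th

      # Hypothetical 10 s gentle left arc sampled at 20 Hz with a 0.38 m wheel base
      print(dead_reckon((0.0, 0.0, 0.0), [(0.5, 0.6)] * 200, dt=0.05, wheel_base=0.38))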

  3. Nuclear Rocket Test Facility Decommissioning Including Controlled Explosive Demolition of a Neutron-Activated Shield Wall

    SciTech Connect

    Michael Kruzic

    2007-09-01

    Located in Area 25 of the Nevada Test Site, the Test Cell A Facility was used in the 1960s for the testing of nuclear rocket engines, as part of the Nuclear Rocket Development Program. The facility was decontaminated and decommissioned (D&D) in 2005 using the Streamlined Approach For Environmental Restoration (SAFER) process, under the Federal Facilities Agreement and Consent Order (FFACO). Utilities and process piping were verified void of contents, hazardous materials were removed, concrete with removable contamination decontaminated, large sections mechanically demolished, and the remaining five-foot, five-inch thick radiologically-activated reinforced concrete shield wall demolished using open-air controlled explosive demolition (CED). CED of the shield wall was closely monitored and resulted in no radiological exposure or atmospheric release.

  4. Manufacture of fiber-epoxy test specimens: Including associated jigs and instrumentation

    NASA Technical Reports Server (NTRS)

    Mathur, S. B.; Felbeck, D. K.

    1980-01-01

    Experimental work on the manufacture and strength of graphite-epoxy composites is considered. Obtaining correct data, and thus a true assessment of strength properties, depends on properly modeled test specimens with engineered design, construction, and manufacture; reported optimized values nevertheless show a very broad spread. Such behavior is mainly due to inadequate control during manufacture of the test specimens, improper curing, and uneven scatter in the fiber orientation. The graphite fibers are strong but brittle, and even with various epoxy matrices and volume fractions, the fracture toughness is still relatively low. Graphite-epoxy prepreg tape was investigated as a sandwich construction with intermittent interlaminar bonding between the laminates in order to produce high strength, high fracture toughness composites. The quality and control of manufacture of the multilaminate test specimen blanks was emphasized. The dimensions, orientation, and cure must be meticulously controlled in order to produce the desired mix.

  5. Performance testing of thermoelectric generators including Voyager and LES 8/9 flight results

    NASA Technical Reports Server (NTRS)

    Garvey, L.; Stapfer, G.

    1979-01-01

    Several thermoelectric generators ranging in output power from 0.5 to 155 W have been completed or are undergoing testing at JPL. These generators represent a wide range of technologies, using Bi2Te3, PbTe and SiGe thermoelectric materials. Several of these generators are of a developmental type, such as HPG S/N2, and others are representative of Transit and Multi-Hundred Watt (MHW) Technology. Representative flight performance data of LES 8/9 and Voyager RTG's are presented and compared with the DEGRA computer program based on the data observed from tests of SiGe couples, modules and MHW generators.

  6. Drop and Flight Tests on NY-2 Landing Gears Including Measurements of Vertical Velocities at Landing

    NASA Technical Reports Server (NTRS)

    Peck, W D; Beard, A P

    1933-01-01

    This investigation was conducted to obtain quantitative information on the effectiveness of three landing gears for the NY-2 (consolidated training) airplane. The investigation consisted of static, drop, and flight tests on landing gears of the oleo-rubber-disk and the mercury rubber-chord types, and flight tests only on a landing gear of the conventional split-axle rubber-cord type. The results show that the oleo gear is the most effective of the three landing gears in minimizing impact forces and in dissipating the energy taken.

  7. Bees Algorithm for Construction of Multiple Test Forms in E-Testing

    ERIC Educational Resources Information Center

    Songmuang, Pokpong; Ueno, Maomi

    2011-01-01

    The purpose of this research is to automatically construct multiple equivalent test forms that have equivalent qualities indicated by test information functions based on item response theory. There has been a trade-off in previous studies between the computational costs and the equivalent qualities of test forms. To alleviate this problem, we…

  8. Test driving ToxCast: endocrine profiling for 1858 chemicals included in phase II

    EPA Science Inventory

    Introduction: Identifying chemicals to test for potential endocrine disruption beyond those already implicated in the peer-reviewed literature is a challenge. This review is intended to help by summarizing findings from the Environmental Protection Agency’s (EPA) ToxCast™ high th...

  9. Battery algorithm verification and development using hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    He, Yongsheng; Liu, Wei; Koch, Brain J.

    Battery algorithms play a vital role in hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), extended-range electric vehicles (EREVs), and electric vehicles (EVs). The energy management of hybrid and electric propulsion systems needs to rely on accurate information on the state of the battery in order to determine the optimal electric drive without abusing the battery. In this study, a cell-level hardware-in-the-loop (HIL) system is used to verify and develop state of charge (SOC) and power capability predictions of embedded battery algorithms for various vehicle applications. Two different batteries were selected as representative examples to illustrate the battery algorithm verification and development procedure. One is a lithium-ion battery with a conventional metal oxide cathode, which is a power battery for HEV applications. The other is a lithium-ion battery with an iron phosphate (LiFePO 4) cathode, which is an energy battery for applications in PHEVs, EREVs, and EVs. The battery cell HIL testing provided valuable data and critical guidance to evaluate the accuracy of the developed battery algorithms, to accelerate battery algorithm future development and improvement, and to reduce hybrid/electric vehicle system development time and costs.

  10. Using Neighborhood-Algorithm Inversion to Test and Calibrate Landscape Evolution Models

    NASA Astrophysics Data System (ADS)

    Perignon, M. C.; Tucker, G. E.; Van Der Beek, P.; Hilley, G. E.; Arrowsmith, R.

    2011-12-01

    Landscape evolution models use mass transport rules to simulate the development of topography over timescales too long for humans to observe. The ability of models to reproduce various attributes of real landscapes must be tested against natural systems in which driving forces, boundary conditions, and timescales of landscape evolution can be well constrained. We test and calibrate a landscape evolution model by comparing it with a well-constrained natural experiment using a formal inversion method to obtain best-fitting parameter values. Our case study is the Dragon's Back Pressure Ridge, a region of elevated terrain parallel to the south central San Andreas Fault that serves as a natural laboratory for studying how the timing and spatial distribution of uplift affects topography. We apply an optimization procedure to identify the parameter ranges and combinations that best account for the observed topography. Through the use of repeat forward modeling, direct-search inversion models can be used to convert observations from such natural systems into inferences of the processes that governed their formation. Simple inversion techniques have been used before in landscape evolution modeling, but these are imprecise and computationally expensive. We present the application of a more efficient inversion technique, the Neighborhood Algorithm (NA), to optimize the search for the model parameters values that are most consistent with the formation of the Dragon's Back Pressure Ridge through repeat forward modeling using CHILD. Inversion techniques require the comparison of model results with direct observations to evaluate misfit. For our target landscape, this is done through a series of topographic metrics that include hypsometry, slope-area curves, and channel concavity. NA uses an initial Monte Carlo simulation for which misfits have been calculated to guide a new iteration of forward models. At each iteration, NA uses n-dimensional Voronoi cells to explore the

  11. Reader reaction: A note on the evaluation of group testing algorithms in the presence of misclassification.

    PubMed

    Malinovsky, Yaakov; Albert, Paul S; Roy, Anindya

    2016-03-01

    In the context of group testing screening, McMahan, Tebbs, and Bilder (2012, Biometrics 68, 287-296) proposed a two-stage procedure for a heterogeneous population in the presence of misclassification. In earlier work published in Biometrics, Kim, Hudgens, Dreyfuss, Westreich, and Pilcher (2007, Biometrics 63, 1152-1162) also proposed group testing algorithms for a homogeneous population with misclassification. In both cases, the authors evaluated the performance of the algorithms based on the expected number of tests per person, with the optimal design defined by minimizing this quantity. The purpose of this article is to show that although the expected number of tests per person is an appropriate evaluation criterion for group testing when there is no misclassification, it may be problematic when there is misclassification. Specifically, a valid criterion needs to take into account the amount of correct classification and not just the number of tests. We propose a more suitable objective function that accounts not only for the expected number of tests, but also for the expected number of correct classifications. We then show how using this objective function, which accounts for correct classification, is important for design when considering group testing under misclassification. We also present novel analytical results which characterize the optimal Dorfman (1943) design under misclassification. PMID:26393800
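
    The point can be illustrated numerically with the standard two-stage Dorfman expressions, under simplifying assumptions of our own (homogeneous prevalence, the same sensitivity and specificity at the pool and individual stages, independent test errors); these are not necessarily the exact expressions used in the article.

      def dorfman_metrics(p, k, se, sp):
          """Expected tests per person and expected correct classifications per person
          for two-stage Dorfman testing with group size k, prevalence p,
          sensitivity se, and specificity sp (assumed equal at both stages)."""
          q = (1 - p) ** k                          # pool truly negative
          pool_pos = se * (1 - q) + (1 - sp) * q
          tests_per_person = 1.0 / k + pool_pos
          correct_pos = se * se                     # truly positive: needs two hits
          q_others = (1 - p) ** (k - 1)             # truly negative: depends on the others
          pool_pos_given_neg = (1 - sp) * q_others + se * (1 - q_others)
          correct_neg = (1 - pool_pos_given_neg) + pool_pos_given_neg * sp
          correct_per_person = p * correct_pos + (1 - p) * correct_neg
          return tests_per_person, correct_per_person

      # Compare group sizes for a hypothetical 2% prevalence and an imperfect assay
      for k in (3, 5, 8, 12):
          t, c = dorfman_metrics(p=0.02, k=k, se=0.95, sp=0.98)
          print(f"k={k:2d}  E[tests]/person={t:.3f}  E[correct]/person={c:.4f}")

    In this toy setting the group size that minimizes the expected number of tests is not the one that maximizes the expected number of correct classifications, which is the trade-off the proposed objective function is meant to capture.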

  12. Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm*

    NASA Astrophysics Data System (ADS)

    Xiang, LI

    In order to analyze car crash tests in C-NCAP, this paper presents an improved algorithm based on the Apriori algorithm. The new algorithm is implemented with a vertical data layout, breadth-first searching, and intersection. It takes advantage of the efficiency of the vertical data layout and of intersection, and prunes candidate frequent itemsets as Apriori does. Finally, the new algorithm is applied in a simulation system for car crash test analysis. The results show that the discovered relations affect the C-NCAP test results, and they can provide a reference for automotive design.
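
    A small sketch of the vertical-layout idea described above, in the spirit of Eclat-style refinements of Apriori rather than the paper's exact algorithm: each item is mapped to its tidset, and candidate itemsets are counted by intersecting tidsets. The toy crash-test-attribute transactions are invented.

      from itertools import combinations

      def vertical_mine(transactions, min_support):
          """Frequent-itemset mining on a vertical layout: each item maps to the set
          of transaction ids (tidset) containing it, and candidate (k+1)-itemsets
          are counted by intersecting the tidsets of two k-itemsets."""
          tidsets = {}
          for tid, items in enumerate(transactions):
              for item in items:
                  tidsets.setdefault(frozenset([item]), set()).add(tid)
          frequent = {iset: tids for iset, tids in tidsets.items()
                      if len(tids) >= min_support}
          result = dict(frequent)
          while frequent:
              candidates = {}
              for a, b in combinations(frequent, 2):
                  union = a | b
                  if len(union) == len(next(iter(frequent))) + 1:
                      tids = frequent[a] & frequent[b]          # tidset intersection
                      if len(tids) >= min_support:
                          candidates[union] = tids
              result.update(candidates)
              frequent = candidates
          return {tuple(sorted(k)): len(v) for k, v in result.items()}

      # Toy crash-test attribute transactions (hypothetical categorical records)
      data = [{"frontal", "airbag", "5star"}, {"frontal", "airbag"},
              {"side", "airbag", "5star"}, {"frontal", "5star"},
              {"frontal", "airbag", "5star"}]
      print(vertical_mine(data, min_support=3))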

  13. Selecting training and test images for optimized anomaly detection algorithms in hyperspectral imagery through robust parameter design

    NASA Astrophysics Data System (ADS)

    Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.

    2011-06-01

    There are numerous anomaly detection algorithms proposed for hyperspectral imagery. Robust parameter design (RPD) techniques have been applied to some of these algorithms in an attempt to choose robust settings capable of operating consistently across a large variety of image scenes. Typically, training and test sets of hyperspectral images are chosen randomly. Previous research developed a framework for optimizing anomaly detection in HSI by considering specific image characteristics as noise variables within the context of RPD; these characteristics include the Fisher score, the ratio of target pixels, and the number of clusters. This paper describes a method for selecting hyperspectral image training and test subsets that yield consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but they still provide improvements over random training and test subset assignments by maximizing the volume of and average distance between image noise characteristics. Several different mathematical models representing the value of a training and test set, based on measures such as the D-optimal score and various distance norms, are tested in a simulation experiment.
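
    One simple stand-in for the subset-selection step (not the D-optimal or distance-norm models evaluated in the paper) is a greedy maximin pick over standardized noise-characteristic vectors; the Fisher-score, target-ratio, and cluster-count features below are hypothetical.

      import numpy as np

      def maximin_subset(features, k, seed=0):
          """Greedy maximin selection: pick k images whose noise-characteristic
          vectors are mutually far apart (each new pick maximizes its minimum
          distance to the already-selected set)."""
          X = np.asarray(features, dtype=float)
          X = (X - X.mean(0)) / X.std(0)            # put characteristics on one scale
          rng = np.random.default_rng(seed)
          chosen = [int(rng.integers(len(X)))]
          while len(chosen) < k:
              d = np.min(np.linalg.norm(X[:, None, :] - X[None, chosen, :], axis=2), axis=1)
              d[chosen] = -np.inf                   # never re-pick a selected image
              chosen.append(int(np.argmax(d)))
          return sorted(chosen)

      # Hypothetical image noise characteristics: [Fisher score, target ratio, clusters]
      rng = np.random.default_rng(1)
      feats = np.column_stack([rng.uniform(0, 5, 40),
                               rng.uniform(0, 0.2, 40),
                               rng.integers(2, 12, 40)])
      print(maximin_subset(feats, k=8))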

  14. Application of a Smart Parachute Release Algorithm to the CPAS Test Architecture

    NASA Technical Reports Server (NTRS)

    Bledsoe, Kristin

    2013-01-01

    One of the primary test vehicles for the Capsule Parachute Assembly System (CPAS) is the Parachute Test Vehicle (PTV), a capsule-shaped structure similar to the Orion design but truncated to fit in the cargo area of a C-17 aircraft. The PTV has a full Orion-like parachute compartment and similar aerodynamics; however, because of the single-point attachment of the CPAS parachutes and the lack of an Orion-like Reaction Control System (RCS), the PTV has the potential to reach significant body rates. High body rates at the time of Drogue release may cause the PTV to flip while the parachutes deploy, which may result in the severing of the Pilot or Main risers. In order to prevent high rates at the time of Drogue release, a "smart release" algorithm was implemented in the PTV avionics system. This algorithm, which was developed for the Orion flight system, triggers the Drogue parachute release when the body rates are near a minimum. This paper discusses the development and testing of the smart release algorithm; its implementation in the PTV avionics and the pretest simulation; and the results of its use on two CPAS tests.
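
    A toy sketch of the triggering idea only: release at the first local minimum of the smoothed body-rate magnitude that falls below a threshold, with a deadline fallback. The threshold, smoothing window, sample rate, and rate profile are invented, and this is not the Orion or CPAS flight algorithm.

      import math

      def smart_release(rate_samples, dt, rate_limit=0.1, window=0.2, deadline=4.0):
          """Toy trigger: release at the first local minimum of the smoothed body-rate
          magnitude that is below rate_limit (rad/s); if no such minimum occurs before
          `deadline` seconds, release at the deadline."""
          n_avg = max(1, int(window / dt))
          mags = [math.sqrt(p * p + q * q + r * r) for p, q, r in rate_samples]
          smooth = [sum(mags[max(0, i - n_avg + 1):i + 1]) / min(n_avg, i + 1)
                    for i in range(len(mags))]      # backward moving average
          for i in range(1, len(smooth) - 1):
              t = i * dt
              if t > deadline:
                  break
              at_local_min = smooth[i] <= smooth[i - 1] and smooth[i] < smooth[i + 1]
              if at_local_min and smooth[i] < rate_limit:
                  return t
          return deadline

      # Hypothetical oscillating body rates (rad/s) sampled at 50 Hz
      dt = 0.02
      samples = [(0.2 * math.sin(math.pi * k * dt), 0.05, 0.02) for k in range(400)]
      print(smart_release(samples, dt))   # triggers near the rate minimum around t ~ 1 s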

  15. Pilot's Guide to an Airline Career, Including Sample Pre-Employment Tests.

    ERIC Educational Resources Information Center

    Traylor, W.L.

    Occupational information for persons considering a career as an airline pilot includes a detailed description of the pilot's duties and material concerning preparation for occupational entry and determining the relative merits of available jobs. The book consists of four parts: Part I, The Job, provides an overview of a pilot's duties in his daily…

  16. Solar Energy Education. Home economics: teacher's guide. Field test edition. [Includes glossary

    SciTech Connect

    Not Available

    1981-06-01

    An instructional aid is provided for home economics teachers who wish to integrate the subject of solar energy into their classroom activities. This teacher's guide was produced along with the student activities book for home economics by the US Department of Energy Solar Energy Education. A glossary of solar energy terms is included. (BCS)

  17. Using modified fruit fly optimisation algorithm to perform the function test and case studies

    NASA Astrophysics Data System (ADS)

    Pan, Wen-Tsao

    2013-06-01

    Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes based on Darwinian theory, and it is a common research method. The main contribution of this paper is to reinforce the search for the optimised solution in the fruit fly optimization algorithm (FOA), in order to avoid being trapped in local extremum solutions. Evolutionary computation has grown to include the concepts of animal foraging behaviour and group behaviour. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimization algorithm (MFOA). It further investigated the ability to compute extreme values of three mathematical functions, as well as the algorithm execution speed and the forecast ability of the forecasting model built using the optimised general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimization and the MFOA with regard to the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performed better than particle swarm optimization with regard to algorithm execution speed, and the forecast ability of the forecasting model built using the MFOA's GRNN parameters was better than that of the other three forecasting models.
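
    For reference, a compact sketch of the basic FOA that the paper modifies: flies search randomly around the swarm location, the candidate value is taken as the reciprocal of the distance to the origin (the "smell concentration judgment value"), and the swarm flies to the best-smelling position when it improves on the best found so far. The objective, step size, and iteration counts are arbitrary, and the MFOA modifications are not reproduced.

      import math, random

      def foa(objective, iters=200, flies=30, step=1.0, seed=1):
          """Basic fruit fly optimization sketch for a 1-D objective."""
          rng = random.Random(seed)
          x_axis, y_axis = rng.uniform(0, 10), rng.uniform(0, 10)
          best_s, best_val = None, float("inf")
          for _ in range(iters):
              xs = [x_axis + step * rng.uniform(-1, 1) for _ in range(flies)]
              ys = [y_axis + step * rng.uniform(-1, 1) for _ in range(flies)]
              smells = []
              for x, y in zip(xs, ys):
                  d = math.hypot(x, y) or 1e-12
                  smells.append(objective(1.0 / d))    # 1/d is the candidate value
              i = min(range(flies), key=lambda k: smells[k])
              if smells[i] < best_val:                 # vision stage: keep the best smell
                  best_val = smells[i]
                  best_s = 1.0 / (math.hypot(xs[i], ys[i]) or 1e-12)
                  x_axis, y_axis = xs[i], ys[i]        # swarm flies to the best position
          return best_s, best_val

      # Toy objective: recover the parameter value 0.25
      print(foa(lambda s: (s - 0.25) ** 2))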

  18. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large, complex systems engineering challenge, addressed in part by focusing on how the specific subsystems handle off-nominal mission conditions and fault tolerance. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA has also formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms using actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to the flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S

  19. An evaluation of the NASA Tech House, including live-in test results, volume 1

    NASA Technical Reports Server (NTRS)

    Abbott, I. H. A.; Hopping, K. A.; Hypes, W. D.

    1979-01-01

    The NASA Tech House was designed and constructed at the NASA Langley Research Center, Hampton, Virginia, to demonstrate and evaluate new technology potentially applicable for conservation of energy and resources and for improvements in safety and security in a single-family residence. All technology items, including solar-energy systems and a waste-water-reuse system, were evaluated under actual living conditions for a 1 year period with a family of four living in the house in their normal lifestyle. Results are presented which show overall savings in energy and resources compared with requirements for a defined similar conventional house under the same conditions. General operational experience and performance data are also included for all the various items and systems of technology incorporated into the house design.

  20. Formal analysis, hardness, and algorithms for extracting internal structure of test-based problems.

    PubMed

    Jaśkowski, Wojciech; Krawiec, Krzysztof

    2011-01-01

    Problems in which some elementary entities interact with each other are common in computational intelligence. This scenario, typical for coevolving artificial life agents, learning strategies for games, and machine learning from examples, can be formalized as a test-based problem and conveniently embedded in the common conceptual framework of coevolution. In test-based problems, candidate solutions are evaluated on a number of test cases (agents, opponents, examples). It has been recently shown that every test of such problem can be regarded as a separate objective, and the whole problem as multi-objective optimization. Research on reducing the number of such objectives while preserving the relations between candidate solutions and tests led to the notions of underlying objectives and internal problem structure, which can be formalized as a coordinate system that spatially arranges candidate solutions and tests. The coordinate system that spans the minimal number of axes determines the so-called dimension of a problem and, being an inherent property of every problem, is of particular interest. In this study, we investigate in-depth the formalism of a coordinate system and its properties, relate them to properties of partially ordered sets, and design an exact algorithm for finding a minimal coordinate system. We also prove that this problem is NP-hard and come up with a heuristic which is superior to the best algorithm proposed so far. Finally, we apply the algorithms to three abstract problems and demonstrate that the dimension of the problem is typically much lower than the number of tests, and for some problems converges to the intrinsic parameter of the problem--its a priori dimension. PMID:21815770

  1. Directionally solidified lamellar eutectic superalloys by edge-defined, film-fed growth. [including tensile tests

    NASA Technical Reports Server (NTRS)

    Hurley, G. F.

    1975-01-01

    A program was performed to scale up the edge-defined, film-fed growth (EFG) method for the gamma/gamma prime-beta eutectic alloy of the nominal composition Ni-19.7 Cb - 6 Cr-2.5 Al. Procedures and problem areas are described. Flat bars approximately 12 x 1.7 x 200 mm were grown, mostly at speeds of 38 mm/hr, and tensile tests on these bars at 25 and 1000 C showed lower strength than expected. The feasibility of growing hollow airfoils was also demonstrated by growing bars over 200 mm long with a teardrop shaped cross-section, having a major dimension of 12 mm and a maximum width of 5 mm.

  2. Quality assurance testing of an explosives trace analysis laboratory--further improvements to include peroxide explosives.

    PubMed

    Crowson, Andrew; Cawthorne, Richard

    2012-12-01

    The Forensic Explosives Laboratory (FEL) operates within the Defence Science and Technology Laboratory (DSTL), which is part of the UK Government Ministry of Defence (MOD). The FEL provides support and advice to the Home Office and UK police forces on matters relating to the criminal misuse of explosives. During 1989 the FEL established a weekly quality assurance testing regime in its explosives trace analysis laboratory. The purpose of the regime is to prevent the accumulation of explosives traces within the laboratory at levels that could, if other precautions failed, result in the contamination of samples and controls. Designated areas within the laboratory are swabbed using cotton wool swabs moistened with an ethanol:water mixture, in equal amounts. The swabs are then extracted, cleaned up, and analysed using gas chromatography with thermal energy analyser detectors or liquid chromatography with triple quadrupole mass spectrometry. This paper follows on from two previously published papers which described the regime and summarised results from approximately 14 years of tests. This paper presents results from the subsequent 7 years, setting them within the context of previous results. It also discusses further improvements made to the systems and procedures and the inclusion of quality assurance sampling for the peroxide explosives TATP and HMTD. Monitoring samples taken from surfaces within the trace laboratories and the trace vehicle examination bay have, with few exceptions, revealed only low levels of contamination, predominantly of RDX. Analysis of the control swabs, processed alongside the monitoring swabs, has demonstrated that in this environment the risk of forensic sample contamination, assuming all the relevant anti-contamination procedures have been followed, is so small that it is considered to be negligible. The monitoring regime has also been valuable in assessing the process of continuous improvement, allowing sources of contamination transfer into the trace

  3. Scoring Divergent Thinking Tests by Computer With a Semantics-Based Algorithm.

    PubMed

    Beketayev, Kenes; Runco, Mark A

    2016-05-01

    Divergent thinking (DT) tests are useful for the assessment of creative potentials. This article reports the semantics-based algorithmic (SBA) method for assessing DT. This algorithm is fully automated: Examinees receive DT questions on a computer or mobile device and their ideas are immediately compared with norms and semantic networks. This investigation compared the scores generated by the SBA method with the traditional methods of scoring DT (i.e., fluency, originality, and flexibility). Data were collected from 250 examinees using the "Many Uses Test" of DT. The most important finding involved the flexibility scores from both scoring methods. This was critical because semantic networks are based on conceptual structures, and thus a high SBA score should be highly correlated with the traditional flexibility score from DT tests. Results confirmed this correlation (r = .74). This supports the use of algorithmic scoring of DT. The nearly-immediate computation time required by SBA method may make it the method of choice, especially when it comes to moderate- and large-scale DT assessment investigations. Correlations between SBA scores and GPA were insignificant, providing evidence of the discriminant and construct validity of SBA scores. Limitations of the present study and directions for future research are offered. PMID:27298632

  4. Classification of audiograms by sequential testing: reliability and validity of an automated behavioral hearing screening algorithm.

    PubMed

    Eilers, R E; Ozdamar, O; Steffens, M L

    1993-05-01

    In 1990, CAST (classification of audiograms by sequential testing) was proposed and developed as an automated, innovative approach to screening infant hearing using a modified Bayesian method. The method generated a four-frequency audiogram in a minimal number of test trials using VRA (visual reinforcement audiometry) techniques. Computer simulations were used to explore the properties (efficiency and accuracy) of the paradigm. The current work is designed to further test the utility of the paradigm with human infants and young children. Accordingly, infants and children between 6 months and 2 years of age were screened for hearing loss. The algorithm's efficacy was studied with respect to validity and reliability. Validity was evaluated by comparing CAST results with tympanometric data and outcomes of staircase-based testing. Test-retest reliability was also assessed. Results indicate that CAST is a valid, efficient, reliable, and potentially cost-effective screening method. PMID:8318708

  5. Results of a Saxitoxin Proficiency Test Including Characterization of Reference Material and Stability Studies

    PubMed Central

    Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Luginbühl, Werner; Kremp, Anke; Suikkanen, Sanna; Kankaanpää, Harri; Burrell, Stephen; Söderström, Martin; Vanninen, Paula

    2015-01-01

    A saxitoxin (STX) proficiency test (PT) was organized as part of the Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk (EQuATox) project. The aim of this PT was to provide an evaluation of existing methods and the European laboratories’ capabilities for the analysis of STX and some of its analogues in real samples. Homogenized mussel material and algal cell materials containing paralytic shellfish poisoning (PSP) toxins were produced as reference sample matrices. The reference material was characterized using various analytical methods. Acidified algal extract samples at two concentration levels were prepared from a bulk culture of PSP toxins producing dinoflagellate Alexandrium ostenfeldii. The homogeneity and stability of the prepared PT samples were studied and found to be fit-for-purpose. Thereafter, eight STX PT samples were sent to ten participating laboratories from eight countries. The PT offered the participating laboratories the possibility to assess their performance regarding the qualitative and quantitative detection of PSP toxins. Various techniques such as official Association of Official Analytical Chemists (AOAC) methods, immunoassays, and liquid chromatography-mass spectrometry were used for sample analyses. PMID:26602927

  6. The QCRad Value Added Product: Surface Radiation Measurement Quality Control Testing, Including Climatology Configurable Limits

    SciTech Connect

    Long, CN; Shi, Y

    2006-09-01

    This document describes the QCRad methodology, which uses climatological analyses of the surface radiation measurements to define reasonable limits for testing the data for unusual data values. The main assumption is that the majority of the climatological data are “good” data, which for field sites operated with care such as those of the Atmospheric Radiation Measurement (ARM) Program is a reasonable assumption. Data that fall outside the normal range of occurrences are labeled either “indeterminate” (meaning that the measurements are possible, but rarely occurring, and thus the values cannot be identified as good) or “bad” depending on how far outside the normal range the particular data reside. The methodology not only sets fairly standard maximum and minimum value limits, but also compares what we have learned about the behavior of these instruments in the field to other value-added products (VAPs), such as the Diffuse infrared (IR) Loss Correction VAP (Younkin and Long 2004) and the Best Estimate Flux VAP (Shi and Long 2002).

  7. Results of a Saxitoxin Proficiency Test Including Characterization of Reference Material and Stability Studies.

    PubMed

    Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Luginbühl, Werner; Kremp, Anke; Suikkanen, Sanna; Kankaanpää, Harri; Burrell, Stephen; Söderström, Martin; Vanninen, Paula

    2015-12-01

    A saxitoxin (STX) proficiency test (PT) was organized as part of the Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk (EQuATox) project. The aim of this PT was to provide an evaluation of existing methods and the European laboratories' capabilities for the analysis of STX and some of its analogues in real samples. Homogenized mussel material and algal cell materials containing paralytic shellfish poisoning (PSP) toxins were produced as reference sample matrices. The reference material was characterized using various analytical methods. Acidified algal extract samples at two concentration levels were prepared from a bulk culture of PSP toxins producing dinoflagellate Alexandrium ostenfeldii. The homogeneity and stability of the prepared PT samples were studied and found to be fit-for-purpose. Thereafter, eight STX PT samples were sent to ten participating laboratories from eight countries. The PT offered the participating laboratories the possibility to assess their performance regarding the qualitative and quantitative detection of PSP toxins. Various techniques such as official Association of Official Analytical Chemists (AOAC) methods, immunoassays, and liquid chromatography-mass spectrometry were used for sample analyses. PMID:26602927

  8. Activity of faropenem tested against Neisseria gonorrhoeae isolates including fluoroquinolone-resistant strains.

    PubMed

    Jones, Ronald N; Critchley, Ian A; Whittington, William L H; Janjic, Nebojsa; Pottumarthy, Sudha

    2005-12-01

    We evaluated the anti-gonococcal potency of faropenem along with 7 comparator reference antimicrobials against a preselected collection of clinical isolates. The 265 isolates were inclusive of 2 subsets: 1) 76 well-characterized resistant phenotypes of gonococcal strains (53 quinolone-resistant strains--31 with documented quinolone resistance-determining region changes from Japan, 15 strains resistant to penicillin and tetracycline, and 8 strains with intermediate susceptibility to penicillin) and 2) 189 recent isolates from clinical specimens in 2004 from 6 states across the United States where quinolone resistance is prevalent. Activity of faropenem was adversely affected by l-cysteine hydrochloride in IsoVitaleX (4-fold increase in [minimal inhibitory concentration] MIC50; 0.06 versus 0.25 microg/mL). The rank order of potency of the antimicrobials for the entire collection was ceftriaxone (MIC90, 0.06 microg/mL) > faropenem (0.25 microg/mL) > azithromycin (0.5 microg/mL) > cefuroxime (1 microg/mL) > tetracycline (2 microg/mL) > penicillin = ciprofloxacin = levofloxacin (4 microg/mL). Using MIC90 for comparison, faropenem was 4-fold more potent than cefuroxime (0.25 versus 1 microg/mL), but was 4-fold less active than ceftriaxone (0.25 versus 0.06 microg/mL). Although the activity of faropenem was not affected by either penicillinase production (MIC90, 0.12 microg/mL, penicillinase-positive) or increasing ciprofloxacin MIC (0.25 microg/mL, ciprofloxacin-resistant), increasing penicillin MIC was associated with an increase in MIC90 values (0.016 microg/mL for penicillin-susceptible to 0.25 microg/mL for penicillin-resistant strains). Among the recent (2004) clinical gonococcal isolates tested, reduced susceptibility to penicillins, tetracycline, and fluoroquinolones was high (28.0-94.2%). Geographic distribution of the endemic resistance rates of gonococci varied considerably, with 16.7-66.7% of the gonococcal isolates being ciprofloxacin-resistant in Oregon

  9. A New Lidar Data Processing Algorithm Including Full Uncertainty Budget and Standardized Vertical Resolution for use Within the NDACC and GRUAN Networks

    NASA Astrophysics Data System (ADS)

    Leblanc, T.; Haefele, A.; Sica, R. J.; van Gijsel, A.

    2014-12-01

    A new lidar data processing algorithm for the retrieval of ozone, temperature and water vapor has been developed for centralized use within the Network for the Detection of Atmospheric Composition Change (NDACC) and the GCOS Reference Upper Air Network (GRUAN). The program is written with the objective that raw data from a large number of lidar instruments can be analyzed consistently. The uncertainty budget includes 13 sources of uncertainty that are explicitly propagated taking into account vertical and inter-channel dependencies. Several standardized definitions of vertical resolution can be used, leading to a maximum flexibility, and to the production of tropospheric ozone, stratospheric ozone, middle atmospheric temperature and tropospheric water vapor profiles optimized for multiple user needs such as long-term monitoring, process studies and model and satellite validation. A review of the program's functionalities as well as the first retrieved products will be presented.

  10. Improving the quantitative testing of fast aspherics surfaces with null screen using Dijkstra algorithm

    NASA Astrophysics Data System (ADS)

    Moreno Oliva, Víctor Iván; Castañeda Mendoza, Álvaro; Campos García, Manuel; Díaz Uribe, Rufino

    2011-09-01

    The null screen is a geometric method that allows the testing of fast aspherical surfaces: the method measures the local slope of the surface, and the shape of the surface is then obtained by numerical integration. The usual technique for the numerical evaluation of the surface is the trapezoidal rule, and it is a well-known fact that its truncation error increases with the second power of the spacing between spots along the integration path. Those paths are constructed by following spots reflected on the surface, starting from an initially selected spot. To reduce the numerical errors, in this work we propose the use of the Dijkstra algorithm [1]. This algorithm can find the shortest path from one spot (or vertex) to another in a weighted connected graph. Using a modification of the algorithm, it is possible to find the minimal path from one selected spot to all other ones. This automates and simplifies the integration process in the test with null screens. In this work, the efficiency of the proposal is shown by evaluating a surface previously measured with the traditional process.
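
    A minimal sketch of the graph step (Dijkstra single-source shortest paths over the measured spots, with the Euclidean spacings as edge weights) on an invented toy grid; the null-screen slope integration itself is not shown.

      import heapq, math

      def dijkstra(points, edges, source):
          """Dijkstra single-source shortest paths on a weighted graph of measured
          spots; edge weights are the Euclidean spacings, so the returned distances
          and predecessor tree give the shortest integration path from the source
          spot to every other spot."""
          adj = {i: [] for i in range(len(points))}
          for i, j in edges:
              w = math.dist(points[i], points[j])
              adj[i].append((j, w))
              adj[j].append((i, w))
          dist = {i: math.inf for i in adj}
          prev = {i: None for i in adj}
          dist[source] = 0.0
          heap = [(0.0, source)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist[u]:
                  continue                          # stale heap entry
              for v, w in adj[u]:
                  nd = d + w
                  if nd < dist[v]:
                      dist[v], prev[v] = nd, u
                      heapq.heappush(heap, (nd, v))
          return dist, prev

      # Hypothetical 2x3 grid of reflected spots with 4-neighbour connectivity
      pts = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
      edg = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
      dist, prev = dijkstra(pts, edg, source=0)
      print(dist)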

  11. Scoring Divergent Thinking Tests by Computer With a Semantics-Based Algorithm

    PubMed Central

    Beketayev, Kenes; Runco, Mark A.

    2016-01-01

    Divergent thinking (DT) tests are useful for the assessment of creative potentials. This article reports the semantics-based algorithmic (SBA) method for assessing DT. This algorithm is fully automated: Examinees receive DT questions on a computer or mobile device and their ideas are immediately compared with norms and semantic networks. This investigation compared the scores generated by the SBA method with the traditional methods of scoring DT (i.e., fluency, originality, and flexibility). Data were collected from 250 examinees using the “Many Uses Test” of DT. The most important finding involved the flexibility scores from both scoring methods. This was critical because semantic networks are based on conceptual structures, and thus a high SBA score should be highly correlated with the traditional flexibility score from DT tests. Results confirmed this correlation (r = .74). This supports the use of algorithmic scoring of DT. The nearly-immediate computation time required by SBA method may make it the method of choice, especially when it comes to moderate- and large-scale DT assessment investigations. Correlations between SBA scores and GPA were insignificant, providing evidence of the discriminant and construct validity of SBA scores. Limitations of the present study and directions for future research are offered. PMID:27298632

  12. Testing of Lagrange multiplier damped least-squares control algorithm for woofer-tweeter adaptive optics

    PubMed Central

    Zou, Weiyao; Burns, Stephen A.

    2012-01-01

    A Lagrange multiplier-based damped least-squares control algorithm for woofer-tweeter (W-T) dual deformable-mirror (DM) adaptive optics (AO) is tested with a breadboard system. We show that the algorithm can complementarily command the two DMs to correct wavefront aberrations within a single optimization process: the woofer DM correcting the high-stroke, low-order aberrations, and the tweeter DM correcting the low-stroke, high-order aberrations. The optimal damping factor for a DM is found to be the median of the eigenvalue spectrum of the influence matrix of that DM. Wavefront control accuracy is maximized with the optimized control parameters. For the breadboard system, the residual wavefront error can be controlled to the precision of 0.03 μm in root mean square. The W-T dual-DM AO has applications in both ophthalmology and astronomy. PMID:22441462
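
    A schematic damped least-squares command for a single DM is sketched below. For illustration the damping factor defaults to the median eigenvalue of A^T A, whereas the paper specifies the median of the eigenvalue spectrum of the DM's influence matrix; the influence matrix and wavefront here are random placeholders.

      import numpy as np

      def damped_ls_command(influence, wavefront, damping=None):
          """Damped least-squares actuator command for one DM: solve
          (A^T A + gamma I) c = A^T w, with gamma defaulting to the median of the
          eigenvalue spectrum of A^T A when not supplied."""
          A = np.asarray(influence)                 # wavefront samples x actuators
          AtA = A.T @ A
          if damping is None:
              damping = float(np.median(np.linalg.eigvalsh(AtA)))
          rhs = A.T @ np.asarray(wavefront)
          return np.linalg.solve(AtA + damping * np.eye(AtA.shape[0]), rhs)

      # Hypothetical 64-sample, 12-actuator influence matrix and measured wavefront
      rng = np.random.default_rng(0)
      A = rng.normal(size=(64, 12))
      w = rng.normal(size=64)
      print(damped_ls_command(A, w))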

  13. The Research on Web-Based Testing Environment Using Simulated Annealing Algorithm

    PubMed Central

    2014-01-01

    Computerized evaluation is now one of the most important methods to diagnose learning; with the application of artificial intelligence techniques in the field of evaluation, computerized adaptive testing has gradually become one of the most important evaluation methods. In this kind of test, the computer dynamically updates the learner's ability level and selects tailored items from the item pool. Meeting the needs of the test requires that the system have a relatively efficient implementation. To solve this problem, we proposed a novel method for a web-based testing environment based on the simulated annealing algorithm. In the development of the system, through a series of experiments, we compared the efficiency and efficacy of the simulated annealing method and other methods. The experimental results show that this method ensures choosing nearly optimal items from the item bank for learners, meeting a variety of assessment needs, being reliable, and providing valid judgments of learners' ability. In addition, using the simulated annealing algorithm to address the computational complexity of the system greatly improves the efficiency of item selection and yields near-optimal solutions. PMID:24959600
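
    A small sketch of how simulated annealing can drive item selection, here maximizing total Fisher information at the current ability estimate over fixed-length item subsets with swap neighbours and geometric cooling; the item pool, ability value, and schedule are invented, and this is not the system described in the paper.

      import math, random

      def irt_info(theta, a, b, c=0.0):
          """Fisher information of a 3PL item at ability theta (2PL when c = 0)."""
          p = c + (1 - c) / (1 + math.exp(-a * (theta - b)))
          return (a ** 2) * ((1 - p) / p) * ((p - c) / (1 - c)) ** 2

      def anneal_select(pool, theta, length, t0=1.0, cooling=0.995, steps=3000, seed=0):
          """Simulated annealing over fixed-length item subsets: a neighbour swaps one
          selected item for an unselected one; worse moves are accepted with
          probability exp(delta / T), and T decays geometrically."""
          rng = random.Random(seed)
          info = [irt_info(theta, *item) for item in pool]
          current = rng.sample(range(len(pool)), length)
          score = sum(info[i] for i in current)
          best, best_score, T = list(current), score, t0
          for _ in range(steps):
              out = rng.randrange(length)
              new = rng.choice([i for i in range(len(pool)) if i not in current])
              delta = info[new] - info[current[out]]
              if delta > 0 or rng.random() < math.exp(delta / T):
                  score += delta
                  current[out] = new
                  if score > best_score:
                      best, best_score = list(current), score
              T *= cooling
          return sorted(best), best_score

      # Hypothetical 2PL pool: (discrimination a, difficulty b)
      rng = random.Random(42)
      pool = [(rng.uniform(0.5, 2.0), rng.uniform(-2.5, 2.5)) for _ in range(60)]
      print(anneal_select(pool, theta=0.3, length=10))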

  14. Development and Implementation of a Hardware In-the-Loop Test Bed for Unmanned Aerial Vehicle Control Algorithms

    NASA Technical Reports Server (NTRS)

    Nyangweso, Emmanuel; Bole, Brian

    2014-01-01

    Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.

  15. Development of region processing algorithm for HSTAMIDS: status and field test results

    NASA Astrophysics Data System (ADS)

    Ngan, Peter; Burke, Sean; Cresci, Roger; Wilson, Joseph N.; Gader, Paul; Ho, K. C.; Bartosz, Elizabeth; Duvoisin, Herbert

    2007-04-01

    The Region Processing Algorithm (RPA) has been developed by the Office of the Army Humanitarian Demining Research and Development (HD R&D) Program as part of improvements for the AN/PSS-14. The effort was a collaboration between the HD R&D Program, L-3 Communication CyTerra Corporation, University of Florida, Duke University and University of Missouri. RPA has been integrated into and implemented in a real-time AN/PSS-14. The subject unit was used to collect data and tested for its performance at three Army test sites within the United States of America. This paper describes the status of the technology and its recent test results.

  16. Brief Communication: A new testing field for debris flow warning systems and algorithms

    NASA Astrophysics Data System (ADS)

    Arattano, M.; Coviello, V.; Cavalli, M.; Comiti, F.; Macconi, P.; Marchi, L.; Theule, J.; Crema, S.

    2015-03-01

    Early warning systems (EWSs) are among the measures adopted for the mitigation of debris flow hazards. EWSs often employ algorithms that require careful and long testing to guarantee their effectiveness. A permanent installation has therefore been set up in the Gadria basin (Eastern Italian Alps) for the systematic testing of event-EWSs. The installation is also conceived to produce didactic videos and host informative visits. Public involvement and education is in fact an essential step in any hazard mitigation activity, and it should be envisaged in planning any research activity. The occurrence of a debris flow in the Gadria creek in the summer of 2014 allowed a first test of the installation and the recording of an informative video on EWSs.

  17. GUEST EDITORS' INTRODUCTION: Testing inversion algorithms against experimental data: inhomogeneous targets

    NASA Astrophysics Data System (ADS)

    Belkebir, Kamal; Saillard, Marc

    2005-12-01

    This special section deals with the reconstruction of scattering objects from experimental data. A few years ago, inspired by the Ipswich database [1-4], we started to build an experimental database in order to validate and test inversion algorithms against experimental data. In the special section entitled 'Testing inversion algorithms against experimental data' [5], preliminary results were reported through 11 contributions from several research teams. (The experimental data are free for scientific use and can be downloaded from the web site.) The success of this previous section has encouraged us to go further and to design new challenges for the inverse scattering community. Taking into account the remarks formulated by several colleagues, the new data sets deal with inhomogeneous cylindrical targets and transverse electric (TE) polarized incident fields have also been used. Among the four inhomogeneous targets, three are purely dielectric, while the last one is a 'hybrid' target mixing dielectric and metallic cylinders. Data have been collected in the anechoic chamber of the Centre Commun de Ressources Micro-ondes in Marseille. The experimental setup as well as the layout of the files containing the measurements are presented in the contribution by J-M Geffrin, P Sabouroux and C Eyraud. The antennas did not change from the ones used previously [5], namely wide-band horn antennas. However, improvements have been achieved by refining the mechanical positioning devices. In order to enlarge the scope of applications, both TE and transverse magnetic (TM) polarizations have been carried out for all targets. Special care has been taken not to move the target under test when switching from TE to TM measurements, ensuring that TE and TM data are available for the same configuration. All data correspond to electric field measurements. In TE polarization the measured component is orthogonal to the axis of invariance. Contributions A Abubakar, P M van den Berg and T M

  18. LOTOS code for local earthquake tomographic inversion: benchmarks for testing tomographic algorithms

    NASA Astrophysics Data System (ADS)

    Koulakov, I. Yu.

    2009-04-01

    We present the LOTOS-07 code for performing local earthquake tomographic (LET) inversion, which is freely available at www.ivan-art.com/science/LOTOS_07. The initial data for the code are the arrival times from local seismicity and the coordinates of the stations; it does not require any prior information about the sources. The calculations start with absolute location of the sources and estimation of an optimal 1D velocity model. The sources are then relocated simultaneously with the 3D velocity distribution during iterative coupled tomographic inversions. The code allows results based on node or cell parameterizations to be compared, and both Vp-Vs and Vp-Vp/Vs inversion schemes can be performed. The capability of the LOTOS code is illustrated with different real and synthetic datasets. Some of the tests are used to disprove existing stereotypes of LET schemes, such as the use of trade-off curves to evaluate damping parameters and the GAP criterion to select events. We also present a series of synthetic datasets with undisclosed sources and velocity models (www.ivan-art.com/science/benchmark) that can be used as blind benchmarks for testing different tomographic algorithms. We encourage other users of tomography algorithms to join the program of creating benchmarks that can be used to check existing codes. The program codes and testing datasets will be freely distributed during the poster presentation.
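
    A minimal synthetic travel-time test in the spirit of these blind benchmarks is sketched below. It is not the LOTOS code (there is no source relocation, ray bending, or Vp/Vs inversion); the grid, ray geometry, and damping are arbitrary assumptions. A known checkerboard slowness model generates straight-ray travel times, which are then inverted by damped least squares and compared with the input model.

```python
# Hedged sketch: checkerboard-recovery test for a simple 2D straight-ray tomography.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

nx = ny = 20                      # cells of the 2D model
true = 0.05 * np.fromfunction(    # checkerboard slowness perturbation (s/km)
    lambda i, j: (-1.0) ** (i // 5 + j // 5), (ny, nx))

def ray_row(x0, y0, x1, y1, n=400):
    """Approximate path length of a straight ray through each cell."""
    row = np.zeros(nx * ny)
    xs, ys = np.linspace(x0, x1, n), np.linspace(y0, y1, n)
    seg = np.hypot(x1 - x0, y1 - y0) / n
    for x, y in zip(xs, ys):
        i, j = min(int(y), ny - 1), min(int(x), nx - 1)
        row[i * nx + j] += seg
    return row

rng = np.random.default_rng(0)
rays = [(rng.uniform(0, nx), 0, rng.uniform(0, nx), ny) for _ in range(400)] + \
       [(0, rng.uniform(0, ny), nx, rng.uniform(0, ny)) for _ in range(400)]

G = lil_matrix((len(rays), nx * ny))
for k, r in enumerate(rays):
    G[k, :] = ray_row(*r)
d = G @ true.ravel() + rng.normal(0, 1e-3, len(rays))   # synthetic delays + noise

m = lsqr(G.tocsr(), d, damp=0.5)[0]                     # damped least-squares inversion
print("model recovery correlation:", np.corrcoef(m, true.ravel())[0, 1])
```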

  19. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    SciTech Connect

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea; Koehler, Katrina Elizabeth; Henzl, Vladimir; Henzlova, Daniela; Parker, Robert Francis; Croft, Stephen

    2015-12-01

    To improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed to address the assumptions present in the point model. Extracting and utilizing higher-order moments (quads and pents) from the neutron pulse train is the most direct way of obtaining additional information from the measurement data to allow an improved determination of the physical properties of the item of interest. The extraction of higher-order moments from a neutron pulse train required the development of advanced dead-time correction algorithms that correct for dead-time effects in all of the measured moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions made within the current analysis model, namely that all neutrons are created at a single point within the item of interest and that all neutrons produced within an item are created with the same energy distribution. This report discusses the current status of implementation and initial testing of the advanced dead-time correction and analysis algorithms developed to utilize higher-order moments and improve the capabilities of correlated neutron measurement techniques.
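
    The sketch below illustrates, under simplifying assumptions, how a multiplicity histogram and its reduced factorial moments (singles through pents) can be extracted from a pulse train. The triggered-gate logic and the toy burst model are illustrative only, no dead-time correction is applied, and this is not the analysis code described in the report.

```python
# Hedged sketch: multiplicity histogram and factorial moments from a toy pulse train.
import numpy as np
from math import comb

rng = np.random.default_rng(1)

# Toy pulse train: random fission bursts, each emitting a random number of
# correlated neutrons spread over a short die-away time.
burst_times = np.cumsum(rng.exponential(1e-3, 5000))           # s
pulses = np.concatenate([
    t + rng.exponential(50e-6, rng.poisson(2.0)) for t in burst_times])
pulses.sort()

gate = 64e-6                                                   # coincidence gate (s)
counts = []
for t in pulses:                                               # trigger on every pulse
    # neutrons observed in the gate that follows the trigger (trigger itself excluded)
    n = np.searchsorted(pulses, t + gate) - np.searchsorted(pulses, t) - 1
    counts.append(n)

p = np.bincount(counts) / len(counts)                          # multiplicity distribution

def factorial_moment(p, k):
    """k-th reduced factorial moment: sum_n C(n, k) p(n)."""
    return sum(comb(n, k) * pn for n, pn in enumerate(p))

for k, name in enumerate(["singles", "doubles", "triples", "quads", "pents"], start=1):
    print(f"{name}: {factorial_moment(p, k):.4f}")
```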

  20. Adaptive time stepping algorithm for Lagrangian transport models: Theory and idealised test cases

    NASA Astrophysics Data System (ADS)

    Shah, Syed Hyder Ali Muttaqi; Heemink, Arnold Willem; Gräwe, Ulf; Deleersnijder, Eric

    2013-08-01

    Random walk simulations have excellent potential in marine and oceanic modelling. This is essentially due to their relative simplicity and their ability to represent advective transport without being plagued by the deficiencies of Eulerian methods. The physical and mathematical foundations of random walk modelling of turbulent diffusion have become solid over the years. Random walk models rest on the theory of stochastic differential equations. Unfortunately, the latter and the related numerical aspects have not attracted much attention in the oceanic modelling community. The main goal of this paper is to help bridge the gap by developing an efficient adaptive time stepping algorithm for random walk models. Its performance is examined on two idealised test cases of turbulent dispersion: (i) pycnocline crossing and (ii) non-flat isopycnal diffusion, which are inspired by shallow-sea dynamics and large-scale ocean transport processes, respectively. The numerical results of the adaptive time stepping algorithm are compared with those of the fixed-time-increment Milstein scheme, showing that the adaptive time stepping algorithm for Lagrangian random walk models is more efficient than its fixed-step-size counterpart without any loss in accuracy.
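
    Below is a minimal sketch of a 1-D vertical random-walk step with variable diffusivity, comparing a fixed-step Milstein update with a simple adaptive step-size heuristic. The diffusivity profile and the step-size criterion are assumptions for illustration and are not the scheme derived in the paper.

```python
# Hedged sketch: fixed-step vs. adaptive-step random walk for dz = K'(z) dt + sqrt(2K(z)) dW.
import numpy as np

def K(z):            # pycnocline-like diffusivity profile (m^2/s), assumed
    return 1e-4 + 1e-2 * np.exp(-((z + 20.0) / 5.0) ** 2)

def dK(z, h=1e-3):   # numerical derivative of K
    return (K(z + h) - K(z - h)) / (2 * h)

def milstein_step(z, dt, rng):
    dW = rng.normal(0.0, np.sqrt(dt))
    # Milstein update: dz = K' dt + sqrt(2K) dW + 0.5 K' (dW^2 - dt)
    return z + dK(z) * dt + np.sqrt(2 * K(z)) * dW + 0.5 * dK(z) * (dW**2 - dt)

def adaptive_dt(z, dt_max=60.0, frac=0.1):
    # Heuristic (assumption, not the paper's criterion): keep the rms step below a
    # fraction of the local scale over which K varies, L = K/|K'|.
    L = K(z) / max(abs(dK(z)), 1e-12)
    return min(dt_max, (frac * L) ** 2 / (2 * K(z)))

rng = np.random.default_rng(2)
z_fixed, z_adapt = -30.0, -30.0
T = 6 * 3600.0
t = 0.0
while t < T:                                  # fixed-step trajectory
    z_fixed = milstein_step(z_fixed, 60.0, rng)
    t += 60.0
t = 0.0
while t < T:                                  # adaptive-step trajectory
    dt = adaptive_dt(z_adapt)
    z_adapt = milstein_step(z_adapt, dt, rng)
    t += dt
print(f"final depth, fixed step: {z_fixed:.2f} m, adaptive step: {z_adapt:.2f} m")
```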

  1. Preliminary test results of a flight management algorithm for fuel conservative descents in a time based metered traffic environment. [flight tests of an algorithm to minimize fuel consumption of aircraft based on flight time

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1979-01-01

    A flight management algorithm designed to improve the accuracy of delivering the airplane fuel-efficiently to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B-737 airplane to make an idle-thrust, clean-configured descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with consideration given to gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm and the results of the flight tests are discussed.

  2. Application of the HWVP measurement error model and feed test algorithms to pilot scale feed testing

    SciTech Connect

    Adams, T.L.

    1996-03-01

    The purpose of the feed preparation subsystem in the Hanford Waste Vitrification Plant (HWVP) is to provide for control of the properties of the slurry that is sent to the melter. The slurry properties are adjusted so that two classes of constraints are satisfied. Processability constraints guarantee that the process conditions required by the melter can be obtained. For example, there are processability constraints associated with electrical conductivity and viscosity. Acceptability constraints guarantee that the processed glass can be safely stored in a repository. An example of an acceptability constraint is the durability of the product glass. The primary control focus for satisfying both processability and acceptability constraints is the composition of the slurry. The primary mechanism for adjusting the composition of the slurry is mixing the waste slurry with frit of known composition. Spent frit from canister decontamination is also recycled by adding it to the melter feed. A number of processes in addition to mixing are used to condition the waste slurry prior to melting, including evaporation and the addition of formic acid. These processes also have an effect on the feed composition.
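
    The sketch below illustrates the basic composition-control idea: the oxide composition of the melter feed is a mass-weighted blend of the waste slurry and the added frit. The compositions and masses used are invented for illustration and are not HWVP data.

```python
# Hedged sketch: blended feed composition from waste slurry and frit (made-up numbers).
waste = {"SiO2": 0.10, "Na2O": 0.25, "Fe2O3": 0.30, "Al2O3": 0.35}   # mass fractions
frit  = {"SiO2": 0.70, "Na2O": 0.10, "B2O3": 0.20}

def blend(waste_kg, frit_kg):
    """Mass-weighted oxide composition of the combined melter feed."""
    total = waste_kg + frit_kg
    oxides = set(waste) | set(frit)
    return {ox: (waste.get(ox, 0.0) * waste_kg + frit.get(ox, 0.0) * frit_kg) / total
            for ox in oxides}

feed = blend(waste_kg=400.0, frit_kg=600.0)
for oxide, fraction in sorted(feed.items()):
    print(f"{oxide}: {fraction:.3f}")
```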

  3. A super-resolution algorithm for enhancement of flash lidar data: flight test results

    NASA Astrophysics Data System (ADS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2013-03-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: the Moon, Mars, and asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high-resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed from independent measurements to be used for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information; namely, the six-degree-of-freedom state vector of the instrument as a function of time was recovered from the super-resolution data. The comparisons show that the super-resolution method can construct high-quality DEMs and allows hazards such as rocks and craters to be identified in accordance with ALHAT requirements.

  4. A Super-Resolution Algorithm for Enhancement of FLASH LIDAR Data: Flight Test Results

    NASA Technical Reports Server (NTRS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2014-01-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: the Moon, Mars, and asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high-resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed from independent measurements to be used for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information; namely, the six-degree-of-freedom state vector of the instrument as a function of time was recovered from the super-resolution data. The comparisons show that the super-resolution method can construct high-quality DEMs and allows hazards such as rocks and craters to be identified in accordance with ALHAT requirements.

  5. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Title 40 (Protection of Environment), Section 1039.505: How do I test engines using steady-state duty cycles, including ramped-modal testing? Environmental Protection Agency, Air Pollution Controls, Control of Emissions from New and In-Use Nonroad Compression-Ignition Engines.

  6. 75 FR 11915 - Chrysler, LLC, Sterling Heights Vehicle Test Center, Including On-Site Leased Workers From...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-12

    Employment and Training Administration: Chrysler, LLC, Sterling Heights Vehicle Test Center, including on-site leased workers from Caravan Knight Facilities Management LLC, Sterling Heights, Michigan (amended certification). The original notice was published in the Federal Register on July 14, 2009 (74 FR 34038).

  7. Benchmarks for testing community detection algorithms on directed and weighted graphs with overlapping communities

    NASA Astrophysics Data System (ADS)

    Lancichinetti, Andrea; Fortunato, Santo

    2009-07-01

    Many complex networks display a mesoscopic structure with groups of nodes sharing many links with the other nodes in their group and comparatively few with nodes of different groups. This feature is known as community structure and encodes precious information about the organization and function of the nodes. Many algorithms have been proposed, but it is not yet clear how they should be tested. Recently we proposed a general class of undirected and unweighted benchmark graphs with heterogeneous distributions of node degree and community size. Increasing attention has recently been devoted to developing algorithms able to take into account the direction and weight of the links, and these require suitable benchmark graphs for testing. In this paper we extend the basic ideas behind our previous benchmark to generate directed and weighted networks with built-in community structure. We also consider the possibility that nodes belong to more than one community, a feature occurring in real systems such as social networks. As a practical application, we show how modularity optimization performs on our benchmark.
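
    The sketch below shows the general benchmarking workflow: generate a graph with a planted community structure, run a detection algorithm, and score it against the ground truth. It uses the undirected, unweighted, non-overlapping LFR generator shipped with networkx (the authors' earlier benchmark); the directed, weighted, and overlapping extensions described in the paper are not reproduced, and the parameter values are arbitrary.

```python
# Hedged sketch: score modularity optimization against a planted LFR partition.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.metrics import normalized_mutual_info_score

G = nx.LFR_benchmark_graph(n=250, tau1=3, tau2=1.5, mu=0.1,
                           average_degree=5, min_community=20, seed=10)

# Planted partition: each node stores the set of nodes in its community.
truth = {node: min(G.nodes[node]["community"]) for node in G}   # label by smallest member

found = {}
for label, community in enumerate(greedy_modularity_communities(G)):
    for node in community:
        found[node] = label

nodes = sorted(G)
nmi = normalized_mutual_info_score([truth[n] for n in nodes],
                                   [found[n] for n in nodes])
print(f"normalized mutual information vs planted partition: {nmi:.3f}")
```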

  8. The Cyborg Astrobiologist: testing a novelty detection algorithm on two mobile exploration systems at Rivas Vaciamadrid in Spain and at the Mars Desert Research Station in Utah

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Gross, C.; Wendt, L.; Bonnici, A.; Souza-Egipsy, V.; Ormö, J.; Díaz-Martínez, E.; Foing, B. H.; Bose, R.; Walter, S.; Oesker, M.; Ontrup, J.; Haschke, R.; Ritter, H.

    2010-01-01

    In previous work, a platform was developed for testing computer-vision algorithms for robotic planetary exploration. This platform consisted of a digital video camera connected to a wearable computer for real-time processing of images at geological and astrobiological field sites. The real-time processing included image segmentation and the generation of interest points based upon uncommonness in the segmentation maps. Also in previous work, this platform for testing computer-vision algorithms has been ported to a more ergonomic alternative platform, consisting of a phone camera connected via the Global System for Mobile Communications (GSM) network to a remote-server computer. The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone camera has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon colour, (ii) integrate a field-capable digital microscope on the wearable computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform, in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. We acquired sequences of images of geologic outcrops in Utah and Spain consisting of various rock types and colours to test this algorithm. The algorithm robustly recognized previously observed units by their colour, while requiring only a single image or a few images to

  9. A procedure for testing the quality of LANDSAT atmospheric correction algorithms

    NASA Technical Reports Server (NTRS)

    Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.

    1982-01-01

    There are two basic methods for testing the quality of an algorithm that minimizes atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; or (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. To select the parameters, the image contrast is first examined for a series of parameter combinations; the contrast improves for better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably over time. A few examples using the proposed procedure are presented.
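
    A minimal sketch of the two indicators follows, using synthetic arrays in place of LANDSAT bands: a relative-contrast measure of the corrected image and the correlation coefficient between co-registered subimages acquired at different dates. The haze model and the correction parameter are illustrative assumptions.

```python
# Hedged sketch: contrast and inter-date correlation as correction-quality indicators.
import numpy as np

rng = np.random.default_rng(3)
scene = rng.uniform(0.0, 1.0, (200, 200))                         # "true" surface reflectance
haze = np.linspace(0.0, 0.3, 200)[None, :] * np.ones((200, 1))    # spatially varying path radiance
date1 = scene + haze + rng.normal(0, 0.01, scene.shape)           # acquisition to be corrected
date2 = scene + rng.normal(0, 0.01, scene.shape)                  # clear reference acquisition

def contrast(img):
    """Relative contrast: standard deviation over mean of the digital numbers."""
    return float(np.std(img) / np.mean(img))

def correlation(a, b):
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

# Candidate corrections: remove a fraction k of the estimated haze field.
for k in (0.0, 0.5, 1.0):
    corrected = date1 - k * haze
    print(f"k={k:.1f}: contrast {contrast(corrected):.3f}, "
          f"correlation with other date {correlation(corrected, date2):.3f}")
```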

  10. The COST-HOME monthly benchmark dataset with temperature and precipitation data for testing homogenisation algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.

    2009-04-01

    noise. The idealised dataset is valuable because its statistical characteristics are assumed in most homogenisation algorithms, and Gaussian white noise is the signal most often used for testing the algorithms. The surrogate and synthetic data represent homogeneous climate data. To these data, known inhomogeneities are added: outliers, as well as break inhomogeneities and local trends. Furthermore, missing data are simulated and a global trend is added. Every scientist working on homogenisation is invited to join this intercomparison. For more information on the COST Action on homogenisation see http://www.homogenisation.org/; for more information on, and for downloading, the benchmark dataset see http://www.meteo.uni-bonn.de/venema/themes/homogenisation/
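
    The sketch below shows, under arbitrary assumptions about magnitudes and dates, how a homogeneous monthly series can be turned into an inhomogeneous benchmark series of this kind by superimposing break inhomogeneities, a local trend, outliers, missing values, and a global trend. It is not the COST-HOME generation procedure itself.

```python
# Hedged sketch: superimpose benchmark-style inhomogeneities on a homogeneous series.
import numpy as np

rng = np.random.default_rng(4)
n_months = 100 * 12
homogeneous = rng.normal(0.0, 1.0, n_months)        # stand-in for a homogeneous anomaly series

series = homogeneous.copy()
series += np.linspace(0.0, 1.0, n_months)           # global (climate) trend over the century

for break_month in sorted(rng.choice(n_months, 5, replace=False)):
    series[break_month:] += rng.normal(0.0, 0.8)    # break inhomogeneity (relocation, new sensor)

start = rng.integers(0, n_months - 120)
series[start:start + 120] += np.linspace(0.0, 1.5, 120)   # local trend (e.g. growing urbanization)

outliers = rng.choice(n_months, 20, replace=False)
series[outliers] += rng.normal(0.0, 5.0, outliers.size)   # isolated outliers

series[rng.choice(n_months, int(0.05 * n_months), replace=False)] = np.nan  # missing data

print(f"months: {n_months}, missing: {int(np.isnan(series).sum())}, "
      f"std before/after: {homogeneous.std():.2f}/{np.nanstd(series):.2f}")
```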

  11. Flight test results of a vector-based failure detection and isolation algorithm for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Bailey, M. L.; Motyka, P. R.

    1988-01-01

    Flight test results of a vector-based fault-tolerant algorithm for a redundant strapdown inertial measurement unit are presented. Because the inertial sensors provide flight-critical information for flight control and navigation, failure detection and isolation is developed in terms of a multi-level structure. Threshold compensation techniques for gyros and accelerometers, developed to enhance the sensitivity of the failure detection process to low-level failures, are presented. Four flight tests, conducted in a commercial-transport-type environment, were used to determine the ability of the failure detection and isolation algorithm to detect failure signals such as hard-over, null, or bias-shift failures. The algorithm provided timely detection and correct isolation of flight-control-level and low-level failures. The flight tests of the vector-based algorithm demonstrated its capability to provide false-alarm-free, dual fail-operational performance for the skewed array of inertial sensors.

  12. Evaluation and Comparison of Multiple Test Methods, Including Real-time PCR, for Legionella Detection in Clinical Specimens

    PubMed Central

    Peci, Adriana; Winter, Anne-Luise; Gubbay, Jonathan B.

    2016-01-01

    Legionella is a Gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires' disease, a more severe illness. We aimed to compare the performance of urine antigen, culture, and polymerase chain reaction (PCR) test methods and to determine whether sputum is an acceptable alternative to the use of the more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at Public Health Ontario Laboratories from 1 January 2010 to 30 April 2014 as part of routine clinical testing. We found the sensitivity of the urinary antigen test (UAT) compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8%, and negative predictive value (NPV) 98.5%. Sensitivity of UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7%, and NPV 98.1%. Of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Both sputum and BAL yielded similar results regardless of testing method (Fisher exact p-values = 1.0 for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given the ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine when BAL submission is not practical for patients being tested for Legionella.

  13. DATA SUMMARY REPORT SMALL SCALE MELTER TESTING OF HLW ALGORITHM GLASSES MATRIX1 TESTS VSL-07S1220-1 REV 0 7/25/07

    SciTech Connect

    KRUGER AA; MATLACK KS; PEGG IL

    2011-12-29

    Eight tests using different HLW feeds were conducted on the DM100-BL to determine the effect of variations in glass properties and feed composition on processing rates and melter conditions (off-gas characteristics, glass processing, foaming, cold cap, etc.) at constant bubbling rate. In over seven hundred hours of testing, the property extremes of glass viscosity, electrical conductivity, and T1%, as well as minimum and maximum concentrations of several major and minor glass components were evaluated using glass compositions that have been tested previously at the crucible scale. Other parameters evaluated with respect to glass processing properties were ±15% batching errors in the addition of glass forming chemicals (GFCs) to the feed, and variation in the sources of boron and sodium used in the GFCs. Tests evaluating batching errors and GFC source employed variations on the HLW98-86 formulation (a glass composition formulated for HLW C-106/AY-102 waste and processed in several previous melter tests) in order to best isolate the effect of each test variable. These tests are outlined in a Test Plan that was prepared in response to the Test Specification for this work. The present report provides summary level data for all of the tests in the first test matrix (Matrix 1) in the Test Plan. Summary results from the remaining tests, investigating minimum and maximum concentrations of major and minor glass components employing variations on the HLW98-86 formulation and glasses generated by the HLW glass formulation algorithm, will be reported separately after those tests are completed. The test data summarized herein include glass production rates, the type and amount of feed used, a variety of measured melter parameters including temperatures and electrode power, feed sample analysis, measured glass properties, and gaseous emissions rates. More detailed information and analysis from the melter tests with complete emission chemistry, glass durability, and

  14. An E-M algorithm and testing strategy for multiple-locus haplotypes

    SciTech Connect

    Long, J.C.; Williams, R.C.; Urbanek, M.

    1995-03-01

    This paper gives an expectation maximization (EM) algorithm to obtain allele frequencies, haplotype frequencies, and gametic disequilibrium coefficients for multiple-locus systems. It permits high polymorphism and null alleles at all loci. This approach effectively deals with the primary estimation problems associated with such systems; that is, there is not a one-to-one correspondence between phenotypic and genotypic categories, and sample sizes tend to be much smaller than the number of phenotypic categories. The EM method provides maximum-likelihood estimates and therefore allows hypothesis tests using likelihood ratio statistics that have chi-square distributions with large sample sizes. We also suggest a data resampling approach to estimate test statistic sampling distributions. The resampling approach is more computer intensive, but it is applicable to all sample sizes. A strategy to test hypotheses about aggregate groups of gametic disequilibrium coefficients is recommended. This strategy minimizes the number of necessary hypothesis tests while at the same time describing the structure of disequilibrium. These methods are applied to three unlinked dinucleotide repeat loci in Navajo Indians and to three linked HLA loci in Gila River (Pima) Indians. The likelihood functions of both data sets are shown to be maximized by the EM estimates, and the testing strategy provides a useful description of the structure of gametic disequilibrium. Following these applications, a number of simulation experiments are performed to test how well the likelihood-ratio statistic distributions are approximated by chi-square distributions. In most circumstances the chi-square distributions grossly underestimated the probability of type I errors; however, at times they also overestimated the type I error probability. Accordingly, we recommend hypothesis tests that use the resampling method. 41 refs., 3 figs., 6 tabs.

  15. An E-M algorithm and testing strategy for multiple-locus haplotypes.

    PubMed Central

    Long, J C; Williams, R C; Urbanek, M

    1995-01-01

    This paper gives an expectation maximization (EM) algorithm to obtain allele frequencies, haplotype frequencies, and gametic disequilibrium coefficients for multiple-locus systems. It permits high polymorphism and null alleles at all loci. This approach effectively deals with the primary estimation problems associated with such systems; that is, there is not a one-to-one correspondence between phenotypic and genotypic categories, and sample sizes tend to be much smaller than the number of phenotypic categories. The EM method provides maximum-likelihood estimates and therefore allows hypothesis tests using likelihood ratio statistics that have chi-square distributions with large sample sizes. We also suggest a data resampling approach to estimate test statistic sampling distributions. The resampling approach is more computer intensive, but it is applicable to all sample sizes. A strategy to test hypotheses about aggregate groups of gametic disequilibrium coefficients is recommended. This strategy minimizes the number of necessary hypothesis tests while at the same time describing the structure of disequilibrium. These methods are applied to three unlinked dinucleotide repeat loci in Navajo Indians and to three linked HLA loci in Gila River (Pima) Indians. The likelihood functions of both data sets are shown to be maximized by the EM estimates, and the testing strategy provides a useful description of the structure of gametic disequilibrium. Following these applications, a number of simulation experiments are performed to test how well the likelihood-ratio statistic distributions are approximated by chi-square distributions. In most circumstances the chi-square distributions grossly underestimated the probability of type I errors; however, at times they also overestimated the type I error probability. Accordingly, we recommend hypothesis tests that use the resampling method. PMID:7887436
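
    As a compact illustration of the E-step/M-step structure, the sketch below estimates haplotype frequencies for two biallelic loci, where only the double heterozygote has ambiguous phase. It is a textbook-style simplification with made-up genotype counts, not the multi-locus, null-allele-capable algorithm of the paper.

```python
# Hedged sketch: two-locus haplotype-frequency EM with made-up genotype counts.
import numpy as np

# Observed genotype counts n[i][j]: i = copies of allele A, j = copies of allele B.
n = np.array([[10, 20,  5],
              [15, 40, 12],
              [ 4, 18,  9]], dtype=float)
N = n.sum()

# Haplotype frequencies for AB, Ab, aB, ab, started at linkage equilibrium.
p = np.array([0.25, 0.25, 0.25, 0.25])

for _ in range(100):
    # E-step: expected split of the double heterozygote into AB/ab vs Ab/aB phases.
    pAB, pAb, paB, pab = p
    denom = pAB * pab + pAb * paB
    dh_AB_ab = n[1, 1] * (pAB * pab) / denom     # expected AaBb individuals carrying AB & ab
    dh_Ab_aB = n[1, 1] - dh_AB_ab
    # M-step: count haplotypes (each individual carries two).
    cAB = 2 * n[2, 2] + n[2, 1] + n[1, 2] + dh_AB_ab
    cAb = 2 * n[2, 0] + n[2, 1] + n[1, 0] + dh_Ab_aB
    caB = 2 * n[0, 2] + n[1, 2] + n[0, 1] + dh_Ab_aB
    cab = 2 * n[0, 0] + n[1, 0] + n[0, 1] + dh_AB_ab
    p = np.array([cAB, cAb, caB, cab]) / (2 * N)

D = p[0] * p[3] - p[1] * p[2]                    # gametic disequilibrium coefficient
print("haplotype frequencies AB, Ab, aB, ab:", np.round(p, 4), " D =", round(D, 4))
```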

  16. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASAs Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM

  17. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    Terrestrial laser scanning (TLS) is becoming a common tool in the geosciences, with clear applications ranging from the generation of high-resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, different critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle, and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied in point cloud data treatment, from alignment to monitoring. To this end, we built in the MATLAB(c) environment a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error of a single laser pulse by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the shifting and angular error effects. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and varying viewpoint on the Iterative Closest Point (ICP) alignment and also on deformation tracking algorithms with the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high-resolution point clouds in order to model small changes in different environments
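
    The sketch below shows the kind of single-point error model such a simulator can apply: range noise that grows with range and incidence angle, plus a small beam-pointing perturbation. The functional forms and magnitudes are illustrative assumptions, not the calibrated error model used in the study.

```python
# Hedged sketch: range- and incidence-angle-dependent noise applied to ideal surface points.
import numpy as np

rng = np.random.default_rng(5)

def simulate_scan(points, scanner=np.zeros(3), sigma0=0.002, k_range=1e-5, k_inc=0.004,
                  sigma_ang=np.radians(0.01)):
    """Perturb ideal surface points as a scanner located at `scanner` would observe them."""
    noisy = []
    for p, normal in points:
        v = p - scanner
        r = np.linalg.norm(v)
        u = v / r
        # incidence angle between the beam and the surface normal
        inc = np.arccos(np.clip(abs(np.dot(u, normal)), 0.0, 1.0))
        sigma_r = sigma0 + k_range * r + k_inc * np.tan(inc)       # assumed range-noise model
        r_noisy = r + rng.normal(0.0, sigma_r)
        # small random perturbation of the beam direction (angular encoder error)
        du = rng.normal(0.0, sigma_ang, 3)
        u_noisy = u + np.cross(du, u)
        u_noisy /= np.linalg.norm(u_noisy)
        noisy.append(scanner + r_noisy * u_noisy)
    return np.array(noisy)

# Flat wall 50 m away, seen at increasing incidence angles.
wall = [(np.array([50.0, y, 0.0]), np.array([1.0, 0.0, 0.0])) for y in np.linspace(0, 40, 200)]
cloud = simulate_scan(wall)
true_ranges = np.array([np.linalg.norm(p) for p, _ in wall])
print("mean absolute range error (m):",
      np.mean(np.abs(np.linalg.norm(cloud, axis=1) - true_ranges)))
```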

  18. Doppler Imaging with a Clean-Like Approach - Part One - a Newly Developed Algorithm Simulations and Tests

    NASA Astrophysics Data System (ADS)

    Kurster, M.

    1993-07-01

    A newly developed method for the Doppler imaging of star spot distributions on active late-type stars is presented. It comprises an algorithm particularly adapted to the (discrete) Doppler imaging problem (including eclipses) and is very efficient in determining the positions and shapes of star spots. A variety of tests demonstrates the capabilities as well as the limitations of the method by investigating the effects that uncertainties in various stellar parameters have on the image reconstruction. Any systematic errors within the reconstructed image are found to be a result of the ill-posed nature of the Doppler imaging problem and not a consequence of the adopted approach. The largest uncertainties are found with respect to the dynamical range of the image (brightness or temperature contrast). This kind of uncertainty is of little effect for studies of star spot migrations with the objectives of determining differential rotation and butterfly diagrams for late-type stars.

  19. Subsonic flight test evaluation of a performance seeking control algorithm on an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B.; Orme, John S.

    1992-01-01

    The subsonic flight test evaluation phase of the NASA F-15 (powered by F100 engines) performance seeking control program was completed for single-engine operation at part- and military-power settings. The subsonic performance seeking control algorithm optimizes the quasi-steady-state performance of the propulsion system for three modes of operation: the minimum fuel flow mode minimizes fuel consumption, the minimum fan turbine inlet temperature mode reduces turbine operating temperature, and the maximum thrust mode maximizes thrust at military power. Decreases in thrust-specific fuel consumption of 1 to 2 percent were measured in the minimum fuel flow mode; these fuel savings are significant, especially for supersonic cruise aircraft. Decreases of up to approximately 100 °R in fan turbine inlet temperature were measured in the minimum temperature mode. Temperature reductions of this magnitude would more than double turbine life if inlet temperature were the only life factor. Measured thrust increases of up to approximately 15 percent in the maximum thrust mode cause substantial increases in aircraft acceleration. The system dynamics of the closed-loop algorithm operation were good. The subsonic flight phase has validated the performance seeking control technology, which can significantly benefit the next generation of fighter and transport aircraft.

  20. ZEUS-2D: A Radiation Magnetohydrodynamics Code for Astrophysical Flows in Two Space Dimensions. II. The Magnetohydrodynamic Algorithms and Tests

    NASA Astrophysics Data System (ADS)

    Stone, James M.; Norman, Michael L.

    1992-06-01

    In this, the second of a series of three papers, we continue a detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows in astrophysics including a self-consistent treatment of the effects of magnetic fields and radiation transfer. In this paper, we give a detailed description of the magnetohydrodynamical (MHD) algorithms in ZEUS-2D. The recently developed constrained transport (CT) algorithm is implemented for the numerical evolution of the components of the magnetic field for MHD simulations. This formalism guarantees the numerically evolved field components will satisfy the divergence-free constraint at all times. We find, however, that the method used to compute the electromotive forces must be chosen carefully to propagate accurately all modes of MHD wave families (in particular shear Alfvén waves). A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-CT method provides for the accurate evolution of all modes of MHD wave families.
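
    A minimal sketch of the constrained-transport bookkeeping follows: face-centred field components are updated from corner electromotive forces, which preserves the discrete divergence of B to machine precision. The EMF values are random placeholders; the method-of-characteristics evaluation used in ZEUS-2D is not reproduced.

```python
# Hedged sketch: 2D constrained-transport update keeps the discrete div(B) at machine zero.
import numpy as np

nx, ny, dx, dy, dt = 32, 32, 1.0, 1.0, 0.1
rng = np.random.default_rng(6)

# Start from a divergence-free field by deriving B from a vector potential Az at corners.
az = rng.normal(size=(nx + 1, ny + 1))
bx = (az[:, 1:] - az[:, :-1]) / dy          # Bx on x-faces, shape (nx+1, ny)
by = -(az[1:, :] - az[:-1, :]) / dx         # By on y-faces, shape (nx, ny+1)

def divergence(bx, by):
    return (bx[1:, :] - bx[:-1, :]) / dx + (by[:, 1:] - by[:, :-1]) / dy

emf = rng.normal(size=(nx + 1, ny + 1))     # Ez at cell corners (placeholder values)

# CT update: dBx/dt = -dEz/dy on x-faces, dBy/dt = +dEz/dx on y-faces.
bx_new = bx - dt * (emf[:, 1:] - emf[:, :-1]) / dy
by_new = by + dt * (emf[1:, :] - emf[:-1, :]) / dx

print("max |div B| before:", np.abs(divergence(bx, by)).max())
print("max |div B| after :", np.abs(divergence(bx_new, by_new)).max())
```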

  1. Algorithms for testing of fractional dynamics: a practical guide to ARFIMA modelling

    NASA Astrophysics Data System (ADS)

    Burnecki, Krzysztof; Weron, Aleksander

    2014-10-01

    In this survey paper we present a systematic methodology which demonstrates how to identify the origins of fractional dynamics. We consider three mechanisms which lead to it, namely fractional Brownian motion, fractional Lévy stable motion and an autoregressive fractionally integrated moving average (ARFIMA) process but we concentrate on the ARFIMA modelling. The methodology is based on statistical tools for identification and validation of the fractional dynamics, in particular on an ARFIMA parameter estimator, an ergodicity test, a self-similarity index estimator based on sample p-variation and a memory parameter estimator based on sample mean-squared displacement. A complete list of algorithms needed for this is provided in appendices A-F. Finally, we illustrate the methodology on various empirical data and show that ARFIMA can be considered as a universal model for fractional dynamics. Thus, we provide a practical guide for experimentalists on how to efficiently use ARFIMA modelling for a large class of anomalous diffusion data.
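
    As a small, hedged illustration of this kind of workflow, the sketch below simulates an ARFIMA(0, d, 0) series by truncated fractional integration of Gaussian noise and recovers the memory parameter from the sample mean-squared displacement of the partial sums. The truncation length and fitting range are arbitrary choices, and this is not the estimator suite given in the paper's appendices.

```python
# Hedged sketch: simulate ARFIMA(0, d, 0) and estimate d from the sample MSD of partial sums.
import numpy as np
from scipy.special import gammaln

def arfima_0d0(n, d, rng, trunc=2000):
    """x_t = sum_k psi_k e_{t-k}, with psi_k = Gamma(k+d) / (Gamma(d) Gamma(k+1))."""
    k = np.arange(trunc)
    psi = np.exp(gammaln(k + d) - gammaln(d) - gammaln(k + 1))
    e = rng.normal(size=n + trunc)
    return np.convolve(e, psi, mode="full")[trunc:trunc + n]

rng = np.random.default_rng(7)
d_true = 0.3
x = arfima_0d0(20000, d_true, rng)

# Sample MSD of the partial-sum process: MSD(lag) ~ lag^(2H) with H = d + 1/2.
walk = np.cumsum(x)
lags = np.unique(np.logspace(0.5, 3, 20).astype(int))
msd = [np.mean((walk[lag:] - walk[:-lag]) ** 2) for lag in lags]
H = np.polyfit(np.log(lags), np.log(msd), 1)[0] / 2.0
print(f"true d = {d_true}, estimated d = {H - 0.5:.3f}")
```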

  2. Rainfall estimation from soil moisture data: crash test for SM2RAIN algorithm

    NASA Astrophysics Data System (ADS)

    Brocca, Luca; Albergel, Clement; Massari, Christian; Ciabatta, Luca; Moramarco, Tommaso; de Rosnay, Patricia

    2015-04-01

    Soil moisture governs the partitioning of mass and energy fluxes between the land surface and the atmosphere and, hence, it represents a key variable for many applications in hydrology and earth science. In recent years, it was demonstrated that soil moisture observations from ground and satellite sensors contain important information useful for improving rainfall estimation. Indeed, soil moisture data have been used for correcting rainfall estimates from state-of-the-art satellite sensors (e.g. Crow et al., 2011), and also for improving flood prediction through a dual data assimilation approach (e.g. Massari et al., 2014; Chen et al., 2014). Brocca et al. (2013; 2014) developed a simple algorithm, called SM2RAIN, which allows estimating rainfall directly from soil moisture data. SM2RAIN has been applied successfully to in situ and satellite observations. Specifically, by using three satellite soil moisture products, from ASCAT (Advanced SCATterometer), AMSR-E (Advanced Microwave Scanning Radiometer for Earth Observation) and SMOS (Soil Moisture and Ocean Salinity), it was found that the SM2RAIN-derived rainfall products are as accurate as state-of-the-art products, e.g., the real-time version of the TRMM (Tropical Rainfall Measuring Mission) product. Notwithstanding these promising results, a detailed study investigating the physical basis of the SM2RAIN algorithm, its range of applicability and its limitations on a global scale has still to be carried out. In this study, we carried out a crash test for the SM2RAIN algorithm on a global scale by performing a synthetic experiment. Specifically, modelled soil moisture data are obtained from the HTESSEL model (Hydrology Tiled ECMWF Scheme for Surface Exchanges over Land) forced by ERA-Interim near-surface meteorology. Afterwards, the modelled soil moisture data are used as input to the SM2RAIN algorithm to test whether or not the resulting rainfall estimates are able to reproduce ERA-Interim rainfall data. Correlation, root
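
    The sketch below implements a commonly cited form of the SM2RAIN water-balance inversion, p(t) ≈ Z* ds/dt + a s^b (evaporation and runoff neglected during rainfall), on a synthetic soil-moisture series. The parameter values are illustrative assumptions; in practice they are calibrated against a reference rainfall product.

```python
# Hedged sketch: invert a synthetic soil-moisture series for rainfall, SM2RAIN-style.
import numpy as np

def sm2rain(s, dt=1.0, Z=80.0, a=5.0, b=3.0):
    """Estimate rainfall (mm per time step) from relative saturation s(t) in [0, 1]."""
    ds_dt = np.gradient(s, dt)                 # 1/h
    p = Z * ds_dt + a * s ** b                 # mm/h: storage change + drainage loss
    return np.clip(p * dt, 0.0, None)          # rainfall cannot be negative

# Synthetic "truth": sparse rain pulses drive a simple bucket model forward.
rng = np.random.default_rng(8)
hours = 500
rain_true = np.where(rng.random(hours) < 0.05, rng.exponential(4.0, hours), 0.0)  # mm/h
s = np.empty(hours)
s[0] = 0.3
for t in range(1, hours):
    ds = (rain_true[t] - 5.0 * s[t - 1] ** 3) / 80.0          # same Z, a, b as above
    s[t] = np.clip(s[t - 1] + ds, 0.0, 1.0)

rain_est = sm2rain(s)
print("correlation(true, estimated):", round(np.corrcoef(rain_true, rain_est)[0, 1], 3))
```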

  3. Near real-time expectation-maximization algorithm: computational performance and passive millimeter-wave imaging field test results

    NASA Astrophysics Data System (ADS)

    Reynolds, William R.; Talcott, Denise; Hilgers, John W.

    2002-07-01

    A new iterative algorithm (EMLS), derived via the expectation maximization method, is presented for extrapolating a non-negative object function from noisy, diffraction-blurred image data. The algorithm has the following desirable attributes: fast convergence is attained for high-frequency object components, sensitivity to constraint parameters is reduced, and randomly missing data are accommodated. Speed and convergence results are presented. Field test imagery was obtained with a passive millimeter-wave imaging sensor having a 30.5 cm aperture. The algorithm was implemented and tested in near real time using the field test imagery. Theoretical results and experimental results using the field test imagery will be compared using an effective-aperture measure of resolution increase. The effective-aperture measure, based on examination of the edge-spread function, will be detailed.
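
    For context, the sketch below shows the classical EM-type (Richardson-Lucy) iteration for non-negative image restoration under a known blur, the family to which EMLS belongs. It is not the authors' EMLS variant, and the Gaussian kernel is an arbitrary stand-in for the sensor's diffraction-limited point-spread function.

```python
# Hedged sketch: Richardson-Lucy (EM) deconvolution of a synthetic blurred image.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    estimate = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, eps)
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
        estimate = np.clip(estimate, 0.0, None)        # enforce non-negativity
    return estimate

# Synthetic test: a point-like scene blurred by a Gaussian PSF plus noise.
rng = np.random.default_rng(9)
scene = np.zeros((64, 64)); scene[20, 20] = scene[40, 45] = 1.0
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2)); psf /= psf.sum()
observed = fftconvolve(scene, psf, mode="same") + rng.normal(0, 1e-3, scene.shape)

restored = richardson_lucy(np.clip(observed, 0, None), psf)
print("brightest restored pixel:", np.unravel_index(np.argmax(restored), restored.shape))
```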

  4. Development and Implementation of Image-based Algorithms for Measurement of Deformations in Material Testing

    PubMed Central

    Barazzetti, Luigi; Scaioni, Marco

    2010-01-01

    This paper presents the development and implementation of three image-based methods used to detect and measure the displacements of a vast number of points in the case of laboratory testing on construction materials. Starting from the needs of structural engineers, three ad hoc tools for crack measurement in fibre-reinforced specimens and 2D or 3D deformation analysis through digital images were implemented and tested. These tools make use of advanced image processing algorithms and can integrate or even substitute for some traditional sensors employed today in most laboratories. In addition, the automation provided by the implemented software, the limited cost of the instruments and the possibility of operating with an indefinite number of points offer new and more extensive analyses in the field of material testing. Several comparisons with other traditional sensors widely adopted in most laboratories were carried out in order to demonstrate the accuracy of the implemented software. Implementation details, simulations and real applications are reported and discussed in this paper. PMID:22163612
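
    A generic building block of such image-based displacement measurement is sketched below: normalized cross-correlation template matching that locates a reference patch in the deformed image and reports its integer-pixel shift. It is a simplified stand-in, not the tools implemented by the authors, and the speckle image and patch sizes are arbitrary.

```python
# Hedged sketch: integer-pixel displacement by normalized cross-correlation template matching.
import numpy as np

def ncc_displacement(ref, cur, center, half=8, search=12):
    """Integer-pixel displacement of the patch at `center` between ref and cur."""
    cy, cx = center
    tpl = ref[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-12)
    best, best_dv = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[cy + dy - half:cy + dy + half + 1,
                      cx + dx - half:cx + dx + half + 1].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-12)
            score = np.mean(tpl * win)
            if score > best:
                best, best_dv = score, (dy, dx)
    return best_dv

# Synthetic test: a speckle pattern shifted by a known amount.
rng = np.random.default_rng(10)
ref = rng.random((200, 200))
cur = np.roll(ref, (3, -2), axis=(0, 1))            # known shift of 3 rows, -2 columns
print("recovered displacement:", ncc_displacement(ref, cur, center=(100, 100)))
```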

  5. Acceleration of degradation by highly accelerated stress test and air-included highly accelerated stress test in crystalline silicon photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Suzuki, Soh; Tanahashi, Tadanori; Doi, Takuya; Masuda, Atsushi

    2016-02-01

    We examined the effects of hyper-hygrothermal stresses with or without air on the degradation of crystalline silicon (c-Si) photovoltaic (PV) modules, to shorten the required duration of a conventional hygrothermal-stress test [i.e., the “damp heat (DH) stress test”, which is conducted at 85 °C/85% relative humidity for 1,000 h]. Interestingly, the encapsulant within a PV module becomes discolored under the air-included hygrothermal conditions achieved using DH stress test equipment and an air-included highly accelerated stress test (air-HAST) apparatus, but not under the air-excluded hygrothermal conditions realized using a highly accelerated stress test (HAST) machine. In contrast, the reduction in the output power of the PV module is accelerated irrespective of air inclusion in hyper-hygrothermal test atmosphere. From these findings, we conclude that the required duration of the DH stress test will at least be significantly shortened using air-HAST, but not HAST.

  6. Experimental testing of four correction algorithms for the forward scattering spectrometer probe

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.

    1992-01-01

    Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The algorithm that corrects sizing errors in the FSSP that was developed by Lock and Hovenac (1989) was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.

  7. Testing a real-time algorithm for the detection of tsunami signals on sea-level records

    NASA Astrophysics Data System (ADS)

    Bressan, L.; Tinti, S.; Titov, V.

    2009-04-01

    One of the important tasks for the implementation of a tsunami warning system in the Mediterranean Sea is to develop a real-time detection algorithm. Unlike in the Mediterranean Sea, tsunamis happen quite often in the Pacific Ocean and have historically been recorded with an adequate sampling rate, so a large database of tsunami records is available for the Pacific. The Tsunami Research Team of the University of Bologna is developing a real-time detection algorithm on synthetic records. Thanks to the collaboration with NCTR of PMEL/NOAA (NOAA Center for Tsunami Research of the Pacific Marine Environmental Laboratory/National Oceanic and Atmospheric Administration), it has been possible to test this algorithm on specific events recorded by the Adak Island tide-gage in Alaska and by DART buoys located offshore of Alaska. This work has been undertaken in the framework of the Italian national project DPC-INGV S3. The goal of the detection algorithm is to discriminate the first tsunami wave from the preceding background signal. In brief, the algorithm is built on a parameter based on the standard deviation of the signal computed over a short time window, which is compared with a predicted background value through a control function. The control function indicates a tsunami detection whenever it exceeds a certain threshold. The algorithm was calibrated and tested both on coastal tide-gages and on offshore buoys that measure sea-level changes. Its calibration presents different issues depending on whether the algorithm is implemented on an offshore buoy or on a coastal tide-gage. In particular, the algorithm parameters are site-specific for coastal sea-level signals, because sea-level changes are here mainly characterized by oscillations induced by the coastal topography. The Adak Island background signal was analyzed and the algorithm parameters were set: it was found that there is a persistent presence of seiches with periods in the tsunami range, to which the algorithm is also
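
    A minimal sketch in the same spirit follows: the short-window standard deviation of the record is divided by a longer-window estimate of the background standard deviation, and a detection is declared when this control function exceeds a threshold. The window lengths, threshold, and synthetic record are assumptions; the operational parameters are site-specific and calibrated on real records.

```python
# Hedged sketch: threshold detection on a ratio of short-window to background variability.
import numpy as np

def control_function(signal, short=60, long_=3600):
    """Ratio of short-window std to a long-window estimate of the background std."""
    cf = np.zeros_like(signal)
    for t in range(long_ + short, len(signal)):
        sigma_now = np.std(signal[t - short:t])
        sigma_bg = np.std(signal[t - long_ - short:t - short])
        cf[t] = sigma_now / (sigma_bg + 1e-9)
    return cf

# Synthetic record sampled at 1 Hz: background noise and seiches plus a "tsunami" onset.
rng = np.random.default_rng(11)
n = 6 * 3600
t = np.arange(n)
sea = 0.02 * np.sin(2 * np.pi * t / 900.0) + rng.normal(0, 0.01, n)
sea[4 * 3600:] += 0.5 * np.sin(2 * np.pi * (t[4 * 3600:] - 4 * 3600) / 600.0)

cf = control_function(sea)
threshold = 3.0
detections = np.where(cf > threshold)[0]
print("first detection at t =", int(detections[0]) if detections.size else None, "s")
```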

  8. Tests of a Semi-Analytical Case 1 and Gelbstoff Case 2 SeaWiFS Algorithm with a Global Data Set

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.; Hawes, Steve K.; Lee, Zhongping

    1997-01-01

    A semi-analytical algorithm was tested with a total of 733 points of either unpackaged or packaged-pigment data, with corresponding algorithm parameters for each data type. The 'unpackaged' type consisted of data sets that were generally consistent with the Case 1 CZCS algorithm and other well-calibrated data sets. The 'packaged' type consisted of data sets apparently containing somewhat more packaged pigments, requiring modification of the absorption parameters of the model consistent with the CalCOFI study area. This resulted in two equally divided data sets. A more thorough scrutiny of these and other data sets using a semi-analytical model requires improved knowledge of the phytoplankton and gelbstoff of the specific environment studied. Since the semi-analytical algorithm is dependent upon four spectral channels, including the 412 nm channel, while most other algorithms are not, a means of testing data sets for consistency was sought. A numerical filter was developed to classify data sets into the above classes. The filter uses reflectance ratios, which can be determined from space. The sensitivity of such numerical filters to measurement errors resulting from atmospheric correction and sensor noise requires further study. The semi-analytical algorithm performed superbly on each of the data sets after classification, resulting in RMS1 errors of 0.107 and 0.121, respectively, for the unpackaged and packaged data-set classes, with little bias and slopes near 1.0. In combination, the RMS1 performance was 0.114. While these numbers appear rather sterling, one must bear in mind what mis-classification does to the results. Using an average or compromise parameterization on the modified global data set yielded an RMS1 error of 0.171, while using the unpackaged parameterization on the global evaluation data set yielded an RMS1 error of 0.284. So, without classification, the algorithm performs better globally using the average parameters than it does using the unpackaged

  9. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
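
    The basic veto step analyzed in the paper can be sketched as follows: the next scale is drawn from an invertible overestimate of the emission density and accepted with the ratio of true to overestimated density; otherwise the evolution continues from the rejected scale. The toy splitting function, overestimate, and cutoff below are assumptions, and competition between channels and the second variable discussed in the paper are not included.

```python
# Hedged sketch: basic Sudakov veto algorithm with a toy emission density.
import numpy as np

rng = np.random.default_rng(12)

def f(t):                      # "true" emission density (toy choice), f <= g on (0, 1]
    return 2.0 / t * (1.0 - 0.3 * t)

def g(t):                      # overestimate with an analytically invertible integral
    return 2.0 / t             # integral from t to t0 is 2 ln(t0 / t)

def sudakov_veto(t_start=1.0, t_cut=0.1):
    t = t_start
    while True:
        # Solve exp(-int_{t_new}^{t} g) = R  =>  (t_new / t)^2 = R  =>  t_new = t sqrt(R)
        t = t * rng.random() ** 0.5
        if t < t_cut:
            return None                       # no resolvable emission above the cutoff
        if rng.random() < f(t) / g(t):        # accept with ratio f/g, veto otherwise
            return t

trials = 100000
emissions = [sudakov_veto() for _ in range(trials)]
scales = np.array([t for t in emissions if t is not None])
print(f"emission probability: {scales.size / trials:.3f}, mean accepted scale: {scales.mean():.4f}")
```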

  10. Overview of Non-nuclear Testing of the Safe, Affordable 30-kW Fission Engine, Including End-to-End Demonstrator Testing

    NASA Technical Reports Server (NTRS)

    VanDyke, M. K.; Martin, J. J.; Houts, M. G.

    2003-01-01

    Successful development of space fission systems will require an extensive program of affordable and realistic testing. In addition to tests related to design/development of the fission system, realistic testing of the actual flight unit must also be performed. At the power levels under consideration (3-300 kW electric power), almost all technical issues are thermal or stress related and will not be strongly affected by the radiation environment. These issues can be resolved more thoroughly, less expensively, and in a more timely fashion with nonnuclear testing, provided it is prototypic of the system in question. This approach was used for the safe, affordable fission engine test article development program and was accomplished via cooperative efforts with Department of Energy laboratories, industry, universities, and other NASA centers. This Technical Memorandum covers the analysis, testing, and data reduction of a 30-kW simulated reactor as well as an end-to-end demonstrator, including a power conversion system and an electric propulsion engine, the first of its kind in the United States.

  11. Flight Testing of the Space Launch System (SLS) Adaptive Augmenting Control (AAC) Algorithm on an F/A-18

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.; VanZwieten, Tannen S.; Hanson, Curtis E.; Wall, John H.; Miller, Chris J.; Gilligan, Eric T.; Orr, Jeb S.

    2014-01-01

    The Marshall Space Flight Center (MSFC) Flight Mechanics and Analysis Division developed an adaptive augmenting control (AAC) algorithm for launch vehicles that improves robustness and performance on an as-needed basis by adapting a classical control algorithm to unexpected environments or variations in vehicle dynamics. This was baselined as part of the Space Launch System (SLS) flight control system. The NASA Engineering and Safety Center (NESC) was asked to partner with the SLS Program and the Space Technology Mission Directorate (STMD) Game Changing Development Program (GCDP) to flight test the AAC algorithm on a manned aircraft that can achieve a high level of dynamic similarity to a launch vehicle and raise the technology readiness of the algorithm early in the program. This document reports the outcome of the NESC assessment.

  12. A simplified flight-test method for determining aircraft takeoff performance that includes effects of pilot technique

    NASA Technical Reports Server (NTRS)

    Larson, T. J.; Schweikhard, W. G.

    1974-01-01

    A method for evaluating aircraft takeoff performance from brake release to air-phase height that requires fewer tests than conventionally required is evaluated with data for the XB-70 airplane. The method defines the effects of pilot technique on takeoff performance quantitatively, including the decrease in acceleration from drag due to lift. For a given takeoff weight and throttle setting, a single takeoff provides enough data to establish a standardizing relationship for the distance from brake release to any point where velocity is appropriate to rotation. The lower rotation rates penalized takeoff performance in terms of ground roll distance; the lowest observed rotation rate required a ground roll distance that was 19 percent longer than the highest. Rotations at the minimum rate also resulted in lift-off velocities that were approximately 5 knots lower than the highest rotation rate at any given lift-off distance.

  13. Numerical tests for effects of various parameters in niching genetic algorithm applied to regional waveform inversion

    NASA Astrophysics Data System (ADS)

    Li, Cong; Lei, Jianshe

    2014-10-01

    In this paper, we focus on the influence of various parameters of the niching genetic algorithm inversion procedure on the results, such as the choice of objective function, the number of models in each subpopulation, and the critical separation radius. The frequency-wavenumber integration (F-K) method is applied to synthesize three-component waveform data with noise at various epicentral distances and azimuths. Our results show that using a zeroth-lag cross-correlation objective function yields a model with faster convergence and higher precision than the other objective functions. The number of models in each subpopulation has a great influence on the rate of convergence and the computation time, suggesting that it should be determined through tests in practical problems. The critical separation radius should be determined carefully because it directly affects the multiple extrema in the inversion. We also compare the inverted results from full-band waveform data and surface-wave frequency-band (0.02-0.1 Hz) data, and find that the latter are relatively poorer but still of fairly high precision, suggesting that surface-wave frequency-band data can also be used to invert for the crustal structure.

  14. Numerical tests for effects of various parameters in niching genetic algorithm applied to regional waveform inversion

    NASA Astrophysics Data System (ADS)

    Li, Cong; Lei, Jianshe

    2014-09-01

    In this paper, we focus on the influence of various parameters of the niching genetic algorithm inversion procedure on the results, such as the choice of objective function, the number of models in each subpopulation, and the critical separation radius. The frequency-wavenumber integration (F-K) method is applied to synthesize three-component waveform data with noise at various epicentral distances and azimuths. Our results show that using a zeroth-lag cross-correlation objective function yields a model with faster convergence and higher precision than the other objective functions. The number of models in each subpopulation has a great influence on the rate of convergence and the computation time, suggesting that it should be determined through tests in practical problems. The critical separation radius should be determined carefully because it directly affects the multiple extrema in the inversion. We also compare the inverted results from full-band waveform data and surface-wave frequency-band (0.02-0.1 Hz) data, and find that the latter are relatively poorer but still of fairly high precision, suggesting that surface-wave frequency-band data can also be used to invert for the crustal structure.

  15. Presentation of a general algorithm to include effect assessment on secondary poisoning in the derivation of environmental quality criteria. Part 1. Aquatic food chains.

    PubMed

    Romijn, C A; Luttik, R; van de Meent, D; Slooff, W; Canton, J H

    1993-08-01

    Effect assessment on secondary poisoning can be an asset to effect assessments on direct poisoning in setting quality criteria for the environment. This study presents an algorithm for effect assessment on secondary poisoning. The water-fish-fish-eating bird or mammal pathway was analyzed as an example of a secondary poisoning pathway. Parameters used in this algorithm are the bioconcentration factor for fish (BCF) and the no-observed-effect concentration for the group of fish-eating birds and mammals (NOECfish-eater). For the derivation of reliable BCFs, preference is given to the use of experimentally derived BCFs over QSAR estimates. NOECs for fish eaters are derived by extrapolating toxicity data on single species. Because data on fish-eating species are seldom available, toxicity data on all bird and mammalian species were used. The proposed algorithm (MAR = NOECfish-eater/BCF) was used to calculate MARs (maximum acceptable risk levels) for the compounds lindane, dieldrin, cadmium, mercury, PCB153, and PCB118. By subsequently comparing these MARs to MARs derived by effect assessment for aquatic organisms, it was concluded that for methyl mercury and PCB153 secondary poisoning of fish-eating birds and mammals could be a critical pathway. For these compounds, effects on populations of fish-eating birds and mammals can occur at levels in surface water below the MAR calculated for aquatic ecosystems. Secondary poisoning of fish-eating birds and mammals is not likely to occur for cadmium at levels in water below the MAR calculated for aquatic ecosystems. PMID:7691536
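
    A tiny worked example of the stated relation, MAR = NOECfish-eater/BCF, follows; the numbers are invented for illustration and are not the values derived in the paper for any of the listed compounds.

```python
# Hedged worked example of the secondary-poisoning algorithm MAR = NOEC / BCF.
def mar_secondary_poisoning(noec_fish_eater_mg_per_kg_food, bcf_l_per_kg):
    """Maximum acceptable risk level in water (mg/L) for fish-eating birds and mammals."""
    return noec_fish_eater_mg_per_kg_food / bcf_l_per_kg

# Hypothetical compound: NOEC for fish eaters of 5 mg/kg food, fish BCF of 10,000 L/kg.
mar_secondary = mar_secondary_poisoning(5.0, 1.0e4)        # mg/L
mar_aquatic = 1.0e-3                                       # hypothetical direct-toxicity MAR, mg/L
print(f"MAR secondary poisoning: {mar_secondary:.2e} mg/L")
print("secondary poisoning is the critical route" if mar_secondary < mar_aquatic
      else "direct aquatic toxicity is the critical route")
```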

  16. SEREN - a new SPH code for star and planet formation simulations. Algorithms and tests

    NASA Astrophysics Data System (ADS)

    Hubber, D. A.; Batty, C. P.; McLeod, A.; Whitworth, A. P.

    2011-05-01

    We present SEREN, a new hybrid Smoothed Particle Hydrodynamics and N-body code designed to simulate astrophysical processes such as star and planet formation. It is written in Fortran 95/2003 and has been parallelised using OpenMP. SEREN is designed in a flexible, modular style, thereby allowing a large number of options to be selected or disabled easily and without compromising performance. SEREN uses the conservative "grad-h" formulation of SPH, but can easily be configured to use traditional SPH or Godunov SPH. Thermal physics is treated either with a barotropic equation of state, or by solving the energy equation and modelling the transport of cooling radiation. A Barnes-Hut tree is used to obtain neighbour lists and compute gravitational accelerations efficiently, and a hierarchical time-stepping scheme is used to reduce the number of computations per timestep. Dense gravitationally bound objects are replaced by sink particles, to allow the simulation to be evolved longer, and to facilitate the identification of protostars and the compilation of stellar and binary properties. At the termination of a hydrodynamical simulation, SEREN has the option of switching to a pure N-body simulation, using a 4th-order Hermite integrator, and following the ballistic evolution of the sink particles (e.g. to determine the final binary statistics once a star cluster has relaxed). We describe in detail all the algorithms implemented in SEREN and we present the results of a suite of tests designed to demonstrate the fidelity of SEREN and its performance and scalability. Further information and additional tests of SEREN can be found at the web-page http://www.astro.group.shef.ac.uk/seren.

  17. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    ERIC Educational Resources Information Center

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a convenient…

  18. Universal test fixture for monolithic mm-wave integrated circuits calibrated with an augmented TRD algorithm

    NASA Technical Reports Server (NTRS)

    Romanofsky, Robert R.; Shalkhauser, Kurt A.

    1989-01-01

    The design and evaluation of a novel fixturing technique for characterizing millimeter wave solid state devices is presented. The technique utilizes a cosine-tapered ridge guide fixture and a one-tier de-embedding procedure to produce accurate and repeatable device level data. Advanced features of this technique include nondestructive testing, full waveguide bandwidth operation, universality of application, and rapid, yet repeatable, chip-level characterization. In addition, only one set of calibration standards is required regardless of the device geometry.

  19. Parallel training and testing methods for complex image processing algorithms on distributed, heterogeneous, unreliable, and non-dedicated resources

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; García, Daniel F.; Molleda, Julio; Sainz, Ignacio; Bulnes, Francisco G.

    2011-01-01

    Advances in the image processing field have brought new methods which are able to perform complex tasks robustly. However, in order to meet constraints on functionality and reliability, imaging application developers often design complex algorithms with many parameters which must be finely tuned for each particular environment. The best approach for tuning these algorithms is to use an automatic training method, but the computational cost of this kind of training is prohibitive, making it unfeasible even on powerful machines. The same problem arises when designing testing procedures. This work presents methods to train and test complex image processing algorithms in parallel execution environments. The approach proposed in this work is to use existing resources in offices or laboratories, rather than expensive clusters. These resources are typically non-dedicated, heterogeneous and unreliable, and the proposed methods have been designed to deal with all these issues. Two methods are proposed: intelligent training based on genetic algorithms and PVM, and a full factorial design based on grid computing which can be used for training or testing. These methods are capable of harnessing the available computational resources, giving more work to more powerful machines while taking their unreliable nature into account. Both methods have been tested using real applications.

  20. Potential Enhancements to the Cross-track Infrared Sounder (CrIS) Ground Test, Data Downlink and Processing for Climate Monitoring including Trace Gas Retrievals

    NASA Astrophysics Data System (ADS)

    Farrow, S. V.; Christensen, T.; Hagan, D. E.

    2009-12-01

    Together with ATMS, the Cross-track Infrared Sounder (CrIS) sensor is a critical payload for the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and will first fly on the NPOESS Preparatory Project (NPP) mission, the risk reduction flight for NPOESS. NPOESS is the next-generation weather and climate monitoring system for the Department of Defense and the National Oceanic and Atmospheric Administration (NOAA), being developed under contract by Northrop Grumman Aerospace Systems. The paper describes potential changes to the program baseline to make CrIS data useful for climate monitoring, including trace gas retrievals such as CO2. Specifically, these are changes to ground calibration tests, changes to the Sensor Data Record (SDR) algorithm, and changes to the spacecraft interface to downlink all of the spectral channels the sensor produces. These changes are presented to promote discussion in the science community of an alternative means of achieving some of the key requirements of NASA's OCO mission, which was intended to monitor CO2 but was destroyed during launch.

  1. An efficient algorithm for finding optimal gain-ratio multiple-split tests on hierarchical attributes in decision tree learning

    SciTech Connect

    Almuallim, H.; Akiba, Yasuhiro; Kaneda, Shigeo

    1996-12-31

    Given a set of training examples S and a tree-structured attribute x, the goal in this work is to find a multiple-split test defined on x that maximizes Quinlan's gain-ratio measure. The number of possible such multiple-split tests grows exponentially in the size of the hierarchy associated with the attribute. It is, therefore, impractical to enumerate and evaluate all these tests in order to choose the best one. We introduce an efficient algorithm for solving this problem that guarantees maximizing the gain-ratio over all possible tests. For a training set of m examples and an attribute hierarchy of height d, our algorithm runs in time proportional to dm, which makes it efficient enough for practical use.
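
    For orientation, Quinlan's gain ratio for one candidate multiple-split test can be computed as in the following sketch; the record's contribution is finding the best such split over a tree-structured attribute without enumerating all candidates, which this sketch does not attempt (data and names are illustrative):

      import math
      from collections import Counter

      def entropy(labels):
          # Shannon entropy of a list of class labels.
          total = len(labels)
          return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

      def gain_ratio(examples, split):
          # examples: list of (attribute_value, class_label) pairs.
          # split: maps each attribute value to the branch it falls into.
          labels = [y for _, y in examples]
          branches = {}
          for value, y in examples:
              branches.setdefault(split[value], []).append(y)
          n = len(examples)
          info_gain = entropy(labels) - sum(len(b) / n * entropy(b) for b in branches.values())
          split_info = -sum(len(b) / n * math.log2(len(b) / n) for b in branches.values())
          return info_gain / split_info if split_info > 0 else 0.0

      data = [("sedan", "yes"), ("coupe", "yes"), ("truck", "no"), ("van", "no")]
      print(gain_ratio(data, {"sedan": "car", "coupe": "car", "truck": "heavy", "van": "heavy"}))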

  2. Evaluation of a New Method of Fossil Retrodeformation by Algorithmic Symmetrization: Crania of Papionins (Primates, Cercopithecidae) as a Test Case

    PubMed Central

    Tallman, Melissa; Amenta, Nina; Delson, Eric; Frost, Stephen R.; Ghosh, Deboshmita; Klukkert, Zachary S.; Morrow, Andrea; Sawyer, Gary J.

    2014-01-01

    Diagenetic distortion can be a major obstacle to collecting quantitative shape data on paleontological specimens, especially for three-dimensional geometric morphometric analysis. Here we utilize the recently published algorithmic symmetrization method of fossil reconstruction and compare it to the more traditional reflection & averaging approach. In order to have an objective test of this method, five casts of a female cranium of Papio hamadryas kindae were manually deformed while the plaster hardened. These were subsequently “retrodeformed” using both algorithmic symmetrization and reflection & averaging and then compared to the original, undeformed specimen. We found that in all cases, algorithmic retrodeformation improved the shape of the deformed cranium, and in four out of five cases, the algorithmically symmetrized crania were more similar in shape to the original cranium than the reflected & averaged reconstructions. In three out of five cases, the difference between the algorithmically symmetrized crania and the original cranium could be contained within the magnitude of variation among individuals in a single subspecies of Papio. Instances of asymmetric distortion, such as breakage on one side, or bending in the axis of symmetry, were well handled, whereas symmetrical distortion remained uncorrected. This technique was further tested on a naturally deformed and fossilized cranium of Paradolichopithecus arvernensis. Results, based on a principal components analysis and Procrustes distances, showed that the algorithmically symmetrized Paradolichopithecus cranium was more similar to other, less-deformed crania from the same species than was the original. These results illustrate the efficacy of this method of retrodeformation by algorithmic symmetrization for the correction of asymmetrical distortion in fossils. Symmetrical distortion remains a problem for all currently developed methods of retrodeformation. PMID:24992483

  3. Research on Algorithms based on Web Self-adaptive Study and Intelligent Test Paper Construction and their Applications

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Wang, Limin; Huang, Lihua; Han, Xuming; Gu, Zhenshan; Sang, Juan

    A novel system based on the Bernoulli theorem of the law of large numbers and genetic algorithms was designed and implemented in this paper; it offers advantages such as self-adaptive learning of item-pool difficulty coefficients and intelligent test paper construction. At present, the system is applied in paperless computer examinations at Jinlin University of Finance and Economics, and satisfactory results have been obtained.

  4. Fast algorithms for crack simulation and identification in eddy current testing

    NASA Astrophysics Data System (ADS)

    Albanese, R.; Rubinacci, G.; Tamburrino, A.; Villone, F.

    2000-05-01

    Integral formulations are well suited for electromagnetic analysis of NDT problems. We use a method in which the unknowns are a two-component vector potential T defined in the conducting region Vc (where the current density J is given by its curl). The current density vector potential is expanded in terms of edge-element basis functions Tk, and the gauge is imposed by means of a tree-cotree decomposition of the finite element mesh. The electric constitutive equation is imposed using a Galerkin approach: ∫Vc ∇×Tk · (ηJ + ∂A/∂t) dV = 0, ∀ Tk; where A is the magnetic vector potential (obtained from J via the Biot-Savart law), η is the resistivity and t is the time. Using superposition, the forward problem is reformulated as the determination of the modified eddy current pattern δJ = J - Jo (Jo is the unperturbed current density, whereas δJ = Σk=1,…,n δIk Jk is the perturbation due to the crack). In the crack region, identified by a number of elements or element facets, we impose δJ = Jo. For the inverse problem, on the basis of a priori information, we first select a subdomain including a number of "candidate" elements or facets. We select a tentative subset and perform the direct analysis. The inverse problem can then be reformulated as finding which elements or facets of the tentative set actually belong to the crack. Pre-computing all the matrices related to the crack-free zone of the conductor makes each single computation for a given tentative crack pattern very quick (Woodbury's algorithm). This approach is well suited for zero-order minimization procedures (e.g., genetic algorithms). The problem can also be reformulated as finding the crack depth as a function of the scanning plane co-ordinates. In this case, quantization (limitation to a set of few possible depth values) and truncation (obtained by neglecting the long distance interactions) allow us to limit the search space and apply techniques initially developed for digital communication over noisy channels [3]. The

  5. Behavior of an inversion-based precipitation retrieval algorithm with high-resolution AMPR measurements including a low-frequency 10.7-GHz channel

    NASA Technical Reports Server (NTRS)

    Smith, E. A.; Xiang, X.; Mugnai, A.; Hood, R. E.; Spencer, R. W.

    1994-01-01

    A microwave-based, profile-type precipitation retrieval algorithm has been used to analyze high-resolution passive microwave measurements over an ocean background, obtained by the Advanced Microwave Precipitation Radiometer (AMPR) flown on a NASA ER-2 aircraft. The analysis is designed to first determine the improvements that can be gained by adding brightness temperature information from the AMPR low-frequency channel (10.7 GHz) to a multispectral retrieval algorithm nominally run with satellite information at 19, 37, and 85 GHz. The impact of spatial resolution degradation of the high-resolution brightness temperature information on the retrieved rain/cloud liquid water contents and ice water contents is then quantified in order to assess the possible biases inherent to satellite-based retrieval. Careful inspection of the high-resolution aircraft dataset reveals five distinctive brightness temperature features associated with cloud structure and scattering effects that are not generally detectable in current passive microwave satellite measurements. Results suggest that the inclusion of 10.7-GHz information overcomes two basic problems associated with three-channel retrieval. Intercomparisons of retrievals carried out at high resolution and then averaged to a characteristic satellite scale with the corresponding retrievals in which the brightness temperatures are first convolved down to the satellite scale suggest that with the addition of the 10.7-GHz channel, the rain liquid water contents will not be negatively impacted by spatial resolution degradation. That is not the case with the ice water contents, as they appear to be quite sensitive to the imposed scale, the implication being that as spatial resolution is reduced, ice water contents will become increasingly underestimated.

  6. Segmentation of diesel spray images with log-likelihood ratio test algorithm for non-Gaussian distributions.

    PubMed

    Pastor, José V; Arrègle, Jean; García, José M; Zapata, L Daniel

    2007-02-20

    A methodology for processing images of diesel sprays under different experimental situations is presented. The new approach has been developed for cases where the background does not follow a Gaussian distribution but a positive bias appears. In such cases, the lognormal and the gamma probability density functions have been considered for the background digital level distributions. Two different algorithms have been compared with the standard log-likelihood ratio test (LRT): a threshold defined from the cumulative probability density function of the background shows a sensitive improvement, but the best results are obtained with modified versions of the LRT algorithm adapted to non-Gaussian cases. PMID:17279134
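
    A minimal sketch of a per-pixel log-likelihood ratio test with a lognormal background model, in the spirit of the record above (assumes SciPy is available; the Gaussian spray model, the parameter values, and the zero threshold are assumptions for illustration, not the paper's calibration):

      import numpy as np
      from scipy import stats

      def lrt_segment(image, bg_params, spray_params, threshold=0.0):
          # Label a pixel as spray when log p(x | spray) - log p(x | background) > threshold,
          # with a lognormal background and a Gaussian spray model.
          bg_ll = stats.lognorm.logpdf(image, s=bg_params["sigma"], scale=np.exp(bg_params["mu"]))
          spray_ll = stats.norm.logpdf(image, loc=spray_params["mean"], scale=spray_params["std"])
          return (spray_ll - bg_ll) > threshold

      # Synthetic image: lognormal background with a bright square "spray" region
      img = np.random.lognormal(mean=3.0, sigma=0.3, size=(64, 64))
      img[20:40, 20:40] = np.random.normal(150.0, 20.0, (20, 20))
      mask = lrt_segment(img, {"mu": 3.0, "sigma": 0.3}, {"mean": 150.0, "std": 20.0})
      print(mask.mean())  # fraction of pixels labelled as spray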

  7. Ground Testing of Prototype Hardware and Processing Algorithms for a Wide Area Space Surveillance System (WASSS)

    NASA Astrophysics Data System (ADS)

    Goldstein, N.; Dressler, R. A.; Richtsmeier, S. S.; McLean, J.; Dao, P. D.; Murray-Krezan, J.; Fulcoly, D. O.

    2013-09-01

    Recent ground testing of a wide area camera system and automated star removal algorithms has demonstrated the potential to detect, quantify, and track deep space objects using small aperture cameras and on-board processors. The camera system, which was originally developed for a space-based Wide Area Space Surveillance System (WASSS), operates in a fixed-stare mode, continuously monitoring a wide swath of space and differentiating celestial objects from satellites based on differential motion across the field of view. It would have greatest utility in a LEO orbit to provide automated and continuous monitoring of deep space with high refresh rates, and with particular emphasis on the GEO belt and GEO transfer space. Continuous monitoring allows a concept of change detection and custody maintenance not possible with existing sensors. The detection approach is equally applicable to Earth-based sensor systems. A distributed system of such sensors, either Earth-based or space-based, could provide automated, persistent night-time monitoring of all of deep space. The continuous monitoring provides a daily record of the light curves of all GEO objects above a certain brightness within the field of view. The daily updates of satellite light curves offer a means to identify specific satellites, to note changes in orientation and operational mode, and to cue other SSA assets for higher resolution queries. The data processing approach may also be applied to larger-aperture, higher resolution camera systems to extend sensitivity towards dimmer objects. In order to demonstrate the utility of the WASSS system and data processing, a ground-based field test was conducted in October 2012. We report here the results of the observations made at Magdalena Ridge Observatory using the prototype WASSS camera, which has a 4×60° field of view, <0.05° resolution, a 2.8 cm2 aperture, and the ability to view within 4° of the sun. A single camera pointed at the GEO belt provided a

  8. 78 FR 20345 - Modification and Expansion of CBP Centers of Excellence and Expertise Test To Include Six...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-04

    ... Register (77 FR 52048) will continue to be accepted throughout the duration of that test. Selected... a General Notice in the Federal Register (77 FR 52048) announcing a test broadening the ability of... Notice published in the Federal Register (77 FR 52048) on August 28, 2012 announcing the test for...

  9. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

    1985-01-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or a speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  10. Planning fuel-conservative descents in an airline environment using a small programmable calculator: algorithm development and flight test results

    SciTech Connect

    Knox, C.E.; Vicroy, D.D.; Simmon, D.A.

    1985-05-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or a speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  11. Simulation and experimental tests on active mass damper control system based on Model Reference Adaptive Control algorithm

    NASA Astrophysics Data System (ADS)

    Tu, Jianwei; Lin, Xiaofeng; Tu, Bo; Xu, Jiayun; Tan, Dongmei

    2014-09-01

    In the case of sudden natural disasters (such as earthquakes or typhoons), an active mass damper (AMD) system can optimally reduce the structural vibration response; it is a frequently applied but less mature vibration-reduction technology for wind and earthquake resistance of high-rise buildings. As the core of this technology, the selection of the control algorithm is extremely challenging due to the uncertainty of structural parameters and the randomness of external loads. Model Reference Adaptive Control (MRAC) based on the Minimal Controller Synthesis (MCS) algorithm does not need to know the structural parameters in advance, which gives it particular advantages under real-time changes of system parameters, uncertain external disturbances, and nonlinear dynamic behaviour. This paper studies the application of MRAC to the AMD active control system. The principle of the MRAC algorithm is presented, and the dynamic model and the motion differential equation of the AMD system based on MRAC are established under seismic excitation. Simulation analyses of linear and nonlinear structures with degraded structural stiffness are performed for the AMD system controlled by the MRAC algorithm. To verify the validity of MRAC for the AMD system, experimental tests are carried out on a linear structure and a structure with variable stiffness with the AMD system under seismic excitation on a shake table, and the experimental results are compared with those of the traditional pole assignment control algorithm.

  12. Development of an HL7 interface engine, based on tree structure and streaming algorithm, for large-size messages which include image data.

    PubMed

    Um, Ki Sung; Kwak, Yun Sik; Cho, Hune; Kim, Il Kon

    2005-11-01

    A basic assumption of the Health Level Seven (HL7) protocol is 'no limitation of message length'. However, most existing commercial HL7 interface engines do limit message length because they use the string array method, which runs in main memory during HL7 message parsing. Specifically, messages with image and multi-media data create a long string array and can thus cause critical and fatal problems in the computer system. Consequently, such HL7 messages cannot carry the image and multi-media data necessary in modern medical records. This study aims to solve this problem with a 'streaming algorithm' method. This new method for HL7 message parsing applies a character-stream object that processes the message character by character between main memory and the hard disk, so that the processing load on main memory is alleviated. The main functions of this new engine are generating, parsing, validating, browsing, sending, and receiving HL7 messages. The engine can also parse and generate XML-formatted HL7 messages. This new HL7 engine successfully exchanged HL7 messages containing 10-megabyte images and discharge summary information between two university hospitals. PMID:16181703
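
    The character-stream idea can be illustrated with a toy parser that reports HL7 segment IDs while reading the message one character at a time, so a segment carrying megabytes of image data never has to be held as a single string (a sketch only; the sample message is invented, and real HL7 parsing must also handle encoding characters, escapes, and framing):

      import io

      def stream_segments(reader):
          # Yield the 3-character segment ID of each HL7 segment while reading
          # the message character by character from a stream.
          segment_id = []
          at_segment_start = True
          while True:
              ch = reader.read(1)
              if not ch:
                  break
              if ch == "\r":              # HL7 segments end with a carriage return
                  at_segment_start = True
                  segment_id = []
                  continue
              if at_segment_start:
                  segment_id.append(ch)
                  if len(segment_id) == 3:
                      yield "".join(segment_id)
                      at_segment_start = False

      msg = "MSH|^~\\&|LAB|HOSP\rPID|1||12345\rOBX|1|ED|IMAGE^JPEG||...base64...\r"
      print(list(stream_segments(io.StringIO(msg))))   # ['MSH', 'PID', 'OBX']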

  13. Including adaptation and mitigation responses to climate change in a multiobjective evolutionary algorithm framework for urban water supply systems incorporating GHG emissions

    NASA Astrophysics Data System (ADS)

    Paton, F. L.; Maier, H. R.; Dandy, G. C.

    2014-08-01

    Cities around the world are increasingly involved in climate action and mitigating greenhouse gas (GHG) emissions. However, in the context of responding to climate pressures in the water sector, very few studies have investigated the impacts of changing water use on GHG emissions, even though water resource adaptation often requires greater energy use. Consequently, reducing GHG emissions, and thus focusing on both mitigation and adaptation responses to climate change in planning and managing urban water supply systems, is necessary. Furthermore, the minimization of GHG emissions is likely to conflict with other objectives. Thus, applying a multiobjective evolutionary algorithm (MOEA), which can evolve an approximation of entire trade-off (Pareto) fronts of multiple objectives in a single run, would be beneficial. Consequently, the main aim of this paper is to incorporate GHG emissions into a MOEA framework to take into consideration both adaptation and mitigation responses to climate change for a city's water supply system. The approach is applied to a case study based on Adelaide's southern water supply system to demonstrate the framework's practical management implications. Results indicate that trade-offs exist between GHG emissions and risk-based performance, as well as GHG emissions and economic cost. Solutions containing rainwater tanks are expensive, while GHG emissions greatly increase with increased desalinated water supply. Consequently, while desalination plants may be good adaptation options to climate change due to their climate-independence, rainwater may be a better mitigation response, albeit more expensive.

  14. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that

  15. Industrial Sites Work Plan for Leachfield Corrective Action Units: Nevada Test Site and Tonopah Test Range, Nevada (including Record of Technical Change Nos. 1, 2, 3, and 4)

    SciTech Connect

    DOE /NV

    1998-12-18

    This Leachfield Corrective Action Units (CAUs) Work Plan has been developed in accordance with the Federal Facility Agreement and Consent Order (FFACO) that was agreed to by the U.S. Department of Energy, Nevada Operations Office (DOE/NV); the State of Nevada Division of Environmental Protection (NDEP); and the U.S. Department of Defense (FFACO, 1996). Under the FFACO, a work plan is an optional planning document that provides information for a CAU or group of CAUs where significant commonality exists. A work plan may be developed that can be referenced by leachfield Corrective Action Investigation Plans (CAIPs) to eliminate redundant CAU documentation. This Work Plan includes FFACO-required management, technical, quality assurance (QA), health and safety, public involvement, field sampling, and waste management documentation common to several CAUs with similar site histories and characteristics, namely the leachfield systems at the Nevada Test Site (NTS) and the Tonopah Test Range (TTR). For each CAU, a CAIP will be prepared to present detailed, site-specific information regarding contaminants of potential concern (COPCs), sampling locations, and investigation methods.

  16. Generalization of the Lord-Wingersky Algorithm to Computing the Distribution of Summed Test Scores Based on Real-Number Item Scores

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2013-01-01

    With known item response theory (IRT) item parameters, Lord and Wingersky provided a recursive algorithm for computing the conditional frequency distribution of number-correct test scores, given proficiency. This article presents a generalized algorithm for computing the conditional distribution of summed test scores involving real-number item…
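
    For context, the classical Lord-Wingersky recursion for integer number-correct scores, which the record above generalizes to real-number item scores, can be sketched as follows (item probabilities are hypothetical):

      def lord_wingersky(p_correct):
          # Conditional distribution of the number-correct score given per-item
          # probabilities of a correct response at a fixed proficiency level.
          dist = [1.0]                             # P(score = 0) before any item
          for p in p_correct:
              new = [0.0] * (len(dist) + 1)
              for score, prob in enumerate(dist):
                  new[score] += prob * (1 - p)     # item answered incorrectly
                  new[score + 1] += prob * p       # item answered correctly
              dist = new
          return dist

      # Three items with hypothetical correct-response probabilities at one theta
      print(lord_wingersky([0.8, 0.6, 0.4]))       # P(score = 0..3)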

  17. Use of computerized algorithm to identify individuals in need of testing for celiac disease

    PubMed Central

    Ludvigsson, Jonas F; Pathak, Jyotishman; Murphy, Sean; Durski, Matthew; Kirsch, Phillip S; Chute, Christophe G; Ryu, Euijung; Murray, Joseph A

    2013-01-01

    Background and aim Celiac disease (CD) is a lifelong immune-mediated disease with excess mortality. Early diagnosis is important to minimize disease symptoms, complications, and consumption of healthcare resources. Most patients remain undiagnosed. We developed two electronic medical record (EMR)-based algorithms to identify patients at high risk of CD and in need of CD screening. Methods (I) Using natural language processing (NLP), we searched EMRs for 16 free text (and related) terms in 216 CD patients and 280 controls. (II) EMRs were also searched for ICD9 (International Classification of Disease) codes suggesting an increased risk of CD in 202 patients with CD and 524 controls. For each approach, we determined the optimal number of hits to be assigned as CD cases. To assess performance of these algorithms, sensitivity and specificity were calculated. Results Using two hits as the cut-off, the NLP algorithm identified 72.9% of all celiac patients (sensitivity), and ruled out CD in 89.9% of the controls (specificity). In a representative US population of individuals without a prior celiac diagnosis (assuming that 0.6% had undiagnosed CD), this NLP algorithm could identify a group of individuals where 4.2% would have CD (positive predictive value). ICD9 code search using three hits as the cut-off had a sensitivity of 17.1% and a specificity of 88.5% (positive predictive value was 0.9%). Discussion and conclusions This study shows that computerized EMR-based algorithms can help identify patients at high risk of CD. NLP-based techniques demonstrate higher sensitivity and positive predictive values than algorithms based on ICD9 code searches. PMID:23956016
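
    The reported figures combine in the standard way; the sketch below approximately reproduces the quoted 72.9% sensitivity, 89.9% specificity, and 4.2% positive predictive value at 0.6% prevalence (the raw counts are illustrative reconstructions, not the study's actual table):

      def screening_metrics(tp, fn, tn, fp, prevalence):
          # Sensitivity, specificity, and the positive predictive value expected
          # when the screen is applied to a population with the given prevalence.
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
          return sens, spec, ppv

      # Counts chosen to approximate the NLP algorithm's reported performance
      print(screening_metrics(tp=157, fn=59, tn=252, fp=28, prevalence=0.006))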

  18. Should We Stop Looking for a Better Scoring Algorithm for Handling Implicit Association Test Data? Test of the Role of Errors, Extreme Latencies Treatment, Scoring Formula, and Practice Trials on Reliability and Validity

    PubMed Central

    Perugini, Marco; Schönbrodt, Felix

    2015-01-01

    Since the development of D scores for the Implicit Association Test, few studies have examined whether there is a better scoring method. In this contribution, we tested the effect of four relevant parameters for IAT data: the treatment of extreme latencies, the error treatment, the method for computing the IAT difference, and the distinction between practice and test critical trials. For some options of these parameters, we included robust statistical methods that can provide viable alternative metrics to existing scoring algorithms, especially given the specificity of reaction time data. We thus elaborated 420 algorithms that result from the combination of all the different options and tested the main effects of the four parameters with robust statistical analyses, as well as their interaction with the type of IAT (i.e., with or without a built-in penalty included in the IAT procedure). From the results, we can offer some recommendations. A treatment of extreme latencies is preferable, but only if it consists of replacing rather than eliminating them. Errors contain important information and should not be discarded. The D score still seems to be a good way to compute the difference, although the G score could be a good alternative, and it seems better not to compute the IAT difference separately for practice and test critical trials. From these recommendations, we propose to improve the traditional D scores with small yet effective modifications. PMID:26107176
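
    As a concrete anchor for the parameters being varied, a bare-bones D-score-style computation might look like the sketch below (assumptions: extreme latencies are replaced at fixed cut-offs, errors and the practice/test split are ignored, and the latencies are invented; the record evaluates 420 variants of exactly such choices):

      import statistics

      def iat_d_score(compatible_rts, incompatible_rts, low_cut=400, high_cut=10000):
          # Replace (rather than discard) extreme latencies, then divide the mean
          # latency difference between the two critical blocks by their pooled SD.
          def clip(rts):
              return [min(max(rt, low_cut), high_cut) for rt in rts]
          comp, incomp = clip(compatible_rts), clip(incompatible_rts)
          pooled_sd = statistics.stdev(comp + incomp)
          return (statistics.mean(incomp) - statistics.mean(comp)) / pooled_sd

      # Hypothetical latencies (ms) for one respondent
      print(iat_d_score([650, 700, 720, 15000, 680], [820, 900, 870, 910, 300]))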

  19. An Examination of the Factor Structure of Four of the Cognitive Abilities Included in the Educational Testing Service Kit of Factor-Referenced Cognitive Tests.

    ERIC Educational Resources Information Center

    Babcock, Renee L.; Laguna, Kerrie

    1997-01-01

    The Educational Testing Service Kit of Factor-Referenced Cognitive Tests contains 72 tests that are supposed to be markers of 23 latent cognitive constructs. Examination of the factor structure of four of these tests with 165 undergraduates suggests caution in using the measures as markers of distinct factors. (SLD)

  20. Compilation, design tests: Energetic particles Satellite S-3 including design tests for S-3A, S-3B and S-3C

    NASA Technical Reports Server (NTRS)

    Ledoux, F. N.

    1973-01-01

    A compilation of engineering design tests which were conducted in support of the Energetic Particle Satellite S-3, S-3A, and S-3B programs. The purpose of conducting the tests was to determine the adequacy and reliability of the Energetic Particles series of satellite designs. The various tests consisted of: (1) moments of inertia, (2) functional reliability, (3) component and structural integrity, (4) initiators and explosives tests, and (5) acceptance tests.

  1. The Order-Restricted Association Model: Two Estimation Algorithms and Issues in Testing

    ERIC Educational Resources Information Center

    Galindo-Garre, Francisca; Vermunt, Jeroen K.

    2004-01-01

    This paper presents a row-column (RC) association model in which the estimated row and column scores are forced to be in agreement with a priori specified ordering. Two efficient algorithms for finding the order-restricted maximum likelihood (ML) estimates are proposed and their reliability under different degrees of association is investigated by…

  2. Multi-color space threshold segmentation and self-learning k-NN algorithm for surge test EUT status identification

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Liu, Gui-xiong

    2016-04-01

    The identification of targets varies between different surge tests. A multi-color space threshold segmentation and self-learning k-nearest neighbor (k-NN) algorithm for equipment-under-test status identification was proposed, because identification by feature matching had required training new patterns before every test. First, the color space used for segmentation (L*a*b*, hue saturation lightness (HSL), or hue saturation value (HSV)) was selected according to the ratios of high-luminance and white-luminance points in the image. Second, an unknown-class sample Sr was classified by the k-NN algorithm with training set Tz according to a feature vector formed from the number of pixels, the eccentricity ratio, the compactness ratio, and the Euler number. Last, when the classification confidence coefficient equaled k, Sr was added as a sample of the pre-training set Tz'; the training set Tz was enlarged to Tz+1 once the pre-training set was saturated. On nine series of illuminant, indicator light, screen, and disturbance samples (a total of 21600 frames), the algorithm achieved 98.65% identification accuracy and by itself selected five groups of samples to enlarge the training set from T0 to T5.
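
    A compact sketch of the k-NN classification-plus-confidence step described above (the feature values, class names, and the rule of adding a sample only on a unanimous vote are illustrative assumptions):

      import numpy as np

      def knn_classify(sample, train_X, train_y, k=3):
          # Plain k-NN vote plus a confidence count (number of agreeing neighbours).
          dists = np.linalg.norm(train_X - sample, axis=1)
          nearest = train_y[np.argsort(dists)[:k]]
          labels, counts = np.unique(nearest, return_counts=True)
          best = counts.argmax()
          return labels[best], counts[best]        # predicted label, confidence

      # Feature vectors: (pixel count, eccentricity, compactness, Euler number) - illustrative
      X = np.array([[120, 0.9, 0.6, 1], [115, 0.88, 0.58, 1], [400, 0.2, 0.9, 0]], float)
      y = np.array(["indicator_on", "indicator_on", "screen_dark"])
      sample = np.array([118, 0.89, 0.59, 1.0])
      label, confidence = knn_classify(sample, X, y, k=3)
      if confidence == 3:                          # unanimous vote: candidate for the training pool
          X = np.vstack([X, sample])
          y = np.append(y, label)
      print(label, confidence, len(y))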

  3. Poroviscoelastic finite element model including continuous fiber distribution for the simulation of nanoindentation tests on articular cartilage.

    PubMed

    Taffetani, M; Griebel, M; Gastaldi, D; Klisch, S M; Vena, P

    2014-04-01

    Articular cartilage is a soft hydrated tissue that facilitates proper load transfer in diarthroidal joints. The mechanical properties of articular cartilage derive from its structural and hierarchical organization that, at the micrometric length scale, encompasses three main components: a network of insoluble collagen fibrils, negatively charged macromolecules and a porous extracellular matrix. In this work, a constituent-based constitutive model for the simulation of nanoindentation tests on articular cartilage is presented: it accounts for the multi-constituent, non-linear, porous, and viscous aspects of articular cartilage mechanics. In order to reproduce the articular cartilage response under different loading conditions, the model considers a continuous distribution of collagen fibril orientation, swelling, and depth-dependent mechanical properties. The model's parameters are obtained by fitting published experimental data for the time-dependent response in a stress relaxation unconfined compression test on adult bovine articular cartilage. Then, model validation is obtained by simulating three independent experimental tests: (i) the time-dependent response in a stress relaxation confined compression test, (ii) the drained response of a flat punch indentation test and (iii) the depth-dependence of effective Poisson's ratio in an unconfined compression test. Finally, the validated constitutive model has been used to simulate multiload spherical nanoindentation creep tests. Upon accounting for strain-dependent tissue permeability and intrinsic viscoelastic properties of the collagen network, the model accurately fits the drained and undrained curves and time-dependent creep response. The results show that depth-dependent tissue properties and glycosaminoglycan-induced tissue swelling should be accounted for when simulating indentation experiments. PMID:24389384

  4. FG syndrome, an X-linked multiple congenital anomaly syndrome: The clinical phenotype and an algorithm for diagnostic testing

    PubMed Central

    Clark, Robin Dawn; Graham, John M.; Friez, Michael J.; Hoo, Joe J.; Jones, Kenneth Lyons; McKeown, Carole; Moeschler, John B.; Raymond, F. Lucy; Rogers, R. Curtis; Schwartz, Charles E.; Battaglia, Agatino; Lyons, Michael J.; Stevenson, Roger E.

    2014-01-01

    FG syndrome is a rare X-linked multiple congenital anomaly-cognitive impairment disorder caused by the p.R961W mutation in the MED12 gene. We identified all known patients with this mutation to delineate their clinical phenotype and devise a clinical algorithm to facilitate molecular diagnosis. We ascertained 23 males with the p.R961W mutation in MED12 from 9 previously reported FG syndrome families and 1 new family. Six patients are reviewed in detail. These 23 patients were compared with 48 MED12 mutation-negative patients, who had the clinical diagnosis of FG syndrome. Traits that best discriminated between these two groups were chosen to develop an algorithm with high sensitivity and specificity for the p.R961W MED12 mutation. FG syndrome has a recognizable dysmorphic phenotype with a high incidence of congenital anomalies. A family history of X-linked mental retardation, deceased male infants, and/or multiple fetal losses was documented in all families. The algorithm identifies the p.R961W MED12 mutation-positive group with 100% sensitivity and 90% specificity. The clinical phenotype of FG syndrome defines a recognizable pattern of X-linked multiple congenital anomalies and cognitive impairment. This algorithm can assist the clinician in selecting the patients for testing who are most likely to have the recurrent p.R961W MED12 mutation. PMID:19938245

  5. Genomic selection in a pig population including information from slaughtered full sibs of boars within a sib-testing program.

    PubMed

    Samorè, A B; Buttazzoni, L; Gallo, M; Russo, V; Fontanesi, L

    2015-05-01

    Genomic selection is becoming common practice in dairy cattle, but only a few studies have examined its introduction into pig selection programs. Results described for this species are highly dependent on the traits considered and the specific population structure. This paper aims to simulate the impact of genomic selection in a pig population with a training cohort of performance-tested and slaughtered full sibs. This population is selected for performance, carcass and meat quality traits by full-sib testing of boars. Data were simulated using a forward-in-time simulation process that modeled around 60K single nucleotide polymorphisms and several quantitative trait loci distributed across the 18 porcine autosomes. Data were edited to obtain, for each cycle, 200 sires mated with 800 dams to produce 800 litters of 4 piglets each, two males and two females (needed for the sib test), for a total of 3200 newborns. At each cycle, a subset of 200 litters was sib tested, and 60 boars and 160 sows were selected to replace the same number of culled male and female parents. Simulated selection of boars based on performance test data of their full sibs (one castrated brother and two sisters per boar in 200 litters) lasted for 15 cycles. Genotyping and phenotyping of the three tested sibs (training population) and genotyping of the candidate boars (prediction population) were assumed. Breeding values were calculated for traits with two heritability levels (h2 = 0.40, carcass traits, and h2 = 0.10, meat quality parameters) on simulated pedigrees, phenotypes and genotypes. Genomic breeding values, estimated by various models (GBLUP from raw phenotypes or using breeding values, and single-step models), were compared with classical BLUP Animal Model predictions in terms of predictive ability. Results obtained for traits with moderate heritability (h2 = 0.40), similar to the heritability of traits commonly measured within a sib-testing program, did not show any benefit from the

  6. Evaluation of a wind-tunnel gust response technique including correlations with analytical and flight test results

    NASA Technical Reports Server (NTRS)

    Redd, L. T.; Hanson, P. W.; Wynne, E. C.

    1979-01-01

    A wind tunnel technique for obtaining gust frequency response functions for use in predicting the response of flexible aircraft to atmospheric turbulence is evaluated. The tunnel test results for a dynamically scaled cable supported aeroelastic model are compared with analytical and flight data. The wind tunnel technique, which employs oscillating vanes in the tunnel throat section to generate a sinusoidally varying flow field around the model, was evaluated by use of a 1/30 scale model of the B-52E airplane. Correlation between the wind tunnel results, flight test results, and analytical predictions for response in the short period and wing first elastic modes of motion are presented.

  7. Comparison of GenomEra C. difficile and Xpert C. difficile as Confirmatory Tests in a Multistep Algorithm for Diagnosis of Clostridium difficile Infection

    PubMed Central

    Reigadas, Elena; Marín, Mercedes; Fernández-Chico, Antonia; Catalán, Pilar; Bouza, Emilio

    2014-01-01

    We compared two multistep diagnostic algorithms based on C. Diff Quik Chek Complete and, as confirmatory tests, GenomEra C. difficile and Xpert C. difficile. The sensitivity, specificity, positive predictive value, and negative predictive value were 87.2%, 99.7%, 97.1%, and 98.3%, respectively, for the GenomEra-based algorithm and 89.7%, 99.4%, 95.5%, and 98.6%, respectively, for the Xpert-based algorithm. GenomEra represents an alternative to Xpert as a confirmatory test of a multistep algorithm for Clostridium difficile infection (CDI) diagnosis. PMID:25392360

  8. A Historical Perspective of Testing and Assessment Including the Impact of Summative and Formative Assessment on Student Achievement

    ERIC Educational Resources Information Center

    Brink, Carole Sanger

    2011-01-01

    In 2007, Georgia developed a comprehensive framework to define what students need to know. One component of this framework emphasizes the use of both formative and summative assessments as an integral and specific component of teachers' performance evaluation. Georgia administers the Criterion-Referenced Competency Test (CRCT) to every…

  9. NWRA AVOSS Wake Vortex Prediction Algorithm. 3.1.1

    NASA Technical Reports Server (NTRS)

    Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)

    2002-01-01

    This report provides a detailed description of the wake vortex prediction algorithm used in the Demonstration Version of NASA's Aircraft Vortex Spacing System (AVOSS). The report includes all equations used in the algorithm, an explanation of how to run the algorithm, and a discussion of how the source code for the algorithm is organized. Several appendices contain important supplementary information, including suggestions for enhancing the algorithm and results from test cases.

  10. Human organ/tissue growth algorithms that include obese individuals and black/white population organ weight similarities from autopsy data.

    PubMed

    Young, John F; Luecke, Richard H; Pearce, Bruce A; Lee, Taewon; Ahn, Hongshik; Baek, Songjoon; Moon, Hojin; Dye, Daniel W; Davis, Thomas M; Taylor, Susan J

    2009-01-01

    Physiologically based pharmacokinetic (PBPK) models need the correct organ/tissue weights to match various total body weights in order to be applied to children and the obese individual. Baseline data from Reference Man for the growth of human organs (adrenals, brain, heart, kidneys, liver, lungs, pancreas, spleen, thymus, and thyroid) were augmented with autopsy data to extend the describing polynomials to include the morbidly obese individual (up to 250 kg). Additional literature data similarly extends the growth curves for blood volume, muscle, skin, and adipose tissue. Collectively these polynomials were used to calculate blood/organ/tissue weights for males and females from birth to 250 kg, which can be directly used to help parameterize PBPK models. In contrast to other black/white anthropomorphic measurements, the data demonstrated no observable or statistical difference in weights for any organ/tissue between individuals identified as black or white in the autopsy reports. PMID:19267313
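
    In use, the describing polynomials are simply evaluated at the body weight of interest when parameterizing a PBPK model; a sketch with placeholder coefficients (not the published fits) follows:

      import numpy as np

      def organ_weight(body_weight_kg, coefficients):
          # Evaluate a describing polynomial mapping total body weight to an organ weight.
          # Coefficients are ordered from the highest power down to the constant term.
          return float(np.polyval(coefficients, body_weight_kg))

      # Hypothetical cubic fit for liver weight (g) versus body weight (kg), illustration only
      liver_coeffs = [-2.0e-4, 0.05, 18.0, 120.0]
      for bw in (10, 70, 250):
          print(bw, "kg ->", round(organ_weight(bw, liver_coeffs), 1), "g liver (illustrative)")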

  11. Statistical Analysis of a Large Sample Size Pyroshock Test Data Set Including Post Flight Data Assessment. Revision 1

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; McNelis, Anne M.

    2010-01-01

    The Earth Observing System (EOS) Terra spacecraft was launched on an Atlas IIAS launch vehicle on its mission to observe planet Earth in late 1999. Prior to launch, the new design of the spacecraft's pyroshock separation system was characterized by a series of 13 separation ground tests. The analysis methods used to evaluate this unusually large amount of shock data will be discussed in this paper, with particular emphasis on population distributions and finding statistically significant families of data, leading to an overall shock separation interface level. The wealth of ground test data also allowed a derivation of a Mission Assurance level for the flight. All of the flight shock measurements were below the EOS Terra Mission Assurance level thus contributing to the overall success of the EOS Terra mission. The effectiveness of the statistical methodology for characterizing the shock interface level and for developing a flight Mission Assurance level from a large sample size of shock data is demonstrated in this paper.

  12. Testing the Generalization Efficiency of Oil Slick Classification Algorithm Using Multiple SAR Data for Deepwater Horizon Oil Spill

    NASA Astrophysics Data System (ADS)

    Ozkan, C.; Osmanoglu, B.; Sunar, F.; Staples, G.; Kalkan, K.; Balık Sanlı, F.

    2012-07-01

    Marine oil spills due to releases of crude oil from tankers, offshore platforms, drilling rigs and wells, etc., seriously affect the fragile marine and coastal ecosystem and cause political and environmental concern. A catastrophic explosion and subsequent fire on the Deepwater Horizon oil platform caused the platform to burn and sink, and oil leaked continuously between April 20th and July 15th of 2010, releasing about 780,000 m3 of crude oil into the Gulf of Mexico. Today, space-borne SAR sensors are extensively used for the detection of oil spills in the marine environment, as they are independent of sunlight, not affected by cloudiness, and more cost-effective than air patrolling due to their large area coverage. In this study, the generalization extent of an object-based classification algorithm was tested for oil spill detection using multiple SAR imagery data. Among many geometrical, physical and textural features, the more distinctive ones were selected to distinguish oil slicks and look-alike objects from each other. The tested classifier was constructed from a Multilayer Perceptron Artificial Neural Network trained by the ABC, LM and BP optimization algorithms. The training data for the classifier were constituted from SAR data of an oil spill that originated off Lebanon in 2007. The classifier was then applied to the Deepwater Horizon oil spill data in the Gulf of Mexico on RADARSAT-2 and ALOS PALSAR images to demonstrate the generalization efficiency of the oil slick classification algorithm.

  13. A test on a Neuro-Fuzzy algorithm used to reduce continuous gravity records for the effect of meteorological parameters

    NASA Astrophysics Data System (ADS)

    Andò, Bruno; Carbone, Daniele

    2004-05-01

    Gravity measurements are utilized at active volcanoes to detect mass changes linked to magma transfer processes and thus to recognize forerunners to paroxysmal volcanic events. Continuous gravity measurements are now increasingly performed at sites very close to active craters, where there is the greatest chance of detecting meaningful gravity changes. Unfortunately, especially when used in the adverse environmental conditions usually encountered at such places, gravimeters have been proved to be affected by meteorological parameters, mainly by changes in atmospheric temperature. The pseudo-signal generated by these perturbations is often stronger than the signal generated by actual changes in the gravity field. Thus, the implementation of well-performing algorithms for reducing the gravity signal for the effect of meteorological parameters is vital to obtain sequences useful from the volcano surveillance standpoint. In the present paper, a Neuro-Fuzzy algorithm, which has already been proved to accomplish the required task satisfactorily, is tested on a data set from three gravimeters which worked continuously for about 50 days at a site far away from active zones, where changes due to actual fluctuations of the gravity field are expected to be within a few microgal. After the reduction of the gravity series, residuals are within about 15 μGal peak-to-peak, thus confirming the capability of the Neuro-Fuzzy algorithm under test to perform the required task.

  14. 78 FR 28633 - Prometric, Inc., a Subsidiary of Educational Testing Service, Including On-Site Leased Workers...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-15

    ... Department's notice of determination was published in the Federal Register on October 19, 2012 (77 FR 64357..., Including On-Site Leased Workers From Office Team St. Paul, Minnesota; Amended Certification Regarding... workers of the subject firm. The company reports that workers leased from Office Team were employed...

  15. Including Students with Disabilities in Large-Scale Testing: Emerging Practices. ERIC/OSEP Digest E564.

    ERIC Educational Resources Information Center

    Fitzsimmons, Mary K.

    This brief identifies practices that include students with disabilities in large-scale assessments as required by the reauthorized and amended 1997 Individuals with Disabilities Education Act. It notes relevant research by the National Center on Educational Outcomes and summarizes major findings of studies funded by the U.S. Office of Special…

  16. Anticoccidial efficacy of semduramicin. 2. Evaluation against field isolates including comparisons with salinomycin, maduramicin, and monensin in battery tests.

    PubMed

    Logan, N B; McKenzie, M E; Conway, D P; Chappel, L R; Hammet, N C

    1993-11-01

    The efficacy of semduramicin (AVIAX), a novel polyether ionophore, was profiled in a series of 57 battery tests conducted in the United States and the United Kingdom. The studies employed mixed and monospecific infections of Eimeria acervulina, Eimeria mivati/Eimeria mitis, Eimeria brunetti, Eimeria maxima, Eimeria necatrix, and Eimeria tenella derived from North American and European field isolates. Ten-day-old broiler cockerels in pens of 8 to 10 birds were continuously medicated in feed beginning 24 h before challenge in tests of 6 to 8 days' duration. At the use level of 25 ppm, semduramicin effectively controlled mortality, lesions, and weight gain depression that occurred in unmedicated, infected controls for all species. In comparison with 60 ppm salinomycin, semduramicin significantly (P < .05) improved weight gain against E. brunetti and E. tenella, lesion control against E. brunetti and E. maxima, and the control of coccidiosis mortality against E. tenella. Salinomycin was superior (P < .05) to all treatments in maintenance of weight gain and control of lesions for E. acervulina. Maduramicin at 5 ppm was inferior (P < .05) to semduramicin in control of E. acervulina and E. maxima lesions, but was superior (P < .05) to all treatments in maintenance of weight gain and control of lesions in E. tenella infections. The data indicate that semduramicin at 25 ppm is well tolerated in broilers and possesses broad spectrum anticoccidial activity. PMID:8265495

  17. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to... emissions and cycle statistics the same as for transient testing as specified in 40 CFR part 1065, subpart G... 6 Intermediate test 10 0.10 7 Warm idle 0 0.15 1 Speed terms are defined in 40 CFR part 1065. 2...

  18. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-state test according to this section after an appropriate warm-up period, consistent with 40 CFR part... idle mode, operate the engine at its warm idle speed as described in 40 CFR part 1065. (d) For constant... in 40 CFR 1065.514 to confirm that the test is valid. Operate the engine and sampling system...

  19. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-state test according to this section after an appropriate warm-up period, consistent with 40 CFR part... idle mode, operate the engine at its warm idle speed as described in 40 CFR part 1065. (d) For constant... in 40 CFR 1065.514 to confirm that the test is valid. Operate the engine and sampling system...

  20. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to... emissions and cycle statistics the same as for transient testing as specified in 40 CFR part 1065, subpart G... 6 Intermediate test 10 0.10 7 Warm idle 0 0.15 1 Speed terms are defined in 40 CFR part 1065. 2...

  1. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to... emissions and cycle statistics the same as for transient testing as specified in 40 CFR part 1065, subpart G... 6 Intermediate test 10 0.10 7 Warm idle 0 0.15 1 Speed terms are defined in 40 CFR part 1065. 2...

  2. Homogenisation algorithm skill testing with synthetic global benchmarks for the International Surface Temperature Initiative

    NASA Astrophysics Data System (ADS)

    Willet, Katherine; Venema, Victor; Williams, Claude; Aguilar, Enric; Joliffe, Ian; Alexander, Lisa; Vincent, Lucie; Lund, Robert; Menne, Matt; Thorne, Peter; Auchmann, Renate; Warren, Rachel; Bronniman, Stefan; Thorarinsdotir, Thordis; Easterbrook, Steve; Gallagher, Colin; Lopardo, Giuseppina; Hausfather, Zeke; Berry, David

    2015-04-01

    Our surface temperature data are good enough to give us confidence that the world has warmed since 1880. However, they are not perfect - we cannot be precise in the amount of warming for the globe and especially for small regions or specific locations. Inhomogeneity (non-climate changes to the station record) is a major problem. While progress in detection of, and adjustment for, inhomogeneities is continually advancing, monitoring effectiveness on large networks and gauging respective improvements in climate data quality is non-trivial. There is currently no internationally recognised means of robustly assessing the effectiveness of homogenisation methods on real data - and thus, the inhomogeneity uncertainty in those data. Here I present the work of the International Surface Temperature Initiative (ISTI; www.surfacetemperatures.org) Benchmarking working group. The aim is to quantify homogenisation algorithm skill on the global scale against realistic benchmarks. This involves the creation of synthetic worlds of surface temperature data, deliberate contamination of these with known errors and then assessment of the ability of homogenisation algorithms to detect and remove these errors. The ultimate aim is threefold: quantifying uncertainties in surface temperature data; enabling more meaningful product intercomparison; and improving homogenisation methods. There are five components to this work: 1. Create 30000 synthetic benchmark stations that look and feel like the real global temperature network, but do not contain any inhomogeneities: analog clean-worlds. 2. Design a set of error models which mimic the main types of inhomogeneities found in practice, and combine them with the analog clean-worlds to give analog error-worlds. 3. Engage with dataset creators to run their homogenisation algorithms blind on the analog error-world stations as they have done with the real data. 4. Design an assessment framework to gauge the degree to which analog error-worlds are returned to
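
    A minimal sketch of the benchmarking idea described above (start from a clean "analog world", contaminate it with a known break, then ask a detector to recover it). The series lengths, break size, and the simple cumulative-deviation detector are illustrative assumptions; this is not the ISTI benchmark code.

```python
# Toy benchmark: clean synthetic series -> add a known break -> try to recover it.
import numpy as np

rng = np.random.default_rng(42)

def clean_station(n_months=600, trend_per_decade=0.1):
    """Homogeneous monthly anomaly series: small trend + seasonal cycle + noise."""
    t = np.arange(n_months)
    trend = trend_per_decade * t / 120.0
    season = 0.5 * np.sin(2 * np.pi * t / 12.0)
    return trend + season + rng.normal(0.0, 0.3, n_months)

def add_break(series, position, size):
    """Create an 'error world' by inserting a step inhomogeneity."""
    corrupted = series.copy()
    corrupted[position:] += size
    return corrupted

def detect_break(candidate, reference):
    """Crude detector: largest excursion of the cumulative mean-adjusted difference."""
    diff = candidate - reference
    cusum = np.cumsum(diff - diff.mean())
    return int(np.argmax(np.abs(cusum)))

truth = clean_station()
reference = clean_station()                 # a nearby homogeneous neighbour
corrupted = add_break(truth, position=350, size=0.8)

estimate = detect_break(corrupted, reference)
print(f"inserted break at month 350, detector places it at month {estimate}")
```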

  3. Development of model-based fault diagnosis algorithms for MASCOTTE cryogenic test bench

    NASA Astrophysics Data System (ADS)

    Iannetti, A.; Marzat, J.; Piet-Lahanier, H.; Ordonneau, G.; Vingert, L.

    2014-12-01

    This article describes the ongoing results of a fault diagnosis benchmark for a cryogenic rocket engine demonstrator. The benchmark consists of the use of classical model-based fault diagnosis methods to monitor the status of the cooling circuit of the MASCOTTE cryogenic bench. The algorithms developed are validated on real data from the last 2014 firing campaign (ATAC campaign). The objective of this demonstration is to find practical diagnosis alternatives to classical redlines, providing more flexible means of data exploitation in real time and for post-processing.

  4. Modified agar dilution susceptibility testing method for determining in vitro activities of antifungal agents, including azole compounds.

    PubMed Central

    Yoshida, T; Jono, K; Okonogi, K

    1997-01-01

    In vitro activities of antifungal agents, including azole compounds, against yeasts were easily determined by using RPMI-1640 agar medium and by incubating the plates in the presence of 20% CO2. The end point of inhibition was clear with this method, even in the case of azole compounds, because yeast growth was almost completely inhibited at high concentrations, whereas traditional methods permitted weak growth of some Candida strains. MICs obtained by the agar dilution method were similar to those obtained by the broth dilution method proposed by the National Committee for Clinical Laboratory Standards. PMID:9174197

  5. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

    DOE PAGES Beta

    Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

    2016-07-20

    A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.

  6. Optimizing tuning masses for helicopter rotor blade vibration reduction including computed airloads and comparison with test data

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.; Walsh, Joanne L.; Wilbur, Matthew L.

    1992-01-01

    The development and validation of an optimization procedure to systematically place tuning masses along a rotor blade span to minimize vibratory loads are described. The masses and their corresponding locations are the design variables that are manipulated to reduce the harmonics of hub shear for a four-bladed rotor system without adding a large mass penalty. The procedure incorporates a comprehensive helicopter analysis to calculate the airloads. Predicting changes in airloads due to changes in design variables is an important feature of this research. The procedure was applied to a one-sixth, Mach-scaled rotor blade model to place three masses and then again to place six masses. In both cases the added mass was able to achieve significant reductions in the hub shear. In addition, the procedure was applied to place a single mass of fixed value on a blade model to reduce the hub shear for three flight conditions. The analytical results were compared to experimental data from a wind tunnel test performed in the Langley Transonic Dynamics Tunnel. The correlation of the mass location was good and the trend of the mass location with respect to flight speed was predicted fairly well. However, it was noted that the analysis was not entirely successful at predicting the absolute magnitudes of the fixed system loads.

  7. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

    NASA Astrophysics Data System (ADS)

    Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

    2016-07-01

    A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.
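
    A minimal sketch (not the authors' code) of scoring a model-minus-observation field with a first-order Gaussian Markov random field: the precision matrix encodes neighbour dependencies and the quadratic form plays the role of the test statistic. The grid size, precision parameters, and the random "discrepancy" field are illustrative assumptions.

```python
# Quadratic-form statistic under a first-order GMRF on a regular grid.
import numpy as np

def gmrf_precision(nrows, ncols, kappa=1.0, tau=1.0):
    """Precision matrix for a first-order neighbourhood structure on a grid."""
    n = nrows * ncols
    Q = np.zeros((n, n))
    for r in range(nrows):
        for c in range(ncols):
            i = r * ncols + c
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            valid = [(rr, cc) for rr, cc in neighbours if 0 <= rr < nrows and 0 <= cc < ncols]
            Q[i, i] = tau * (kappa + len(valid))       # diagonally dominant -> positive definite
            for rr, cc in valid:
                Q[i, rr * ncols + cc] = -tau
    return Q

nrows, ncols = 8, 12
Q = gmrf_precision(nrows, ncols)

rng = np.random.default_rng(0)
discrepancy = rng.normal(0.0, 0.5, nrows * ncols)      # model minus observations, flattened

# Larger values flag fields that are unlikely under the assumed dependency structure.
statistic = float(discrepancy @ Q @ discrepancy)
print(f"GMRF quadratic-form statistic: {statistic:.2f}")
```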

  8. Inventory of forest and rangeland resources, including forest stress. [Atlanta, Georgia, Black Hills, and Manitou, Colorado test sites

    NASA Technical Reports Server (NTRS)

    Heller, R. C.; Aldrich, R. C.; Weber, F. P.; Driscoll, R. S. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Some current beetle-killed ponderosa pine can be detected on S190-B photography imaged over the Bear Lodge mountains in the Black Hills National Forest. Detections were made on SL-3 imagery (September 13, 1973) using a zoom lens microscope to view the photography. At this time correlations have not been made to all of the known infestation spots in the Bear Lodge mountains; rather, known infestations have been located on the SL-3 imagery. It was determined that the beetle-killed trees were current kills by stereo viewing of SL-3 imagery on one side and SL-2 on the other. A successful technique was developed for mapping current beetle-killed pine using MSS imagery from mission 247 flown by the C-130 over the Black Hills test site in September 1973. Color enhancement processing on the NASA/JSC, DAS system using three MSS channels produced an excellent quality detection map for current kill pine. More importantly it provides a way to inventory the dead trees by relating PCM counts to actual numbers of dead trees.

  9. Corrective Action Decision Document for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada: Revision 0, Including Errata Sheet

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2004-04-01

    This Corrective Action Decision Document identifies the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's corrective action alternative recommendation for each of the corrective action sites (CASs) within Corrective Action Unit (CAU) 204: Storage Bunkers, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. An evaluation of analytical data from the corrective action investigation, review of current and future operations at each CAS, and a detailed comparative analysis of potential corrective action alternatives were used to determine the appropriate corrective action for each CAS. There are six CASs in CAU 204, which are all located between Areas 1, 2, 3, and 5 on the NTS. The No Further Action alternative was recommended for CASs 01-34-01, 02-34-01, 03-34-01, and 05-99-02; and a Closure in Place with Administrative Controls recommendation was the preferred corrective action for CASs 05-18-02 and 05-33-01. These alternatives were judged to meet all requirements for the technical components evaluated as well as applicable state and federal regulations for closure of the sites and will eliminate potential future exposure pathways to the contaminated media at CAU 204.

  10. Using Lagrangian-based process studies to test satellite algorithms of vertical carbon flux in the eastern North Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Stukel, M. R.; Kahru, M.; Benitez-Nelson, C. R.; Décima, M.; Goericke, R.; Landry, M. R.; Ohman, M. D.

    2015-11-01

    The biological carbon pump is responsible for the transport of ˜5-20 Pg C yr-1 from the surface into the deep ocean but its variability is poorly understood due to an incomplete mechanistic understanding of the complex underlying planktonic processes. In fact, algorithms designed to estimate carbon export from satellite products incorporate fundamentally different assumptions about the relationships between plankton biomass, productivity, and export efficiency. To test the alternate formulations of export efficiency in remote-sensing algorithms formulated by Dunne et al. (2005), Laws et al. (2011), Henson et al. (2011), and Siegel et al. (2014), we have compiled in situ measurements (temperature, chlorophyll, primary production, phytoplankton biomass and size structure, grazing rates, net chlorophyll change, and carbon export) made during Lagrangian process studies on seven cruises in the California Current Ecosystem and Costa Rica Dome. A food-web based approach formulated by Siegel et al. (2014) performs as well or better than other empirical formulations, while simultaneously providing reasonable estimates of protozoan and mesozooplankton grazing rates. By tuning the Siegel et al. (2014) algorithm to match in situ grazing rates more accurately, we also obtain better in situ carbon export measurements. Adequate representations of food-web relationships and grazing dynamics are therefore crucial to improving the accuracy of export predictions made from satellite-derived products. Nevertheless, considerable unexplained variance in export remains and must be explored before we can reliably use remote sensing products to assess the impact of climate change on biologically mediated carbon sequestration.

  11. Characterizing and hindcasting ripple bedform dynamics: Field test of non-equilibrium models utilizing a fingerprint algorithm

    NASA Astrophysics Data System (ADS)

    DuVal, Carter B.; Trembanis, Arthur C.; Skarke, Adam

    2016-03-01

    Ripple bedform response to near bed forcing has been found to be asynchronous with rapidly changing hydrodynamic conditions. Recent models have attempted, with varying success, to account for this time variance through the introduction of a time offset between hydrodynamic forcing and seabed response. While focusing on temporal ripple evolution, spatial ripple variation has been partly neglected. With the fingerprint algorithm ripple bedform parameterization technique, spatial variation can be quickly and precisely characterized, and as such, this method is particularly useful for evaluation of ripple model spatio-temporal validity. Using time-series hydrodynamic data and synoptic acoustic imagery collected at an inner continental shelf site, this study compares an adapted time-varying ripple geometric model to field observations in light of the fingerprint algorithm results. Multiple equilibrium ripple predictors are tested within the time-varying model, with the algorithm results serving as the baseline geometric values. Results indicate that ripple bedforms, in the presence of rapidly changing high-energy conditions, reorganize at a slower rate than predicted by the models. Relict ripples were found to be near peak-forcing wavelengths after rapidly decaying storm events, and still present after months of sub-critical flow conditions.

  12. Test of multiscaling in a diffusion-limited-aggregation model using an off-lattice killing-free algorithm

    NASA Astrophysics Data System (ADS)

    Menshutin, Anton Yu.; Shchur, Lev N.

    2006-01-01

    We test the multiscaling issue of diffusion-limited-aggregation (DLA) clusters using a modified algorithm. This algorithm eliminates killing the particles at the death circle. Instead, we return them to the birth circle at a random relative angle taken from the evaluated distribution. In addition, we use a two-level hierarchical memory model that allows using large steps in conjunction with an off-lattice realization of the model. Our algorithm still seems to stay in the framework of the original DLA model. We present an accurate estimate of the fractal dimensions based on the data for a hundred clusters with 50 million particles each. We find that multiscaling cannot be ruled out. We also find that the fractal dimension is a weak self-averaging quantity. In addition, the fractal dimension, if calculated using the harmonic measure, is a nonmonotonic function of the cluster radius. We argue that the controversies in the data interpretation can be due to the weak self-averaging and the influence of intrinsic noise.
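
    A toy off-lattice DLA sketch illustrating the "killing-free" idea described above: a walker that wanders beyond the outer circle is not discarded but returned to the birth circle. The radii, step size, and particle count are small illustrative assumptions, the return angle is drawn uniformly rather than from the evaluated distribution used in the paper, and the hierarchical memory model and large-step optimization are not reproduced.

```python
# Naive (deliberately small and slow) off-lattice DLA with walker re-injection.
import numpy as np

rng = np.random.default_rng(1)
PARTICLE_RADIUS = 1.0
STEP = 1.0

cluster = np.zeros((1, 2))            # seed particle at the origin
cluster_radius = 0.0

def point_on_circle(radius):
    angle = rng.uniform(0.0, 2.0 * np.pi)
    return radius * np.array([np.cos(angle), np.sin(angle)])

def touches_cluster(pos):
    return np.min(np.linalg.norm(cluster - pos, axis=1)) <= 2.0 * PARTICLE_RADIUS

N_PARTICLES = 100                     # kept small: the O(N) sticking check is slow
for _ in range(N_PARTICLES):
    birth_radius = cluster_radius + 5.0
    outer_radius = birth_radius + 20.0
    pos = point_on_circle(birth_radius)
    while True:
        pos = pos + point_on_circle(STEP)          # one off-lattice random-walk step
        if np.linalg.norm(pos) > outer_radius:
            # Killing-free rule: re-inject on the birth circle instead of discarding.
            pos = point_on_circle(birth_radius)
            continue
        if touches_cluster(pos):
            cluster = np.vstack([cluster, pos])
            cluster_radius = max(cluster_radius, float(np.linalg.norm(pos)))
            break

print(f"grew a cluster of {len(cluster)} particles, radius ~ {cluster_radius:.1f}")
```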

  13. Evaluation of different confirmatory algorithms using seven treponemal tests on Architect Syphilis TP-positive/RPR-negative sera.

    PubMed

    Jonckheere, S; Berth, M; Van Esbroeck, M; Blomme, S; Lagrou, K; Padalko, E

    2015-10-01

    The Architect Syphilis TP is considered to be a suitable screening test due to its high sensitivity and full automation. According to the International Union against Sexually Transmitted Infections (IUSTI) 2014 guidelines, however, positive screening tests need confirmation with Treponema pallidum particle agglutination (TP.PA). Among Architect-positive results, samples with a negative non-treponemal test present the major diagnostic challenge. In this multicenter study, we investigated whether other, preferably less labor-intensive treponemal tests could replace TP.PA. A total of 178 rapid plasma reagin (RPR)-negative sera with an Architect value between 1 and 15 S/CO were prospectively selected in three centers. These sera were analyzed with TP.PA and six alternative treponemal tests: three immunoblots and three tests on random-access analyzers. The diagnostic performance of the treponemal tests differed substantially, with the overall agreement between the six alternative tests ranging from 44.6 to 82.0%. Based on TP.PA as the gold standard, the INNO-LIA IgG blot, the BioPlex 2200 IgG, and the Syphilis TPA showed a high sensitivity, while the EUROLINE-WB IgG blot, recomLine Treponema IgG blot, and the Chorus Syphilis screen showed a high specificity. However, an Architect cut-off of 5.6 S/CO can serve as an alternative for these confirmatory treponemal tests in case of an RPR-negative result. Treponemal tests show poor agreement in this challenging group of Architect-positive/RPR-negative sera. The optimal algorithm is obtained by assigning sera with an Architect value >5.6 S/CO as true-positives and sera with a value between 1 and 5.6 S/CO as undetermined, requiring further testing with TP.PA. PMID:26187433
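
    A minimal sketch of the triage rule suggested by the abstract for Architect-positive/RPR-negative sera: values above 5.6 S/CO are treated as true-positive, and values between 1 and 5.6 S/CO are referred for TP.PA. It illustrates the published cut-off only and is not a clinical decision tool.

```python
# Triage of Architect Syphilis TP results when the RPR is negative.
def classify_architect_rpr_negative(architect_s_co: float) -> str:
    if architect_s_co > 5.6:
        return "treponemal antibodies present (treated as true-positive)"
    if 1.0 <= architect_s_co <= 5.6:
        return "undetermined - confirm with TP.PA"
    return "not applicable (Architect screen negative)"

for value in (0.4, 2.3, 8.1):
    print(f"{value} S/CO -> {classify_architect_rpr_negative(value)}")
```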

  14. Testing a discrete choice experiment including duration to value health states for large descriptive systems: Addressing design and sampling issues

    PubMed Central

    Bansback, Nick; Hole, Arne Risa; Mulhern, Brendan; Tsuchiya, Aki

    2014-01-01

    There is interest in the use of discrete choice experiments that include a duration attribute (DCETTO) to generate health utility values, but questions remain on its feasibility in large health state descriptive systems. This study examines the stability of DCETTO to estimate health utility values from the five-level EQ-5D, an instrument with depicts 3125 different health states. Between January and March 2011, we administered 120 DCETTO tasks based on the five-level EQ-5D to a total of 1799 respondents in the UK (each completed 15 DCETTO tasks on-line). We compared models across different sample sizes and different total numbers of observations. We found the DCETTO coefficients were generally consistent, with high agreement between individual ordinal preferences and aggregate cardinal values. Keeping the DCE design and the total number of observations fixed, subsamples consisting of 10 tasks per respondent with an intermediate sized sample, and 15 tasks with a smaller sample provide similar results in comparison to the whole sample model. In conclusion, we find that the DCETTO is a feasible method for developing values for larger descriptive systems such as EQ-5D-5L, and find evidence supporting important design features for future valuation studies that use the DCETTO. PMID:24908173

  15. Space shuttle orbiter avionics software: Post review report for the entry FACI (First Article Configuration Inspection). [including orbital flight tests integrated system

    NASA Technical Reports Server (NTRS)

    Markos, H.

    1978-01-01

    Status of the computer programs dealing with space shuttle orbiter avionics is reported. Specific topics covered include: delivery status; SSW software; SM software; DL software; GNC software; level 3/4 testing; level 5 testing; performance analysis, SDL readiness for entry first article configuration inspection; and verification assessment.

  16. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8 and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier
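
    A back-of-the-envelope sketch of the kind of significance estimate quoted above: if alarms occupy a fraction p of the normalized space-time volume and k of n target earthquakes fall inside them, the chance of doing at least as well by random guessing is a binomial tail. This reproduces the flavour of the test, not the authors' exact measure.

```python
# Binomial-tail p-value for "k of n events fell inside alarms of measure p".
from math import comb

def random_guess_p_value(n_events, n_hits, alarm_fraction):
    """P(at least n_hits of n_events land in alarms of measure alarm_fraction)."""
    return sum(
        comb(n_events, k) * alarm_fraction**k * (1.0 - alarm_fraction)**(n_events - k)
        for k in range(n_hits, n_events + 1)
    )

# Magnitude 8+ case from the abstract: 5 of 5 events inside M8 alarms covering ~36%.
p = random_guess_p_value(n_events=5, n_hits=5, alarm_fraction=0.36)
print(f"random-guess p-value: {p:.4f}, significance ~{100 * (1 - p):.1f}%")
```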

  17. Testing the robustness of the genetic algorithm on the floating building block representation

    SciTech Connect

    Lindsay, R.K.; Wu, A.S.

    1996-12-31

    Recent studies on a floating building block representation for the genetic algorithm (GA) suggest that there are many advantages to using the floating representation. This paper investigates the behavior of the GA on floating representation problems in response to three different types of pressures: (1) a reduction in the amount of genetic material available to the GA during the problem solving process, (2) functions which have negative-valued building blocks, and (3) randomizing non-coding segments. Results indicate that the GA's performance on floating representation problems is very robust. Significant reductions in genetic material (genome length) may be made with a relatively small decrease in performance. The GA can effectively solve problems with negative building blocks. Randomizing non-coding segments appears to improve rather than harm GA performance.

  18. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  19. Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz

    2009-01-01

    This slide presentation reviews the approach to the development of the Global Precipitation Measurement algorithm. This presentation includes information about the responsibilities for the development of the algorithm, and the calibration. Also included is information about the orbit, and the sun angle. The test of the algorithm code will be done with synthetic data generated from the Precipitation Processing System (PPS).

  20. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... are defined in 40 CFR part 1065. 2 The percent torque is relative to the maximum torque at the given...-modal cycles described in 40 CFR Part 1065. (b) Measure emissions by testing the engine on a dynamometer... Steady-state 124 Warm idle 0 1 Speed terms are defined in 40 CFR part 1065. 2 Advance from one mode...

  1. SU-E-T-347: Validation of the Condensed History Algorithm of Geant4 Using the Fano Test

    SciTech Connect

    Lee, H; Mathis, M; Sawakuchi, G

    2014-06-01

    Purpose: To validate the condensed history algorithm and physics of the Geant4 Monte Carlo toolkit for simulations of ionization chambers (ICs). This study is the first step to validate Geant4 for calculations of photon beam quality correction factors under the presence of a strong magnetic field for magnetic resonance guided linac system applications. Methods: The electron transport and boundary crossing algorithms of Geant4 version 9.6.p02 were tested under Fano conditions using the Geant4 example/application FanoCavity. User-defined parameters of the condensed history and multiple scattering algorithms were investigated under Fano test conditions for three scattering models (physics lists): G4UrbanMscModel95 (PhysListEmStandard-option3), G4GoudsmitSaundersonMsc (PhysListEmStandard-GS), and G4WentzelVIModel/G4CoulombScattering (PhysListEmStandard-WVI). Simulations were conducted using monoenergetic photon beams, ranging from 0.5 to 7 MeV and emphasizing energies from 0.8 to 3 MeV. Results: The GS and WVI physics lists provided consistent Fano test results (within ±0.5%) for maximum step sizes under 0.01 mm at 1.25 MeV, with improved performance at 3 MeV (within ±0.25%). The option3 physics list provided consistent Fano test results (within ±0.5%) for maximum step sizes above 1 mm. Optimal parameters for the option3 physics list were 10 km maximum step size with default values for other user-defined parameters: 0.2 dRoverRange, 0.01 mm final range, 0.04 range factor, 2.5 geometrical factor, and 1 skin. Simulations using the option3 physics list were ∼70 – 100 times faster compared to GS and WVI under optimal parameters. Conclusion: This work indicated that the option3 physics list passes the Fano test within ±0.5% when using a maximum step size of 10 km for energies suitable for IC calculations in a 6 MV spectrum without extensive computational times. Optimal user-defined parameters using the option3 physics list will be used in future IC simulations to

  2. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 6 Intermediate test 10 0.10 7 Warm idle 0 0.15 1 Speed terms are defined in 40 CFR part 1065. 2 The... engine at its warm idle speed as described in 40 CFR 1065.510. (e) For full-load operating modes, operate.... Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514...

  3. Tests of Five Full-Scale Propellers in the Presence of a Radial and a Liquid-Cooled Engine Nacelle, Including Tests of Two Spinners

    NASA Technical Reports Server (NTRS)

    Biermann, David; Hartman, Edwin P

    1938-01-01

    Wind-tunnel tests are reported of five 3-blade 10-foot propellers operating in front of a radial and a liquid-cooled engine nacelle. The range of blade angles investigated extended from 15 degrees to 45 degrees. Two spinners were tested in conjunction with the liquid-cooled engine nacelle. Comparisons are made between propellers having different blade-shank shapes, blades of different thickness, and different airfoil sections. The results show that propellers operating in front of the liquid-cooled engine nacelle had higher take-off efficiencies than when operating in front of the radial engine nacelle; the peak efficiency was higher only when spinners were employed. One spinner increased the propulsive efficiency of the liquid-cooled unit 6 percent for the highest blade-angle setting investigated and less for lower blade angles. The propeller having airfoil sections extending into the hub was superior to one having round blade shanks. The thick propeller having a Clark y section had a higher take-off efficiency than the thinner one, but its maximum efficiency was possibly lower. Of the three blade sections tested, Clark y, R.A.F. 6, and NACA 2400-34, the Clark y was superior for the high-speed condition, but the R.A.F. 6 excelled for the take-off condition.

  4. Parasitological diagnosis combining an internally controlled real-time PCR assay for the detection of four protozoa in stool samples with a testing algorithm for microscopy.

    PubMed

    Bruijnesteijn van Coppenraet, L E S; Wallinga, J A; Ruijs, G J H M; Bruins, M J; Verweij, J J

    2009-09-01

    Molecular detection of gastrointestinal protozoa is more sensitive and more specific than microscopy but, to date, has not routinely replaced time-consuming microscopic analysis. Two internally controlled real-time PCR assays for the combined detection of Entamoeba histolytica, Giardia lamblia, Cryptosporidium spp. and Dientamoeba fragilis in single faecal samples were compared with Triple Faeces Test (TFT) microscopy results from 397 patient samples. Additionally, an algorithm for complete parasitological diagnosis was created. Real-time PCR revealed 152 (38.3%) positive cases, 18 of which were double infections: one (0.3%) sample was positive for E. histolytica, 44 (11.1%) samples were positive for G. lamblia, 122 (30.7%) samples were positive for D. fragilis, and three (0.8%) samples were positive for Cryptosporidium. TFT microscopy yielded 96 (24.2%) positive cases, including five double infections: one sample was positive for E. histolytica/Entamoeba dispar, 29 (7.3%) samples were positive for G. lamblia, 69 (17.4%) samples were positive for D. fragilis, and two (0.5%) samples were positive for Cryptosporidium hominis/Cryptosporidium parvum. Retrospective analysis of the clinical patient information of 2887 TFT sets showed that eosinophilia, elevated IgE levels, adoption and travelling to (sub)tropical areas are predisposing factors for infection with non-protozoal gastrointestinal parasites. The proposed diagnostic algorithm includes application of real-time PCR to all samples, with the addition of microscopy on an unpreserved faecal sample in cases of a predisposing factor, or a repeat request for parasitological examination. Application of real-time PCR improved the diagnostic yield by 18%. A single stool sample is sufficient for complete parasitological diagnosis when an algorithm based on clinical information is applied. PMID:19624500
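
    A sketch of the proposed diagnostic algorithm as summarized above: real-time PCR on every sample, with microscopy of an unpreserved stool sample added only when a predisposing factor is present or parasitological examination is requested again. The record fields are illustrative assumptions.

```python
# Which tests to run for a stool sample under the algorithm described above.
from dataclasses import dataclass

@dataclass
class StoolRequest:
    predisposing_factor: bool   # eosinophilia, raised IgE, adoption, (sub)tropical travel
    repeat_request: bool        # repeat request for parasitological examination

def tests_to_perform(request: StoolRequest) -> list[str]:
    tests = ["multiplex real-time PCR (E. histolytica, G. lamblia, Cryptosporidium, D. fragilis)"]
    if request.predisposing_factor or request.repeat_request:
        tests.append("microscopy of unpreserved faecal sample")
    return tests

print(tests_to_perform(StoolRequest(predisposing_factor=False, repeat_request=False)))
print(tests_to_perform(StoolRequest(predisposing_factor=True, repeat_request=False)))
```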

  5. The feasibility test of state-of-the-art face detection algorithms for vehicle occupant detection

    NASA Astrophysics Data System (ADS)

    Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian

    2010-01-01

    Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost imposed by the replacement of airbags. Occupancy detection can also improve passenger comfort, e.g. by activating air-conditioning systems. The most promising development perspectives are seen in optical sensing systems, which have become cheaper and smaller in recent years. The most plausible way to check seat occupancy is to detect the presence and location of occupants' heads or, more precisely, faces. This paper compares the detection performances of the three most commonly used and widely available face detection algorithms: Viola-Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames containing proprietary data collected from 39 persons of both sexes and different ages and body heights as well as different objects such as bags and rearward/forward facing child restraint systems.
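
    A hedged sketch of running an off-the-shelf Viola-Jones cascade (as shipped with OpenCV) over a single frame, the kind of baseline compared in the study above. The image path is a placeholder and the detector parameters are illustrative; the paper's own evaluation used a proprietary in-vehicle video database.

```python
# Viola-Jones-style face detection on one frame using OpenCV's bundled cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

frame = cv2.imread("cabin_frame.png")          # placeholder file name
assert frame is not None, "replace cabin_frame.png with a real image"

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)                  # mild help under uneven illumination

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(40, 40))
print(f"{len(faces)} face(s) detected")        # each detection is (x, y, w, h)
```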

  6. Test and evaluation of the HIDEC engine uptrim algorithm. [Highly Integrated Digital Electronic Control for aircraft

    NASA Technical Reports Server (NTRS)

    Ray, R. J.; Myers, L. P.

    1986-01-01

    The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrated engine-airframe control systems. Performance improvements will result from an adaptive engine stall margin mode, a highly integrated mode that uses the airplane flight conditions and the resulting inlet distortion to continuously compute engine stall margin. When there is excessive stall margin, the engine is uptrimmed for more thrust by increasing engine pressure ratio (EPR). The EPR uptrim logic has been evaluated and implemented in computer simulations. Thrust improvements of over 10 percent are predicted for subsonic flight conditions. The EPR uptrim was successfully demonstrated during engine ground tests. Test results verify model predictions at the conditions tested.
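
    A conceptual sketch of the uptrim idea described above: estimate the available stall margin from the flight condition and inlet distortion, and raise the EPR command only when margin exceeds the amount that must be held in reserve. All numbers and the linear margin model are assumptions for illustration; they are not the HIDEC control laws.

```python
# Simplified stall-margin bookkeeping and bounded EPR uptrim.
def excess_stall_margin(baseline_margin, distortion_penalty, required_margin):
    """Margin (%) left over after subtracting the distortion penalty and the reserve."""
    return baseline_margin - distortion_penalty - required_margin

def epr_uptrim_command(nominal_epr, excess_margin, gain=0.02, max_uptrim=0.08):
    """Convert excess margin into a bounded relative EPR increase."""
    if excess_margin <= 0.0:
        return nominal_epr
    return nominal_epr * (1.0 + min(gain * excess_margin, max_uptrim))

# Example: a subsonic cruise point with modest inlet distortion (illustrative values).
excess = excess_stall_margin(baseline_margin=25.0, distortion_penalty=6.0, required_margin=12.0)
print(f"excess margin {excess:.1f}% -> EPR command {epr_uptrim_command(3.0, excess):.3f}")
```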

  7. Genetic Algorithm Based Multi-Agent System Applied to Test Generation

    ERIC Educational Resources Information Center

    Meng, Anbo; Ye, Luqing; Roy, Daniel; Padilla, Pierre

    2007-01-01

    Automatic test generating system in distributed computing context is one of the most important links in on-line evaluation system. Although the issue has been argued long since, there is not a perfect solution to it so far. This paper proposed an innovative approach to successfully addressing such issue by the seamless integration of genetic…

  8. Evaluation of Carbapenemase Screening and Confirmation Tests with Enterobacteriaceae and Development of a Practical Diagnostic Algorithm

    PubMed Central

    Maurer, Florian P.; Castelberg, Claudio; Quiblier, Chantal; Bloemberg, Guido V.

    2014-01-01

    Reliable identification of carbapenemase-producing members of the family Enterobacteriaceae is necessary to limit their spread. This study aimed to develop a diagnostic flow chart using phenotypic screening and confirmation tests that is suitable for implementation in different types of clinical laboratories. A total of 334 clinical Enterobacteriaceae isolates genetically characterized with respect to carbapenemase, extended-spectrum β-lactamase (ESBL), and AmpC genes were analyzed. A total of 142/334 isolates (42.2%) were suspected of carbapenemase production, i.e., intermediate or resistant to ertapenem (ETP) and/or meropenem (MEM) and/or imipenem (IPM) according to EUCAST clinical breakpoints (CBPs). A group of 193/334 isolates (57.8%) showing susceptibility to ETP, MEM, and IPM was considered the negative-control group in this study. CLSI and EUCAST carbapenem CBPs and the new EUCAST MEM screening cutoff were evaluated as screening parameters. ETP, MEM, and IPM with or without aminophenylboronic acid (APBA) or EDTA combined-disk tests (CDTs) and the Carba NP-II test were evaluated as confirmation assays. EUCAST temocillin cutoffs were evaluated for OXA-48 detection. The EUCAST MEM screening cutoff (<25 mm) showed a sensitivity of 100%. The ETP APBA CDT on Mueller-Hinton agar containing cloxacillin (MH-CLX) displayed 100% sensitivity and specificity for class A carbapenemase confirmation. ETP and MEM EDTA CDTs showed 100% sensitivity and specificity for class B carbapenemases. Temocillin zone diameters/MIC testing on MH-CLX was highly specific for OXA-48 producers. The overall sensitivity, specificity, positive predictive value, and negative predictive value of the Carba NP-II test were 78.9, 100, 100, and 98.7%, respectively. Combining the EUCAST MEM carbapenemase screening cutoff (<25 mm), ETP (or MEM), APBA, and EDTA CDTs, and temocillin disk diffusion on MH-CLX promises excellent performance for carbapenemase detection. PMID:25355766

  9. Evaluation of carbapenemase screening and confirmation tests with Enterobacteriaceae and development of a practical diagnostic algorithm.

    PubMed

    Maurer, Florian P; Castelberg, Claudio; Quiblier, Chantal; Bloemberg, Guido V; Hombach, Michael

    2015-01-01

    Reliable identification of carbapenemase-producing members of the family Enterobacteriaceae is necessary to limit their spread. This study aimed to develop a diagnostic flow chart using phenotypic screening and confirmation tests that is suitable for implementation in different types of clinical laboratories. A total of 334 clinical Enterobacteriaceae isolates genetically characterized with respect to carbapenemase, extended-spectrum β-lactamase (ESBL), and AmpC genes were analyzed. A total of 142/334 isolates (42.2%) were suspected of carbapenemase production, i.e., intermediate or resistant to ertapenem (ETP) and/or meropenem (MEM) and/or imipenem (IPM) according to EUCAST clinical breakpoints (CBPs). A group of 193/334 isolates (57.8%) showing susceptibility to ETP, MEM, and IPM was considered the negative-control group in this study. CLSI and EUCAST carbapenem CBPs and the new EUCAST MEM screening cutoff were evaluated as screening parameters. ETP, MEM, and IPM with or without aminophenylboronic acid (APBA) or EDTA combined-disk tests (CDTs) and the Carba NP-II test were evaluated as confirmation assays. EUCAST temocillin cutoffs were evaluated for OXA-48 detection. The EUCAST MEM screening cutoff (<25 mm) showed a sensitivity of 100%. The ETP APBA CDT on Mueller-Hinton agar containing cloxacillin (MH-CLX) displayed 100% sensitivity and specificity for class A carbapenemase confirmation. ETP and MEM EDTA CDTs showed 100% sensitivity and specificity for class B carbapenemases. Temocillin zone diameters/MIC testing on MH-CLX was highly specific for OXA-48 producers. The overall sensitivity, specificity, positive predictive value, and negative predictive value of the Carba NP-II test were 78.9, 100, 100, and 98.7%, respectively. Combining the EUCAST MEM carbapenemase screening cutoff (<25 mm), ETP (or MEM), APBA, and EDTA CDTs, and temocillin disk diffusion on MH-CLX promises excellent performance for carbapenemase detection. PMID:25355766
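
    A sketch of the confirmation flow suggested by the abstract: screen with the EUCAST meropenem cut-off (<25 mm), then interpret the combined-disk tests on MH-CLX and temocillin for OXA-48-like enzymes. Inputs other than the screening cut-off are simplified yes/no flags; this is an illustration, not a validated laboratory protocol.

```python
# Simplified carbapenemase work-up following the flow described in the abstract.
def carbapenemase_workup(mem_zone_mm, apba_cdt_positive, edta_cdt_positive,
                         temocillin_high_level_resistant):
    if mem_zone_mm >= 25:
        return "screen negative - no carbapenemase work-up"
    if apba_cdt_positive:
        return "class A carbapenemase suspected"
    if edta_cdt_positive:
        return "class B metallo-beta-lactamase suspected"
    if temocillin_high_level_resistant:
        return "OXA-48-like carbapenemase suspected"
    return "no carbapenemase confirmed - consider other resistance mechanisms"

print(carbapenemase_workup(22, apba_cdt_positive=False, edta_cdt_positive=True,
                           temocillin_high_level_resistant=False))
```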

  10. First results from the COST-HOME monthly benchmark dataset with temperature and precipitation data for testing homogenisation algorithms

    NASA Astrophysics Data System (ADS)

    Venema, Victor; Mestre, Olivier

    2010-05-01

    between the stations has been modelled as uncorrelated Gaussian white noise. The idealised dataset is valuable because its statistical characteristics are assumed in most homogenisation algorithms and Gaussian white noise is the signal most used for testing the algorithms. The surrogate and synthetic data represent homogeneous climate data. To this data known inhomogeneities are added: outliers, as well as break inhomogeneities and local trends. Furthermore, missing data is simulated and a global trend is added. The participants have returned around 25 contributions. Some fully automatic algorithms were applied, but most homogenisation methods need human input. For well-known algorithms, MASH, PRODIGE, SNHT, multiple contributions were returned. This allowed us to study the importance of the implementation and the operator for homogenisation, which was found to be an important factor. For more information on the COST Action on homogenisation see: http://www.homogenisation.org/ For more information on - and for downloading of - the benchmark dataset and the returned data see: http://www.meteo.uni-bonn.de/venema/themes/homogenisation/

  11. Integrated test rig for tether hardware, real-time simulator and control algorithms: Robust momentum transfer validated

    NASA Astrophysics Data System (ADS)

    Kruijff, Michiel; van der Heide, Erik Jan

    2001-02-01

    In preparation of the ESA demonstration mission for a tethered sample return capability from ISS, a breadboard test has been performed to validate the robust StarTrack tether dynamics control algorithms in conjunction with the constructed hardware. The proposed mission will use hardware inherited from the YES mission (Kruijff, 1999). A tether spool is holding a 7 kg, 35 km Dyneema tether. A 45 kg re-entry capsule will be ejected by springs and then deployed by gravity gradient. The dynamics are solely controlled by a barberpole type friction brake, similar to the SEDS hardware. This hardware is integrated in a test rig, based on the TMM&M stand, that has been upgraded to accommodate both a Space Part (abruptly applied initial tether deployment speed, fine tensiometer, real-time space tether simulator using the tensiometer measurements as input, take-up roller deploying the tether at a simulator-controlled speed) and a Satellite Part (infra-red beams inside the tether canister, control computer estimating deployed length and required extra braking from the IRED interrupts, `barberpole' friction brake). So the set-up allows for a tether deployment with closed loop control, all governed by a real-time comprehensive tether dynamics simulation. The tether deployment is based on the two-stage StarTrack deployment. This scheme stabilizes the tether at an intermediate vertical stage (with 3 km deployed). When the orbit and landing site have synchronized, a high-speed deployment follows to a large angle. When the fully deployed 35-km tether swings to the vertical at approximately 40 m/s, it is cut at a prefixed time optimized for landing site accuracy. The paper discusses the tests performed to characterize the designed hardware, maturing of the developed algorithms with respect to the hardware noise levels and the difficulties and limitations of the test rig. It is found that the set-up can be applied to a variety of tether pre-mission tests. It is shown that the performed

  12. Testing multistage gain and offset trimming in a single photon counting IC with a charge sharing elimination algorithm

    NASA Astrophysics Data System (ADS)

    Krzyżanowska, A.; Gryboś, P.; Szczygieł, R.; Maj, P.

    2015-12-01

    Designing a hybrid pixel detector readout electronics operating in a single photon counting mode is a very challenging process, where many main parameters are optimized in parallel (e.g. gain, noise, and threshold dispersion). Additional requirements for a smaller pixel size with extended functionality push designers to use new deep sub-micron technologies. Minimizing the channel size is possible; however, with a decreased pixel size, the charge sharing effect becomes a more important issue. To overcome this problem, we designed an integrated circuit prototype produced in CMOS 40 nm technology, which has an extended functionality of a single pixel. A C8P1 algorithm for the charge sharing effect compensation was implemented. In the algorithm's first stage the charge is rebuilt in a signal rebuilt hub fed by the CSA (charge sensitive amplifier) outputs from four neighbouring pixels. Then, the pixel with the biggest amount of charge is chosen, after a comparison with all the adjacent ones. In order to process the data in such a complicated way, a certain architecture of a single channel was proposed, which allows for: • processing the signal with the possibility of total charge reconstruction (by connecting with the adjacent pixels), • a comparison of certain pixel amplitude to its 8 neighbours, • the extended testability of each block inside the channel to measure CSA gain dispersion, shaper gain dispersion, threshold dispersion (including the simultaneous generation of different pulse amplitudes from different pixels), • trimming all the necessary blocks for proper operation. We present a solution for multistage gain and offset trimming implemented in the IC prototype. It allows for minimization of the total charge extraction errors, minimization of threshold dispersion in the pixel matrix and minimization of errors of comparison of certain pixel pulse amplitudes with all its neighbours. The detailed architecture of a single channel is presented together
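
    A toy illustration (not the C8P1 implementation) of the two ideas described above: sum the charge seen by a 2x2 group of neighbouring pixels to rebuild the shared charge, then attribute the hit to the pixel in that group with the largest individual share. The 4x4 charge map is synthetic.

```python
# Rebuild shared charge over 2x2 windows and assign the hit to the winning pixel.
import numpy as np

charge = np.array([
    [0.0, 0.1, 0.0, 0.0],
    [0.2, 2.6, 1.1, 0.0],
    [0.1, 0.9, 0.4, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])

def reconstruct_hit(charge_map):
    """Return (row, col, total_charge) for the best 2x2 charge-summing window."""
    best = None
    rows, cols = charge_map.shape
    for r in range(rows - 1):
        for c in range(cols - 1):
            window = charge_map[r:r + 2, c:c + 2]
            total = window.sum()
            if best is None or total > best[2]:
                # Winner pixel: largest individual charge within the window.
                local = np.unravel_index(np.argmax(window), window.shape)
                best = (r + local[0], c + local[1], total)
    return best

row, col, total = reconstruct_hit(charge)
print(f"hit assigned to pixel ({row}, {col}) with reconstructed charge {total:.2f}")
```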

  13. Advanced Transport Delay Compensation Algorithms: Results of Delay Measurement and Piloted Performance Tests

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    This report summarizes the results of delay measurement and piloted performance tests that were conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system in the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of two novel compensators in comparison to the McFarland predictor and the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is fairly superior for short delay and significantly superior for long delay than the McFarland compensator.
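
    A generic lead-prediction sketch of what a transport-delay compensator has to do: extrapolate the command signal forward by the known delay so the displayed image does not lag the pilot's input. This simple first-order (rate-based) predictor is for illustration only; it is not the McFarland, adaptive, or state-space compensator evaluated in the report, and the frame rate and delay values are assumptions.

```python
# First-order lead prediction across an assumed transport delay.
import numpy as np

def lead_predict(samples, dt, delay):
    """Extrapolate the latest sample forward by `delay` using a backward-difference rate."""
    rate = (samples[-1] - samples[-2]) / dt
    return samples[-1] + rate * delay

dt = 1.0 / 60.0                          # assumed 60 Hz visual update rate
t = np.arange(0, 1, dt)
signal = np.sin(2 * np.pi * 1.5 * t)     # stand-in for a pilot control input

delay = 0.096                            # assumed 96 ms transport delay
predicted = lead_predict(signal, dt, delay)
actual_future = np.sin(2 * np.pi * 1.5 * (t[-1] + delay))
print(f"predicted {predicted:.3f} vs. actual future value {actual_future:.3f}")
```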

  14. An in silico algorithm for identifying stabilizing pockets in proteins: test case, the Y220C mutant of the p53 tumor suppressor protein.

    PubMed

    Bromley, Dennis; Bauer, Matthias R; Fersht, Alan R; Daggett, Valerie

    2016-09-01

    The p53 tumor suppressor protein performs a critical role in stimulating apoptosis and cell cycle arrest in response to oncogenic stress. The function of p53 can be compromised by mutation, leading to increased risk of cancer; approximately 50% of cancers are associated with mutations in the p53 gene, the majority of which are in the core DNA-binding domain. The Y220C mutation of p53, for example, destabilizes the core domain by 4 kcal/mol, leading to rapid denaturation and aggregation. The associated loss of tumor suppressor functionality is associated with approximately 75 000 new cancer cases every year. Destabilized p53 mutants can be 'rescued' and their function restored; binding of a small molecule into a pocket on the surface of mutant p53 can stabilize its wild-type structure and restore its function. Here, we describe an in silico algorithm for identifying potential rescue pockets, including the algorithm's integration with the Dynameomics molecular dynamics data warehouse and the DIVE visual analytics engine. We discuss the results of the application of the method to the Y220C p53 mutant, entailing finding a putative rescue pocket through MD simulations followed by an in silico search for stabilizing ligands that dock into the putative rescue pocket. The top three compounds from this search were tested experimentally and one of them bound in the pocket, as shown by nuclear magnetic resonance, and weakly stabilized the mutant. PMID:27503952

  15. SIMULATING MAGNETOHYDRODYNAMICAL FLOW WITH CONSTRAINED TRANSPORT AND ADAPTIVE MESH REFINEMENT: ALGORITHMS AND TESTS OF THE AstroBEAR CODE

    SciTech Connect

    Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.

    2009-06-15

    A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code is demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.

  16. Improving the Response of a Rollover Sensor Placed in a Car under Performance Tests by Using a RLS Lattice Algorithm

    PubMed Central

    Hernandez, Wilmar

    2005-01-01

    In this paper, a sensor to measure the rollover angle of a car under performance tests is presented. Basically, the sensor consists of a dual-axis accelerometer, analog-electronic instrumentation stages, a data acquisition system and an adaptive filter based on a recursive least-squares (RLS) lattice algorithm. In short, the adaptive filter is used to improve the performance of the rollover sensor by carrying out an optimal prediction of the relevant signal coming from the sensor, which is buried in a broad-band noise background where we have little knowledge of the noise characteristics. The experimental results are satisfactory and show a significant improvement in the signal-to-noise ratio at the system output.
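
    A minimal recursive least-squares predictor in the spirit of the adaptive stage described above. The paper uses an RLS lattice form; a standard exponentially weighted transversal RLS filter is shown here for brevity, and the signal, noise level, and filter order are assumptions.

```python
# One-step RLS prediction of a noisy, slowly varying accelerometer-like signal.
import numpy as np

rng = np.random.default_rng(7)
n, order, lam = 2000, 8, 0.995
t = np.arange(n) / 200.0
clean = np.sin(2 * np.pi * 0.7 * t)                 # slow rollover-angle-like component
noisy = clean + rng.normal(0.0, 0.4, n)             # broad-band measurement noise

w = np.zeros(order)
P = np.eye(order) * 1000.0                          # large initial inverse correlation matrix
predictions = np.zeros(n)

for k in range(order, n):
    x = noisy[k - order:k][::-1]                    # most recent samples first
    predictions[k] = w @ x
    error = noisy[k] - predictions[k]
    Px = P @ x
    gain = Px / (lam + x @ Px)
    w = w + gain * error
    P = (P - np.outer(gain, Px)) / lam

residual = clean[order:] - predictions[order:]
print(f"RMS error vs. clean signal: {np.sqrt(np.mean(residual**2)):.3f} (noise std was 0.4)")
```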

  17. The remote sensing of ocean primary productivity - Use of a new data compilation to test satellite algorithms

    NASA Technical Reports Server (NTRS)

    Balch, William; Evans, Robert; Brown, Jim; Feldman, Gene; Mcclain, Charles; Esaias, Wayne

    1992-01-01

    Global pigment and primary productivity algorithms based on a new data compilation of over 12,000 stations occupied mostly in the Northern Hemisphere, from the late 1950s to 1988, were tested. The results showed high variability of the fraction of total pigment contributed by chlorophyll, which is required for subsequent predictions of primary productivity. Two models, which predict pigment concentration normalized to an attenuation length of euphotic depth, were checked against 2,800 vertical profiles of pigments. Phaeopigments consistently showed maxima at about one optical depth below the chlorophyll maxima. CZCS data coincident with the sea truth data were also checked. A regression of satellite-derived pigment vs ship-derived pigment had a coefficient of determination. The satellite underestimated the true pigment concentration in mesotrophic and oligotrophic waters and overestimated the pigment concentration in eutrophic waters. The error in the satellite estimate showed no trends with time between 1978 and 1986.

  18. Implementation and Operational Research: What Happens After a Negative Test for Tuberculosis? Evaluating Adherence to TB Diagnostic Algorithms in South African Primary Health Clinics

    PubMed Central

    Grant, A. D.; Chihota, V.; Ginindza, S.; Mvusi, L.; Churchyard, G. J.; Fielding, K.L.

    2016-01-01

    Introduction and Background: Diagnostic tests for tuberculosis (TB) using sputum have suboptimal sensitivity among HIV-positive persons. We assessed health care worker adherence to TB diagnostic algorithms after negative sputum test results. Methods: The XTEND (Xpert for TB—Evaluating a New Diagnostic) trial compared outcomes among people tested for TB in primary care clinics using Xpert MTB/RIF vs. smear microscopy as the initial test. We analyzed data from XTEND participants who were HIV positive or HIV status unknown, whose initial sputum Xpert MTB/RIF or microscopy result was negative. If chest radiography, sputum culture, or hospital referral took place, the algorithm for TB diagnosis was considered followed. Analysis of intervention (Xpert MTB/RIF) effect on algorithm adherence used methods for cluster-randomized trials with small number of clusters. Results: Among 4037 XTEND participants with initial negative test results, 2155 (53%) reported being or testing HIV positive and 540 (14%) had unknown HIV status. Among 2155 HIV-positive participants [684 (32%) male, mean age 37 years (range, 18–79 years)], there was evidence of algorithm adherence among 515 (24%). Adherence was less likely among persons tested initially with Xpert MTB/RIF vs. smear [14% (142/1031) vs. 32% (364/1122), adjusted risk ratio 0.34 (95% CI: 0.17 to 0.65)] and for participants with unknown vs. positive HIV status [59/540 (11%) vs. 507/2155 (24%)]. Conclusions: We observed poorer adherence to TB diagnostic algorithms among HIV-positive persons tested initially with Xpert MTB/RIF vs. microscopy. Poor adherence to TB diagnostic algorithms and incomplete coverage of HIV testing represents a missed opportunity to diagnose TB and HIV, and may contribute to TB mortality. PMID:26966843
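
    A sketch of the adherence definition used above: after a negative initial sputum result, the diagnostic algorithm is considered followed if chest radiography, sputum culture, or hospital referral took place. The record fields are illustrative assumptions about how such trial data might be coded.

```python
# Classify whether the TB diagnostic algorithm was followed after a negative test.
from dataclasses import dataclass

@dataclass
class ParticipantRecord:
    initial_test_negative: bool
    chest_xray_done: bool
    sputum_culture_done: bool
    referred_to_hospital: bool

def algorithm_followed(record: ParticipantRecord) -> bool:
    if not record.initial_test_negative:
        return True   # adherence question only arises after a negative initial test
    return record.chest_xray_done or record.sputum_culture_done or record.referred_to_hospital

records = [
    ParticipantRecord(True, False, True, False),
    ParticipantRecord(True, False, False, False),
]
followed = sum(algorithm_followed(r) for r in records)
print(f"algorithm followed for {followed} of {len(records)} participants with negative tests")
```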

  19. Testing Nelder-Mead based repulsion algorithms for multiple roots of nonlinear systems via a two-level factorial design of experiments.

    PubMed

    Ramadas, Gisela C V; Rocha, Ana Maria A C; Fernandes, Edite M G P

    2015-01-01

    This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm. PMID:25875591

  20. Testing Nelder-Mead Based Repulsion Algorithms for Multiple Roots of Nonlinear Systems via a Two-Level Factorial Design of Experiments

    PubMed Central

    Fernandes, Edite M. G. P.

    2015-01-01

    This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as ‘erf’, is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm. PMID:25875591
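
    An illustrative sketch of the repulsion idea for finding multiple roots: minimize an erf-based merit function with Nelder-Mead, then add a penalty bump around each root already found so later searches are pushed towards new roots. The merit and penalty forms, the example system, and all tolerances are simple stand-ins, not the authors' exact formulation.

```python
# Repulsion-style multi-root search using Nelder-Mead and an erf-based merit.
import numpy as np
from scipy.optimize import minimize
from scipy.special import erf

def system(x):
    """Example nonlinear system with many roots: sin(x0) = 0, cos(x1) = 0."""
    return np.array([np.sin(x[0]), np.cos(x[1])])

def merit(x, found_roots, radius=0.5, strength=5.0):
    value = np.sum(np.abs(erf(system(x))))       # vanishes exactly at a root
    for root in found_roots:                     # repulsion bump around known roots
        value += strength * np.exp(-np.sum((x - root) ** 2) / radius**2)
    return value

rng = np.random.default_rng(3)
roots = []
for _ in range(30):
    x0 = rng.uniform(-4.0, 4.0, size=2)
    res = minimize(merit, x0, args=(roots,), method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12})
    if np.linalg.norm(system(res.x)) < 1e-6 and all(
            np.linalg.norm(res.x - r) > 1e-2 for r in roots):
        roots.append(res.x)

print(f"found {len(roots)} distinct roots in [-4, 4]^2")
```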

  1. Decision making for diagnosis and management: algorithms from experts for molecular testing.

    PubMed

    Bumpous, Jeffrey; Celestre, Miranda D; Pribitkin, Edmund; Stack, Brendan C

    2014-08-01

    Cases are presented in light of the current diagnostic and therapeutic trends in management of thyroid nodules and well-differentiated cancers. Demographic, historical, and population-based risk factors are used to risk stratify cases. Ultrasonographic features and other imaging are discussed with regard to appropriateness of utilization and impact on management. The role of traditional cytologic and histopathologic analysis with fine-needle aspiration and intraoperative frozen sections is discussed, including diagnostic nuances and limitations. The emerging role of biomarkers such as BRAF is evaluated in the contemporary assessment of thyroid nodules by reviewing practical cases. PMID:25041961

  2. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  3. Three-dimensional graphics simulator for testing mine machine computer-controlled algorithms -- phase 1 development

    SciTech Connect

    Ambrose, D.H.

    1993-01-01

    Using three-dimensional (3-D) graphics computing to evaluate new technologies for computer-assisted mining systems illustrates how these visual techniques can redefine the way researchers look at raw scientific data. The US Bureau of Mines is using 3-D graphics computing to obtain information about the operation and design of current and proposed mechanical coal and metal-nonmetal mining systems cheaply, easily, and quickly. Bureau engineers developed a graphics simulator for a continuous miner that enables realistic testing of experimental software that controls the functions of a machine. Some of the specific simulated functions of the continuous miner are machine motion, appendage motion, machine position, and machine sensors. The simulator uses data files generated in the laboratory or mine using a computer-assisted mining machine. The data file contains information from a laser-based guidance system and a data acquisition system that records all control commands given to a computer-assisted mining machine. This report documents the first phase in developing the simulator and discusses simulator requirements, features of the initial simulator, and several examples of its application. During this endeavor, Bureau engineers discovered and appreciated the simulator's potential to assist their investigations of machine controls and navigation systems.

  4. Crack measurement: Development, testing and applications of an automatic image-based algorithm

    NASA Astrophysics Data System (ADS)

    Barazzetti, Luigi; Scaioni, Marco

    The paper presents an Image-based Method for Crack Analysis (IMCA) which is capable of processing a sequence of digital imagery to perform a twofold task: (i) the extraction of crack borders and the evaluation of their width along the longitudinal profile; (ii) the measurement of crack deformations (width, sliding and rotation). Here both problems are solved in 2-D, but an extension to 3-D is also addressed. The equipment needed to apply the method is made up of a digital camera (or a video camera when high-frequency data acquisition is necessary), an orientation frame which establishes the object reference system, and a pair of signalized supports placed permanently on both sides of the crack to compute deformations; however, permanent targets are mandatory only for case (ii). The measurement process is carried out in a fully automatic way, which also makes the technique usable by operators unskilled in engineering surveying or photogrammetry. The accuracy of the proposed method, evaluated in experimental tests adopting different consumer digital cameras, is about ± 5-20 μm, like the accuracy of most deformometers, but with the advantages of automation and of the richer information obtainable; moreover, the image sequence can be archived and off-line measurements can be performed at any time.
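
    As a toy illustration of the image-based idea of extracting a crack profile and measuring its width, the sketch below thresholds a grey-scale image and counts dark pixels per row. The threshold, pixel size, and synthetic test image are assumptions made for illustration; this is not the IMCA implementation, which works on oriented imagery with calibrated targets.

      import numpy as np

      def crack_width_per_row(image, threshold, pixel_size_mm):
          # Pixels darker than `threshold` are treated as crack pixels; the width
          # in each row is the dark-pixel count times the pixel footprint (mm).
          return (image < threshold).sum(axis=1) * pixel_size_mm

      # Synthetic image: bright background, dark crack that widens toward the bottom.
      rng = np.random.default_rng(1)
      img = rng.normal(200.0, 5.0, size=(100, 200))
      for row in range(100):
          half = 1 + row // 40
          img[row, 100 - half:100 + half] = 30.0
      widths = crack_width_per_row(img, threshold=100.0, pixel_size_mm=0.02)
      print(widths[0], widths[-1])   # apparent width near the top vs. the bottom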

  5. Evaluation of the Repeatability of the Delta Q Duct Leakage Testing TechniqueIncluding Investigation of Robust Analysis Techniques and Estimates of Weather Induced Uncertainty

    SciTech Connect

    Dickerhoff, Darryl; Walker, Iain

    2008-08-01

    for the pressure station approach. Walker and Dickerhoff also included estimates of DeltaQ test repeatability based on the results of field tests where two houses were tested multiple times. The two houses were quite leaky (20-25 Air Changes per Hour at 50 Pa (0.2 in. water) (ACH50)) and were located in the San Francisco Bay area. One house was tested on a calm day and the other on a very windy day. Results were also presented for two additional houses, tested by other researchers in Minneapolis, MN, and Madison, WI, that had very tight envelopes (1.8 and 2.5 ACH50). These tight houses had internal duct systems and were tested without operating the central blower--sometimes referred to as control tests. The standard deviations between the multiple tests for all four houses were found to be about 1% of the envelope air flow at 50 Pa (0.2 in. water) (Q50), which led to the suggestion of this as a rule of thumb for estimating DeltaQ uncertainty. Because DeltaQ is based on measuring envelope air flows, it makes sense for uncertainty to scale with envelope leakage. However, these tests were on a limited data set, and one of the objectives of the current study is to increase the number of tested houses. This study focuses on answering two questions: (1) What is the uncertainty associated with changes in weather (primarily wind) conditions during DeltaQ testing? (2) How can these uncertainties be reduced? The first question addresses issues of repeatability. To study this, five houses were tested as many times as possible over a day. Weather data, including the local wind speed, were recorded on-site. The results from these five houses were combined with the two Bay Area homes from the previous studies. The variability of the tests (represented by the standard deviation) is the repeatability of the test method for that house under the prevailing weather conditions. Because the testing was performed over a day, a wide range of wind speeds was achieved following typical

  6. Development, refinement, and testing of a short term solar flare prediction algorithm

    NASA Technical Reports Server (NTRS)

    Smith, Jesse B., Jr.

    1993-01-01

    During the period included in this report, the expenditure of time and effort, and progress toward performance of the tasks and accomplishing the goals set forth in the two-year research grant proposal, consisted primarily of calibration and analysis of selected data sets. The heliographic limits of 30 degrees from central meridian were continued. As previously reported, all analyses are interactive and are performed by the Principal Investigator. It should also be noted that the analysis time available to the Principal Investigator during this reporting period was limited, partially due to illness and partially resulting from other uncontrollable factors. The calibration technique (as developed by MSFC solar scientists) incorporates sets of constants which vary according to the wavelength of the observation data set. One input constant is then varied interactively to correct for observing conditions, etc., to result in a maximum magnetic field strength (in the calibrated data), based on a separate analysis. There is some uncertainty in the methodology and the selection of variables to yield the most self-consistent results for variable maximum field strengths and for variable observing/atmospheric conditions. Several data sets were analyzed using differing constant sets and separate analyses to differing maximum field strengths, toward standardizing methodology and technique for the most self-consistent results for the large number of cases. It may be necessary to recalibrate some of the analyses, but the sc analyses are retained on the optical disks and can still be used with recalibration where necessary. Only the extracted parameters will be changed.

  7. Clostridium difficile testing algorithms using glutamate dehydrogenase antigen and C. difficile toxin enzyme immunoassays with C. difficile nucleic acid amplification testing increase diagnostic yield in a tertiary pediatric population.

    PubMed

    Ota, Kaede V; McGowan, Karin L

    2012-04-01

    We evaluated the performance of the rapid C. diff Quik Chek Complete's glutamate dehydrogenase antigen (GDH) and toxin A/B (CDT) tests in two algorithmic approaches for a tertiary pediatric population: algorithm 1 entailed initial testing with GDH/CDT followed by loop-mediated isothermal amplification (LAMP), and algorithm 2 entailed GDH/CDT followed by cytotoxicity neutralization assay (CCNA) for adjudication of discrepant GDH-positive/CDT-negative results. A true positive (TP) was defined as positivity by CCNA or positivity by LAMP plus another test (GDH, CDT, or the Premier C. difficile toxin A and B enzyme immunoassay [P-EIA]). A total of 141 specimens from 141 patients yielded 27 TPs and 19% prevalence. Sensitivity, specificity, positive predictive value, and negative predictive value were 56%, 100%, 100%, and 90% for P-EIA and 81%, 100%, 100%, and 96% for both algorithm 1 and algorithm 2. In summary, GDH-based algorithms detected C. difficile infections with superior sensitivity compared to P-EIA. The algorithms allowed immediate reporting of half of all TPs, but LAMP or CCNA was required to confirm the presence or absence of toxigenic C. difficile in GDH-positive/CDT-negative specimens. PMID:22259201
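
    The reporting logic of the two algorithms described above can be sketched as a simple decision function; the result encodings and the stubbed confirmatory test below are illustrative assumptions, not part of the published workflow.

      def report(gdh_positive, cdt_positive, confirm):
          # `confirm` performs the confirmatory test used to adjudicate discrepant
          # GDH-positive/CDT-negative screens: LAMP for algorithm 1, CCNA for
          # algorithm 2. It is only invoked when a discrepancy occurs.
          if not gdh_positive:
              return "negative"      # GDH negative: report immediately
          if cdt_positive:
              return "positive"      # GDH and toxin positive: report immediately
          return "positive" if confirm() else "negative"

      # Algorithm 1 example: a discrepant screen adjudicated by LAMP (stubbed here).
      print(report(gdh_positive=True, cdt_positive=False, confirm=lambda: True))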

  8. Clostridium difficile Testing Algorithms Using Glutamate Dehydrogenase Antigen and C. difficile Toxin Enzyme Immunoassays with C. difficile Nucleic Acid Amplification Testing Increase Diagnostic Yield in a Tertiary Pediatric Population

    PubMed Central

    McGowan, Karin L.

    2012-01-01

    We evaluated the performance of the rapid C. diff Quik Chek Complete's glutamate dehydrogenase antigen (GDH) and toxin A/B (CDT) tests in two algorithmic approaches for a tertiary pediatric population: algorithm 1 entailed initial testing with GDH/CDT followed by loop-mediated isothermal amplification (LAMP), and algorithm 2 entailed GDH/CDT followed by cytotoxicity neutralization assay (CCNA) for adjudication of discrepant GDH-positive/CDT-negative results. A true positive (TP) was defined as positivity by CCNA or positivity by LAMP plus another test (GDH, CDT, or the Premier C. difficile toxin A and B enzyme immunoassay [P-EIA]). A total of 141 specimens from 141 patients yielded 27 TPs and 19% prevalence. Sensitivity, specificity, positive predictive value, and negative predictive value were 56%, 100%, 100%, and 90% for P-EIA and 81%, 100%, 100%, and 96% for both algorithm 1 and algorithm 2. In summary, GDH-based algorithms detected C. difficile infections with superior sensitivity compared to P-EIA. The algorithms allowed immediate reporting of half of all TPs, but LAMP or CCNA was required to confirm the presence or absence of toxigenic C. difficile in GDH-positive/CDT-negative specimens. PMID:22259201

  9. Tests of a simple data merging algorithm for the GONG project

    NASA Technical Reports Server (NTRS)

    Williams, W. E.; Hill, F.

    1992-01-01

    The GONG (Global Oscillation Network Group) project proposes to reduce the impact of diurnal variations on helioseismic measurements by making long-term observations of solar images from six sites placed around the globe. The sun will be observed nearly constantly for three years, resulting in the acquisition of 1+ terabyte of image data. To use the solar network to maximum advantage, the images from the sites must be combined into a single time series to determine mode frequencies, amplitudes, and line widths. Initial versions of combined, i.e., merged, time series were made using a simple weighted average of data from different sites taken simultaneously. In order to accurately assess the impact of the data merge on the helioseismic measurements, a set of artificial solar disk images was made using a standard solar model and containing a well known set of oscillation modes and frequencies. This undegraded data set and data products computed from it were used to judge the relative merits of various data merging schemes. The artificial solar disk images were subjected to various instrumental and atmospheric degradations, dependent on site and time, in order to create a set of images simulating those likely to be taken at the site. The degraded artificial solar disk images for the six observing sites were combined in various ways to form merged time series of images and mode coefficients. Various forms of a weighted average were used, including an equally-weighted average, an average with weights dependent upon air mass and averages with weights dependent on various quality assurance parameters. Both the undegraded solar disk image time series and several time series made up of various combinations of the degraded solar disk images from the six sites were subjected to standard helioseismic measurement processing. This processing consisted of coordinate remapping, detrending, spherical harmonic transformation, computation of power series for the oscillation mode
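
    The simple merge described above, a weighted average of simultaneous site images, can be sketched as below. The handling of missing pixels and the air-mass-based weights in the example are illustrative assumptions rather than the GONG project's production scheme.

      import numpy as np

      def merge_images(images, weights=None):
          # Weighted average of co-registered site images; np.nan marks pixels
          # (or whole frames) for which a site has no usable data.
          stack = np.stack([np.asarray(im, dtype=float) for im in images])
          if weights is None:
              weights = np.ones(len(images))
          w = np.asarray(weights, dtype=float)[:, None, None] * ~np.isnan(stack)
          total = w.sum(axis=0)
          merged = (w * np.nan_to_num(stack)).sum(axis=0)
          return np.where(total > 0, merged / np.where(total > 0, total, 1.0), np.nan)

      site_a = np.full((4, 4), 1.0)
      site_b = np.full((4, 4), 3.0)
      site_b[0, 0] = np.nan                        # missing pixel at one site
      print(merge_images([site_a, site_b], weights=[1 / 1.2, 1 / 2.0]))  # e.g. 1/airmass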

  10. Revised Phase II Plan for the National Education Practice File Development Project Including: Creation; Pilot Testing; and Evaluation of a Test Practice File. Product 1.7/1.8 (Product 1.6 Appended).

    ERIC Educational Resources Information Center

    Benson, Gregory, Jr.; And Others

    A detailed work plan is presented for the conduct of Phase II activities, which are concerned with creating a pilot test file, conducting a test of it, evaluating the process and input of the file, and preparing the file management plan. Due to the outcomes of activities in Phase I, this plan was revised from an earlier outline. Included in the…

  11. Development and test results of a flight management algorithm for fuel conservative descents in a time-based metered traffic environment

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1980-01-01

    A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three dimensional path with terminal area time constraints (four dimensional) for an airplane to make an idle thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.
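
    The record above does not give the algorithm's equations; as a rough illustration of the kind of computation involved, the sketch below locates a top-of-descent point for an idle-thrust, constant-speed descent to a metering fix. The linear descent-rate model and every coefficient in it are hypothetical placeholders, not the flight-tested NASA algorithm.

      def top_of_descent(cruise_alt_ft, fix_alt_ft, true_airspeed_kt,
                         tailwind_kt, gross_weight_lb):
          # Hypothetical linear performance model: heavier aircraft are assumed to
          # descend slightly more slowly at idle thrust (coefficients made up).
          descent_rate_fpm = 2200.0 - 0.004 * (gross_weight_lb - 80000.0)
          ground_speed_kt = true_airspeed_kt + tailwind_kt
          time_min = (cruise_alt_ft - fix_alt_ft) / descent_rate_fpm
          distance_nm = ground_speed_kt * time_min / 60.0
          return distance_nm, time_min

      dist, mins = top_of_descent(cruise_alt_ft=35000, fix_alt_ft=10000,
                                  true_airspeed_kt=440, tailwind_kt=20,
                                  gross_weight_lb=90000)
      print(f"Start the descent about {dist:.1f} NM before the fix (~{mins:.1f} min).")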

  12. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1983-01-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  13. Reconstruction of hyperspectral reflectance for optically complex turbid inland lakes: test of a new scheme and implications for inversion algorithms.

    PubMed

    Sun, Deyong; Hu, Chuanmin; Qiu, Zhongfeng; Wang, Shengqiang

    2015-06-01

    A new scheme has been proposed by Lee et al. (2014) to reconstruct hyperspectral (400 - 700 nm, 5 nm resolution) remote sensing reflectance (Rrs(λ), sr-1) of representative global waters using measurements at 15 spectral bands. This study tested its applicability to optically complex turbid inland waters in China, where Rrs(λ) are typically much higher than those used in Lee et al. (2014). Strong interdependence of Rrs(λ) between neighboring bands (≤ 10 nm interval) was confirmed, with Pearson correlation coefficient (PCC) mostly above 0.98. The scheme of Lee et al. (2014) for Rrs(λ) reconstruction with its original global parameterization worked well with this data set, while new parameterization showed improvement in reducing uncertainties in the reconstructed Rrs(λ). Mean absolute error (MAE(Rrs(λi))) in the reconstructed Rrs(λ) was mostly < 0.0002 sr-1 between 400 and 700 nm, and mean relative error (MRE(Rrs(λi))) was < 1% when the comparison was made between reconstructed and measured Rrs(λ) spectra. When Rrs(λ) at the MODIS bands were used to reconstruct the hyperspectral Rrs(λ), MAE(Rrs(λi)) was < 0.001 sr-1 and MRE(Rrs(λi)) was < 3%. When Rrs(λ) at the MERIS bands were used, MAE(Rrs(λi)) in the reconstructed hyperspectral Rrs(λ) was < 0.0004 sr-1 and MRE(Rrs(λi)) was < 1%. These results have significant implications for inversion algorithms to retrieve concentrations of phytoplankton pigments (e.g., chlorophyll-a or Chla, and phycocyanin or PC) and total suspended materials (TSM) as well as absorption coefficient of colored dissolved organic matter (CDOM), as some of the algorithms were developed from in situ Rrs(λ) data using spectral bands that
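
    The MAE and MRE statistics quoted above compare reconstructed and measured hyperspectral spectra; a minimal sketch of that comparison is given below. A plain cubic-spline interpolation through a MODIS-like band set stands in for the Lee et al. (2014) reconstruction scheme, and the synthetic Rrs spectrum is invented for illustration.

      import numpy as np
      from scipy.interpolate import CubicSpline

      def mae_mre(measured, reconstructed):
          # Mean absolute error (sr^-1) and mean relative error of a reconstruction.
          abs_err = np.abs(reconstructed - measured)
          return abs_err.mean(), (abs_err / measured).mean()

      wavelengths = np.arange(400, 701, 5)                    # 400-700 nm, 5 nm steps
      measured = 0.004 + 0.003 * np.exp(-((wavelengths - 560) / 60.0) ** 2)  # synthetic Rrs
      bands = np.array([412, 443, 488, 531, 547, 667, 678])   # MODIS-like band centres
      band_rrs = np.interp(bands, wavelengths, measured)
      reconstructed = CubicSpline(bands, band_rrs)(wavelengths)  # stand-in reconstruction
      mae, mre = mae_mre(measured, reconstructed)
      print(f"MAE = {mae:.2e} sr^-1, MRE = {100 * mre:.2f}%")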

  14. Estimating Implementation and Operational Costs of an Integrated Tiered CD4 Service including Laboratory and Point of Care Testing in a Remote Health District in South Africa

    PubMed Central

    Cassim, Naseem; Coetzee, Lindi M.; Schnippel, Kathryn; Glencross, Deborah K.

    2014-01-01

    Background An integrated tiered service delivery model (ITSDM) has been proposed to provide ‘full-coverage’ of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and number of referring health-facilities. These include: (1) Tier-1/decentralized point-of-care service (POC) in a single site; Tier-2/POC-hubs servicing 8–10 health-clinics and processing <30–40 samples; Tier-3/Community laboratories servicing ∼50 health-clinics, processing <150 samples/day; high-volume centralized laboratories (Tier-4 and Tier-5) processing <300 or >600 samples/day and serving >100 or >200 health-clinics, respectively. The objective of this study was to establish costs of existing and ITSDM-tiers 1, 2 and 3 in a remote, under-serviced district in South Africa. Methods Historical health-facility workload volumes from the Pixley-ka-Seme district, and the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, linked to locations of all referring clinics and related laboratory-to-result turn-around time (LTR-TAT) data, were extracted from the NHLS Corporate-Data-Warehouse for the period April-2012 to March-2013. Tiers were costed separately (as a cost-per-result) including equipment, staffing, reagents and test consumable costs. A one-way sensitivity analysis provided for changes in reagent price, test volumes and personnel time. Results The lowest cost-per-result was noted for the existing laboratory-based Tiers 4 and 5 ($6.24 and $5.37 respectively), but with related increased LTR-TAT of >24–48 hours. Full service coverage with TAT <6-hours could be achieved with placement of twenty-seven Tier-1/POC or eight Tier-2/POC-hubs, at a cost-per-result of $32.32 and $15.88 respectively. A single district Tier-3 laboratory also ensured ‘full service coverage’ and <24 hour LTR-TAT for the district at $7.42 per test. Conclusion Implementing a single Tier-3/community laboratory to extend and improve delivery

  15. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing. CRESST Report 830

    ERIC Educational Resources Information Center

    Cai, Li

    2013-01-01

    Lord and Wingersky's (1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined…
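
    The classic Lord-Wingersky (1984) recursion itself is compact enough to sketch for the unidimensional, dichotomous case: after adding item k, L_k(s) = L_{k-1}(s)*(1 - P_k) + L_{k-1}(s-1)*P_k at each quadrature point. The 2PL item parameters and quadrature grid below are illustrative, and the multidimensional extension discussed in the report is not shown.

      import numpy as np

      def summed_score_likelihoods(prob_correct):
          # prob_correct: (items x quadrature) array of P(correct | theta_q).
          # Returns lik[s, q] = P(summed score s | theta_q) via the recursion.
          n_items, n_quad = prob_correct.shape
          lik = np.zeros((n_items + 1, n_quad))
          lik[0] = 1.0                                  # zero items -> score 0
          for k in range(n_items):
              p = prob_correct[k]
              new = np.zeros_like(lik)
              new[:k + 2] = lik[:k + 2] * (1.0 - p)     # item k answered incorrectly
              new[1:k + 2] += lik[:k + 1] * p           # item k answered correctly
              lik = new
          return lik

      theta = np.linspace(-4, 4, 21)                    # quadrature points (illustrative)
      a = np.array([1.0, 1.5, 0.8])                     # 2PL slopes (made up)
      b = np.array([-0.5, 0.0, 1.0])                    # 2PL difficulties (made up)
      p = 1.0 / (1.0 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))
      lik = summed_score_likelihoods(p)
      print(lik.sum(axis=0))                            # every column sums to 1.0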

  16. High-Speed Wind-Tunnel Tests of a Model of the Lockheed YP-80A Airplane Including Correlation with Flight Tests and Tests of Dive-Recovery Flaps

    NASA Technical Reports Server (NTRS)

    Cleary, Joseph W.; Gray, Lyle J.

    1947-01-01

    This report contains the results of tests of a 1/3-scale model of the Lockheed YP-80A "Shooting Star" airplane and a comparison of drag, maximum lift coefficient, and elevator angle required for level flight as measured in the wind tunnel and in flight. Included in the report are the general aerodynamic characteristics of the model and of two types of dive-recovery flaps, one at several positions along the chord on the lower surface of the wing and the other on the lower surface of the fuselage. The results show good agreement between the flight and wind-tunnel measurements at all Mach numbers. The results indicate that the YP-80A is controllable in pitch by the elevators to a Mach number of at least 0.85. The fuselage dive-recovery flaps are effective for producing a climbing moment and increasing the drag at Mach numbers up to at least 0.8. The wing dive-recovery flaps are most effective for producing a climbing moment at 0.75 Mach number. At 0.85 Mach number, their effectiveness is approximately 50 percent of the maximum. The optimum position for the wing dive-recovery flaps to produce a climbing moment is at approximately 35 percent of the chord.

  17. Seroconverting Blood Donors as a Resource for Characterising and Optimising Recent Infection Testing Algorithms for Incidence Estimation

    PubMed Central

    Kassanjee, Reshma; Welte, Alex; McWalter, Thomas A.; Keating, Sheila M.; Vermeulen, Marion; Stramer, Susan L.; Busch, Michael P.

    2011-01-01

    Introduction Biomarker-based cross-sectional incidence estimation requires a Recent Infection Testing Algorithm (RITA) with an adequately large mean recency duration, to achieve reasonable survey counts, and a low false-recent rate, to minimise exposure to further bias and imprecision. Estimating these characteristics requires specimens from individuals with well-known seroconversion dates or confirmed long-standing infection. Specimens with well-known seroconversion dates are typically rare and precious, presenting a bottleneck in the development of RITAs. Methods The mean recency duration and a ‘false-recent rate’ are estimated from data on seroconverting blood donors. Within an idealised model for the dynamics of false-recent results, blood donor specimens were used to characterise RITAs by a new method that maximises the likelihood of cohort-level recency classifications, rather than modelling individual sojourn times in recency. Results For a range of assumptions about the false-recent results (0% to 20% of biomarker response curves failing to reach the threshold distinguishing test-recent and test-non-recent infection), the mean recency duration of the Vironostika-LS ranged from 154 (95% CI: 96–231) to 274 (95% CI: 234–313) days in the South African donor population (n = 282), and from 145 (95% CI: 67–226) to 252 (95% CI: 194–308) days in the American donor population (n = 106). The significance of gender and clade on performance was rejected (p−value = 10%), and utility in incidence estimation appeared comparable to that of a BED-like RITA. Assessment of the Vitros-LS (n = 108) suggested potentially high false-recent rates. Discussion The new method facilitates RITA characterisation using widely available specimens that were previously overlooked, at the cost of possible artefacts. While accuracy and precision are insufficient to provide estimates suitable for incidence surveillance, a low-cost approach for preliminary

  18. Testing the GLAaS algorithm for dose measurements on low- and high-energy photon beams using an amorphous silicon portal imager

    SciTech Connect

    Nicolini, Giorgia; Fogliata, Antonella; Vanetti, Eugenio; Clivio, Alessandro; Vetterli, Daniel; Cozzi, Luca

    2008-02-15

    The GLAaS algorithm for pretreatment intensity modulation radiation therapy absolute dose verification based on the use of amorphous silicon detectors, as described in Nicolini et al. [G. Nicolini, A. Fogliata, E. Vanetti, A. Clivio, and L. Cozzi, Med. Phys. 33, 2839-2851 (2006)], was tested under a variety of experimental conditions to investigate its robustness, the possibility of using it in different clinics and its performance. GLAaS was therefore tested on a low-energy Varian Clinac (6 MV) equipped with an amorphous silicon Portal Vision PV-aS500 with electronic readout IAS2 and on a high-energy Clinac (6 and 15 MV) equipped with a PV-aS1000 and IAS3 electronics. Tests were performed for three calibration conditions: A: adding buildup on the top of the cassette such that SDD - SSD = d_max and comparing measurements with corresponding doses computed at d_max, B: without adding any buildup on the top of the cassette and considering only the intrinsic water-equivalent thickness of the electronic portal imaging device (0.8 cm), and C: without adding any buildup on the top of the cassette but comparing measurements against doses computed at d_max. This procedure is similar to that usually applied when in vivo dosimetry is performed with solid state diodes without sufficient buildup material. Quantitatively, the gamma index (γ), as described by Low et al. [D. A. Low, W. B. Harms, S. Mutic, and J. A. Purdy, Med. Phys. 25, 656-660 (1998)], was assessed. The γ index was computed for a distance to agreement (DTA) of 3 mm. The dose difference ΔD was considered as 2%, 3%, and 4%. As a measure of the quality of results, the fraction of field area with gamma larger than 1 (%FA) was scored. Results over a set of 50 test samples (including fields from head and neck, breast, prostate, anal canal, and brain cases) and from the long-term routine usage, demonstrated the robustness and stability of GLAaS. In general, the mean values of %FA
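
    The γ index and %FA figure of merit used above follow Low et al. (1998); a brute-force 2-D version is sketched below for DTA = 3 mm and a global 3% dose difference. The grid spacing and the synthetic dose planes are assumptions for illustration, not GLAaS itself, which additionally converts portal-imager readings to absolute dose.

      import numpy as np

      def gamma_index(measured, computed, spacing_mm, dta_mm=3.0, dd_frac=0.03):
          # Global-criterion gamma of Low et al.: for every measured point, minimise
          # sqrt(dr^2/DTA^2 + dD^2/dDmax^2) over all computed points.
          ny, nx = measured.shape
          y, x = np.meshgrid(np.arange(ny) * spacing_mm,
                             np.arange(nx) * spacing_mm, indexing="ij")
          dd_abs = dd_frac * computed.max()
          gamma = np.empty_like(measured, dtype=float)
          for i in range(ny):
              for j in range(nx):
                  dist2 = (y - y[i, j]) ** 2 + (x - x[i, j]) ** 2
                  dose2 = (computed - measured[i, j]) ** 2
                  gamma[i, j] = np.sqrt(np.min(dist2 / dta_mm ** 2 + dose2 / dd_abs ** 2))
          return gamma

      yy, xx = np.mgrid[0:40, 0:40]
      computed = np.exp(-((xx - 20) ** 2 + (yy - 20) ** 2) / 200.0)   # toy dose plane
      measured = 1.01 * np.roll(computed, 1, axis=1)                  # shifted and rescaled
      g = gamma_index(measured, computed, spacing_mm=1.0)
      print("%FA =", round(100.0 * float(np.mean(g > 1.0)), 2))       # area with gamma > 1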

  19. Development and Field-Testing of a Study Protocol, including a Web-Based Occupant Survey Tool, for Use in Intervention Studies of Indoor Environmental Quality

    SciTech Connect

    Mendell, Mark; Eliseeva, Ekaterina; Spears, Michael; Fisk, William J.

    2009-06-01

    We developed and pilot-tested an overall protocol for intervention studies to evaluate the effects of indoor environmental changes in office buildings on the health symptoms and comfort of occupants. The protocol includes a web-based survey to assess the occupant's responses, as well as specific features of study design and analysis. The pilot study, carried out on two similar floors in a single building, compared two types of ventilation system filter media. With support from the building's Facilities staff, the implementation of the filter change intervention went well. While the web-based survey tool worked well also, low overall response rates (21-34 percent among the three work groups included) limited our ability to evaluate the filter intervention. The total number of questionnaires returned was low even though we extended the study from eight to ten weeks. Because another simultaneous study we conducted elsewhere using the same survey had a high response rate (>70 percent), we conclude that the low response here resulted from issues specific to this pilot, including unexpected restrictions by some employing agencies on communication with occupants.

  20. The Doylestown Algorithm: A Test to Improve the Performance of AFP in the Detection of Hepatocellular Carcinoma.

    PubMed

    Wang, Mengjun; Devarajan, Karthik; Singal, Amit G; Marrero, Jorge A; Dai, Jianliang; Feng, Ziding; Rinaudo, Jo Ann S; Srivastava, Sudhir; Evans, Alison; Hann, Hie-Won; Lai, Yinzhi; Yang, Hushan; Block, Timothy M; Mehta, Anand

    2016-02-01

    Biomarkers for the early diagnosis of hepatocellular carcinoma (HCC) are needed to decrease mortality from this cancer. However, as new biomarkers have been slow to be brought to clinical practice, we have developed a diagnostic algorithm that utilizes commonly used clinical measurements in those at risk of developing HCC. Briefly, as α-fetoprotein (AFP) is routinely used, an algorithm that incorporated AFP values along with four other clinical factors was developed. Discovery analysis was performed on electronic data from patients who had liver disease (cirrhosis) alone or HCC in the background of cirrhosis. The discovery set consisted of 360 patients from two independent locations. A logistic regression algorithm was developed that incorporated log-transformed AFP values with age, gender, alkaline phosphatase, and alanine aminotransferase levels. We define this as the Doylestown algorithm. In the discovery set, the Doylestown algorithm improved the overall performance of AFP by 10%. In subsequent external validation in over 2,700 patients from three independent sites, the Doylestown algorithm improved detection of HCC as compared with AFP alone by 4% to 20%. In addition, at a fixed specificity of 95%, the Doylestown algorithm improved the detection of HCC as compared with AFP alone by 2% to 20%. In conclusion, the Doylestown algorithm consolidates clinical laboratory values, with age and gender, which are each individually associated with HCC risk, into a single value that can be used for HCC risk assessment. As such, it should be applicable and useful to the medical community that manages those at risk for developing HCC. PMID:26712941
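
    The abstract describes the Doylestown algorithm as a logistic regression combining log-transformed AFP with age, gender, alkaline phosphatase, and alanine aminotransferase, but it does not list the fitted coefficients. The sketch below therefore shows only the shape of such a score; every coefficient is a placeholder and must not be read as the published model.

      import math

      # Placeholder coefficients (intercept, log10 AFP, age, male, ALP, ALT) -- NOT
      # the published Doylestown coefficients.
      COEF = {"intercept": -6.0, "log_afp": 1.2, "age": 0.04,
              "male": 0.5, "alk_phos": 0.003, "alt": -0.005}

      def hcc_risk_score(afp_ng_ml, age_years, male, alk_phos_u_l, alt_u_l):
          # Linear predictor passed through the logistic function -> score in (0, 1).
          z = (COEF["intercept"]
               + COEF["log_afp"] * math.log10(afp_ng_ml)
               + COEF["age"] * age_years
               + COEF["male"] * (1 if male else 0)
               + COEF["alk_phos"] * alk_phos_u_l
               + COEF["alt"] * alt_u_l)
          return 1.0 / (1.0 + math.exp(-z))

      print(round(hcc_risk_score(afp_ng_ml=40, age_years=62, male=True,
                                 alk_phos_u_l=120, alt_u_l=35), 3))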

  1. A new single nucleotide polymorphism in CAPN1 extends the current tenderness marker test to include cattle of Bos indicus, Bos taurus, and crossbred descent.

    PubMed

    White, S N; Casas, E; Wheeler, T L; Shackelford, S D; Koohmaraie, M; Riley, D G; Chase, C C; Johnson, D D; Keele, J W; Smith, T P L

    2005-09-01

    The three objectives of this study were to 1) test for the existence of beef tenderness markers in the CAPN1 gene segregating in Brahman cattle; 2) test existing CAPN1 tenderness markers in indicus-influenced crossbred cattle; and 3) produce a revised marker system for use in cattle of all subspecies backgrounds. Previously, two SNP in the CAPN1 gene have been described that could be used to guide selection in Bos taurus cattle (designated Markers 316 and 530), but neither marker segregates at high frequency in Brahman cattle. In this study, we examined three additional SNP in CAPN1 to determine whether variation in this gene could be associated with tenderness in a large, multisire American Brahman population. One marker (termed 4751) was associated with shear force on postmortem d 7 (P < 0.01), 14 (P = 0.015), and 21 (P < 0.001) in this population, demonstrating that genetic variation important for tenderness segregates in Bos indicus cattle at or near CAPN1. Marker 4751 also was associated with shear force (P < 0.01) in the same large, multisire population of cattle of strictly Bos taurus descent that was used to develop the previously reported SNP (referred to as the Germplasm Evaluation [GPE] Cycle 7 population), indicating the possibility that one marker could have wide applicability in cattle of all subspecies backgrounds. To test this hypothesis, Marker 4751 was tested in a third large, multisire cattle population of crossbred subspecies descent (including sire breeds of Brangus, Beefmaster, Bonsmara, Romosinuano, Hereford, and Angus referred to as the GPE Cycle 8 population). The highly significant association of Marker 4751 with shear force in this population (P < 0.001) confirms the usefulness of Marker 4751 in cattle of all subspecies backgrounds, including Bos taurus, Bos indicus, and crossbred descent. This wide applicability adds substantial value over previously released Markers 316 and 530. However, Marker 316, which had previously been shown to be

  2. High rate of missed HIV infections in individuals with indeterminate or negative HIV western blots based on current HIV testing algorithm in China.

    PubMed

    Liu, Man-Qing; Zhu, Ze-Rong; Kong, Wen-Hua; Tang, Li; Peng, Jin-Song; Wang, Xia; Xu, Jun; Schilling, Robert F; Cai, Thomas; Zhou, Wang

    2016-08-01

    It remains unclear if China's current HIV antibody testing algorithm misses a substantial number of HIV-infected individuals. Of 196 specimens with indeterminate or negative results on HIV western blot (WB) retrospectively examined by HIV-1 nucleic acid test (NAT), 67.57% (75/111) of indeterminate WB samples, and 16.47% (14/85) of negative WB samples were identified as NAT positive. HIV-1 loads in negative WB samples were significantly higher than those in indeterminate WB samples. Notably, 86.67% (13/15) of samples with negative WB and double positive immunoassay results were NAT positive. The rate of HIV-1 infections missed by China's current HIV testing algorithm is unacceptably high. Thus, China should consider using NAT or integrating fourth-generation ELISA into its current antibody-only HIV confirmation algorithm. J. Med. Virol. 88:1462-1466, 2016. © 2016 Wiley Periodicals, Inc. PMID:26856240

  3. Preliminary results from a subsonic high-angle-of-attack flush airdata sensing (HI-FADS) system - Design, calibration, algorithm development, and flight test evaluation

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.; Larson, Terry J.

    1990-01-01

    A nonintrusive high angle-of-attack flush airdata sensing (HI-FADS) system was installed and flight-tested on the F-18 high alpha research vehicle. This paper discusses the airdata algorithm development and composite results expressed as airdata parameter estimates and describes the HI-FADS system hardware, calibration techniques, and algorithm development. An independent empirical verification was performed over a large portion of the subsonic flight envelope. Test points were obtained for Mach numbers from 0.15 to 0.94 and angles of attack from -8.0 to 55.0 deg. Angles of sideslip ranged from -15.0 to 15.0 deg, and test altitudes ranged from 18,000 to 40,000 ft. The HI-FADS system gave excellent results over the entire subsonic Mach number range up to 55 deg angle of attack. The internal pneumatic frequency response of the system is accurate to beyond 10 Hz.

  4. An Innovative Thinking-Based Intelligent Information Fusion Algorithm

    PubMed Central

    Hu, Liang; Liu, Gang; Zhou, Jin

    2013-01-01

    This study proposes an intelligent algorithm that can realize information fusion by drawing on related research achievements in brain cognitive theory and innovative computation. This algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm, including information sense and perception, memory storage, divergent thinking, convergent thinking, and the evaluation system, are simulated and modeled. This algorithm fully develops the innovative thinking skills of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influences of each parameter of this algorithm on algorithm performance are analyzed and compared with those of classical intelligent algorithms through tests. Test results suggest that the algorithm proposed in this study can obtain the optimum problem solution with fewer target evaluations, improve optimization effectiveness, and achieve the effective fusion of information. PMID:23956699

  5. In vivo rodent erythrocyte micronucleus assay. II. Some aspects of protocol design including repeated treatments, integration with toxicity testing, and automated scoring.

    PubMed

    Hayashi, M; MacGregor, J T; Gatehouse, D G; Adler, I D; Blakey, D H; Dertinger, S D; Krishna, G; Morita, T; Russo, A; Sutou, S

    2000-01-01

    An expert working group on the in vivo micronucleus assay, formed as part of the International Workshop on Genotoxicity Test Procedures (IWGTP), discussed protocols for the conduct of established and proposed micronucleus assays at a meeting held March 25-26, 1999 in Washington, DC, in conjunction with the annual meeting of the Environmental Mutagen Society. The working group reached consensus on a number of issues, including: (1) protocols using repeated dosing in mice and rats; (2) integration of the (rodent erythrocyte) micronucleus assay into general toxicology studies; (3) the possible omission of concurrently-treated positive control animals from the assay; (4) automation of micronucleus scoring by flow cytometry or image analysis; (5) criteria for regulatory acceptance; (6) detection of aneuploidy induction in the micronucleus assay; and (7) micronucleus assays in tissues (germ cells, other organs, neonatal tissue) other than bone marrow. This report summarizes the discussions and recommendations of this working group. In the classic rodent erythrocyte assay, treatment schedules using repeated dosing of mice or rats, and integration of assays using such schedules into short-term toxicology studies, were considered acceptable as long as certain study criteria were met. When the micronucleus assay is integrated into ongoing toxicology studies, relatively short-term repeated-dose studies should be used preferentially because there is not yet sufficient data to demonstrate that conservative dose selection in longer term studies (longer than 1 month) does not reduce the sensitivity of the assay. Additional validation data are needed to resolve this point. In studies with mice, either bone marrow or blood was considered acceptable as the tissue for assessing micronucleus induction, provided that the absence of spleen function has been verified in the animal strains used. In studies with rats, the principal endpoint should be the frequency of micronucleated immature

  6. Improved error estimates of a discharge algorithm for remotely sensed river measurements: Test cases on Sacramento and Garonne Rivers

    NASA Astrophysics Data System (ADS)

    Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward

    2016-01-01

    We present an improvement to a previously presented algorithm that used a Bayesian Markov Chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope. We also present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers that have differing hydraulic characteristics. The algorithm is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using expected observation error. The discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors of discharge estimates was shown to improve algorithm performance. From the error budget, the primary source of uncertainty was the a priori uncertainty of bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics. For example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.
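
    The error budget above is framed around Manning's equation, Q = (1/n) A R^(2/3) S^(1/2); a minimal sketch of a discharge estimate from remotely sensed width, depth, and slope is shown below. The rectangular-channel geometry, the roughness value n, and the example numbers are assumptions for illustration, not the Bayesian MCMC algorithm evaluated in the study.

      def manning_discharge(width_m, depth_m, slope, n=0.03):
          # Q = (1/n) * A * R^(2/3) * sqrt(S), with A = w*d and R = A / (w + 2d)
          # for a rectangular cross-section (roughness n is an assumed value).
          area = width_m * depth_m
          hydraulic_radius = area / (width_m + 2.0 * depth_m)
          return area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5 / n

      # Illustrative numbers only (not values from the Sacramento or Garonne cases).
      print(f"Q = {manning_discharge(width_m=150.0, depth_m=3.5, slope=1e-4):.0f} m^3/s")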

  7. Corrective Action Investigation Plan for Corrective Action Unit 410: Waste Disposal Trenches, Tonopah Test Range, Nevada, Revision 0 (includes ROTCs 1, 2, and 3)

    SciTech Connect

    NNSA /NV

    2002-07-16

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 410 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 410 is located on the Tonopah Test Range (TTR), which is included in the Nevada Test and Training Range (formerly the Nellis Air Force Range) approximately 140 miles northwest of Las Vegas, Nevada. This CAU is comprised of five Corrective Action Sites (CASs): TA-19-002-TAB2, Debris Mound; TA-21-003-TANL, Disposal Trench; TA-21-002-TAAL, Disposal Trench; 09-21-001-TA09, Disposal Trenches; 03-19-001, Waste Disposal Site. This CAU is being investigated because contaminants may be present in concentrations that could potentially pose a threat to human health and/or the environment, and waste may have been disposed of without appropriate controls. Four out of five of these CASs are the result of weapons testing and disposal activities at the TTR, and they are grouped together for site closure based on the similarity of the sites (waste disposal sites and trenches). The fifth CAS, CAS 03-19-001, is a hydrocarbon spill related to activities in the area. This site is grouped with this CAU because of the location (TTR). Based on historical documentation and process knowledge, vertical and lateral migration routes are possible for all CASs. Migration of contaminants may have occurred through transport by infiltration of precipitation through surface soil which serves as a driving force for downward migration of contaminants. Land-use scenarios limit future use of these CASs to industrial activities. The suspected contaminants of potential concern which have been identified are volatile organic compounds; semivolatile organic compounds; high explosives; radiological constituents including depleted uranium

  8. Germline MLH1 and MSH2 mutational spectrum including frequent large genomic aberrations in Hungarian hereditary non-polyposis colorectal cancer families: Implications for genetic testing

    PubMed Central

    Papp, Janos; Kovacs, Marietta E; Olah, Edith

    2007-01-01

    AIM: To analyze the prevalence of germline MLH1 and MSH2 gene mutations and evaluate the clinical characteristics of Hungarian hereditary non-polyposis colorectal cancer (HNPCC) families. METHODS: Thirty-six kindreds were tested for mutations using conformation-sensitive gel electrophoresis, direct sequencing and also screening for genomic rearrangements applying multiplex ligation-dependent probe amplification (MLPA). RESULTS: Eighteen germline mutations (50%) were identified, 9 in MLH1 and 9 in MSH2. Sixteen of these sequence alterations were considered pathogenic; the remaining two were non-conservative missense alterations occurring at highly conserved functional motifs. The majority of the definite pathogenic mutations (81%, 13/16) were found in families fulfilling the stringent Amsterdam I/II criteria, including three rearrangements revealed by MLPA (two in MSH2 and one in MLH1). However, in three out of sixteen HNPCC-suspected families (19%), a disease-causing alteration could be revealed. Furthermore, nine mutations described here are novel, and none of the sequence changes were found in more than one family. CONCLUSION: Our study describes for the first time the prevalence and spectrum of germline mismatch repair gene mutations in Hungarian HNPCC and suspected-HNPCC families. The results presented here suggest that clinical selection criteria should be relaxed and detection of genomic rearrangements should be included in genetic screening in this population. PMID:17569143

  9. An interactive ontology-driven information system for simulating background radiation and generating scenarios for testing special nuclear materials detection algorithms

    DOE PAGES

    Sorokine, Alexandre; Schlicher, Bob G.; Ward, Richard C.; Wright, Michael C.; Kruse, Kara L.; Bhaduri, Budhendra; Slepoy, Alexander

    2015-05-22

    This paper describes an original approach to generating scenarios for the purpose of testing the algorithms used to detect special nuclear materials (SNM) that incorporates the use of ontologies. Separating the signal of SNM from the background requires sophisticated algorithms. To assist in developing such algorithms, there is a need for scenarios that capture a very wide range of variables affecting the detection process, depending on the type of detector being used. To provide such a capability, we developed an ontology-driven information system (ODIS) for generating scenarios that can be used in testing algorithms for SNM detection. The ontology-driven scenario generator (ODSG) is an ODIS based on information supplied by subject matter experts and other documentation. The details of the creation of the ontology, the development of the ontology-driven information system, and the design of the web user interface (UI) are presented along with specific examples of scenarios generated using the ODSG. We demonstrate that the paradigm behind the ODSG is capable of addressing the problem of semantic complexity at both the user and developer levels. Compared to traditional approaches, an ODIS provides benefits such as faithful representation of the users' domain conceptualization, simplified management of very large and semantically diverse datasets, and the ability to handle frequent changes to the application and the UI. Furthermore, the approach makes possible the generation of a much larger number of specific scenarios based on limited user-supplied information.

  10. An interactive ontology-driven information system for simulating background radiation and generating scenarios for testing special nuclear materials detection algorithms

    SciTech Connect

    Sorokine, Alexandre; Schlicher, Bob G.; Ward, Richard C.; Wright, Michael C.; Kruse, Kara L.; Bhaduri, Budhendra; Slepoy, Alexander

    2015-05-22

    This paper describes an original approach to generating scenarios for the purpose of testing the algorithms used to detect special nuclear materials (SNM) that incorporates the use of ontologies. Separating the signal of SNM from the background requires sophisticated algorithms. To assist in developing such algorithms, there is a need for scenarios that capture a very wide range of variables affecting the detection process, depending on the type of detector being used. To provide such a capability, we developed an ontology-driven information system (ODIS) for generating scenarios that can be used in testing algorithms for SNM detection. The ontology-driven scenario generator (ODSG) is an ODIS based on information supplied by subject matter experts and other documentation. The details of the creation of the ontology, the development of the ontology-driven information system, and the design of the web user interface (UI) are presented along with specific examples of scenarios generated using the ODSG. We demonstrate that the paradigm behind the ODSG is capable of addressing the problem of semantic complexity at both the user and developer levels. Compared to traditional approaches, an ODIS provides benefits such as faithful representation of the users' domain conceptualization, simplified management of very large and semantically diverse datasets, and the ability to handle frequent changes to the application and the UI. Furthermore, the approach makes possible the generation of a much larger number of specific scenarios based on limited user-supplied information.

  11. Corrective Action Investigation Plan for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada (December 2002, Revision No.: 0), Including Record of Technical Change No. 1

    SciTech Connect

    NNSA /NSO

    2002-12-12

    The Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 204 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 204 is located on the Nevada Test Site approximately 65 miles northwest of Las Vegas, Nevada. This CAU is comprised of six Corrective Action Sites (CASs) which include: 01-34-01, Underground Instrument House Bunker; 02-34-01, Instrument Bunker; 03-34-01, Underground Bunker; 05-18-02, Chemical Explosives Storage; 05-33-01, Kay Blockhouse; 05-99-02, Explosive Storage Bunker. Based on site history, process knowledge, and previous field efforts, contaminants of potential concern for Corrective Action Unit 204 collectively include radionuclides, beryllium, high explosives, lead, polychlorinated biphenyls, total petroleum hydrocarbons, silver, warfarin, and zinc phosphide. The primary question for the investigation is: ''Are existing data sufficient to evaluate appropriate corrective actions?'' To address this question, resolution of two decision statements is required. Decision I is to ''Define the nature of contamination'' by identifying any contamination above preliminary action levels (PALs); Decision II is to ''Determine the extent of contamination identified above PALs. If PALs are not exceeded, the investigation is completed. If PALs are exceeded, then Decision II must be resolved. In addition, data will be obtained to support waste management decisions. Field activities will include radiological land area surveys, geophysical surveys to identify any subsurface metallic and nonmetallic debris, field screening for applicable contaminants of potential concern, collection and analysis of surface and subsurface soil samples from biased locations, and step-out sampling to define the extent of

  12. A pseudo-spectral algorithm and test cases for the numerical solution of the two-dimensional rotating Green-Naghdi shallow water equations

    NASA Astrophysics Data System (ADS)

    Pearce, J. D.; Esler, J. G.

    2010-10-01

    A pseudo-spectral algorithm is presented for the solution of the rotating Green-Naghdi shallow water equations in two spatial dimensions. The equations are first written in vorticity-divergence form, in order to exploit the fact that time-derivatives then appear implicitly in the divergence equation only. A nonlinear equation must then be solved at each time-step in order to determine the divergence tendency. The nonlinear equation is solved by means of a simultaneous iteration in spectral space to determine each Fourier component. The key to the rapid convergence of the iteration is the use of a good initial guess for the divergence tendency, which is obtained from polynomial extrapolation of the solution obtained at previous time-levels. The algorithm is therefore best suited to be used with a standard multi-step time-stepping scheme (e.g. leap-frog). Two test cases are presented to validate the algorithm for initial value problems on a square periodic domain. The first test is to verify cnoidal wave speeds in one dimension against analytical results. The second test is to ensure that the Miles-Salmon potential vorticity is advected as a parcel-wise conserved tracer throughout the nonlinear evolution of a perturbed jet subject to shear instability. The algorithm is demonstrated to perform well in each test. The resulting numerical model is expected to be of use in identifying paradigmatic behavior in mesoscale flows in the atmosphere and ocean in which both vortical, nonlinear and dispersive effects are important.
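
    The point emphasised above, that a good initial guess obtained by polynomial extrapolation from previous time levels speeds up the per-step iteration, can be illustrated in miniature. The scalar fixed-point problem below is only a stand-in for the spectral-space divergence-tendency equation; the three-point extrapolation formula is standard Lagrange extrapolation on equally spaced time levels.

      import math

      def extrapolate(history):
          # Quadratic extrapolation to the next equally spaced time level.
          y0, y1, y2 = history[-3:]
          return 3.0 * y2 - 3.0 * y1 + y0

      def fixed_point(g, guess, tol=1e-12, max_iter=200):
          x = guess
          for i in range(1, max_iter + 1):
              x_new = g(x)
              if abs(x_new - x) < tol:
                  return x_new, i
              x = x_new
          return x, max_iter

      g = math.cos                                  # toy contraction, fixed point ~0.7391
      history = [0.70, 0.72, 0.735]                 # pretend solutions from earlier steps
      _, iters_warm = fixed_point(g, extrapolate(history))
      _, iters_cold = fixed_point(g, 0.0)
      print(f"warm start: {iters_warm} iterations, cold start: {iters_cold} iterations")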

  13. SEBAL-A: A remote sensing ET algorithm that accounts for advection with limited data. Part II: Test for transferability

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Because the Surface Energy Balance Algorithm for Land (SEBAL) tends to underestimate ET under conditions of advection, the model was modified by incorporating an advection component as part of the energy usable for crop evapotranspiration (ET). The modification involved the estimation of advected en...

  14. Flight tests of three-dimensional path-redefinition algorithms for transition from Radio Navigation (RNAV) to Microwave Landing System (MLS) navigation when flying an aircraft on autopilot

    NASA Technical Reports Server (NTRS)

    Hueschen, Richard M.

    1988-01-01

    This report contains results of flight tests for three path update algorithms designed to provide smooth transition for an aircraft guidance system from DME, VORTAC, and barometric navaids to the more precise MLS by modifying the desired 3-D flight path. The first algorithm, called Zero Cross Track, eliminates the discontinuity in cross-track and altitude error at transition by designating the first valid MLS aircraft position as the desired first waypoint, while retaining all subsequent waypoints. The discontinuity in track angle is left unaltered. The second, called Tangent Path, also eliminates the discontinuity in cross-track and altitude errors and chooses a new desired heading to be tangent to the next oncoming circular arc turn. The third, called Continued Track, eliminates the discontinuity in cross-track, altitude, and track angle errors by accepting the current MLS position and track angle as the desired ones and recomputes the location of the next waypoint. The flight tests were conducted on the Transportation Systems Research Vehicle, a small twin-jet transport aircraft modified for research under the Advanced Transport Operating Systems program at Langley Research Center. The flight tests showed that the algorithms provided a smooth transition to MLS.
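
    The first of the three strategies, Zero Cross Track, is simple enough to sketch directly from the description above: the first valid MLS-derived aircraft position becomes the new first waypoint and all subsequent waypoints are retained. The waypoint representation below is an assumption for illustration; the other two strategies also adjust heading or recompute the next waypoint and are not shown.

      def zero_cross_track_update(path, mls_position):
          # `path` is a list of (north_ft, east_ft, alt_ft) waypoints; the update
          # replaces the first waypoint with the current MLS-derived position,
          # removing the cross-track and altitude discontinuity at transition.
          return [mls_position] + list(path[1:])

      rnav_path = [(0.0, 0.0, 3000.0), (10000.0, 2000.0, 2500.0), (20000.0, 4000.0, 2000.0)]
      mls_fix = (150.0, -220.0, 2980.0)             # first valid MLS position (illustrative)
      print(zero_cross_track_update(rnav_path, mls_fix))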

  15. Political violence and child adjustment in Northern Ireland: Testing pathways in a social-ecological model including single- and two-parent families.

    PubMed

    Cummings, E Mark; Schermerhorn, Alice C; Merrilees, Christine E; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed

    2010-07-01

    Moving beyond simply documenting that political violence negatively impacts children, we tested a social-ecological hypothesis for relations between political violence and child outcomes. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including single- and two-parent families. Sectarian community violence was associated with elevated family conflict and children's reduced security about multiple aspects of their social environment (i.e., family, parent-child relations, and community), with links to child adjustment problems and reductions in prosocial behavior. By comparison, and consistent with expectations, links with negative family processes, child regulatory problems, and child outcomes were less consistent for nonsectarian community violence. Support was found for a social-ecological model for relations between political violence and child outcomes among both single- and two-parent families, with evidence that emotional security and adjustment problems were more negatively affected in single-parent families. The implications for understanding social ecologies of political violence and children's functioning are discussed. PMID:20604605

  16. Ceftazidime/avibactam tested against Gram-negative bacteria from intensive care unit (ICU) and non-ICU patients, including those with ventilator-associated pneumonia.

    PubMed

    Sader, Helio S; Castanheira, Mariana; Flamm, Robert K; Mendes, Rodrigo E; Farrell, David J; Jones, Ronald N

    2015-07-01

    Ceftazidime/avibactam consists of ceftazidime combined with the novel non-β-lactam β-lactamase inhibitor avibactam, which inhibits Ambler classes A, C and some D enzymes. Clinical isolates were collected from 71 US medical centres in 2012-2013 and were tested for susceptibility at a central laboratory by reference broth microdilution methods. Results for 4381 bacterial isolates from intensive care unit (ICU) patients as well as those from ventilator-associated pneumonia (VAP) (n=435) were analysed and compared with those of 14 483 organisms from non-ICU patients. β-Lactamase-encoding genes were evaluated for 966 Enterobacteriaceae by a microarray-based assay. Ceftazidime/avibactam was active against 99.8/100.0% of Enterobacteriaceae (MIC90, 0.25/0.25mg/L) from ICU/non-ICU patients (2948/10,872 strains), including isolates from VAP (99.1%), multidrug-resistant (MDR) strains (99.3%), extensively drug-resistant (XDR) strains (96.5%) and meropenem-non-susceptible strains (98.0%), at MICs of ≤8mg/L. Against Enterobacteriaceae, susceptibility rates for ceftazidime, piperacillin/tazobactam and meropenem (ICU/non-ICU) were 86.1/91.8%, 88.0/94.3% and 97.8/99.2%, respectively. Meropenem was active against 75.1/85.4% of MDR Enterobacteriaceae and 8.1/27.1% of XDR Enterobacteriaceae from ICU/non-ICU patients. When tested against Pseudomonas aeruginosa, ceftazidime/avibactam inhibited 95.6/97.5% of isolates from ICU/non-ICU (842/2240 isolates), 97.3% of isolates from VAP, 80.7% of ceftazidime-non-susceptible and 80.7% of MDR isolates at ≤8mg/L. Susceptibility rates for P. aeruginosa from ICU/non-ICU were 77.7/86.9% for ceftazidime, 71.2/82.2% for piperacillin/tazobactam and 76.6/84.7% for meropenem. In summary, lower susceptibility rates were observed among ICU compared with non-ICU isolates. Ceftazidime/avibactam exhibited potent activity against a large collection of Gram-negative organisms from ICU and non-ICU patients and provided greater coverage than currently

  17. The evaluation of the OSGLR algorithm for restructurable controls

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.

    1986-01-01

    The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques and the OSGLR algorithm in particular is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
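    Age-weighting, credited above with reducing sensitivity to modeling errors, can be illustrated with a generic exponentially forgetting residual statistic; this is a schematic of the idea only, not the OSGLR decision function, and the forgetting factor and residual model are assumed for illustration.

```python
# Generic age-weighted (exponentially forgetting) residual statistic: old
# residual contributions decay, so a persistent modeling error cannot drive the
# decision function to grow without bound the way an unweighted sum would.
import numpy as np

def age_weighted_statistic(residuals, lam=0.95):
    """Return the decision statistic after each sample; lam < 1 forgets old data."""
    d, out = 0.0, []
    for r in residuals:
        d = lam * d + r**2      # older information is progressively discounted
        out.append(d)
    return np.array(out)

rng = np.random.default_rng(1)
bias = 0.2 * np.ones(200)                          # persistent modeling error
print(age_weighted_statistic(rng.normal(size=200) + bias)[-1])
```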

  18. Item Selection in Computerized Adaptive Testing: Improving the a-Stratified Design with the Sympson-Hetter Algorithm

    ERIC Educational Resources Information Center

    Leung, Chi-Keung; Chang, Hua-Hua; Hau, Kit-Tai

    2002-01-01

    Item exposure control, test-overlap minimization, and the efficient use of item pool are some of the important issues in computerized adaptive testing (CAT) designs. The overexposure of some items and high test-overlap rate may cause both item and test security problems. Previously these problems associated with the maximum information (Max-I)…

  19. Rapid Diagnostic Tests for Dengue Virus Infection in Febrile Cambodian Children: Diagnostic Accuracy and Incorporation into Diagnostic Algorithms

    PubMed Central

    Carter, Michael J.; Emary, Kate R.; Moore, Catherine E.; Parry, Christopher M.; Sona, Soeng; Putchhat, Hor; Reaksmey, Sin; Chanpheaktra, Ngoun; Stoesser, Nicole; Dobson, Andrew D. M.; Day, Nicholas P. J.; Kumar, Varun; Blacksell, Stuart D.

    2015-01-01

    Background Dengue virus (DENV) infection is prevalent across tropical regions and may cause severe disease. Early diagnosis may improve supportive care. We prospectively assessed the Standard Diagnostics (Korea) BIOLINE Dengue Duo DENV rapid diagnostic test (RDT) to NS1 antigen and anti-DENV IgM (NS1 and IgM) in children in Cambodia, with the aim of improving the diagnosis of DENV infection. Methodology and principal findings We enrolled children admitted to hospital with non-localised febrile illnesses during the 5-month DENV transmission season. Clinical and laboratory variables, and DENV RDT results were recorded at admission. Children had blood culture and serological and molecular tests for common local pathogens, including reference laboratory DENV NS1 antigen and IgM assays. 337 children were admitted with non-localised febrile illness over 5 months. 71 (21%) had DENV infection (reference assay positive). Sensitivity was 58%, and specificity 85% for RDT NS1 and IgM combined. Conditional inference framework analysis showed the additional value of platelet and white cell counts for diagnosis of DENV infection. Variables associated with diagnosis of DENV infection were not associated with critical care admission (70 children, 21%) or mortality (19 children, 6%). Known causes of mortality were melioidosis (4), other sepsis (5), and malignancy (1). 22 (27%) children with a positive DENV RDT had a treatable other infection. Conclusions The DENV RDT had low sensitivity for the diagnosis of DENV infection. The high co-prevalence of infections in our cohort indicates the need for a broad microbiological assessment of non-localised febrile illness in these children. PMID:25710684

  20. Blockage and flow studies of a generalized test apparatus including various wing configurations in the Langley 7-inch Mach 7 Pilot Tunnel

    NASA Technical Reports Server (NTRS)

    Albertson, C. W.

    1982-01-01

    A 1/12th scale model of the Curved Surface Test Apparatus (CSTA), which will be used to study aerothermal loads and evaluate Thermal Protection Systems (TPS) on a fuselage-type configuration in the Langley 8-Foot High Temperature Structures Tunnel (8 ft HTST), was tested in the Langley 7-Inch Mach 7 Pilot Tunnel. The purpose of the tests was to study the overall flow characteristics and define an envelope for testing the CSTA in the 8 ft HTST. Wings were tested on the scaled CSTA model to select a wing configuration with the most favorable characteristics for conducting TPS evaluations for curved and intersecting surfaces. The results indicate that the CSTA and selected wing configuration can be tested at angles of attack up to 15.5 and 10.5 degrees, respectively. The base pressure for both models was at the expected low level for most test conditions. Results generally indicate that the CSTA and wing configuration will provide a useful test bed for aerothermal loads and thermal-structural concept evaluation over a broad range of flow conditions in the 8 ft HTST.

  1. The value of care algorithms.

    PubMed

    Myers, Timothy

    2006-09-01

    The use of protocols or care algorithms in medical facilities has increased in the managed care environment. The definition and application of care algorithms, with a particular focus on the treatment of acute bronchospasm, are explored in this review. The benefits and goals of using protocols, especially in the treatment of asthma, to standardize patient care based on clinical guidelines and evidence-based medicine are explained. Ideally, evidence-based protocols should translate research findings into best medical practices that would serve to better educate patients and their medical providers who are administering these protocols. Protocols should include evaluation components that can monitor, through some mechanism of quality assurance, the success and failure of the instrument so that modifications can be made as necessary. The development and design of an asthma care algorithm can be accomplished by using a four-phase approach: phase 1, identifying demographics, outcomes, and measurement tools; phase 2, reviewing, negotiating, and standardizing best practice; phase 3, testing and implementing the instrument and collecting data; and phase 4, analyzing the data and identifying areas of improvement and future research. The experiences of one medical institution that implemented an asthma care algorithm in the treatment of pediatric asthma are described. Their care algorithms served as tools for decision makers to provide optimal asthma treatment in children. In addition, the studies that used the asthma care algorithm to determine the efficacy and safety of ipratropium bromide and levalbuterol in children with asthma are described. PMID:16945065

  2. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.

  3. Design and performance testing of an avalanche photodiode receiver with multiplication gain control algorithm for intersatellite laser communication

    NASA Astrophysics Data System (ADS)

    Yu, Xiaonan; Tong, Shoufeng; Dong, Yan; Song, Yansong; Hao, Shicong; Lu, Jing

    2016-06-01

    An avalanche photodiode (APD) receiver for intersatellite laser communication links is proposed and its performance is experimentally demonstrated. In the proposed system, a series of analog circuits are used not only to adjust the temperature and control the bias voltage but also to monitor the current and recover the clock from the communication data. In addition, the temperature compensation and multiplication gain control algorithm are embedded in the microcontroller to improve the performance of the receiver. As shown in the experiment, with the change of communication rate from 10 to 2000 Mbps, the detection sensitivity of the APD receiver varies from -47 to -34 dBm. Moreover, due to the existence of the multiplication gain control algorithm, the dynamic range of the APD receiver is effectively improved, while the dynamic range at 10, 100, and 1000 Mbps is 38.7, 37.7, and 32.8 dB, respectively. As a result, the experimental results agree well with the theoretical predictions, and the receiver will improve the flexibility of the intersatellite links without increasing the cost.
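    The multiplication gain control idea can be sketched as a simple feedback loop that adjusts the bias voltage so the estimated APD gain tracks a set-point, with a linear temperature-compensation term. The gain model, coefficients, and set-point below are illustrative assumptions, not the authors' circuit or firmware.

```python
# Schematic multiplication-gain control loop: an integral controller nudges the
# APD bias voltage toward the value where the measured gain equals the target,
# with a linear temperature-compensation term added on top.
def apd_bias(v_base, err_integral, temp_c, k_i=0.2, temp_coeff=0.06, temp_ref=25.0):
    """Bias voltage (V) from the accumulated gain error and case temperature."""
    return v_base + k_i * err_integral + temp_coeff * (temp_c - temp_ref)

def run_loop(measure_gain, gain_target, v_base=45.0, temp_c=30.0, steps=20):
    err_integral = 0.0
    v = apd_bias(v_base, err_integral, temp_c)
    for _ in range(steps):
        err_integral += gain_target - measure_gain(v)   # integrate the gain error
        v = apd_bias(v_base, err_integral, temp_c)
    return v

# Toy gain model: gain rises linearly with bias near the operating point.
print(run_loop(lambda v: 0.8 * (v - 30.0), gain_target=15.0))
```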

  4. Implementation and testing of a real-time 3-component phase picking program for Earthworm using the CECM algorithm

    NASA Astrophysics Data System (ADS)

    Baker, B. I.; Friberg, P. A.

    2014-12-01

    Modern seismic networks typically deploy three component (3C) sensors, but still fail to utilize all of the information available in the seismograms when performing automated phase picking for real-time event location. In most cases a variation on a short term over long term average threshold detector is used for picking and then an association program is used to assign phase types to the picks. However, the 3C waveforms from an earthquake contain an abundance of information related to the P and S phases in both their polarization and energy partitioning. An approach that has been overlooked and has demonstrated encouraging results is the Component Energy Comparison Method (CECM) by Nagano et al. as published in Geophysics 1989. CECM is well suited to being used in real-time because the calculation is not computationally intensive. Furthermore, the CECM method has fewer tuning variables (3) than traditional pickers in Earthworm such as the Rex Allen algorithm (N=18) or even the Anthony Lomax Filter Picker module (N=5). In addition to computing the CECM detector we study the detector sensitivity by rotating the signal into principle components as well as estimating the P phase onset from a curvature function describing the CECM as opposed to the CECM itself. We present our results implementing this algorithm in a real-time module for Earthworm and show the improved phase picks as compared to the traditional single component pickers using Earthworm.
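    In the spirit of the component-energy comparison described above (though not Nagano et al.'s exact CECM formulation), a crude P-onset indicator can be built by comparing windowed vertical-component energy against horizontal energy; the window length and threshold below are assumed for illustration.

```python
# Crude P-onset indicator: P energy tends to arrive on the vertical component
# first, so a jump in the ratio of windowed vertical energy to horizontal energy
# flags a candidate onset sample.
import numpy as np

def energy(x, win):
    """Sliding-window energy via a cumulative sum (window length in samples)."""
    c = np.concatenate(([0.0], np.cumsum(x * x)))
    return c[win:] - c[:-win]

def p_onset_index(z, n, e, win=50, threshold=3.0):
    ez = energy(z, win)
    eh = energy(n, win) + energy(e, win) + 1e-12
    above = np.flatnonzero(ez / eh > threshold)
    return int(above[0] + win) if above.size else None

rng = np.random.default_rng(2)
vert, north, east = (rng.normal(scale=0.1, size=2000) for _ in range(3))
vert[1000:1100] += np.sin(np.linspace(0, 20 * np.pi, 100))   # vertical "P" burst
print(p_onset_index(vert, north, east))
```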

  5. Electromagnetic scattering by magnetic spheres: Theory and algorithms

    NASA Astrophysics Data System (ADS)

    Milham, Merill E.

    1994-10-01

    The theory for the scattering of magnetic spheres is developed by means of scaling functions. This theory leads in a natural way to the development of scattering algorithms which use exponential scaling to overcome computational overflow problems. The design and testing of the algorithm is described. Fortran codes which implement the algorithmic design are presented and examples of code use are given. Listings of the code are included.

  6. Sweeping algorithms for five-point stencils and banded matrices

    SciTech Connect

    Kwong, Man Kam.

    1992-06-01

    We record MATLAB experiments implementing the sweeping algorithms we proposed recently to solve five-point stencils arising from the discretization of partial differential equations, notably the Ginzburg-Landau equations from the theory of superconductivity. Algorithms tested include two-direction, multistage, and partial sweeping.
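    A generic two-direction Gauss-Seidel sweep over a five-point stencil (here for a 2-D Poisson problem rather than the Ginzburg-Landau equations) illustrates the kind of sweeping iteration being tested; it is not the authors' MATLAB implementation.

```python
# Alternating forward/backward Gauss-Seidel sweeps over the interior points of a
# five-point stencil discretization of the Poisson equation (Laplacian u = f).
import numpy as np

def sweep(u, f, h, reverse=False):
    """One Gauss-Seidel pass over interior points, forward or backward."""
    rows = range(1, u.shape[0] - 1)
    if reverse:
        rows = reversed(list(rows))
    for i in rows:
        cols = range(1, u.shape[1] - 1)
        if reverse:
            cols = reversed(list(cols))
        for j in cols:
            u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                              - h * h * f[i, j])
    return u

n, h = 33, 1.0 / 32
u = np.zeros((n, n))          # zero Dirichlet boundary on the unit square
f = np.ones((n, n))
for k in range(200):
    sweep(u, f, h, reverse=bool(k % 2))   # alternate the sweep direction
print(u[n // 2, n // 2])      # approaches the center value of the exact solution
```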

  7. Overview of an Algorithm Plugin Package (APP)

    NASA Astrophysics Data System (ADS)

    Linda, M.; Tilmes, C.; Fleig, A. J.

    2004-12-01

    Science software that runs operationally is fundamentally different than software that runs on a scientist's desktop. There are complexities in hosting software for automated production that are necessary and significant. Identifying common aspects of these complexities can simplify algorithm integration. We use NASA's MODIS and OMI data production systems as examples. An Algorithm Plugin Package (APP) is science software that is combined with algorithm-unique elements that permit the algorithm to interface with, and function within, the framework of a data processing system. The framework runs algorithms operationally against large quantities of data. The extra algorithm-unique items are constrained by the design of the data processing system. APPs often include infrastructure that is vastly similar. When the common elements in APPs are identified and abstracted, the cost of APP development, testing, and maintenance will be reduced. This paper is an overview of the extra algorithm-unique pieces that are shared between MODAPS and OMIDAPS APPs. Our exploration of APP structure will help builders of other production systems identify their common elements and reduce algorithm integration costs. Our goal is to complete the development of a library of functions and a menu of implementation choices that reflect common needs of APPs. The library and menu will reduce the time and energy required for science developers to integrate algorithms into production systems.

  8. Comparison of options for reduction of noise in the test section of the NASA Langley 4x7m wind tunnel, including reduction of nozzle area

    NASA Technical Reports Server (NTRS)

    Hayden, R. E.

    1984-01-01

    The acoustically significant features of the NASA 4X7m wind tunnel and the Dutch-German DNW low speed tunnel are compared to illustrate the reasons for large differences in background noise in the open jet test sections of the two tunnels. Also introduced is the concept of reducing test section noise levels through fan and turning vane source reductions which can be brought about by reducing the nozzle cross sectional area, and thus the circuit mass flow for a particular exit velocity. The costs and benefits of treating sources, paths, and changing nozzle geometry are reviewed.

  9. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
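    A minimal sketch of the basic concepts summarized above: a bit-string population evolves by tournament selection, one-point crossover, and mutation. The toy objective (maximize the number of ones) and all parameters are illustrative, not part of the tool described in the report.

```python
# Minimal genetic algorithm: selection, crossover, and mutation over bit strings.
import random

def evolve(n_bits=30, pop_size=40, generations=60, p_mut=0.02, seed=3):
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)                     # toy objective: count ones
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                                    # tournament selection, size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

print(sum(evolve()))   # best fitness found; 30 (all ones) is optimal
```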

  10. Real time test of the long-range aftershock algorithm as a tool for mid-term earthquake prediction in Southern California

    NASA Astrophysics Data System (ADS)

    Prozorov, A. G.; Schreider, S. Yu.

    1990-04-01

    The result of the algorithm of earthquake prediction, published in 1982, is examined in this paper. The algorithm is based on the hypothesis of long-range interaction between strong and moderate earthquakes in a region. It has been applied to the prediction of earthquakes with M≥6.4 in Southern California for the time interval 1932-1979. The retrospective results were as follows: 9 out of 10 strong earthquakes were predicted with average spatial accuracy of 58 km and average delay time (the time interval between a strong earthquake and its best precursor) of 9.4 years, varying from 0.8 to 27.9 years. During the time interval following the period studied in that publication, namely in 1980-1988, four earthquakes occurred in the region which had a magnitude of M≥6.4 at least in one of the catalogs: Caltech or NOAA. Three earthquakes—Coalinga of May, 1983, Chalfant Valley of July, 1985 and Superstition Hills of November, 1987—were successfully predicted by the published algorithm. The missed event is a pair of Mammoth Lake earthquakes of May, 1980 which we consider as one event due to their time-space closeness. This event occurred near the northern boundary of the region, and it also would have been predicted if we had moved the northern boundary from 38°N to 39°N; the precision of the prediction in this case would be 30 km. The average area declared by the algorithm as the area of increased probability of strong earthquake, i.e., the area within 111-km distance of all long-range aftershocks currently present on the map of the region during 1980-1988, is equal to 47% of the total area of the region if the latter is measured in accordance with the density distribution of earthquakes in California, approximated by the catalog of earthquakes with M≥5. In geometrical terms it is approximately equal to 17% of the total area. Thus the result of the real time test shows a 1.6 times increase of the occurrence of C-events in the alarmed area relative to the

  11. Steering Organoids Toward Discovery: Self-Driving Stem Cells Are Opening a World of Possibilities, Including Drug Testing and Tissue Sourcing.

    PubMed

    Solis, Michele

    2016-01-01

    Since the 1980s, stem cells' shape-shifting abilities have wowed scientists. With proper handling, a few growth factors, and some time, stem cells can be cooked up into specific cell types, including neurons, muscle, and skin. PMID:27414630

  12. Aerodynamics of a sphere and an oblate spheroid for Mach numbers from 0.6 to 10.5 including some effects of test conditions

    NASA Technical Reports Server (NTRS)

    Spearman, M. Leroy; Braswell, Dorothy O.

    1993-01-01

    Wind-tunnel tests were made for spheres of various sizes over a range of Mach numbers and Reynolds numbers. The results indicated some conditions where the drag was affected by changes in the afterbody pressure due to a shock reflection from the tunnel wall. This effect disappeared when the Mach number was increased for a given sphere size or when the sphere size was decreased for a given Mach number. Drag measurements and Schlieren photographs are presented that show the possibility of obtaining inaccurate data when tests are made with a sphere too large for the test section size and Mach number. Tests were also made of an oblate spheroid. The results indicated a region at high Mach numbers where inherent positive static stability might occur with the oblate-face forward. The drag results are compared with those for a sphere as well as those for various other shapes. The drag results for the oblate spheroid and the sphere are also compared with some calculated results.

  13. Informative-Transmission Disequilibrium Test (i-TDT): Combined Linkage and Association Mapping That Includes Unaffected Offspring as Well as Affected Offspring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    To date, there is no test valid for the composite null hypothesis of no linkage or no association that utilizes transmission information from heterozygous parents to their unaffected offspring as well as the affected offspring from ascertained nuclear families. Since the unaffected siblings also pro...

  14. Use of an Aptitude Test in University Entrance--A Validity Study: Updated Analyses of Higher Education Destinations, Including 2007 Entrants

    ERIC Educational Resources Information Center

    Kirkup, Catherine; Wheater, Rebecca; Morrison, Jo; Durbin, Ben

    2010-01-01

    In 2005, the National Foundation for Educational Research (NFER) was commissioned to evaluate the potential value of using an aptitude test as an additional tool in the selection of candidates for admission to higher education (HE). This five-year study is co-funded by the National Foundation for Educational Research (NFER), the Department for…

  15. Political Violence and Child Adjustment in Northern Ireland: Testing Pathways in a Social-Ecological Model Including Single- and Two-Parent Families

    ERIC Educational Resources Information Center

    Cummings, E. Mark; Schermerhorn, Alice C.; Merrilees, Christine E.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed

    2010-01-01

    Moving beyond simply documenting that political violence negatively impacts children, we tested a social-ecological hypothesis for relations between political violence and child outcomes. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including…

  16. Hardware-In-The-Loop Testing of Continuous Control Algorithms for a Precision Formation Flying Demonstration Mission

    NASA Technical Reports Server (NTRS)

    Naasz, Bo J.; Burns, Richard D.; Gaylor, David; Higinbotham, John

    2004-01-01

    A sample mission sequence is defined for a low earth orbit demonstration of Precision Formation Flying (PFF). Various guidance navigation and control strategies are discussed for use in the PFF experiment phases. A sample PFF experiment is implemented and tested in a realistic Hardware-in-the-Loop (HWIL) simulation using the Formation Flying Test Bed (FFTB) at NASA's Goddard Space Flight Center.

  17. Algorithm Helps Monitor Engine Operation

    NASA Technical Reports Server (NTRS)

    Eckerling, Sherry J.; Panossian, Hagop V.; Kemp, Victoria R.; Taniguchi, Mike H.; Nelson, Richard L.

    1995-01-01

    Real-Time Failure Control (RTFC) algorithm part of automated monitoring-and-shutdown system being developed to ensure safety and prevent major damage to equipment during ground tests of main engine of space shuttle. Includes redundant sensors, controller voting logic circuits, automatic safe-limit logic circuits, and conditional-decision logic circuits, all monitored by human technicians. Basic principles of system also applicable to stationary powerplants and other complex machinery systems.

  18. High Stakes Testing in Texas: An Analysis of the Impact of Including Special Education Students in the Texas Academic Excellence Indicator System.

    ERIC Educational Resources Information Center

    Linton, Thomas H.

    The accountability subset of the Texas Assessment of Academic Skills (TAAS) was studied over 4 years to identify trends that might explain why the 1999 TAAS passing rate did not decrease as was predicted. Expanding the accountability index in those years to include special education students was expected to cause a decline in the TAAS…

  19. Updated treatment algorithm of pulmonary arterial hypertension.

    PubMed

    Galiè, Nazzareno; Corris, Paul A; Frost, Adaani; Girgis, Reda E; Granton, John; Jing, Zhi Cheng; Klepetko, Walter; McGoon, Michael D; McLaughlin, Vallerie V; Preston, Ioana R; Rubin, Lewis J; Sandoval, Julio; Seeger, Werner; Keogh, Anne

    2013-12-24

    The demands on a pulmonary arterial hypertension (PAH) treatment algorithm are multiple and in some ways conflicting. The treatment algorithm usually includes different types of recommendations with varying degrees of scientific evidence. In addition, the algorithm is required to be comprehensive but not too complex, informative yet simple and straightforward. The types of information in the treatment algorithm are heterogeneous, including clinical, hemodynamic, medical, interventional, pharmacological and regulatory recommendations. Stakeholders (or users) including physicians from various specialties and with variable expertise in PAH, nurses, patients and patients' associations, healthcare providers, regulatory agencies and industry are often interested in the PAH treatment algorithm for different reasons. These are the considerable challenges faced when proposing appropriate updates to the current evidence-based treatment algorithm. The current treatment algorithm may be divided into 3 main areas: 1) general measures, supportive therapy, referral strategy, acute vasoreactivity testing and chronic treatment with calcium channel blockers; 2) initial therapy with approved PAH drugs; and 3) clinical response to the initial therapy, combination therapy, balloon atrial septostomy, and lung transplantation. All three sections will be revisited highlighting information newly available in the past 5 years and proposing updates where appropriate. The European Society of Cardiology grades of recommendation and levels of evidence will be adopted to rank the proposed treatments. PMID:24355643

  20. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
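    A minimal sketch of the conventional simulated-annealing procedure described above (random start, random neighbor, temperature-dependent acceptance of worse moves, a shrinking search region, and a cooling schedule); it does not implement the recursive-branching variant, and the objective and schedule are assumed for illustration.

```python
# Conventional simulated annealing on a one-dimensional toy objective.
import math, random

def simulated_annealing(objective, lo, hi, steps=5000, t0=1.0, seed=4):
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)                       # random starting configuration
    f = objective(x)
    best_x, best_f = x, f
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9      # cooling schedule
        radius = (hi - lo) * temp                 # search region shrinks as it cools
        cand = min(hi, max(lo, x + rng.uniform(-radius, radius)))
        fc = objective(cand)
        if fc < f or rng.random() < math.exp((f - fc) / temp):
            x, f = cand, fc                       # accept better or, sometimes, worse
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Multimodal toy objective whose global minimum lies near x = -0.3.
print(simulated_annealing(lambda x: x * x + 2.0 * math.sin(5.0 * x), -10.0, 10.0))
```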

  1. Control Algorithms and Simulated Environment Developed and Tested for Multiagent Robotics for Autonomous Inspection of Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Wong, Edmond

    2005-01-01

    The NASA Glenn Research Center and academic partners are developing advanced multiagent robotic control algorithms that will enable the autonomous inspection and repair of future propulsion systems. In this application, on-wing engine inspections will be performed autonomously by large groups of cooperative miniature robots that will traverse the surfaces of engine components to search for damage. The eventual goal is to replace manual engine inspections that require expensive and time-consuming full engine teardowns and allow the early detection of problems that would otherwise result in catastrophic component failures. As a preliminary step toward the long-term realization of a practical working system, researchers are developing the technology to implement a proof-of-concept testbed demonstration. In a multiagent system, the individual agents are generally programmed with relatively simple controllers that define a limited set of behaviors. However, these behaviors are designed in such a way that, through the localized interaction among individual agents and between the agents and the environment, they result in self-organized, emergent group behavior that can solve a given complex problem, such as cooperative inspection. One advantage to the multiagent approach is that it allows for robustness and fault tolerance through redundancy in task handling. In addition, the relatively simple agent controllers demand minimal computational capability, which in turn allows for greater miniaturization of the robotic agents.

  2. Sourcebook of locations of geophysical surveys in tunnels and horizontal holes, including results of seismic refraction surveys, Rainier Mesa, Aqueduct Mesa, and Area 16, Nevada Test Site

    USGS Publications Warehouse

    Carroll, R.D.; Kibler, J.E.

    1983-01-01

    Seismic refraction surveys have been obtained sporadically in tunnels in zeolitized tuff at the Nevada Test Site since the late 1950's. Commencing in 1967 and continuing to date (1982), extensive measurements of shear- and compressional-wave velocities have been made in five tunnel complexes in Rainier and Aqueduct Mesas and in one tunnel complex in Shoshone Mountain. The results of these surveys to 1980 are compiled in this report. In addition, extensive horizontal drilling was initiated in 1967 in connection with geologic exploration in these tunnel complexes for sites for nuclear weapons tests. Seismic and electrical surveys were conducted in the majority of these holes. The type and location of these tunnel and borehole surveys are indexed in this report. Synthesis of the seismic refraction data indicates a mean compressional-wave velocity near the nuclear device point (WP) of 23 tunnel events of 2,430 m/s (7,970 f/s) with a range of 1,846-2,753 m/s (6,060-9,030 f/s). The mean shear-wave velocity of 17 tunnel events is 1,276 m/s (4,190 f/s) with a range of 1,140-1,392 m/s (3,740-4,570 f/s). Experience indicates that these velocity variations are due chiefly to the extent of fracturing and (or) the presence of partially saturated rock in the region of the survey.

  3. Image change detection algorithms: a systematic survey.

    PubMed

    Radke, Richard J; Andra, Srinivas; Al-Kofahi, Omar; Roysam, Badrinath

    2005-03-01

    Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer. PMID:15762326
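    The simplest decision rule covered by such surveys, pixel-wise differencing followed by a significance test against an assumed Gaussian noise model, can be sketched in a few lines; real change-detection pipelines add predictive models, shading/illumination handling, and mask-consistency enforcement.

```python
# Pixel-wise differencing with a robust noise estimate: flag pixels whose change
# exceeds a multiple of the estimated noise standard deviation.
import numpy as np

def change_mask(img1, img2, k_sigma=3.0):
    """Return a boolean mask of pixels judged significantly changed."""
    diff = img2.astype(float) - img1.astype(float)
    sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))  # robust noise scale
    return np.abs(diff) > k_sigma * sigma

rng = np.random.default_rng(5)
scene = rng.normal(100.0, 5.0, (64, 64))
later = scene + rng.normal(0.0, 2.0, (64, 64))
later[20:30, 40:50] += 40.0                     # inserted "change" region
print(change_mask(scene, later).sum())          # ~100 changed pixels plus a few false alarms
```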

  4. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing including spectral filtering, CFAR and knowledge-based acceptance testing are performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by a ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single scan performance with a nominal real time delay of less than one second between illumination and display.
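    The scan-to-scan M-association out of N-scan rule can be illustrated with a small sliding-window check that declares a detection only when at least M of the last N scans produced an associated threshold crossing; the values of M and N below are assumptions for the sketch.

```python
# M-out-of-N scan rule: declare a detection only when at least M of the last N
# scans carried an associated threshold crossing, suppressing isolated false alarms.
from collections import deque

def m_of_n_detector(hits, m=3, n=5):
    """Yield True at each scan where the M-of-N criterion is satisfied."""
    window = deque(maxlen=n)
    for hit in hits:
        window.append(bool(hit))
        yield sum(window) >= m

scan_hits = [0, 1, 0, 1, 1, 1, 0, 0, 0, 1]
print(list(m_of_n_detector(scan_hits)))   # detections declared at scans 4-7 (0-indexed)
```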

  5. Algorithms and analysis for underwater vehicle plume tracing.

    SciTech Connect

    Byrne, Raymond Harry; Savage, Elizabeth L.; Hurtado, John Edward; Eskridge, Steven E.

    2003-07-01

    The goal of this research was to develop and demonstrate cooperative 3-D plume tracing algorithms for miniature autonomous underwater vehicles. Applications for this technology include Lost Asset and Survivor Location Systems (L-SALS) and Ship-in-Port Patrol and Protection (SP3). This research was a joint effort that included Nekton Research, LLC, Sandia National Laboratories, and Texas A&M University. Nekton Research developed the miniature autonomous underwater vehicles while Sandia and Texas A&M developed the 3-D plume tracing algorithms. This report describes the plume tracing algorithm and presents test results from successful underwater testing with pseudo-plume sources.

  6. An aerial radiological survey of the Tonopah Test Range including Clean Slate 1,2,3, Roller Coaster, decontamination area, Cactus Springs Ranch target areas. Central Nevada

    SciTech Connect

    Proctor, A.E.; Hendricks, T.J.

    1995-08-01

    An aerial radiological survey was conducted of major sections of the Tonopah Test Range (TTR) in central Nevada from August through October 1993. The survey consisted of aerial measurements of both natural and man-made gamma radiation emanating from the terrestrial surface. The initial purpose of the survey was to locate depleted uranium (detecting 238U) from projectiles which had impacted on the TTR. The examination of areas near Cactus Springs Ranch (located near the western boundary of the TTR) and an animal burial area near the Double Track site were secondary objectives. When more widespread than expected 241Am contamination was found around the Clean Slates sites, the survey was expanded to cover the area surrounding the Clean Slates and also the Double Track site. Results are reported as radiation isopleths superimposed on aerial photographs of the area.

  7. Corrective Action Investigation Plan for Corrective Action Unit 529: Area 25 Contaminated Materials, Nevada Test Site, Nevada, Rev. 0, Including Record of Technical Change No. 1

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2003-02-26

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 529, Area 25 Contaminated Materials, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. CAU 529 consists of one Corrective Action Site (25-23-17). For the purpose of this investigation, the Corrective Action Site has been divided into nine parcels based on the separate and distinct releases. A conceptual site model was developed for each parcel to address the translocation of contaminants from each release. The results of this investigation will be used to support a defensible evaluation of corrective action alternatives in the corrective action decision document.

  8. Design Science Research toward Designing/Prototyping a Repeatable Model for Testing Location Management (LM) Algorithms for Wireless Networking

    ERIC Educational Resources Information Center

    Peacock, Christopher

    2012-01-01

    The purpose of this research effort was to develop a model that provides repeatable Location Management (LM) testing using a network simulation tool, QualNet version 5.1 (2011). The model will provide current and future protocol developers a framework to simulate stable protocol environments for development. This study used the Design Science…

  9. Quadrupole Alignment and Trajectory Correction for Future Linear Colliders: SLC Tests of a Dispersion-Free Steering Algorithm

    SciTech Connect

    Assmann, R

    2004-06-08

    The feasibility of future linear colliders depends on achieving very tight alignment and steering tolerances. All proposals (NLC, JLC, CLIC, TESLA and S-BAND) currently require a total emittance growth in the main linac of less than 30-100% [1]. This should be compared with a 100% emittance growth in the much smaller SLC linac [2]. Major advances in alignment and beam steering techniques beyond those used in the SLC are necessary for the next generation of linear colliders. In this paper, we present an experimental study of quadrupole alignment with a dispersion-free steering algorithm. A closely related method (wakefield-free steering) takes into account wakefield effects [3]. However, this method cannot be studied at the SLC. The requirements for future linear colliders lead to new and unconventional ideas about alignment and beam steering. For example, no dipole correctors are foreseen for the standard trajectory correction in the NLC [4]; beam steering will be done by moving the quadrupole positions with magnet movers. This illustrates the close symbiosis between alignment, beam steering and beam dynamics that will emerge. It is no longer possible to consider the accelerator alignment as static with only a few surveys and realignments per year. The alignment in future linear colliders will be a dynamic process in which the whole linac, with thousands of beam-line elements, is aligned in a few hours or minutes, while the required accuracy of about 5 µm for the NLC quadrupole alignment [4] is a factor of 20 higher than in existing accelerators. The major task in alignment and steering is the accurate determination of the optimum beam-line position. Ideally one would like all elements to be aligned along a straight line. However, this is not practical. Instead a "smooth curve" is acceptable as long as its wavelength is much longer than the betatron wavelength of the accelerated beam. Conventional alignment methods are limited in accuracy by errors in the survey

  10. A depth-averaged debris-flow model that includes the effects of evolving dilatancy: II. Numerical predictions and experimental tests.

    USGS Publications Warehouse

    George, David L.; Iverson, Richard M.

    2014-01-01

    We evaluate a new depth-averaged mathematical model that is designed to simulate all stages of debris-flow motion, from initiation to deposition. A companion paper shows how the model’s five governing equations describe simultaneous evolution of flow thickness, solid volume fraction, basal pore-fluid pressure, and two components of flow momentum. Each equation contains a source term that represents the influence of state-dependent granular dilatancy. Here we recapitulate the equations and analyze their eigenstructure to show that they form a hyperbolic system with desirable stability properties. To solve the equations we use a shock-capturing numerical scheme with adaptive mesh refinement, implemented in an open-source software package we call D-Claw. As tests of D-Claw, we compare model output with results from two sets of large-scale debris-flow experiments. One set focuses on flow initiation from landslides triggered by rising pore-water pressures, and the other focuses on downstream flow dynamics, runout, and deposition. D-Claw performs well in predicting evolution of flow speeds, thicknesses, and basal pore-fluid pressures measured in each type of experiment. Computational results illustrate the critical role of dilatancy in linking coevolution of the solid volume fraction and pore-fluid pressure, which mediates basal Coulomb friction and thereby regulates debris-flow dynamics.

  11. Corrective Action Investigation Plan for Corrective Action Unit 516: Septic Systems and Discharge Points, Nevada Test Site, Nevada, Rev. 0, Including Record of Technical Change No. 1

    SciTech Connect

    2003-04-28

    This Corrective Action Investigation Plan (CAIP) contains the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Sites Office's (NNSA/NSO's) approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 516, Septic Systems and Discharge Points, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. CAU 516 consists of six Corrective Action Sites: 03-59-01, Building 3C-36 Septic System; 03-59-02, Building 3C-45 Septic System; 06-51-01, Sump Piping; 06-51-02, Clay Pipe and Debris; 06-51-03, Clean Out Box and Piping; and 22-19-04, Vehicle Decontamination Area. Located in Areas 3, 6, and 22 of the NTS, CAU 516 is being investigated because disposed waste may be present without appropriate controls, and hazardous and/or radioactive constituents may be present or migrating at concentrations and locations that could potentially pose a threat to human health and the environment. Existing information and process knowledge on the expected nature and extent of contamination of CAU 516 are insufficient to select preferred corrective action alternatives; therefore, additional information will be obtained by conducting a corrective action investigation. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document. Record of Technical Change No. 1 is dated 3/2004.

  12. Corrective Action Investigation Plan for Corrective Action Unit 536: Area 3 Release Site, Nevada Test Site, Nevada (Rev. 0 / June 2003), Including Record of Technical Change No. 1

    SciTech Connect

    2003-06-27

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's approach to collect the data necessary to evaluate corrective action alternatives (CAAs) appropriate for the closure of Corrective Action Unit (CAU) 536: Area 3 Release Site, Nevada Test Site, Nevada, under the Federal Facility Agreement and Consent Order. Corrective Action Unit 536 consists of a single Corrective Action Site (CAS): 03-44-02, Steam Jenny Discharge. The CAU 536 site is being investigated because existing information on the nature and extent of possible contamination is insufficient to evaluate and recommend corrective action alternatives for CAS 03-44-02. The additional information will be obtained by conducting a corrective action investigation (CAI) prior to evaluating CAAs and selecting the appropriate corrective action for this CAS. The results of this field investigation are to be used to support a defensible evaluation of corrective action alternatives in the corrective action decision document. Record of Technical Change No. 1 is dated 3-2004.

  13. A programme of studies including assessment of diagnostic accuracy of school hearing screening tests and a cost-effectiveness model of school entry hearing screening programmes.

    PubMed Central

    Fortnum, Heather; Ukoumunne, Obioha C; Hyde, Chris; Taylor, Rod S; Ozolins, Mara; Errington, Sam; Zhelev, Zhivko; Pritchard, Clive; Benton, Claire; Moody, Joanne; Cocking, Laura; Watson, Julian; Roberts, Sarah

    2016-01-01

    BACKGROUND Identification of permanent hearing impairment at the earliest possible age is crucial to maximise the development of speech and language. Universal newborn hearing screening identifies the majority of the 1 in 1000 children born with a hearing impairment, but later onset can occur at any time and there is no optimum time for further screening. A universal but non-standardised school entry screening (SES) programme is in place in many parts of the UK but its value is questioned. OBJECTIVES To evaluate the diagnostic accuracy of hearing screening tests and the cost-effectiveness of the SES programme in the UK. DESIGN Systematic review, case-control diagnostic accuracy study, comparison of routinely collected data for services with and without a SES programme, parental questionnaires, observation of practical implementation and cost-effectiveness modelling. SETTING Second- and third-tier audiology services; community. PARTICIPANTS Children aged 4-6 years and their parents. MAIN OUTCOME MEASURES Diagnostic accuracy of two hearing screening devices, referral rate and source, yield, age at referral and cost per quality-adjusted life-year. RESULTS The review of diagnostic accuracy studies concluded that research to date demonstrates marked variability in the design, methodological quality and results. The pure-tone screen (PTS) (Amplivox, Eynsham, UK) and HearCheck (HC) screener (Siemens, Frimley, UK) devices had high sensitivity (PTS ≥ 89%, HC ≥ 83%) and specificity (PTS ≥ 78%, HC ≥ 83%) for identifying hearing impairment. The rate of referral for hearing problems was 36% lower with SES (Nottingham) relative to no SES (Cambridge) [rate ratio 0.64, 95% confidence interval (CI) 0.59 to 0.69; p < 0.001]. The yield of confirmed cases did not differ between areas with and without SES (rate ratio 0.82, 95% CI 0.63 to 1.06; p = 0.12). The mean age of referral did not differ between areas with and without SES for all referrals but children

  14. Mapping of Schistosomiasis and Soil-Transmitted Helminths in Namibia: The First Large-Scale Protocol to Formally Include Rapid Diagnostic Tests

    PubMed Central

    Sousa-Figueiredo, José Carlos; Stanton, Michelle C.; Katokele, Stark; Arinaitwe, Moses; Adriko, Moses; Balfour, Lexi; Reiff, Mark; Lancaster, Warren; Noden, Bruce H.; Bock, Ronnie; Stothard, J. Russell

    2015-01-01

    Background Namibia is now ready to begin mass drug administration of praziquantel and albendazole against schistosomiasis and soil-transmitted helminths, respectively. Although historical data identifies areas of transmission of these neglected tropical diseases (NTDs), there is a need to update epidemiological data. For this reason, Namibia adopted a new protocol for mapping of schistosomiasis and geohelminths, formally integrating rapid diagnostic tests (RDTs) for infections and morbidity. In this article, we explain the protocol in detail, and introduce the concept of ‘mapping resolution’, as well as present results and treatment recommendations for northern Namibia. Methods/Findings/Interpretation This new protocol allowed a large sample to be surveyed (N = 17 896 children from 299 schools) at relatively low cost (7 USD per person mapped) and very quickly (28 working days). All children were analysed by RDTs, but only a sub-sample was also diagnosed by light microscopy. Overall prevalence of schistosomiasis in the surveyed areas was 9.0%, highly associated with poorer access to potable water (OR = 1.5, P<0.001) and defective (OR = 1.2, P<0.001) or absent sanitation infrastructure (OR = 2.0, P<0.001). Overall prevalence of geohelminths, more particularly hookworm infection, was 12.2%, highly associated with presence of faecal occult blood (OR = 1.9, P<0.001). Prevalence maps were produced and hot spots identified to better guide the national programme in drug administration, as well as targeted improvements in water, sanitation and hygiene. The RDTs employed (circulating cathodic antigen and microhaematuria for Schistosoma mansoni and S. haematobium, respectively) performed well, with sensitivities above 80% and specificities above 95%. Conclusion/Significance This protocol is cost-effective and sensitive to budget limitations and the potential economic and logistical strains placed on the national Ministries of Health. Here we present a high resolution map

  15. Spectrum of cytopathologic features of epithelioid sarcoma in a series of 7 uncommon cases with immunohistochemical results, including loss of INI1/SMARCB1 in two test cases.

    PubMed

    Rekhi, Bharat; Singh, Neha

    2016-07-01

    Diagnosis of an epithelioid sarcoma (ES) is challenging on fine needle aspiration cytology (FNAC) smears. There are few documented series describing cytopathologic features and immunostaining results of ESs. The present study describes cytopathologic features of seven cases of ES. All seven tumors occurred in males within age-range of 22-61 years; in sites, such as forearm (n = 3), hand (n = 2), thigh (n = 1), and inguinal region (n = 1). FNAC was performed for metastatic lesions (n = 5), recurrent lesions (n = 4), as well as for a primary diagnosis (n = 1). FNAC smears in most cases were moderate to hypercellular, composed of polygonal cells (seven cases) and spindle cells (three cases), arranged in loosely cohesive groups, non-overlapping clusters, and scattered singly, containing moderate to abundant cytoplasm, defined cell borders, vesicular nuclei, and discernible nucleoli. Variable cytopathologic features identified in certain cases were "rhabdoid-like" intracytoplasmic inclusions (n = 5), giant cells (n = 3), and interspersed scanty, metachromatic stroma (n = 4). Histopathologic examination revealed two cases of conventional-type ES, three of proximal/large cell-type ES, and two cases of mixed-type ES, displaying features of conventional and proximal subtypes. By immunohistochemistry (IHC), tumor cells were positive for cytokeratin (CK) (4/5), epithelial membrane antigen (EMA) (6/6), panCK (1/1), vimentin (3/3), and CD34 (7/7). Tumor cells were completely negative for INI1/SMARCB1 (0/2) and CD31 (0/5). In our settings, FNAC was mostly performed in recurrent and/or metastatic cases of ES, and rarely for a primary diagnosis of ES. Important cytopathologic features of ESs include loosely cohesive, non-overlapping clusters of polygonal cells with variable "rhabdoid-like" and spindle cells. Optimal diagnostic IHC markers in such cases include CK, EMA, AE1AE3, CD34, and INI1/SMARCB1. Clinical correlation is imperative in all

  16. Description of nuclear systems with a self-consistent configuration-mixing approach: Theory, algorithm, and application to the 12C test nucleus

    NASA Astrophysics Data System (ADS)

    Robin, C.; Pillet, N.; Peña Arteaga, D.; Berger, J.-F.

    2016-02-01

    Background: Although self-consistent multiconfiguration methods have been used for decades to address the description of atomic and molecular many-body systems, only a few trials have been made in the context of nuclear structure. Purpose: This work aims at the development of such an approach to describe in a unified way various types of correlations in nuclei in a self-consistent manner where the mean-field is improved as correlations are introduced. The goal is to reconcile the usually set-apart shell-model and self-consistent mean-field methods. Method: This approach is referred to as "variational multiparticle-multihole configuration mixing method." It is based on a double variational principle which yields a set of two coupled equations that determine at the same time the expansion coefficients of the many-body wave function and the single-particle states. The solution of this problem is obtained by building a doubly iterative numerical algorithm. Results: The formalism is derived and discussed in a general context, starting from a three-body Hamiltonian. Links to existing many-body techniques such as the formalism of Green's functions are established. First applications are done using the two-body D1S Gogny effective force. The numerical procedure is tested on the 12C nucleus to study the convergence features of the algorithm in different contexts. Ground-state properties as well as single-particle quantities are analyzed, and the description of the first 2+ state is examined. Conclusions: The self-consistent multiparticle-multihole configuration mixing method is fully applied for the first time to the description of a test nucleus. This study makes it possible to validate our numerical algorithm and leads to encouraging results. To test the method further, we will realize in the second article of this series a systematic description of more nuclei and observables obtained by applying the newly developed numerical procedure with the same Gogny force. As

  17. Fast algorithm for detecting community structure in networks

    NASA Astrophysics Data System (ADS)

    Newman, M. E.

    2004-06-01

    Many networks display community structure—groups of vertices within which connections are dense but between which they are sparser—and sensitive computer algorithms have in recent years been developed for detecting this structure. These algorithms, however, are computationally demanding, which limits their application to small networks. Here we describe an algorithm which gives excellent results when tested on both computer-generated and real-world networks and is much faster, typically thousands of times faster, than previous algorithms. We give several example applications, including one to a collaboration network of more than 50 000 physicists.
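
    As a hedged illustration of this class of method, the short sketch below runs the greedy modularity-maximization routine shipped with the networkx Python package (an implementation descended from this fast algorithm via Clauset, Newman, and Moore) on the 34-node karate-club test network; the graph choice is arbitrary and the snippet is not the author's original code.

        # Sketch: greedy modularity community detection with networkx
        # (not the paper's original implementation).
        import networkx as nx
        from networkx.algorithms import community

        G = nx.karate_club_graph()                      # classic 34-node test network
        parts = community.greedy_modularity_communities(G)
        for i, nodes in enumerate(parts):
            print(f"community {i}: {sorted(nodes)}")
        print("modularity:", round(community.modularity(G, parts), 3))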

  18. Semioptimal practicable algorithmic cooling

    SciTech Connect

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-15

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  19. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    The use of numerical simulation in material processing allows a trial-and-error strategy to improve virtual processes without incurring material costs or interrupting production, and therefore saves a great deal of money, but it requires user time to analyze the results, adjust the operating conditions, and restart the simulation. Automatic optimization is the perfect complement to simulation. An Evolutionary Algorithm coupled with metamodelling makes it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners have been selected to cover the different areas of the mechanical forging industry and to provide different examples of forming simulation tools, the aim being to demonstrate this claim in practice. The large computational time is handled by a metamodel approach, which interpolates the objective function over the entire parameter space while knowing the exact function values only at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization, where the set of solutions corresponding to the best possible compromises between the different objectives is computed in the same way. The population-based approach exploits the parallel capabilities of the available computer with high efficiency. An optimization module, fully embedded within the Forge2009 user interface, makes it possible to cover all the defined examples, and the use of new multi-core hardware to compute several simulations at the same time reduces the required time dramatically. The presented examples
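
    A minimal sketch of the general metamodel-assisted strategy described above is given below, assuming a Kriging-type Gaussian-process surrogate from scikit-learn and a toy objective standing in for an expensive forming simulation; the objective function, bounds, and iteration counts are invented for illustration, and this is not the Forge2009 optimization module.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        def expensive_simulation(x):          # stand-in for a costly forming simulation
            return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

        rng = np.random.default_rng(0)
        dim, n_init, n_iter, pop = 2, 8, 20, 200
        X = rng.uniform(0, 1, (n_init, dim))            # initial "master points"
        y = np.array([expensive_simulation(x) for x in X])

        for _ in range(n_iter):
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
            # cheap evolutionary search on the surrogate: mutate the current best point
            cand = np.clip(X[y.argmin()] + rng.normal(0, 0.1, (pop, dim)), 0, 1)
            best = cand[gp.predict(cand).argmin()]      # surrogate-predicted best candidate
            X = np.vstack([X, best])                    # run the expensive code once
            y = np.append(y, expensive_simulation(best))

        print("best point:", X[y.argmin()], "objective:", y.min())

    The point of the pattern is that the expensive simulator is called only at the initial master points and once per iteration, a few tens of evaluations in total, while the evolutionary search itself runs on the cheap surrogate.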

  20. The Soil Moisture Active Passive Mission (SMAP) Science Data Products: Results of Testing with Field Experiment and Algorithm Testbed Simulation Environment Data

    NASA Technical Reports Server (NTRS)

    Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.

    2010-01-01

    Talk outline 1. Derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications 2. Data products and latencies 3. Algorithm highlights 4. SMAP Algorithm Testbed 5. SMAP Working Groups and community engagement

  1. Prevalence of leprous neuropathy determined by neurosensory testing in an endemic zone in Ecuador: Development of an algorithm to identify patients benefiting from early neurolysis.

    PubMed

    Baltodano, Pablo A; Rochlin, Danielle H; Noboa, Jonathan; Sarhane, Karim A; Rosson, Gedge D; Dellon, A Lee

    2016-07-01

    The success of a microneurosurgical intervention in leprous neuropathy (LN) depends on the diagnosis of chronic compression before irreversible paralysis and digital loss occurs. In order to determine the effectiveness of a different approach for early identification of LN, neurosensory testing with the Pressure-Specified Sensory Device™ (PSSD), a validated and sensitive test, was performed in an endemic zone for leprosy. A cross-sectional study was conducted to analyze a patient sample meeting the World Health Organization (WHO) criteria for Hansen's disease. The prevalence of LN was based on the presence of ≥1 abnormal PSSD pressure threshold for a two-point static touch. A total of 312 upper and lower extremity nerves were evaluated in 39 patients. The PSSD found a 97.4% prevalence of LN. Tinel's sign was identified in 60% of these patients. An algorithm for early identification of patients with LN was proposed using PSSD testing based on the unilateral screening of the ulnar and deep peroneal nerves. PMID:27156203

  2. Implementation of Real-Time Testing of Earthquake Early Warning Algorithms: Using the California Integrated Seismic Network (CISN) Infrastructure as a Test Bed for the P Amplitude and Period Monitor for a Single Station

    NASA Astrophysics Data System (ADS)

    Solanki, K.; Hauksson, E.; Kanamori, H.; Friberg, P.; Wu, Y.

    2006-12-01

    A necessary first step toward the goal of implementing proof-of-concept projects for earthquake early warning (EEW) is the real-time testing of the seismological algorithms. To provide the most appropriate environment, the CISN has designed and implemented a platform for such testing. We are testing the amplitude (Pd) and period (Tau-c) monitor developed for providing on-site earthquake early warning (EEW) using data from a single seismic station. We have designed and implemented a framework generator that can automatically generate code for waveform-processing systems. The framework generator is based on the Code Worker software (www.codeworker.org), which provides APIs and a scripting language to build parsers and template processing engines. A higher-level description of the waveform-processing system is required to generate the waveform-processing framework, and we have implemented a domain-specific language (DSL) to provide this description. The framework generator allows the developer to focus more on the waveform-processing algorithms and frees him/her from repetitive and tedious coding tasks. It also has an automatic gap detector, transparent buffer management, and built-in thread management. We have implemented the waveform-processing framework to process real-time waveforms coming from the dataloggers deployed throughout southern California by the Southern California Seismic Network. The system also has the capability of processing data from archived events to facilitate off-line testing. An application feeds data from MiniSEED packets into the Wave Data Area (WDA). The system that grabs the data from the WDA processes each real-time data stream independently. To verify results, SAC files are generated at each processing step. Currently, we are processing broadband data streams from 160 stations and determining Pd and Tau-c as local earthquakes occur in southern California. We present the results from this testing and compare the
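
    The quantities being monitored can be sketched directly: Pd is the peak displacement over the first few seconds of the P wave, and Tau-c is commonly defined (following Kanamori) as 2*pi times the square root of the ratio of the integrated squared displacement to the integrated squared velocity. The snippet below applies those textbook definitions to a synthetic trace; it is not the CISN real-time code, and the window length and test signal are arbitrary.

        import numpy as np

        def pd_and_tau_c(disp, dt, window=3.0):
            """Peak displacement Pd and period parameter tau_c from the first
            `window` seconds of P-wave displacement (Kanamori-style definition)."""
            n = int(window / dt)
            u = disp[:n]
            v = np.gradient(u, dt)                  # velocity from displacement
            r = np.sum(v * v) / np.sum(u * u)       # ratio of integrated squares
            return np.max(np.abs(u)), 2.0 * np.pi / np.sqrt(r)

        # synthetic example: a damped 1 Hz sinusoid standing in for a P-wave onset
        dt = 0.01
        t = np.arange(0, 3, dt)
        u = 1e-4 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-t)
        Pd, tau_c = pd_and_tau_c(u, dt)
        print(f"Pd = {Pd:.2e} m, tau_c = {tau_c:.2f} s")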

  3. Polynomial Algorithms for Item Matching.

    ERIC Educational Resources Information Center

    Armstrong, Ronald D.; Jones, Douglas H.

    1992-01-01

    Polynomial algorithms are presented that are used to solve selected problems in test theory, and computational results from sample problems with several hundred decision variables are provided that demonstrate the benefits of these algorithms. The algorithms are based on optimization theory in networks (graphs). (SLD)

  4. 3D-radiation hydro simulations of disk-planet interactions. I. Numerical algorithm and test cases

    NASA Astrophysics Data System (ADS)

    Klahr, H.; Kley, W.

    2006-01-01

    We study the evolution of an embedded protoplanet in a circumstellar disk using the 3D-Radiation Hydro code TRAMP, and treat the thermodynamics of the gas properly in three dimensions. The primary interest of this work lies in the demonstration and testing of the numerical method. We show how far numerical parameters can influence the simulations of gap opening. We study a standard reference model under various numerical approximations. Then we compare the commonly used locally isothermal approximation to the radiation hydro simulation using an equation for the internal energy. Models with different treatments of the mass accretion process are compared. Often mass accumulates in the Roche lobe of the planet creating a hydrostatic atmosphere around the planet. The gravitational torques induced by the spiral pattern of the disk onto the planet are not strongly affected in the average magnitude, but the short time scale fluctuations are stronger in the radiation hydro models. An interesting result of this work lies in the analysis of the temperature structure around the planet. The most striking effect of treating the thermodynamics properly is the formation of a hot pressure-supported bubble around the planet with a pressure scale height of H/R ≈ 0.5 rather than a thin Keplerian circumplanetary accretion disk.

  5. Phase unwrapping algorithms in laser propagation simulation

    NASA Astrophysics Data System (ADS)

    Du, Rui; Yang, Lijia

    2013-08-01

    Simulations of laser propagation in the atmosphere usually have to deal with a beam in strong turbulence; because the transmission is simulated via Fourier transforms, part of the information may be lost and the phase of the beam, stored as a 2-D array, is wrapped modulo 2π. An effective unwrapping algorithm is therefore needed to obtain a continuous result and a faster calculation. The unwrapping algorithms used in atmospheric propagation are similar to those used in radar or 3-D surface reconstruction, but not identical. In this article, three classic unwrapping algorithms are tried in wave-front reconstruction simulation: block least squares (BLS), mask-cut (MCUT), and Flynn's minimal discontinuity algorithm (FMD). Each algorithm is tested 100 times under six conditions: low- (64x64), medium- (128x128), and high-resolution (256x256) phase arrays, each with and without noise. Comparing the results leads to the following conclusions. The BLS-based algorithm is the fastest, and its result is acceptable for low-resolution arrays without noise. MCUT is more accurate, though it becomes slower as the array resolution increases and is sensitive to noise, which results in large-area errors. Flynn's algorithm has the best accuracy, but it occupies a large amount of memory during the calculation. Finally, the article presents a new algorithm based on an Activity-On-Vertex (AOV) network, which builds a logical graph to cut the search space and then finds the minimal-discontinuity solution. The AOV algorithm is faster than MCUT in dealing with high-resolution phase arrays, with accuracy as good as that of FMD in the tests.
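
    For readers who want a quick baseline to compare against, the sketch below unwraps a synthetic wrapped phase screen with scikit-image's unwrap_phase routine (a reliability-sorting method, not one of the algorithms studied above); the test surface is arbitrary.

        import numpy as np
        from skimage.restoration import unwrap_phase

        # synthetic smooth phase surface spanning several cycles, wrapped into (-pi, pi]
        y, x = np.mgrid[0:128, 0:128]
        true_phase = 0.002 * ((x - 64) ** 2 + (y - 64) ** 2)
        wrapped = np.angle(np.exp(1j * true_phase))
        recovered = unwrap_phase(wrapped)

        # the recovered surface may differ from the truth by a constant 2*pi*k offset
        offset = np.round((true_phase - recovered).mean() / (2 * np.pi)) * 2 * np.pi
        print("max error after offset removal:",
              np.abs(true_phase - recovered - offset).max())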

  6. Algorithmically specialized parallel computers

    SciTech Connect

    Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.

    1985-01-01

    This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.

  7. Sampling Within k-Means Algorithm to Cluster Large Datasets

    SciTech Connect

    Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George

    2011-08-01

    Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study, both on more varied test datasets and on real weather datasets. This is especially important considering that this preliminary study was performed on rather tame datasets. Future tests should also analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. We could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data becomes more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while becoming remarkably more efficient with time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
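
    The idea can be sketched in a few lines: cluster only a random sample, then assign every point of the full dataset to the nearest learned centroid. The snippet below uses scikit-learn rather than the authors' code, and the dataset, sample size, and k are invented for illustration.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_blobs

        X, _ = make_blobs(n_samples=200_000, centers=5, n_features=3, random_state=0)

        rng = np.random.default_rng(0)
        sample = X[rng.choice(len(X), size=5_000, replace=False)]   # cluster only a sample

        km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(sample)
        labels = km.predict(X)      # cheap nearest-centroid assignment of the full dataset
        print("centroids:\n", km.cluster_centers_)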

  8. Identifying Risk Factors for Recent HIV Infection in Kenya Using a Recent Infection Testing Algorithm: Results from a Nationally Representative Population-Based Survey

    PubMed Central

    Kim, Andrea A.; Parekh, Bharat S.; Umuro, Mamo; Galgalo, Tura; Bunnell, Rebecca; Makokha, Ernest; Dobbs, Trudy; Murithi, Patrick; Muraguri, Nicholas; De Cock, Kevin M.; Mermin, Jonathan

    2016-01-01

    Introduction A recent infection testing algorithm (RITA) that can distinguish recent from long-standing HIV infection can be applied to nationally representative population-based surveys to characterize and identify risk factors for recent infection in a country. Materials and Methods We applied a RITA using the Limiting Antigen Avidity Enzyme Immunoassay (LAg) on stored HIV-positive samples from the 2007 Kenya AIDS Indicator Survey. The case definition for recent infection included testing recent on LAg and having no evidence of antiretroviral therapy use. Multivariate analysis was conducted to determine factors associated with recent and long-standing infection compared to HIV-uninfected persons. All estimates were weighted to adjust for sampling probability and nonresponse. Results Of 1,025 HIV-antibody-positive specimens, 64 (6.2%) met the case definition for recent infection and 961 (93.8%) met the case definition for long-standing infection. Compared to HIV-uninfected individuals, factors associated with higher adjusted odds of recent infection were living in Nairobi (adjusted odds ratio [AOR] 11.37; confidence interval [CI] 2.64–48.87) and Nyanza (AOR 4.55; CI 1.39–14.89) provinces compared to Western province; being widowed (AOR 8.04; CI 1.42–45.50) or currently married (AOR 6.42; CI 1.55–26.58) compared to being never married; having had ≥ 2 sexual partners in the last year (AOR 2.86; CI 1.51–5.41); not using a condom at last sex in the past year (AOR 1.61; CI 1.34–1.93); reporting a sexually transmitted infection (STI) diagnosis or symptoms of STI in the past year (AOR 1.97; CI 1.05–8.37); and being aged <30 years with: 1) HSV-2 infection (AOR 8.84; CI 2.62–29.85), 2) male genital ulcer disease (AOR 8.70; CI 2.36–32.08), or 3) lack of male circumcision (AOR 17.83; CI 2.19–144.90). Compared to HIV-uninfected persons, factors associated with higher adjusted odds of long-standing infection included living in Coast (AOR 1.55; CI 1.04–2

  9. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model- based and training-based (template-based) arena in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.

  10. 34 CFR 303.15 - Include; including.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 34 (Education), § 303.15, Include; including (2010-07-01 edition). Regulations of the Offices of the Department of Education (Continued), Office of Special Education and Rehabilitative Services, Department of Education, Early Intervention Program for Infants and Toddlers...

  11. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms are playing an important role in the research field of image segmentation. In this paper, we briefly introduce the theory behind four existing swarm intelligence-based segmentation algorithms: the fish swarm algorithm, artificial bee colony, the bacterial foraging algorithm, and particle swarm optimization. Benchmark images are then tested in order to compare the segmentation accuracy, time consumption, convergence, and robustness of these four algorithms under salt-and-pepper and Gaussian noise. Through these comparisons, the paper gives a qualitative analysis of the performance differences among the four algorithms. The conclusions provide useful guidance for practical image segmentation.
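
    As a hedged illustration of how a swarm can drive thresholding, the sketch below uses a minimal particle swarm optimization to search for a single threshold that maximizes Otsu's between-class variance on a synthetic bimodal image; the swarm parameters and image are invented, and the actual algorithms compared in the paper (fish swarm, artificial bee colony, bacterial foraging, PSO variants) use different update rules and objectives.

        import numpy as np

        rng = np.random.default_rng(1)
        # synthetic bimodal 8-bit image: dark background plus a brighter object
        img = np.clip(np.concatenate([rng.normal(60, 15, 40_000),
                                      rng.normal(170, 20, 25_000)]), 0, 255).astype(np.uint8)
        hist = np.bincount(img, minlength=256) / img.size

        def between_class_variance(t):
            """Otsu's criterion for a candidate threshold t (higher is better)."""
            t = int(np.clip(round(t), 1, 254))
            w0, w1 = hist[:t].sum(), hist[t:].sum()
            if w0 == 0 or w1 == 0:
                return 0.0
            m0 = (np.arange(t) * hist[:t]).sum() / w0
            m1 = (np.arange(t, 256) * hist[t:]).sum() / w1
            return w0 * w1 * (m0 - m1) ** 2

        # minimal PSO over the scalar threshold
        n, iters, w, c1, c2 = 20, 50, 0.7, 1.5, 1.5
        pos = rng.uniform(1, 254, n)
        vel = rng.uniform(-10, 10, n)
        pbest = pos.copy()
        pbest_val = np.array([between_class_variance(p) for p in pos])
        gbest = pbest[pbest_val.argmax()]

        for _ in range(iters):
            vel = (w * vel + c1 * rng.random(n) * (pbest - pos)
                           + c2 * rng.random(n) * (gbest - pos))
            pos = np.clip(pos + vel, 1, 254)
            val = np.array([between_class_variance(p) for p in pos])
            better = val > pbest_val
            pbest[better], pbest_val[better] = pos[better], val[better]
            gbest = pbest[pbest_val.argmax()]

        print("PSO threshold:", int(round(gbest)))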

  12. Experiments with conjugate gradient algorithms for homotopy curve tracking

    NASA Technical Reports Server (NTRS)

    Irani, Kashmira M.; Ribbens, Calvin J.; Watson, Layne T.; Kamat, Manohar P.; Walker, Homer F.

    1991-01-01

    There are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. The HOMPACK algorithms for sparse Jacobian matrices use a preconditioned conjugate gradient algorithm for the computation of the kernel of the homotopy Jacobian matrix, a required linear algebra step for homotopy curve tracking. Here, variants of the conjugate gradient algorithm are implemented in the context of homotopy curve tracking and compared with Craig's preconditioned conjugate gradient method used in HOMPACK. The test problems used include actual large scale, sparse structural mechanics problems.
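
    For reference, the unpreconditioned textbook version of the conjugate gradient iteration that these preconditioned variants build on looks like the sketch below; this is not the HOMPACK code, and the small dense test matrix merely stands in for the large sparse Jacobian-derived systems discussed above.

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
            """Textbook CG for a symmetric positive-definite matrix A."""
            n = len(b)
            x = np.zeros(n)
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter or 10 * n):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        # small SPD test problem
        rng = np.random.default_rng(0)
        M = rng.standard_normal((50, 50))
        A = M @ M.T + 50 * np.eye(50)
        b = rng.standard_normal(50)
        x = conjugate_gradient(A, b)
        print("residual norm:", np.linalg.norm(A @ x - b))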

  13. Comparison of Snow Mass Estimates from a Prototype Passive Microwave Snow Algorithm, a Revised Algorithm and a Snow Depth Climatology

    NASA Technical Reports Server (NTRS)

    Foster, J. L.; Chang, A. T. C.; Hall, D. K.

    1997-01-01

    While it is recognized that no single snow algorithm is capable of producing accurate global estimates of snow depth, for research purposes it is useful to test an algorithm's performance in different climatic areas in order to see how it responds to a variety of snow conditions. This study is one of the first to develop separate passive microwave snow algorithms for North America and Eurasia by including parameters that consider the effects of variations in forest cover and crystal size on microwave brightness temperature. A new algorithm (GSFC 1996) is compared to a prototype algorithm (Chang et al., 1987) and to a snow depth climatology (SDC), which for this study is considered to be a standard reference or baseline. It is shown that the GSFC 1996 algorithm compares much more favorably to the SDC than does the Chang et al. (1987) algorithm. For example, in North America in February there is a 15% difference between the GSFC 1996 algorithm and the SDC, but with the Chang et al. (1987) algorithm the difference is greater than 50%. In Eurasia, also in February, there is only a 1.3% difference between the GSFC 1996 algorithm and the SDC, whereas with the Chang et al. (1987) algorithm the difference is about 20%. As expected, differences tend to be less when the snow cover extent is greater, particularly for Eurasia. The GSFC 1996 algorithm performs better in North America in each month than does the Chang et al. (1987) algorithm. This is also the case in Eurasia, except in April and May, when the Chang et al. (1987) algorithm is in closer accord with the SDC than is the GSFC 1996 algorithm.
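
    For orientation, the prototype Chang et al. (1987) retrieval is essentially a linear function of the 19 GHz minus 37 GHz horizontally polarized brightness-temperature difference, roughly SD ≈ 1.59 (T19H - T37H) cm; the GSFC 1996 revision adds forest-cover and crystal-size adjustments that are not reproduced here. The sketch below applies that prototype form to made-up brightness temperatures and should be read as illustrative only.

        import numpy as np

        def chang_1987_snow_depth(t19h, t37h):
            """Snow depth (cm) from brightness temperatures (K), Chang et al. (1987) form.
            Negative spectral differences are treated as snow-free."""
            sd = 1.59 * (np.asarray(t19h) - np.asarray(t37h))
            return np.clip(sd, 0.0, None)

        # made-up brightness temperatures for two pixels
        print(chang_1987_snow_depth([250.0, 240.0], [235.0, 238.0]))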

  14. Space-Based Near-Infrared CO2 Measurements: Testing the Orbiting Carbon Observatory Retrieval Algorithm and Validation Concept Using SCIAMACHY Observations over Park Falls, Wisconsin

    NASA Technical Reports Server (NTRS)

    Bosch, H.; Toon, G. C.; Sen, B.; Washenfelder, R. A.; Wennberg, P. O.; Buchwitz, M.; deBeek, R.; Burrows, J. P.; Crisp, D.; Christi, M.; Connor, B. J.; Natraj, V.; Yung, Y. L.

    2006-01-01

    test of the OCO retrieval algorithm and validation concept using NIR spectra measured from space. Finally, we argue that significant improvements in precision and accuracy could be obtained from a dedicated CO2 instrument such as OCO, which has much higher spectral and spatial resolutions than SCIAMACHY. These measurements would then provide critical data for improving our understanding of the carbon cycle and carbon sources and sinks.

  15. A flight management algorithm and guidance for fuel-conservative descents in a time-based metered air traffic environment: Development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1984-01-01

    A simple airborne flight management descent algorithm designed to define a flight profile subject to the constraints of using idle thrust, a clean airplane configuration (landing gear up, flaps zero, and speed brakes retracted), and fixed-time end conditions was developed and flight tested in the NASA TSRV B-737 research airplane. The research test flights, conducted in the Denver ARTCC automated time-based metering LFM/PD ATC environment, demonstrated that time guidance and control in the cockpit was acceptable to the pilots and ATC controllers and resulted in arrival of the airplane over the metering fix with standard deviations in airspeed error of 6.5 knots, in altitude error of 23.7 m (77.8 ft), and in arrival time accuracy of 12 sec. These accuracies indicated a good representation of airplane performance and wind modeling. Fuel savings will be obtained on a fleet-wide basis through a reduction of the time error dispersions at the metering fix and on a single-airplane basis by presenting the pilot with guidance for a fuel-efficient descent.

  16. CYCLOPS: A mobile robotic platform for testing and validating image processing and autonomous navigation algorithms in support of artificial vision prostheses.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2009-12-01

    While artificial vision prostheses are quickly becoming a reality, actual testing time with visual prosthesis carriers is at a premium. Moreover, it is helpful to have a more realistic functional approximation of a blind subject. Instead of a normal subject with a healthy retina looking at a low-resolution (pixelated) image on a computer monitor or head-mounted display, a more realistic approximation is achieved by employing a subject-independent mobile robotic platform that uses a pixelated view as its sole visual input for navigation purposes. We introduce CYCLOPS: an AWD, remote controllable, mobile robotic platform that serves as a testbed for real-time image processing and autonomous navigation systems for the purpose of enhancing the visual experience afforded by visual prosthesis carriers. Complete with wireless Internet connectivity and a fully articulated digital camera with wireless video link, CYCLOPS supports both interactive tele-commanding via joystick, and autonomous self-commanding. Due to its onboard computing capabilities and extended battery life, CYCLOPS can perform complex and numerically intensive calculations, such as image processing and autonomous navigation algorithms, in addition to interfacing to additional sensors. Its Internet connectivity renders CYCLOPS a worldwide accessible testbed for researchers in the field of artificial vision systems. CYCLOPS enables subject-independent evaluation and validation of image processing and autonomous navigation systems with respect to the utility and efficiency of supporting and enhancing visual prostheses, while potentially reducing to a necessary minimum the need for valuable testing time with actual visual prosthesis carriers. PMID:19651459

  17. Test plan: Sealing of the Disturbed Rock Zone (DRZ), including Marker Bed 139 (MB139) and the overlying halite, below the repository horizon, at the Waste Isolation Pilot Plant. Small-scale seal performance test-series F

    SciTech Connect

    Ahrens, E.H.

    1992-05-01

    This test plan describes activities intended to demonstrate equipment and techniques for producing, injecting, and evaluating microfine cementitious grout. The grout will be injected in fractured rock located below the repository horizon at the Waste Isolation Pilot Plant (WIPP). These data are intended to support the development of the Alcove Gas Barrier System (AGBS), the design of upcoming, large-scale seal tests, and ongoing laboratory evaluations of grouting efficacy. Degradation of the grout will be studied in experiments conducted in parallel with the underground grouting experiment.

  18. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
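
    A compact Python rendering of the half-interval (bisection) search discussed in the article is shown below; it is a generic version, not the article's original program listing, and the example equation is arbitrary.

        def half_interval_root(f, lo, hi, tol=1e-10, max_iter=200):
            """Find a root of f on [lo, hi], assuming f(lo) and f(hi) differ in sign."""
            flo, fhi = f(lo), f(hi)
            if flo * fhi > 0:
                raise ValueError("f(lo) and f(hi) must bracket a root")
            for _ in range(max_iter):
                mid = 0.5 * (lo + hi)
                fmid = f(mid)
                if abs(fmid) < tol or (hi - lo) < tol:
                    return mid
                if flo * fmid < 0:        # root lies in the left half-interval
                    hi, fhi = mid, fmid
                else:                     # root lies in the right half-interval
                    lo, flo = mid, fmid
            return 0.5 * (lo + hi)

        # root of x^3 - 2x - 5 between 2 and 3 (approximately 2.0945515)
        print(half_interval_root(lambda x: x**3 - 2*x - 5, 2.0, 3.0))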

  19. An exact accelerated stochastic simulation algorithm

    PubMed Central

    Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros

    2009-01-01

    An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present “ER-leap” algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2∕3 power of the number of reaction events in a Galton–Watson process. PMID:19368432
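
    ER-leap itself needs the bounding and rejection machinery derived in the paper, so the sketch below shows only the baseline it accelerates: the exact Gillespie direct-method SSA, applied to a toy birth-death network chosen purely for illustration.

        import numpy as np

        def gillespie_ssa(x0, stoich, rate_fn, t_end, rng=None):
            """Exact stochastic simulation algorithm (Gillespie direct method)."""
            rng = np.random.default_rng(rng)
            t, x = 0.0, np.array(x0, dtype=float)
            times, states = [t], [x.copy()]
            while t < t_end:
                a = rate_fn(x)                    # propensity of each reaction
                a0 = a.sum()
                if a0 <= 0:
                    break
                t += rng.exponential(1.0 / a0)    # waiting time to the next reaction
                j = rng.choice(len(a), p=a / a0)  # which reaction fires
                x += stoich[j]
                times.append(t); states.append(x.copy())
            return np.array(times), np.array(states)

        # toy birth-death process: 0 -> X at rate 10, X -> 0 at rate 0.1 * X
        stoich = np.array([[+1], [-1]])
        rates = lambda x: np.array([10.0, 0.1 * x[0]])
        t, s = gillespie_ssa([0], stoich, rates, t_end=100.0, rng=1)
        print("final copy number:", int(s[-1, 0]), "(steady-state mean is 100)")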

  20. Testing a Variety of Encryption Technologies

    SciTech Connect

    Henson, T J

    2001-04-09

    This work reviews and tests the speed of various encryption technologies using Entrust software, which includes multiple encryption algorithms. The algorithms tested were IDEA, CAST, DES, and RC2. The test consisted of taking a 7.7 MB Word document file, which included complex graphics, and timing encryption, decryption, and signing. Encryption is discussed in the GIAC Kickstart section: Information Security: The Big Picture--Part VI.
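
    A modern equivalent of this kind of timing test is easy to reproduce; the sketch below times AES in CTR mode from the Python cryptography package on a random buffer of roughly the same size (IDEA, CAST, DES, and RC2 are legacy ciphers, and the Entrust toolkit is not used here), so it only illustrates the measurement approach, not the original results.

        # Assumes a recent version of the `cryptography` package.
        import os, time
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        data = os.urandom(int(7.7 * 1024 * 1024))          # ~7.7 MB random buffer
        key, nonce = os.urandom(32), os.urandom(16)

        t0 = time.perf_counter()
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        ciphertext = enc.update(data) + enc.finalize()
        t1 = time.perf_counter()
        dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
        plaintext = dec.update(ciphertext) + dec.finalize()
        t2 = time.perf_counter()

        assert plaintext == data
        print(f"encrypt: {t1 - t0:.3f} s, decrypt: {t2 - t1:.3f} s")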

  1. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise and Simplex noise, together with the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from such simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes. Hence, they can be used as tools to assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only generate the terrains themselves but are also capable of simulating weather patterns.
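
    A minimal version of the diamond-square step sequence described above is sketched below (seed the corners, then alternate centre-of-square and centre-of-edge averaging with shrinking random perturbations); the roughness parameter and grid size are arbitrary, and production terrain generators typically layer Perlin or Simplex noise on top.

        import numpy as np

        def diamond_square(n, roughness=0.6, rng=None):
            """Generate a (2**n + 1) square heightmap with the diamond-square algorithm."""
            rng = np.random.default_rng(rng)
            size = 2 ** n + 1
            h = np.zeros((size, size))
            h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.uniform(-1, 1, 4)  # seed corners
            step, scale = size - 1, 1.0
            while step > 1:
                half = step // 2
                # diamond step: centre of each square = mean of its 4 corners + noise
                for y in range(half, size, step):
                    for x in range(half, size, step):
                        avg = (h[y - half, x - half] + h[y - half, x + half] +
                               h[y + half, x - half] + h[y + half, x + half]) / 4.0
                        h[y, x] = avg + rng.uniform(-scale, scale)
                # square step: centre of each edge = mean of its 3 or 4 neighbours + noise
                for y in range(0, size, half):
                    for x in range((y + half) % step, size, step):
                        nbrs = []
                        if y - half >= 0:   nbrs.append(h[y - half, x])
                        if y + half < size: nbrs.append(h[y + half, x])
                        if x - half >= 0:   nbrs.append(h[y, x - half])
                        if x + half < size: nbrs.append(h[y, x + half])
                        h[y, x] = np.mean(nbrs) + rng.uniform(-scale, scale)
                step = half
                scale *= roughness        # shrink the perturbation each level
            return h

        terrain = diamond_square(7)       # 129 x 129 heightmap
        print(terrain.shape, terrain.min(), terrain.max())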

  2. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O`Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  3. Testing.

    ERIC Educational Resources Information Center

    Killoran, James, Ed.

    1984-01-01

    This journal issue addresses the issue of testing in the social studies classroom. The first article, "The Role of Testing" (Bragaw), focuses on the need for tests to reflect the objectives of the study completed. The varying functions of pop quizzes, weekly tests, and unit tests are explored. "Testing Thinking Processes" (Killoran, Zimmer, and…

  4. Cloud masking and surface classification algorithm for GCOM-C1/SGLI purpose

    NASA Astrophysics Data System (ADS)

    Chen, N.; Tanikawa, T.; Li, W.; Stamnes, K. H.; Hori, M.; Aoki, T.

    2011-12-01

    We have developed new algorithms for cloud masking and surface classification for the Global Change Observation Mission-Climate/Second-Generation Global Imager (GCOM-C1/SGLI). Our goal is to identify clear-sky pixels over snow-covered surfaces for our snow parameter retrieval algorithms. The cloud masking algorithm combines multiple tests, including the Normalized Difference Snow Index (NDSI), the Normalized Difference Vegetation Index (NDVI), a brightness-temperature test, and the R1.38 test, over land and ocean to provide a collective index for cloud screening of the scene. The Normalized Difference Ice Index (NDII) test, along with an R0.67/R0.86 ratio test, provides discrimination of snow and sea ice over the ocean. Our algorithm is further validated against the MODIS MOD35 cloud product and results from the Cloud and Aerosol Unbiased Decision Intellectual Algorithm (CLAUDIA).
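
    The two index tests are simple band ratios; the sketch below computes them from reflectances for a toy scene using generic (MODIS-like) band choices and thresholds, since the actual SGLI bands and thresholds are not reproduced here.

        import numpy as np

        def ndsi(green, swir):
            """Normalized Difference Snow Index from green (~0.55 um) and SWIR (~1.6 um) reflectance."""
            return (green - swir) / (green + swir + 1e-12)

        def ndvi(nir, red):
            """Normalized Difference Vegetation Index from NIR (~0.86 um) and red (~0.67 um) reflectance."""
            return (nir - red) / (nir + red + 1e-12)

        # toy 2x2 scene: [snow, vegetation; water, bare soil] reflectances (made up)
        green = np.array([[0.80, 0.08], [0.05, 0.20]])
        swir  = np.array([[0.10, 0.15], [0.02, 0.25]])
        red   = np.array([[0.75, 0.06], [0.04, 0.22]])
        nir   = np.array([[0.70, 0.45], [0.03, 0.30]])

        snow_like = ndsi(green, swir) > 0.4        # commonly used generic threshold
        vegetated = ndvi(nir, red) > 0.3
        print("NDSI:\n", ndsi(green, swir).round(2))
        print("snow-like pixels:\n", snow_like)
        print("vegetated pixels:\n", vegetated)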

  5. The Use of Genetic Algorithms as an Inverse Technique to Guide the Design and Implementation of Research at a Test Site in Shelby County, Tennessee

    NASA Astrophysics Data System (ADS)

    Gentry, R. W.

    2002-12-01

    The Shelby Farms test site in Shelby County, Tennessee is being developed to better understand recharge hydraulics to the Memphis aquifer in areas where leakage through an overlying aquitard occurs. The site is unique in that it demonstrates many opportunities for interdisciplinary research regarding environmental tracers, anthropogenic impacts and inverse modeling. The objective of the research funding the development of the test site is to better understand the groundwater hydrology and hydraulics between a shallow alluvial aquifer and the Memphis aquifer given an area of leakage, defined as an aquitard window. The site is situated in an area on the boundary of a highly developed urban area and is currently being used by an agricultural research agency and a local recreational park authority. Also, an abandoned landfill is situated to the immediate south of the window location. Previous research by the USGS determined the location of the aquitard window subsequent to the landfill closure. Inverse modeling using a genetic algorithm approach has identified the likely extents of the area of the window given an interaquifer accretion rate. These results, coupled with additional fieldwork, have been used to guide the direction of the field studies and the overall design of the research project. This additional work has encompassed the drilling of additional monitoring wells in nested groups by rotasonic drilling methods. The core collected during the drilling will provide additional constraints to the physics of the problem that may provide additional help in redefining the conceptual model. The problem is non-unique with respect to the leakage area and accretion rate and further research is being performed to provide some idea of the advective flow paths using a combination of tritium and 3He analyses and geochemistry. The outcomes of the research will result in a set of benchmark data and physical infrastructure that can be used to evaluate other environmental

  6. Use of a genetic algorithm to analyze robust stability problems

    SciTech Connect

    Murdock, T.M.; Schmitendorf, W.E.; Forrest, S.

    1990-01-01

    This note presents a genetic algorithm technique for testing the stability of a characteristic polynomial whose coefficients are functions of unknown but bounded parameters. This technique is fast and can handle a large number of parametric uncertainties. We also use this method to determine robust stability margins for uncertain polynomials. Several benchmark examples are included to illustrate the two uses of the algorithm. 27 refs., 4 figs., 4 figs., 3 tabs.
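
    A hedged miniature of the idea is sketched below: a small genetic algorithm searches the bounded parameter box for the coefficient combination that pushes a root of the characteristic polynomial closest to (or across) the imaginary axis; if no individual reaches a nonnegative real part, the family is judged robustly stable to within the search accuracy. The polynomial, bounds, and GA settings are invented, and this is not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def worst_root_real(q):
            """Largest real part of the roots of s^3 + (6+q1) s^2 + (11+q2) s + (6+q3)."""
            return np.roots([1.0, 6 + q[0], 11 + q[1], 6 + q[2]]).real.max()

        bounds = np.array([[-2.0, 2.0], [-3.0, 3.0], [-4.0, 4.0]])   # hypothetical uncertainty box
        pop_size, gens, dim = 40, 60, 3
        pop = rng.uniform(bounds[:, 0], bounds[:, 1], (pop_size, dim))

        for _ in range(gens):
            fit = np.array([worst_root_real(q) for q in pop])        # maximize (most destabilizing)
            parents = pop[np.argsort(-fit)[:pop_size // 2]]          # truncation selection
            kids = []
            while len(kids) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                child = np.where(rng.random(dim) < 0.5, a, b)        # uniform crossover
                child += rng.normal(0, 0.1, dim)                     # Gaussian mutation
                kids.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
            pop = np.vstack([parents, kids])

        worst = pop[np.argmax([worst_root_real(q) for q in pop])]
        print("most destabilizing parameters found:", worst.round(3),
              "max Re(root):", round(worst_root_real(worst), 4))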

  7. TVFMCATS. Time Variant Floating Mean Counting Algorithm

    SciTech Connect

    Huffman, R.K.

    1999-05-01

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.

  8. Time Variant Floating Mean Counting Algorithm

    1999-06-03

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.

  9. Investigation of registration algorithms for the automatic tile processing system

    NASA Technical Reports Server (NTRS)

    Tamir, Dan E.

    1995-01-01

    The Robotic Tile Processing System (RTPS), under development at NASA-KSC, is expected to automate the processes of post-flight re-waterproofing and inspection of the Shuttle heat-absorbing tiles. An important task of the robot vision sub-system is to register the 'real-world' coordinates with the coordinates of the robot model of the Shuttle tiles. The model coordinates relate to a tile database and pre-flight tile images. In the registration process, current (post-flight) images are aligned with pre-flight images to detect the rotation and translation displacement required to rectify the coordinate systems. The research activities performed this summer included study and evaluation of the registration algorithm that is currently implemented by the RTPS, as well as investigation of the utility of other registration algorithms. It has been found that the current algorithm is not robust enough: it has a success rate of less than 80% and is therefore not suitable for complying with the requirements of the RTPS. Modifications to the current algorithm have been developed and tested. These modifications can improve the performance of the registration algorithm significantly; however, the improvement is not sufficient to satisfy system requirements. A new algorithm for registration has been developed and tested, and it demonstrated a very high degree of robustness, with a success rate of 96%.
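
    For a sense of what a translation-only registration step looks like, the sketch below estimates the shift between a synthetic 'pre-flight' image and a shifted copy using phase cross-correlation from scikit-image; the RTPS algorithms themselves also handle rotation and are not reproduced here.

        import numpy as np
        from scipy import ndimage
        from skimage.registration import phase_cross_correlation

        rng = np.random.default_rng(0)
        pre = ndimage.gaussian_filter(rng.random((256, 256)), 3)     # stand-in "pre-flight" image
        post = ndimage.shift(pre, (12.0, -7.0), mode="reflect")      # shifted "post-flight" image

        shift, error, _ = phase_cross_correlation(pre, post, upsample_factor=10)
        print("estimated shift (rows, cols):", shift)                # approximately (-12, 7)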

  10. LEED I/V determination of the structure of a MoO3 monolayer on Au(111): Testing the performance of the CMA-ES evolutionary strategy algorithm, differential evolution, a genetic algorithm and tensor LEED based structural optimization

    NASA Astrophysics Data System (ADS)

    Primorac, E.; Kuhlenbeck, H.; Freund, H.-J.

    2016-07-01

    The structure of a thin MoO3 layer on Au(111) with a c(4 × 2) superstructure was studied with LEED I/V analysis. As proposed previously (Quek et al., Surf. Sci. 577 (2005) L71), the atomic structure of the layer is similar to that of a MoO3 single layer as found in regular α-MoO3. The layer on Au(111) has a glide plane parallel to the short unit vector of the c(4 × 2) unit cell, and the molybdenum atoms are bridge-bonded to two surface gold atoms, with the structure of the gold surface being slightly distorted. The structural refinement was performed with the CMA-ES evolutionary strategy algorithm, which reached a Pendry R-factor of ∼0.044. In the second part, the performance of CMA-ES is compared with that of the differential evolution method, a genetic algorithm, and the Powell optimization algorithm, employing I/V curves calculated with tensor LEED.
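
    The way CMA-ES is typically driven in such a refinement can be sketched as below, assuming the standard interface of the pycma package; because a full dynamical LEED calculation is far outside the scope of a snippet, a cheap synthetic objective stands in for the Pendry R-factor, and the parameter names and values are invented.

        import numpy as np
        import cma   # the pycma package: pip install cma

        def pseudo_r_factor(params):
            """Cheap stand-in for a Pendry R-factor: a quadratic bowl around
            invented 'true' structural parameters (three coordinates plus an inner potential)."""
            true = np.array([0.12, -0.05, 1.47, 4.0])
            return float(np.sum((np.asarray(params) - true) ** 2) + 0.001)

        x0 = [0.0, 0.0, 1.0, 5.0]         # starting structural guess (hypothetical)
        sigma0 = 0.3                      # initial search width
        es = cma.CMAEvolutionStrategy(x0, sigma0, {"verbose": -9})
        es.optimize(pseudo_r_factor)
        print("refined parameters:", np.round(es.result.xbest, 3))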

  11. Comparative study of texture detection and classification algorithms

    NASA Astrophysics Data System (ADS)

    Koltsov, P. P.

    2011-08-01

    A description and results of application of the computer system PETRA (performance evaluation of texture recognition algorithms) are given. This system is designed for the comparative study of texture analysis algorithms; it includes a database of textured images and a collection of software implementations of texture analysis algorithms. The functional capabilities of the system are illustrated using texture classification examples. Test examples are taken from the Brodatz album, MeasTech database, and a set of aerospace images. Results of a comparative evaluation of five well-known texture analysis methods are described—Gabor filters, Laws masks, ring/wedge filters, gray-level cooccurrence matrices (GLCMs), and autoregression image model.

  12. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  13. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  14. Constructive neural network learning algorithms

    SciTech Connect

    Parekh, R.; Yang, Jihoon; Honavar, V.

    1996-12-31

    Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad hoc, a priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., the pocket algorithm, the thermal perceptron, and the barycentric correction procedure). Several constructive algorithms including tower, pyramid, tiling, upstart, and perceptron cascade have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
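
    The pocket algorithm mentioned above is small enough to sketch: run ordinary perceptron updates but keep the best-performing weight vector seen so far "in the pocket". The version below is a generic illustration on synthetic, noisily separable data, not the paper's code.

        import numpy as np

        def pocket_algorithm(X, y, epochs=100, rng=None):
            """Perceptron training that keeps the best weights seen so far (the 'pocket')."""
            rng = np.random.default_rng(rng)
            Xb = np.hstack([X, np.ones((len(X), 1))])        # append a bias input
            w = np.zeros(Xb.shape[1])
            pocket_w, pocket_err = w.copy(), np.inf
            for _ in range(epochs):
                for i in rng.permutation(len(Xb)):
                    if y[i] * (Xb[i] @ w) <= 0:              # misclassified -> perceptron update
                        w = w + y[i] * Xb[i]
                        err = np.sum(np.sign(Xb @ w) != y)
                        if err < pocket_err:                  # keep the best weights seen
                            pocket_w, pocket_err = w.copy(), err
            return pocket_w, pocket_err

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 2))
        y = np.where(X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.normal(size=200) > 0, 1, -1)
        w, errors = pocket_algorithm(X, y, rng=0)
        print("pocket weights:", np.round(w, 2), "training errors:", int(errors))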

  15. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
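
    A brute-force miniature of the shift-and-mask search described above is sketched below: it simply tries masks of increasing width and all shifts until every key maps to a distinct compact value, after which membership testing reduces to one shift, one mask, and one table lookup. The key list is invented, and the real synthesizer's subalgorithms are considerably more sophisticated.

        def find_shift_mask(keys, max_shift=32, max_mask_bits=12):
            """Search for (shift, mask) so that (k >> shift) & mask is unique for every key,
            preferring the smallest mask (i.e., the most compact table)."""
            for bits in range(1, max_mask_bits + 1):
                mask = (1 << bits) - 1
                for shift in range(max_shift):
                    mapped = [(k >> shift) & mask for k in keys]
                    if len(set(mapped)) == len(keys):
                        return shift, mask, mapped
            return None

        keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081, 0x92A3, 0xB4C5]   # made-up key set
        shift, mask, mapped = find_shift_mask(keys)
        print(f"shift={shift}, mask=0x{mask:X}, mapped={mapped}")
        # membership testing then becomes a constant-time lookup on (key >> shift) & mask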

  16. The challenges of implementing and testing two signal processing algorithms for high rep-rate Coherent Doppler Lidar for wind sensing

    NASA Astrophysics Data System (ADS)

    Abdelazim, S.; Santoro, D.; Arend, M.; Moshary, F.; Ahmed, S.

    2015-05-01

    In this paper, we present two signal processing algorithms implemented on an FPGA. The first algorithm involves explicit time gating of received signals corresponding to a desired spatial resolution, performing a Fast Fourier Transform (FFT) calculation on each individual time gate, taking the square modulus of the FFT to form a power spectrum, and then accumulating these power spectra for 10k return signals. The second algorithm involves calculating the autocorrelation of the backscattered signals and then accumulating the autocorrelation for 10k pulses. Efficient implementation of each of these two signal processing algorithms on an FPGA is challenging because it requires tradeoffs among retaining the full data word width, managing the amount of on-chip memory used, and respecting the constraints imposed by the data width of the FPGA. A description of the approach used to manage these tradeoffs for each of the two signal processing algorithms is presented and explained in this article. Results of atmospheric measurements obtained with these two embedded implementations are also presented.
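
    The arithmetic of the first algorithm (gate, FFT, squared modulus, accumulate) can be written down directly in NumPy, as in the sketch below on synthetic data; the FPGA implementation instead has to manage word widths, on-chip memory, and threading, which is exactly what the snippet glosses over. The pulse count is reduced from 10k to 1k just to keep the example light.

        import numpy as np

        def accumulate_power_spectra(pulses, gate_len):
            """Split each return signal into range gates, FFT each gate, take the squared
            modulus, and accumulate the power spectra over all pulses (one spectrum per gate)."""
            n_gates = pulses.shape[1] // gate_len
            acc = np.zeros((n_gates, gate_len))
            for sig in pulses:
                gates = sig[:n_gates * gate_len].reshape(n_gates, gate_len)
                acc += np.abs(np.fft.fft(gates, axis=1)) ** 2
            return acc

        # synthetic data: 1k pulses, 2048 samples each, a weak tone at 0.2*fs buried in noise
        rng = np.random.default_rng(0)
        n_pulses, n_samples, gate_len = 1_000, 2048, 128
        t = np.arange(n_samples)
        pulses = (0.05 * np.cos(2 * np.pi * 0.2 * t)
                  + rng.standard_normal((n_pulses, n_samples)))
        spectra = accumulate_power_spectra(pulses, gate_len)
        print("peak bin per gate:", spectra[:, 1:gate_len // 2].argmax(axis=1) + 1)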

  17. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.

  18. Application of Modified Differential Evolution Algorithm to Magnetotelluric and Vertical Electrical Sounding Data

    NASA Astrophysics Data System (ADS)

    Mingolo, Nusharin; Sarakorn, Weerachai

    2016-04-01

    In this research, a Modified Differential Evolution (DE) algorithm is proposed and applied to magnetotelluric (MT) and vertical electrical sounding (VES) data to reveal a reasonable resistivity structure. The standard steps of the DE algorithm, including initialization, mutation, and crossover, are modified by introducing both new control parameters and some constraints to obtain a well-fitting, physically reasonable resistivity model. The validity and efficiency of our modified DE algorithm are tested on both synthetic and real observed data, and the algorithm is also compared with the well-known OCCAM algorithm for the real MT data case. For the synthetic case, our modified DE algorithm with appropriate control parameters recovers models that fit the original synthetic models well. For the real data case, the resistivity structures revealed by our algorithm are close to those obtained by OCCAM inversion, and our structures resolve the layers more clearly.
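
    To make the role of DE concrete, the sketch below applies SciPy's standard (unmodified) differential evolution to a simplified synthetic 1-D MT problem, using the textbook impedance recursion as the forward model and inverting only the layer resistivities with the thicknesses held fixed; the model, frequencies, and bounds are invented, and this is neither the authors' modified DE nor an OCCAM-style inversion.

        import numpy as np
        from scipy.optimize import differential_evolution

        MU0 = 4e-7 * np.pi
        freqs = np.logspace(-3, 2, 30)            # Hz

        def apparent_resistivity(res, thick):
            """Simplified 1-D MT forward model (standard impedance recursion) for layer
            resistivities `res` (ohm-m) over a half-space; `thick` (m) has one entry fewer."""
            omega = 2 * np.pi * freqs
            Z = np.sqrt(1j * omega * MU0 * res[-1])           # half-space impedance
            for j in range(len(thick) - 1, -1, -1):
                k = np.sqrt(1j * omega * MU0 / res[j])
                Z0 = np.sqrt(1j * omega * MU0 * res[j])
                th = np.tanh(k * thick[j])
                Z = Z0 * (Z + Z0 * th) / (Z0 + Z * th)        # propagate impedance upward
            return np.abs(Z) ** 2 / (omega * MU0)

        # synthetic "observed" data from a 3-layer model: 100 / 10 / 1000 ohm-m
        thick = np.array([500.0, 2000.0])
        obs = apparent_resistivity(np.array([100.0, 10.0, 1000.0]), thick)

        def misfit(p):                            # p = log10 of the three resistivities
            pred = apparent_resistivity(10.0 ** np.asarray(p), thick)
            return np.sum((np.log10(pred) - np.log10(obs)) ** 2)

        result = differential_evolution(misfit, bounds=[(0, 4)] * 3, seed=1, tol=1e-8)
        print("recovered resistivities (ohm-m):", np.round(10.0 ** result.x, 1))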

  19. Test Review: Wilkinson, G. S., & Robertson, G. J. (2006). Wide Range Achievement Test--Fourth Edition. Lutz, FL: Psychological Assessment Resources. WRAT4 Introductory Kit (Includes Manual, 25 Test/Response Forms [Blue and Green], and Accompanying Test Materials): $243.00

    ERIC Educational Resources Information Center

    Dell, Cindy Ann; Harrold, Barbara; Dell, Thomas

    2008-01-01

    The Wide Range Achievement Test-Fourth Edition (WRAT4) is designed to provide "a quick, simple, psychometrically sound assessment of academic skills". The test was first published in 1946 by Joseph F. Jastak, with the purpose of augmenting the cognitive performance measures of the Wechsler-Bellevue Scales, developed by David Wechsler. Jastak…

  20. Pump apparatus including deconsolidator

    DOEpatents

    Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew

    2014-10-07

    A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.

  1. Algorithms for fast axisymmetric drop shape analysis measurements by a charge coupled device video camera and simulation procedure for test and evaluation

    NASA Astrophysics Data System (ADS)

    Busoni, Lorenzo; Carlà, Marcello; Lanzi, Leonardo

    2001-06-01

    A set of fast algorithms for axisymmetric drop shape analysis measurements is described. Speed has been improved by more than 1 order of magnitude over previously available procedures. Frame analysis is performed and drop characteristics and interfacial tension γ are computed in less than 40 ms on a Pentium III 450 MHz PC, while preserving an overall accuracy in Δγ/γ close to 1×10-4. A new procedure is described to evaluate both the algorithms performance and the contribution of each source of experimental error to the overall measurement accuracy.

  2. Optical modulator including graphene

    DOEpatents

    Liu, Ming; Yin, Xiaobo; Zhang, Xiang

    2016-06-07

    The present invention provides for a one or more layer graphene optical modulator. In a first exemplary embodiment the optical modulator includes an optical waveguide, a nanoscale oxide spacer adjacent to a working region of the waveguide, and a monolayer graphene sheet adjacent to the spacer. In a second exemplary embodiment, the optical modulator includes at least one pair of active media, where the pair includes an oxide spacer, a first monolayer graphene sheet adjacent to a first side of the spacer, and a second monolayer graphene sheet adjacent to a second side of the spacer, and at least one optical waveguide adjacent to the pair.

  3. Lightning detection and exposure algorithms for smartphones

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Shao, Xiaopeng; Wang, Lin; Su, Laili; Huang, Yining

    2015-05-01

    This study focuses on the key theory of lightning detection and exposure, together with supporting experiments. First, an algorithm based on differencing two adjacent frames is selected to remove the background and extract the lightning signal, and a threshold detection algorithm is applied to achieve precise detection of lightning. Second, an algorithm is proposed to obtain the scene exposure value, which can automatically detect the external illumination status. A look-up table is then built from the relationship between exposure value and average image brightness to achieve rapid automatic exposure. Finally, a hardware test platform based on a USB 3.0 industrial camera with a CMOS imaging sensor is established, and experiments are carried out on this platform to verify the performance of the proposed algorithms. The algorithms can effectively and quickly capture clear lightning pictures, including in special nighttime scenes, which should benefit the smartphone industry, since current exposure methods in smartphones often miss the capture or produce overexposed or underexposed pictures.
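
    The detection step reduces to differencing adjacent frames and thresholding the result, as in the sketch below on synthetic frames; the threshold, frame size, and trigger fraction are invented, and the exposure look-up table and camera control described above are not reproduced.

        import numpy as np

        def detect_lightning(prev_frame, cur_frame, thresh=40):
            """Difference two adjacent frames and threshold to flag a lightning-like flash.
            Returns (flash_detected, binary_mask)."""
            diff = cur_frame.astype(np.int16) - prev_frame.astype(np.int16)
            mask = diff > thresh                      # only sudden brightness increases
            return mask.mean() > 0.001, mask          # flag if >0.1% of pixels lit up

        # synthetic night scene: dark noisy background, then a bright streak appears
        rng = np.random.default_rng(0)
        prev_frame = rng.integers(0, 30, (480, 640), dtype=np.uint8)
        cur_frame = prev_frame.copy()
        cur_frame[100:110, 200:400] = 255             # the "lightning" streak

        hit, mask = detect_lightning(prev_frame, cur_frame)
        print("flash detected:", hit, "| lit pixels:", int(mask.sum()))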

  4. Parallel Clustering Algorithms for Structured AMR

    SciTech Connect

    Gunney, B T; Wissink, A M; Hysom, D A

    2005-10-26

    We compare several different parallel implementation approaches for the clustering operations performed during adaptive gridding operations in patch-based structured adaptive mesh refinement (SAMR) applications. Specifically, we target the clustering algorithm of Berger and Rigoutsos (BR91), which is commonly used in many SAMR applications. The baseline for comparison is a simplistic parallel extension of the original algorithm that works well for up to O(10^2) processors. Our goal is a clustering algorithm for machines of up to O(10^5) processors, such as the 64K-processor IBM BlueGene/Light system. We first present an algorithm that avoids the unneeded communications of the simplistic approach to improve the clustering speed by up to an order of magnitude. We then present a new task-parallel implementation to further reduce communication wait time, adding another order of magnitude of improvement. The new algorithms also exhibit more favorable scaling behavior for our test problems. Performance is evaluated on a number of large scale parallel computer systems, including a 16K-processor BlueGene/Light system.
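
    For readers unfamiliar with the Berger-Rigoutsos idea, the following much-simplified, serial 2-D sketch shows the signature-based splitting at its core: shrink a box to its tagged cells, accept it if it is efficiently filled, otherwise cut along a "hole" in the row/column signatures and recurse. The efficiency threshold, minimum box size and cut rule are illustrative only, and none of the paper's parallel strategies are represented.

```python
import numpy as np

def cluster(tags, box=None, efficiency=0.7, min_size=2):
    """Cover the True cells of a 2-D boolean array with boxes (r0, r1, c0, c1),
    half-open, by recursive signature-based splitting."""
    if box is None:
        box = (0, tags.shape[0], 0, tags.shape[1])
    r0, r1, c0, c1 = box
    sub = tags[r0:r1, c0:c1]
    if not sub.any():
        return []
    # Shrink the box to the bounding box of its tagged cells.
    rows = np.flatnonzero(sub.any(axis=1))
    cols = np.flatnonzero(sub.any(axis=0))
    r0, r1 = r0 + rows[0], r0 + rows[-1] + 1
    c0, c1 = c0 + cols[0], c0 + cols[-1] + 1
    sub = tags[r0:r1, c0:c1]
    # Accept the box if it is efficiently filled or already small.
    if sub.mean() >= efficiency or min(sub.shape) <= min_size:
        return [(r0, r1, c0, c1)]
    # Signatures: tagged-cell counts per row or per column (longer axis first).
    split_rows = sub.shape[0] >= sub.shape[1]
    sig = sub.sum(axis=1) if split_rows else sub.sum(axis=0)
    holes = np.flatnonzero(sig == 0)
    cut = holes[0] if holes.size else sig.size // 2
    if split_rows:
        left, right = (r0, r0 + cut, c0, c1), (r0 + cut, r1, c0, c1)
    else:
        left, right = (r0, r1, c0, c0 + cut), (r0, r1, c0 + cut, c1)
    return cluster(tags, left, efficiency, min_size) + cluster(tags, right, efficiency, min_size)
```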

  5. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so-called Minerva Action (Xmath) and the Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms for a wide range of undergraduate mathematical topics embedded…

  6. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
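
    The wavelet fusion rule itself can be sketched compactly. The snippet below is a generic single-level 2-D DWT fusion (average the approximation bands, keep the larger-magnitude detail coefficients) using the PyWavelets package; it illustrates the general technique rather than the report's exact implementation, and it assumes two co-registered grayscale images of identical shape.

```python
import numpy as np
import pywt  # PyWavelets

def fuse_dwt(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Fuse two co-registered grayscale images with a single-level 2-D DWT."""
    (a_lo, a_hi), (b_lo, b_hi) = pywt.dwt2(img_a, wavelet), pywt.dwt2(img_b, wavelet)
    # Average the low-frequency (approximation) bands to preserve overall intensity.
    fused_lo = 0.5 * (a_lo + b_lo)
    # For each detail band, keep the coefficient with the larger magnitude,
    # which tends to preserve edges and fine spatial structure.
    fused_hi = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                     for da, db in zip(a_hi, b_hi))
    return pywt.idwt2((fused_lo, fused_hi), wavelet)
```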

  7. Implementing a self-structuring data learning algorithm

    NASA Astrophysics Data System (ADS)

    Graham, James; Carson, Daniel; Ternovskiy, Igor

    2016-05-01

    In this paper, we elaborate on what we did to implement our self-structuring data learning algorithm. To recap, we are working to develop a data learning algorithm that will eventually be capable of goal-driven pattern learning and extrapolation of more complex patterns from less complex ones. At this point we have developed a conceptual framework for the algorithm, but have yet to discuss our actual implementation and the considerations and shortcuts we needed to take to create that implementation. We will elaborate on our initial setup of the algorithm and the scenarios we used to test our early stage algorithm. While we want this to be a general algorithm, it is necessary to start with a simple scenario or two to provide a viable development and testing environment. To that end, our discussion will be geared toward what we include in our initial implementation and why, as well as what concerns we may have. In the future, we expect to be able to apply our algorithm to a more general approach, but to do so within a reasonable time, we needed to pick a place to start.

  8. On algorithmic rate-coded AER generation.

    PubMed

    Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel; Linares-Barranco, Bernabé; Civit-Balcells, Antón

    2006-05-01

    This paper addresses the problem of converting a conventional video stream based on sequences of frames into the spike event-based representation known as the address-event representation (AER). In this paper we concentrate on rate-coded AER. The problem is addressed as an algorithmic problem, in which different methods are proposed, implemented and tested through software algorithms. The proposed algorithms are comparatively evaluated according to different criteria. Emphasis is put on the potential of such algorithms for (a) performing the frame-based to event-based conversion in real time, and (b) producing event streams that resemble as closely as possible those generated naturally by rate-coded address-event VLSI chips, such as silicon AER retinae. It is found that simple and straightforward algorithms tend to have high potential for real time but produce event distributions that differ considerably from those obtained in AER VLSI chips. On the other hand, sophisticated algorithms that yield better event distributions are not efficient for real-time operation. Methods based on linear-feedback-shift-register (LFSR) pseudorandom number generation are a good compromise: they are feasible in real time and yield reasonably well-distributed events in time. Our software experiments, on a 1.6-GHz Pentium IV, show that at 50% AER bus load the proposed algorithms require between 0.011 and 1.14 ms per 8-bit pixel per frame. One of the proposed LFSR methods is implemented in real-time hardware using a prototyping board that includes a VirtexE 300 FPGA. The demonstration hardware is capable of transforming frames of 64 x 64 pixels of 8-bit depth at a frame rate of 25 frames per second, producing spike events at a peak rate of 10^7 events per second. PMID:16722179
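
    The LFSR-based rate coding can be illustrated with a toy software sketch. The code below uses a standard 16-bit maximal-length Fibonacci LFSR and emits an address event whenever the low byte of the register falls below a pixel's intensity, so brighter pixels fire proportionally more events. The frame size, number of time slots and seed are illustrative, and this is one plausible scheme, not the paper's exact algorithm or its FPGA implementation.

```python
import numpy as np

def lfsr16_step(state: int) -> int:
    # 16-bit Fibonacci LFSR, taps 16, 14, 13, 11 (maximal length, period 2**16 - 1).
    bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

def frame_to_events(frame: np.ndarray, slots: int = 32, seed: int = 0xACE1):
    """Convert one 8-bit grayscale frame into a list of rate-coded address events.

    In each of `slots` time slots a pixel emits an event when the low byte of
    the LFSR falls below its intensity, giving an event rate proportional to
    brightness."""
    state, events = seed, []
    rows, cols = frame.shape
    for t in range(slots):
        for y in range(rows):
            for x in range(cols):
                state = lfsr16_step(state)
                if (state & 0xFF) < frame[y, x]:
                    events.append((t, y, x))  # (time slot, row address, column address)
    return events
```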

  9. How Are Mate Preferences Linked with Actual Mate Selection? Tests of Mate Preference Integration Algorithms Using Computer Simulations and Actual Mating Couples

    PubMed Central

    Conroy-Beam, Daniel; Buss, David M.

    2016-01-01

    Prior mate preference research has focused on the content of mate preferences. Yet in real life, people must select mates among potentials who vary along myriad dimensions. How do people incorporate information on many different mate preferences in order to choose which partner to pursue? Here, in Study 1, we compare seven candidate algorithms for integrating multiple mate preferences in a competitive agent-based model of human mate choice evolution. This model shows that a Euclidean algorithm is the most evolvable solution to the problem of selecting fitness-beneficial mates. Next, across three studies of actual couples (Study 2: n = 214; Study 3: n = 259; Study 4: n = 294) we apply the Euclidean algorithm toward predicting mate preference fulfillment overall and preference fulfillment as a function of mate value. Consistent with the hypothesis that mate preferences are integrated according to a Euclidean algorithm, we find that actual mates lie close in multidimensional preference space to the preferences of their partners. Moreover, this Euclidean preference fulfillment is greater for people who are higher in mate value, highlighting theoretically-predictable individual differences in who gets what they want. These new Euclidean tools have important implications for understanding real-world dynamics of mate selection. PMID:27276030
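
    A minimal sketch of the Euclidean integration idea: represent a person's ideal-partner preferences and each potential mate's traits as vectors in the same multidimensional space (typically standardized), and treat a smaller Euclidean distance as better preference fulfillment. The trait dimensions and any scaling are left to the analyst; this is an illustration, not the authors' code.

```python
import numpy as np

def preference_fulfillment(preferences: np.ndarray, partner_traits: np.ndarray) -> float:
    """Negative Euclidean distance between an ideal-partner preference vector
    and an actual partner's trait vector; higher values mean better fulfillment."""
    return -float(np.linalg.norm(preferences - partner_traits))

def choose_mate(preferences: np.ndarray, candidates: np.ndarray) -> int:
    """Index of the candidate whose trait vector lies closest (Euclidean) to the
    preference vector, integrating all preference dimensions at once."""
    return int(np.argmin(np.linalg.norm(candidates - preferences, axis=1)))
```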

  10. Test of the Semi-Analytical Case 1 and Gelbstoff Case 2 SeaWiFS Algorithm with a Global Data Set

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1997-01-01

    The algorithm-development activities at USF during the second half of 1997 have concentrated on data collection and theoretical modeling. Six abstracts were submitted for presentation at the AGU conference in San Diego, California during February 9-13, 1998. Four papers were submitted to JGR and Applied Optics for publication.

  11. How Are Mate Preferences Linked with Actual Mate Selection? Tests of Mate Preference Integration Algorithms Using Computer Simulations and Actual Mating Couples.

    PubMed

    Conroy-Beam, Daniel; Buss, David M

    2016-01-01

    Prior mate preference research has focused on the content of mate preferences. Yet in real life, people must select mates among potentials who vary along myriad dimensions. How do people incorporate information on many different mate preferences in order to choose which partner to pursue? Here, in Study 1, we compare seven candidate algorithms for integrating multiple mate preferences in a competitive agent-based model of human mate choice evolution. This model shows that a Euclidean algorithm is the most evolvable solution to the problem of selecting fitness-beneficial mates. Next, across three studies of actual couples (Study 2: n = 214; Study 3: n = 259; Study 4: n = 294) we apply the Euclidean algorithm toward predicting mate preference fulfillment overall and preference fulfillment as a function of mate value. Consistent with the hypothesis that mate preferences are integrated according to a Euclidean algorithm, we find that actual mates lie close in multidimensional preference space to the preferences of their partners. Moreover, this Euclidean preference fulfillment is greater for people who are higher in mate value, highlighting theoretically-predictable individual differences in who gets what they want. These new Euclidean tools have important implications for understanding real-world dynamics of mate selection. PMID:27276030

  12. Improving DTI tractography by including diagonal tract propagation.

    PubMed

    Taylor, Paul A; Cho, Kuan-Hung; Lin, Ching-Po; Biswal, Bharat B

    2012-01-01

    Tractography algorithms have been developed to reconstruct likely WM pathways in the brain from diffusion tensor imaging (DTI) data. In this study, an elegant and simple means for improving existing tractography algorithms is proposed by allowing tracts to propagate through diagonal trajectories between voxels, instead of only rectilinearly to their facewise neighbors. A series of tests (using both real and simulated data sets) is used to show several benefits of this new approach. First, the inclusion of diagonal tract propagation decreases the dependence of an algorithm on the arbitrary orientation of coordinate axes and therefore reduces numerical errors associated with that bias (which are also demonstrated here). Moreover, both quantitatively and qualitatively, including diagonals decreases overall noise sensitivity of results and leads to significantly greater efficiency in scanning protocols; that is, the obtained tracts converge much more quickly (i.e., in a smaller amount of scanning time) to those of data sets with high SNR and spatial resolution. Importantly, the inclusion of diagonal propagation adds essentially no appreciable time of calculation or computational costs to standard methods. This study focuses on the widely-used streamline tracking method, FACT (fiber assignment by continuous tracking), and the modified method is termed "FACTID" (FACT including diagonals). PMID:22970125
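
    The difference between facewise and diagonal propagation is easy to see in code. The sketch below builds the 6-connected and 26-connected voxel step sets and picks the step best aligned with the local principal diffusion direction; it illustrates the neighborhood change only, not the full FACT/FACTID pipeline.

```python
import itertools
import numpy as np

# Facewise (6-connected) steps: exactly one coordinate changes by +-1.
FACEWISE = [s for s in itertools.product((-1, 0, 1), repeat=3)
            if sum(abs(c) for c in s) == 1]

# All 26 neighbors: facewise steps plus edge and corner diagonals.
DIAGONAL = [s for s in itertools.product((-1, 0, 1), repeat=3) if any(s)]

def step_toward(direction: np.ndarray, neighbors) -> tuple:
    """Choose the voxel step whose unit vector best matches the local principal
    diffusion direction (largest cosine similarity)."""
    steps = np.array(neighbors, dtype=float)
    units = steps / np.linalg.norm(steps, axis=1, keepdims=True)
    d = direction / np.linalg.norm(direction)
    return tuple(neighbors[int(np.argmax(units @ d))])
```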

  13. Old And New Algorithms For Toeplitz Systems

    NASA Astrophysics Data System (ADS)

    Brent, Richard P.

    1988-02-01

    Toeplitz linear systems and Toeplitz least squares problems commonly arise in digital signal processing. In this paper we survey some old, "well known" algorithms and some recent algorithms for solving these problems. We concentrate our attention on algorithms which can be implemented efficiently on a variety of parallel machines (including pipelined vector processors and systolic arrays). We distinguish between algorithms which require inner products, and algorithms which avoid inner products, and thus are better suited to parallel implementation on some parallel architectures. Finally, we mention some "asymptotically fast" 0(n(log n)2) algorithms and compare them with 0(n2) algorithms.
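
    As a present-day shortcut, unrelated to the paper's parallel algorithms, SciPy exposes a Levinson-type O(n^2) Toeplitz solver; the tiny example below (with made-up data) checks it against a dense O(n^3) solve.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# Symmetric Toeplitz system, as arises e.g. in linear prediction problems.
c = np.array([4.0, 2.0, 1.0, 0.5])        # first column (and, by symmetry, first row)
b = np.array([1.0, 2.0, 3.0, 4.0])

x_fast = solve_toeplitz(c, b)              # Levinson-type O(n^2) solver
x_ref = np.linalg.solve(toeplitz(c), b)    # dense O(n^3) reference solution
assert np.allclose(x_fast, x_ref)
```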

  14. A Frequency-Domain Substructure System Identification Algorithm

    NASA Technical Reports Server (NTRS)

    Blades, Eric L.; Craig, Roy R., Jr.

    1996-01-01

    A new frequency-domain system identification algorithm is presented for system identification of substructures, such as payloads to be flown aboard the Space Shuttle. In the vibration test, all interface degrees of freedom where the substructure is connected to the carrier structure are either subjected to active excitation or are supported by a test stand with the reaction forces measured. The measured frequency-response data is used to obtain a linear, viscous-damped model with all interface-degree of freedom entries included. This model can then be used to validate analytical substructure models. This procedure makes it possible to obtain not only the fixed-interface modal data associated with a Craig-Bampton substructure model, but also the data associated with constraint modes. With this proposed algorithm, multiple-boundary-condition tests are not required, and test-stand dynamics is accounted for without requiring a separate modal test or finite element modeling of the test stand. Numerical simulations are used in examining the algorithm's ability to estimate valid reduced-order structural models. The algorithm's performance when frequency-response data covering narrow and broad frequency bandwidths is used as input is explored. Its performance when noise is added to the frequency-response data and the use of different least squares solution techniques are also examined. The identified reduced-order models are also compared for accuracy with other test-analysis models and a formulation for a Craig-Bampton test-analysis model is also presented.

  15. Space vehicle Viterbi decoder. [data converters, algorithms

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The design and fabrication of an extremely low-power, constraint-length 7, rate 1/3 Viterbi decoder brassboard capable of operating at information rates of up to 100 kb/s is presented. The brassboard is partitioned to facilitate a later transition to an LSI version requiring even less power. The effect of soft-decision thresholds, path memory lengths, and output selection algorithms on the bit error rate is evaluated. A branch synchronization algorithm is compared with a more conventional approach. The implementation of the decoder and its test set (including all-digital noise source) are described along with the results of various system tests and evaluations. Results and recommendations are presented.

  16. An innovative localisation algorithm for railway vehicles

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Malvezzi, M.; Pugi, L.; Ridolfi, A.; Rindi, A.; Vettori, G.

    2014-11-01

    The estimation strategy performs well even under degraded adhesion conditions and could be installed on board high-speed railway vehicles; it represents an accurate and reliable solution. The IMU board is tested via a dedicated Hardware in the Loop (HIL) test rig that includes an industrial robot able to replicate the motion of the railway vehicle. The performance of the innovative localisation algorithm was evaluated through the generated experimental outputs: the HIL test rig permitted testing of the proposed algorithm without expensive (in terms of time and cost) on-track tests, and the results are encouraging. In fact, the preliminary results show a significant improvement in position and speed estimation performance compared to that obtained with the SCMT algorithms currently in use on the Italian railway network.

  17. The systems biology simulation core algorithm

    PubMed Central

    2013-01-01

    Background With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941
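
    The core numerical task, integrating a reaction network interpreted as ordinary differential equations, can be sketched with a generic ODE solver. The example below uses SciPy rather than the Simulation Core Library, and the two-species network and rate constants are made up for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-reaction network:  S -> P  (k1),  P -> S  (k2)
k1, k2 = 0.3, 0.1

def rhs(t, y):
    s, p = y
    v1, v2 = k1 * s, k2 * p          # reaction rates
    return [-v1 + v2, v1 - v2]       # dS/dt and dP/dt from the stoichiometry

sol = solve_ivp(rhs, t_span=(0.0, 50.0), y0=[10.0, 0.0], method="LSODA",
                t_eval=np.linspace(0.0, 50.0, 101))
print(sol.y[:, -1])                  # species amounts at t = 50
```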

  18. HPTN 071 (PopART): Rationale and design of a cluster-randomised trial of the population impact of an HIV combination prevention intervention including universal testing and treatment – a study protocol for a cluster randomised trial

    PubMed Central

    2014-01-01

    Background Effective interventions to reduce HIV incidence in sub-Saharan Africa are urgently needed. Mathematical modelling and the HIV Prevention Trials Network (HPTN) 052 trial results suggest that universal HIV testing combined with immediate antiretroviral treatment (ART) should substantially reduce incidence and may eliminate HIV as a public health problem. We describe the rationale and design of a trial to evaluate this hypothesis. Methods/Design A rigorously designed trial of universal testing and treatment (UTT) interventions is needed because: i) it is unknown whether these interventions can be delivered to scale with adequate uptake; ii) there are many uncertainties in the models such that the population-level impact of these interventions is unknown; and iii) there are potential adverse effects including sexual risk disinhibition, HIV-related stigma, over-burdening of health systems, poor adherence, toxicity, and drug resistance. In the HPTN 071 (PopART) trial, 21 communities in Zambia and South Africa (total population 1.2 million) will be randomly allocated to three arms. Arm A will receive the full PopART combination HIV prevention package including annual home-based HIV testing, promotion of medical male circumcision for HIV-negative men, and offer of immediate ART for those testing HIV-positive; Arm B will receive the full package except that ART initiation will follow current national guidelines; Arm C will receive standard of care. A Population Cohort of 2,500 adults will be randomly selected in each community and followed for 3 years to measure the primary outcome of HIV incidence. Based on model projections, the trial will be well-powered to detect predicted effects on HIV incidence and secondary outcomes. Discussion Trial results, combined with modelling and cost data, will provide short-term and long-term estimates of cost-effectiveness of UTT interventions. Importantly, the three-arm design will enable assessment of how much could be achieved by

  19. A Rotor Tip Vortex Tracing Algorithm for Image Post-Processing

    NASA Technical Reports Server (NTRS)

    Overmeyer, Austin D.

    2015-01-01

    A neurite tracing algorithm, originally developed for medical image processing, was used to trace the location of the rotor tip vortex in density gradient flow visualization images. The tracing algorithm was applied to several representative test images to form case studies. The accuracy of the tracing algorithm was compared to two current methods including a manual point and click method and a cross-correlation template method. It is shown that the neurite tracing algorithm can reduce the post-processing time to trace the vortex by a factor of 10 to 15 without compromising the accuracy of the tip vortex location compared to other methods presented in literature.

  20. Approximate string matching algorithms for limited-vocabulary OCR output correction

    NASA Astrophysics Data System (ADS)

    Lasko, Thomas A.; Hauser, Susan E.

    2000-12-01

    Five methods for matching words mistranslated by optical character recognition to their most likely match in a reference dictionary were tested on data from the archives of the National Library of Medicine. The methods, including an adaptation of the cross correlation algorithm, the generic edit distance algorithm, the edit distance algorithm with a probabilistic substitution matrix, Bayesian analysis, and Bayesian analysis on an actively thinned reference dictionary were implemented and their accuracy rates compared. Of the five, the Bayesian algorithm produced the most correct matches (87%), and had the advantage of producing scores that have a useful and practical interpretation.
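
    The generic edit distance method mentioned above is compact enough to sketch directly: a dynamic-programming Levenshtein distance plus a brute-force scan of the reference dictionary. The words in the example are invented.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution (or match)
        prev = cur
    return prev[-1]

def best_match(word: str, dictionary) -> str:
    """Return the dictionary entry with the smallest edit distance to `word`."""
    return min(dictionary, key=lambda entry: edit_distance(word, entry))

# Example: best_match("rnedicine", ["medicine", "medical", "melamine"]) -> "medicine"
```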

  1. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  2. Solar Occultation Retrieval Algorithm Development

    NASA Technical Reports Server (NTRS)

    Lumpe, Jerry D.

    2004-01-01

    This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Work to date includes initial development of generalized forward model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on completion of the forward model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.

  3. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  4. The VITRO Score (Von Willebrand Factor Antigen/Thrombocyte Ratio) as a New Marker for Clinically Significant Portal Hypertension in Comparison to Other Non-Invasive Parameters of Fibrosis Including ELF Test

    PubMed Central

    Hametner, Stephanie; Ferlitsch, Arnulf; Ferlitsch, Monika; Etschmaier, Alexandra; Schöfl, Rainer; Ziachehabi, Alexander; Maieron, Andreas

    2016-01-01

    Background Clinically significant portal hypertension (CSPH), defined as hepatic venous pressure gradient (HVPG) ≥10 mmHg, causes major complications. HVPG is not always available, so a non-invasive tool to diagnose CSPH would be useful. Von Willebrand factor antigen (VWF-Ag) can be used for this non-invasive diagnosis. Using the VITRO score (the VWF-Ag/platelet ratio) instead of VWF-Ag itself improves the diagnostic accuracy of detecting cirrhosis/fibrosis in HCV patients. Aim This study tested the diagnostic accuracy of the VITRO score for detecting CSPH compared to HVPG measurement. Methods All patients underwent HVPG testing and were categorised as CSPH or no CSPH. The following patient data were determined: CPS, D’Amico stage, VITRO score, APRI and transient elastography (TE). Results The analysis included 236 patients; 170 (72%) were male, and the median age was 57.9 (35.2–76.3; 95% CI). Disease aetiology included ALD (39.4%), HCV (23.4%), NASH (12.3%), other (8.1%) and unknown (11.9%). The CPS showed 140 patients (59.3%) with CPS A; 56 (23.7%) with CPS B; and 18 (7.6%) with CPS C. 136 patients (57.6%) had compensated and 100 (42.4%) had decompensated cirrhosis; 83.9% had HVPG ≥10 mmHg. The VWF-Ag and the VITRO score increased significantly with worsening HVPG categories (P<0.0001). ROC analysis was performed for the detection of CSPH and showed AUC values of 0.92 for TE, 0.86 for VITRO score, 0.79 for VWF-Ag, 0.68 for ELF and 0.62 for APRI. Conclusion The VITRO score is an easy way to diagnose CSPH independently of CPS in routine clinical work and may improve the management of patients with cirrhosis. PMID:26895398

  5. Corrective Action Investigation Plan for Corrective Action Unit 254: Area 25 R-MAD Decontamination Facility, Nevada Test Site, Nevada (includes ROTC No. 1, date 01/25/1999)

    SciTech Connect

    DOE /NV

    1999-07-29

    This Corrective Action Investigation Plan contains the US Department of Energy, Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 254 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 254 consists of Corrective Action Site (CAS) 25-23-06, Decontamination Facility. Located in Area 25 at the Nevada Test Site (NTS), CAU 254 was used from 1963 through 1973 for the decontamination of test-car hardware and tooling used in the Nuclear Rocket Development Station program. The CAS is composed of a fenced area measuring approximately 119 feet by 158 feet that includes Building 3126, an associated aboveground storage tank, a potential underground storage area, two concrete decontamination pads, a generator, two sumps, and a storage yard. Based on site history, the scope of this plan is to resolve the problem statement identified during the Data Quality Objectives process that decontamination activities at this CAU site may have resulted in the release of contaminants of concern (COCs) onto building surfaces, down building drains to associated leachfields, and to soils associated with two concrete decontamination pads located outside the building. Therefore, the scope of the corrective action field investigation will involve soil sampling at biased and random locations in the yard using a direct-push method, scanning and static radiological surveys, and laboratory analyses of all soil/building samples. Historical information provided by former NTS employees indicates that solvents and degreasers may have been used in the decontamination processes; therefore, potential COCs include volatile/semivolatile organic compounds, Resource Conservation and Recovery Act metals, petroleum hydrocarbons, polychlorinated biphenyls, pesticides, asbestos, gamma-emitting radionuclides, plutonium, uranium, and strontium-90. The results of this

  6. Object-oriented algorithmic laboratory for ordering sparse matrices

    SciTech Connect

    Kumfert, G K

    2000-05-01

    generate the global ordering. Our software laboratory, "Spinole", implements state-of-the-art ordering algorithms for sparse matrices and graphs. We have used it to examine and augment the behavior of existing algorithms and test new ones. Its 40,000+ lines of C++ code include a base library, test drivers, sample applications, and interfaces to C, C++, Matlab, and PETSc. Spinole is freely available and can be built on a variety of UNIX platforms as well as WindowsNT.

  7. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
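
    For concreteness, here is the classic greedy solution to activity selection, one of the problems mentioned above; the dominance relation is that, among compatible activities, the one finishing earliest can never be a worse choice. This is a textbook sketch, not the authors' synthesized code.

```python
def select_activities(activities):
    """Greedy activity selection: repeatedly take the compatible activity that
    finishes earliest. `activities` is an iterable of (start, finish) pairs."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:            # dominance: the earliest finisher
            chosen.append((start, finish))  # leaves the most room for the rest
            last_finish = finish
    return chosen

# select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)])
# -> [(1, 4), (5, 7), (8, 11)]
```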

  8. An Introduction to the Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Tian, Jian-quan; Miao, Dan-min; Zhu, Xia; Gong, Jing-jing

    2007-01-01

    Computerized adaptive testing (CAT) has unsurpassable advantages over traditional testing. It has become the mainstream in large scale examinations in modern society. This paper gives a brief introduction to CAT including differences between traditional testing and CAT, the principles of CAT, psychometric theory and computer algorithms of CAT, the…

  9. Variable Selection using MM Algorithms

    PubMed Central

    Hunter, David R.; Li, Runze

    2009-01-01

    Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable function using a minorize-maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss conditions under which this convergence may be guaranteed. We exploit the Newton-Raphson-like aspect of these algorithms to propose a sandwich estimator for the standard errors of the estimators. Our method performs well in numerical tests. PMID:19458786
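
    To make the perturbation idea concrete, the sketch below applies it to L1-penalized least squares: replacing |β_j| by a differentiable quadratic majorizer built around the current iterate turns each MM step into a ridge-like linear solve. This is a simplified illustration under those assumptions (LASSO penalty, small dense problem), not the authors' general implementation or their sandwich standard-error estimator.

```python
import numpy as np

def mm_lasso(X: np.ndarray, y: np.ndarray, lam: float = 1.0,
             eps: float = 1e-6, iters: int = 500, tol: float = 1e-8):
    """MM algorithm for L1-penalized least squares.

    Each step majorizes the eps-perturbed penalty |b| by a quadratic around the
    current iterate, so the update reduces to a weighted ridge solve."""
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    beta = np.linalg.solve(XtX + 1e-3 * np.eye(p), Xty)   # ridge start, all nonzero
    for _ in range(iters):
        weights = lam / (np.abs(beta) + eps)              # penalty curvature at beta
        beta_new = np.linalg.solve(XtX + np.diag(weights), Xty)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```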

  10. Evaluation of the expected moments algorithm and a multiple low-outlier test for flood frequency analysis at streamgaging stations in Arizona

    USGS Publications Warehouse

    Paretti, Nicholas V.; Kennedy, Jeffrey R.; Cohn, Timothy A.

    2014-01-01

    Flooding is among the costliest natural disasters in terms of loss of life and property in Arizona, which is why the accurate estimation of flood frequency and magnitude is crucial for proper structural design and accurate floodplain mapping. Current guidelines for flood frequency analysis in the United States are described in Bulletin 17B (B17B), yet since B17B’s publication in 1982 (Interagency Advisory Committee on Water Data, 1982), several improvements have been proposed as updates for future guidelines. Two proposed updates are the Expected Moments Algorithm (EMA) to accommodate historical and censored data, and a generalized multiple Grubbs-Beck (MGB) low-outlier test. The current guidelines use a standard Grubbs-Beck (GB) method to identify low outliers, changing the determination of the moment estimators because B17B uses a conditional probability adjustment to handle low outliers while EMA censors the low outliers. B17B and EMA estimates are identical if no historical information or censored or low outliers are present in the peak-flow data. EMA with MGB (EMA-MGB) test was compared to the standard B17B (B17B-GB) method for flood frequency analysis at 328 streamgaging stations in Arizona. The methods were compared using the relative percent difference (RPD) between annual exceedance probabilities (AEPs), goodness-of-fit assessments, random resampling procedures, and Monte Carlo simulations. The AEPs were calculated and compared using both station skew and weighted skew. Streamgaging stations were classified by U.S. Geological Survey (USGS) National Water Information System (NWIS) qualification codes, used to denote historical and censored peak-flow data, to better understand the effect that nonstandard flood information has on the flood frequency analysis for each method. Streamgaging stations were also grouped according to geographic flood regions and analyzed separately to better understand regional differences caused by physiography and climate. The B

  11. Water flow algorithm decision support tool for travelling salesman problem

    NASA Astrophysics Data System (ADS)

    Kamarudin, Anis Aklima; Othman, Zulaiha Ali; Sarim, Hafiz Mohd

    2016-08-01

    This paper discusses the role of a Decision Support Tool (DST) for the Travelling Salesman Problem (TSP) in helping researchers working in this area obtain better results from a proposed algorithm. A study was conducted using the Rapid Application Development (RAD) model as the methodology, which includes requirements planning, user design, construction and cutover. The Water Flow Algorithm (WFA) with an improved initialization technique is used as the proposed algorithm in this study, and its effectiveness is evaluated on TSP cases. The DST evaluation comprised usability testing covering system use, quality of information, quality of interface and overall satisfaction. The evaluation is needed to determine whether the tool can assist users in making a decision to solve TSP problems with the proposed algorithm or not. Statistical results show the ability of this tool to help researchers conduct experiments on the WFA with improved TSP initialization.

  12. Recursive Algorithm For Linear Regression

    NASA Technical Reports Server (NTRS)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations and facilitates search for minimum order of linear-regression model that fits set of data satisfactorily.

  13. DETECTION OF SUBSURFACE FACILITIES INCLUDING NON-METALLIC PIPE

    SciTech Connect

    Mr. Herb Duvoisin

    2003-05-26

    CyTerra has leveraged its unique shallow-buried plastic target detection technology, developed under US Army contracts, to detect deeper-buried subsurface facilities, including nonmetallic pipe. This Final Report describes a portable, low-cost, real-time, and user-friendly subsurface plastic pipe detector (LULU - Low Cost Utility Location Unit) that supports the goal of maintaining the integrity and reliability of the nation's natural gas transmission and distribution network by detecting potential infringements and thereby preventing third-party damage. Except for frequency band and antenna size, the LULU unit is almost identical to those developed for the US Army. CyTerra designed, fabricated, and tested two frequency-stepped GPR systems, one low-frequency and one high-frequency, spanning the frequencies of importance (200 to 1600 MHz). Data collection and testing were done at a variety of locations (selected for soil type variations) on both targets of opportunity and selected buried targets. We developed algorithms and signal processing techniques that provide for the automatic detection of buried utility lines. The real-time output produces a sound as the radar passes over a utility line, alerting the operator to the presence of a buried object. Our unique, low-noise/high-performance RF hardware, combined with our field-tested detection algorithms, represents an important advancement toward achieving the DOE potential-infringement goal.

  14. Genetic Bee Colony (GBC) algorithm: A new gene selection method for microarray cancer classification.

    PubMed

    Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A

    2015-06-01

    Naturally inspired evolutionary algorithms prove effective when used for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, namely the Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm. The goal is to integrate the advantages of both algorithms. The proposed algorithm is applied to a microarray gene expression profile in order to select the most predictive and informative genes for cancer classification. In order to test the accuracy performance of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are used: colon, leukemia, and lung. In addition, three multi-class microarray datasets are used: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique: mRMR when combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and Particle Swarm Optimization (mRMR-PSO) algorithms. In addition, we compared the GBC algorithm with other related algorithms that have been recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance as it achieved the highest classification accuracy along with the lowest average number of selected genes. This proves that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification. PMID:25880524

  15. Surveillance test bed for SDIO

    NASA Astrophysics Data System (ADS)

    Wesley, Michael; Osterheld, Robert; Kyser, Jeff; Farr, Michele; Vandergriff, Linda J.

    1991-08-01

    The Surveillance Test Bed (STB) is a program under development for the Strategic Defense Initiative Organization (SDIO). Its most salient features are (1) the integration of high fidelity backgrounds and optical signal processing models with algorithms for sensor tasking, bulk filtering, track/correlation and discrimination and (2) the integration of radar and optical estimates for track and discrimination. Backgrounds include induced environments such as nuclear events, fragments and debris, and natural environments, such as earth limb, zodiacal light, stars, sun and moon. At the highest level of fidelity, optical emulation hardware combines environmental information with threat information to produce detector samples for signal processing algorithms/hardware under test. Simulations of visible sensors and radars model measurement degradation due to the various environmental effects. The modeled threat is composed of multiple object classes. The number of discrimination classes is further increased by inclusion of fragments, debris and stars. High fidelity measurements will be used to drive bulk filtering algorithms that seek to reject fragments and debris and, in the case of optical sensors, stars. The output of the bulk filters will be used to drive track/correlation algorithms. Track algorithm output will include sequences of measurements that have been degraded by backgrounds, closely spaced objects (CSOs), signal processing errors, bulk filtering errors and miscorrelations; these measurements will be presented as input to the discrimination algorithms. The STB will implement baseline IR track file editing and IR and radar feature extraction and classification algorithms. The baseline will also include data fusion algorithms which will allow the combination of discrimination estimates from multiple sensors, including IR and radar; alternative discrimination algorithms may be substituted for the baseline after STB completion.

  16. PCA-LBG-based algorithms for VQ codebook generation

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Yang, Po-Yuan

    2015-04-01

    Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector of each group. The LBG algorithm finds a codebook based on the better vectors sent to an initial codebook by the PCA. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithm is expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results compared to existing methods reported in the literature.
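
    A simplified, hedged sketch of the PCA-LBG-Centroid variant described above: training vectors are binned by quantiles of their first principal-component projection, the bin centroids seed the codebook, and a k-means refinement stands in for a hand-written LBG loop. The codebook size and the use of scikit-learn are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def pca_lbg_centroid_codebook(training_vectors: np.ndarray, codebook_size: int = 16):
    """Build a VQ codebook: PCA-based grouping for the initial codebook,
    then k-means refinement (used here in place of an explicit LBG loop)."""
    # Project every training vector onto the first principal component.
    proj = PCA(n_components=1).fit_transform(training_vectors).ravel()
    # Group vectors by quantile bins of the projected values.
    edges = np.quantile(proj, np.linspace(0.0, 1.0, codebook_size + 1))
    groups = np.clip(np.searchsorted(edges, proj, side="right") - 1,
                     0, codebook_size - 1)
    # Initial codebook: the centroid of each group (overall mean if a bin is empty).
    init = np.vstack([training_vectors[groups == g].mean(axis=0)
                      if np.any(groups == g) else training_vectors.mean(axis=0)
                      for g in range(codebook_size)])
    km = KMeans(n_clusters=codebook_size, init=init, n_init=1).fit(training_vectors)
    return km.cluster_centers_
```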

  17. Commissioning and initial acceptance tests for a commercial convolution dose calculation algorithm for radiotherapy treatment planning in comparison with Monte Carlo simulation and measurement

    PubMed Central

    Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen

    2012-01-01

    In this study the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed and the calculation accuracy of two available methods in the treatment planning system i.e., collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR) was verified in tissue heterogeneities. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed by using EDR2 radiographic films within the phantom. Dose difference (DD) between experimental results and two calculation methods was obtained. Results indicate maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between two methods and the CCC algorithm shows more accurate depth dose curves in tissue heterogeneities. Simulation results show the accurate dose estimation by MCNP4C in soft tissue region of the phantom and also better results than ETAR method in bone and lung tissues. PMID:22973081

  18. Commissioning and initial acceptance tests for a commercial convolution dose calculation algorithm for radiotherapy treatment planning in comparison with Monte Carlo simulation and measurement.

    PubMed

    Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen

    2012-07-01

    In this study the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed and the calculation accuracy of two available methods in the treatment planning system i.e., collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR) was verified in tissue heterogeneities. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed by using EDR2 radiographic films within the phantom. Dose difference (DD) between experimental results and two calculation methods was obtained. Results indicate maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between two methods and the CCC algorithm shows more accurate depth dose curves in tissue heterogeneities. Simulation results show the accurate dose estimation by MCNP4C in soft tissue region of the phantom and also better results than ETAR method in bone and lung tissues. PMID:22973081

  19. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  20. NPOESS Tools for Rapid Algorithm Updates

    NASA Astrophysics Data System (ADS)

    Route, G.; Grant, K. D.; Hughes, B.; Reed, B.

    2009-12-01

    The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system; the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS satellites carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence and Information Systems. The IDPS processes both NPP and NPOESS satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. Northrop Grumman Aerospace Systems Algorithms and Data Products (A&DP) organization is responsible for the algorithms that produce the EDRs, including their quality aspects. As the Calibration and Validation activities move forward following both the NPP launch and subsequent NPOESS launches, rapid algorithm updates may be required. Raytheon and Northrop Grumman have developed tools and processes to enable changes to be evaluated, tested, and moved into the operational baseline in a rapid and efficient manner. This presentation will provide an overview of the tools available to the Cal/Val teams to ensure rapid and accurate assessment of algorithm changes, along with the processes in place to ensure baseline integrity.

  1. Optimization of a chemical identification algorithm

    NASA Astrophysics Data System (ADS)

    Chyba, Thomas H.; Fisk, Brian; Gunning, Christin; Farley, Kevin; Polizzi, Amber; Baughman, David; Simpson, Steven; Slamani, Mohamed-Adel; Almassy, Robert; Da Re, Ryan; Li, Eunice; MacDonald, Steve; Slamani, Ahmed; Mitchell, Scott A.; Pendell-Jones, Jay; Reed, Timothy L.; Emge, Darren

    2010-04-01

    A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be applied. The limitations of applying this framework to chemical detection problems are discussed along with means to mitigate them. Algorithmic performance is optimized globally using robust Design of Experiments and Taguchi techniques. These methods require figures of merit to trade off between false alarms and detection probability. Several figures of merit, including the Matthews Correlation Coefficient and the Taguchi Signal-to-Noise Ratio are compared. Following the optimization of global parameters which govern the algorithm behavior across all target chemicals, ROC techniques are employed to optimize chemical-specific parameters to further improve performance.
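
    One of the figures of merit named above, the Matthews Correlation Coefficient, reduces to a one-line formula over the 2-class confusion matrix; the counts in the example comment are invented for illustration.

```python
import math

def matthews_correlation(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient for a 2-category detection problem.

    Ranges from -1 to +1; 0 corresponds to chance-level performance, which makes
    it a convenient single figure of merit trading off false alarms against
    detection probability."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Example: matthews_correlation(tp=90, tn=950, fp=50, fn=10) is roughly 0.73
```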

  2. BALL - biochemical algorithms library 1.3

    PubMed Central

    2010-01-01

    Background The Biochemical Algorithms Library (BALL) is a comprehensive rapid application development framework for structural bioinformatics. It provides an extensive C++ class library of data structures and algorithms for molecular modeling and structural bioinformatics. Using BALL as a programming toolbox not only allows application development times to be greatly reduced but also helps ensure stability and correctness by avoiding the error-prone reimplementation of complex algorithms, replacing them with calls into a library that has been well-tested by a large number of developers. In the ten years since its original publication, BALL has seen a substantial increase in functionality and numerous other improvements. Results Here, we discuss BALL's current functionality and highlight the key additions and improvements: support for additional file formats, molecular edit-functionality, new molecular mechanics force fields, novel energy minimization techniques, docking algorithms, and support for cheminformatics. Conclusions BALL is available for all major operating systems, including Linux, Windows, and MacOS X. It is available free of charge under the GNU Lesser General Public License (LGPL). Parts of the code are distributed under the GNU Public License (GPL). BALL is available as source code and binary packages from the project web site at http://www.ball-project.org. Recently, it has been accepted into the debian project; integration into further distributions is currently pursued. PMID:20973958

  3. Greedy heuristic algorithm for solving series of EEE components classification problems

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, A. L.; Antamoshkin, A. N.; Fedosov, V. V.

    2016-04-01

    Algorithms based on agglomerative greedy heuristics demonstrate precise and stable results for clustering problems based on k-means and p-median models. Such algorithms are successfully implemented in the production of specialized EEE components for use in space systems, which includes testing each EEE device and detecting homogeneous production batches of the EEE components from the test results using p-median models. In this paper, the authors propose a new version of the genetic algorithm with the greedy agglomerative heuristic which allows solving a series of problems. Such an algorithm is useful for solving the k-means and p-median clustering problems when the number of clusters is unknown. Computational experiments on real data show that the precision of the result decreases only insignificantly in comparison with the initial genetic algorithm for solving a single problem.

  4. Operationalizing hippocampal volume as an enrichment biomarker for amnestic MCI trials: effect of algorithm, test-retest variability and cut-point on trial cost, duration and sample size

    PubMed Central

    Yu, P.; Sun, J.; Wolz, R.; Stephenson, D.; Brewer, J.; Fox, N.C.; Cole, P.E.; Jack, C.R.; Hill, D.L.G.; Schwarz, A.J.

    2014-01-01

    Objective To evaluate the effect of computational algorithm, measurement variability and cut-point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). Methods We used normal control and amnestic MCI subjects from ADNI-1 as normative reference and screening cohorts. We evaluated the enrichment performance of four widely-used hippocampal segmentation algorithms (FreeSurfer, HMAPS, LEAP and NeuroQuant) in terms of two-year changes in MMSE, ADAS-Cog and CDR-SB. We modeled the effect of algorithm, test-retest variability and cut-point on sample size, screen fail rates and trial cost and duration. Results HCV-based patient selection yielded not only reduced sample sizes (by ~40–60%) but also lower trial costs (by ~30–40%) across a wide range of cut-points. Overall, the dependence on the cut-point value was similar for the three clinical instruments considered. Conclusion These results provide a guide to the choice of HCV cut-point for aMCI clinical trials, allowing an informed trade-off between statistical and practical considerations. PMID:24211008

  5. A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester

    2010-01-01

    A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides A x = b (sup i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
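
    A minimal sketch of the Galerkin-projection preprocessing step for a sequence of right-hand sides: project each new b onto the span of previously computed solutions to form the GMRES starting guess. It omits the paper's eigenvector enrichment and other Krylov reuse strategies, and it assumes a recent SciPy in which gmres accepts an rtol keyword.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def solve_sequence(A, rhs_list, rtol=1e-8):
    """Solve A x = b_i for a sequence of right-hand sides, seeding each GMRES
    call with the Galerkin projection of the new system onto prior solutions."""
    solutions = []
    for b in rhs_list:
        x0 = None
        if solutions:
            U = np.column_stack(solutions)   # basis spanned by previous solutions
            AU = A @ U
            # Small projected system (U^T A U) y = U^T b gives the starting guess;
            # assumes the previous solutions are linearly independent.
            y = np.linalg.solve(U.T @ AU, U.T @ b)
            x0 = U @ y
        x, info = gmres(A, b, x0=x0, rtol=rtol)
        if info != 0:
            raise RuntimeError(f"GMRES failed to converge (info={info})")
        solutions.append(x)
    return solutions
```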

  6. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  7. Understanding Algorithms in Different Presentations

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János

    2015-01-01

    Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…

  8. Localization algorithm for acoustic emission

    NASA Astrophysics Data System (ADS)

    Salinas, V.; Vargas, Y.; Ruzzante, J.; Gaete, L.

    2010-01-01

    In this paper, an iterative algorithm for localization of an acoustic emission (AE) source is presented. The main advantage of the system is that it does not depend on the researcher's skill in setting the signal level used to trigger the acquisition. The system was tested on cylindrical samples with an AE source at a known position; the precision of the source determination was about 2 mm, better than the precision obtained with classical localization algorithms (~1 cm).
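
    For orientation, the following minimal sketch shows the standard least-squares time-difference-of-arrival step that underlies most AE source localization, assuming known sensor coordinates and wave speed; the arrival-picking problem that the abstract's iterative algorithm addresses is idealized away, and all geometry and material values are invented.

      # Least-squares AE source localization from arrival-time differences (TDOA),
      # assuming known sensor positions and a known wave speed.
      import numpy as np
      from scipy.optimize import least_squares

      sensors = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                          [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]])   # m, hypothetical layout
      c = 5000.0                         # m/s, assumed wave speed in the sample
      true_src = np.array([0.03, 0.04, 0.05])

      t = np.linalg.norm(sensors - true_src, axis=1) / c
      dt = t - t[0]                      # arrival-time differences vs. the first sensor

      def residuals(x):
          r = np.linalg.norm(sensors - x, axis=1) / c
          return (r - r[0]) - dt

      sol = least_squares(residuals, x0=np.array([0.05, 0.05, 0.05]))
      print(sol.x)                       # recovers approximately (0.03, 0.04, 0.05)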

  9. Corrective Action Investigation Plan for Corrective Action Unit 5: Landfills, Nevada Test Site, Nevada (Rev. No.: 0) includes Record of Technical Change No. 1 (dated 9/17/2002)

    SciTech Connect

    IT Corporation, Las Vegas, NV

    2002-05-28

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 5 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 5 consists of eight Corrective Action Sites (CASs): 05-15-01, Sanitary Landfill; 05-16-01, Landfill; 06-08-01, Landfill; 06-15-02, Sanitary Landfill; 06-15-03, Sanitary Landfill; 12-15-01, Sanitary Landfill; 20-15-01, Landfill; 23-15-03, Disposal Site. Located between Areas 5, 6, 12, 20, and 23 of the Nevada Test Site (NTS), CAU 5 consists of unlined landfills used in support of disposal operations between 1952 and 1992. Large volumes of solid waste were produced from the projects which used the CAU 5 landfills. Waste disposed in these landfills may be present without appropriate controls (i.e., use restrictions, adequate cover) and hazardous and/or radioactive constituents may be present at concentrations and locations that could potentially pose a threat to human health and/or the environment. During the 1992 to 1995 time frame, the NTS was used for various research and development projects including nuclear weapons testing. Instead of managing solid waste at one or two disposal sites, the practice on the NTS was to dispose of solid waste in the vicinity of the project. A review of historical documentation, process knowledge, personal interviews, and inferred activities associated with this CAU identified the following as potential contaminants of concern: volatile organic compounds, semivolatile organic compounds, polychlorinated biphenyls, pesticides, petroleum hydrocarbons (diesel- and gasoline-range organics), Resource Conservation and Recovery Act Metals, plus nickel and zinc. A two-phase approach has been selected to collect information and generate data to satisfy needed resolution criteria

  10. Corrective Action Investigation Plan for Corrective Action Unit 165: Areas 25 and 26 Dry Well and Washdown Areas, Nevada Test Site, Nevada (including Record of Technical Change Nos. 1, 2, and 3) (January 2002, Rev. 0)

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office

    2002-01-09

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 165 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 165 consists of eight Corrective Action Sites (CASs): CAS 25-20-01, Lab Drain Dry Well; CAS 25-51-02, Dry Well; CAS 25-59-01, Septic System; CAS 26-59-01, Septic System; CAS 25-07-06, Train Decontamination Area; CAS 25-07-07, Vehicle Washdown; CAS 26-07-01, Vehicle Washdown Station; and CAS 25-47-01, Reservoir and French Drain. All eight CASs are located in the Nevada Test Site, Nevada. Six of these CASs are located in Area 25 facilities and two CASs are located in Area 26 facilities. The eight CASs at CAU 165 consist of dry wells, septic systems, decontamination pads, and a reservoir. The six CASs in Area 25 are associated with the Nuclear Rocket Development Station that operated from 1958 to 1973. The two CASs in Area 26 are associated with facilities constructed for Project Pluto, a series of nuclear reactor tests conducted between 1961 to 1964 to develop a nuclear-powered ramjet engine. Based on site history, the scope of this plan will be a two-phased approach to investigate the possible presence of hazardous and/or radioactive constituents at concentrations that could potentially pose a threat to human health and the environment. The Phase I analytical program for most CASs will include volatile organic compounds, semivolatile organic compounds, Resource Conservation and Recovery Act metals, total petroleum hydrocarbons, polychlorinated biphenyls, and radionuclides. If laboratory data obtained from the Phase I investigation indicates the presence of contaminants of concern, the process will continue with a Phase II investigation to define the extent of contamination. Based on the results of

  11. The Relegation Algorithm

    NASA Astrophysics Data System (ADS)

    Deprit, André; Palacián, Jesús; Deprit, Etienne

    2001-03-01

    The relegation algorithm extends the method of normalization by Lie transformations. Given a Hamiltonian that is a power series ℋ = ℋ0 + ɛℋ1 + ... in a small parameter ɛ, normalization constructs a map which converts the principal part ℋ0 into an integral of the transformed system — relegation does the same for an arbitrary function G. If the Lie derivative induced by G is semi-simple, a double recursion produces the generator of the relegating transformation. The relegation algorithm is illustrated with an elementary example borrowed from galactic dynamics; the exercise serves as a standard against which to test software implementations. Relegation is also applied to the more substantial example of a Keplerian system perturbed by radiation pressure emanating from a rotating source.
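
    For reference, a compact statement of the homological equation underlying normalization by Lie transformations follows; the notation and sign convention here are generic (conventions vary between authors) and are not taken from the paper.

      \[
        L_0 W_n \;=\; \widetilde{\mathcal{H}}_n - \mathcal{H}_n,
        \qquad L_0(\,\cdot\,) := \{\,\cdot\,,\, \mathcal{H}_0\},
      \]

    Normalization solves this at each order n for the generator W_n, choosing the transformed term in the kernel of L_0; relegation runs an analogous recursion with the Lie derivative induced by the arbitrary function G in place of L_0, which is why semi-simplicity of that operator matters.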

  12. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL’s triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia’s patented paving algorithm and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  13. Corrective Action Investigation Plan for Corrective Action Unit 214: Bunkers and Storage Areas Nevada Test Site, Nevada: Revision 0, Including Record of Technical Change No. 1 and No. 2

    SciTech Connect

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2003-05-16

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 214 under the Federal Facility Agreement and Consent Order. Located in Areas 5, 11, and 25 of the Nevada Test Site, CAU 214 consists of nine Corrective Action Sites (CASs): 05-99-01, Fallout Shelters; 11-22-03, Drum; 25-99-12, Fly Ash Storage; 25-23-01, Contaminated Materials; 25-23-19, Radioactive Material Storage; 25-99-18, Storage Area; 25-34-03, Motor Dr/Gr Assembly (Bunker); 25-34-04, Motor Dr/Gr Assembly (Bunker); and 25-34-05, Motor Dr/Gr Assembly (Bunker). These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). The suspected contaminants and critical analytes for CAU 214 include oil (total petroleum hydrocarbons-diesel-range organics [TPH-DRO], polychlorinated biphenyls [PCBs]), pesticides (chlordane, heptachlor, 4,4-DDT), barium, cadmium, chromium, lubricants (TPH-DRO, TPH-gasoline-range organics [GRO]), and fly ash (arsenic). The land-use zones where CAU 214 CASs are located dictate that future land uses will be limited to nonresidential (i.e., industrial) activities. The results of this field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the corrective action decision document.

  14. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.

  15. SeaWiFS Science Algorithm Flow Chart

    NASA Technical Reports Server (NTRS)

    Darzi, Michael

    1998-01-01

    This flow chart describes the baseline science algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Data Processing System (SDPS). As such, it includes only processing steps used in the generation of the operational products that are archived by NASA's Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC). It is meant to provide the reader with a basic understanding of the scientific algorithm steps applied to SeaWiFS data. It does not include non-science steps, such as format conversions, and places the greatest emphasis on the geophysical calculations of the level-2 processing. Finally, the flow chart reflects the logic sequences and the conditional tests of the software so that it may be used to evaluate the fidelity of the implementation of the scientific algorithm. In many cases however, the chart may deviate from the details of the software implementation so as to simplify the presentation.

  16. LAHS: A novel harmony search algorithm based on learning automata

    NASA Astrophysics Data System (ADS)

    Enayatifar, Rasul; Yousefi, Moslem; Abdullah, Abdul Hanan; Darus, Amer Nordin

    2013-12-01

    This study presents a learning automata-based harmony search (LAHS) for unconstrained optimization of continuous problems. The harmony search (HS) algorithm performance strongly depends on the fine tuning of its parameters, including the harmony consideration rate (HMCR), pitch adjustment rate (PAR) and bandwidth (bw). Inspired by the spur-of-the-moment responses in the musical improvisation process, learning capabilities are employed in the HS to select these parameters based on spontaneous reactions. An extensive numerical investigation is conducted on several well-known test functions, and the results are compared with the HS algorithm and its prominent variants, including the improved harmony search (IHS), global-best harmony search (GHS) and self-adaptive global-best harmony search (SGHS). The numerical results indicate that the LAHS is more efficient in finding optimum solutions and outperforms the existing HS algorithm variants.
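
    To make the tuned parameters concrete, here is a bare-bones harmony search with fixed HMCR, PAR and bw on a simple test function; these are exactly the quantities the LAHS adapts via learning automata, but the learning-automata update itself is not reproduced, and the parameter values are arbitrary.

      # Plain harmony search on a continuous test function.
      import numpy as np

      rng = np.random.default_rng(0)
      def sphere(x):                        # simple benchmark objective
          return float(np.sum(x ** 2))

      dim, hms, iters = 5, 20, 5000
      hmcr, par, bw = 0.9, 0.3, 0.05        # the parameters LAHS would adapt
      low, high = -5.0, 5.0

      memory = rng.uniform(low, high, (hms, dim))
      fitness = np.array([sphere(h) for h in memory])

      for _ in range(iters):
          new = np.empty(dim)
          for j in range(dim):
              if rng.random() < hmcr:                    # memory consideration
                  new[j] = memory[rng.integers(hms), j]
                  if rng.random() < par:                 # pitch adjustment
                      new[j] += bw * rng.uniform(-1, 1)
              else:                                      # random selection
                  new[j] = rng.uniform(low, high)
          new = np.clip(new, low, high)
          f = sphere(new)
          worst = int(np.argmax(fitness))
          if f < fitness[worst]:                         # replace worst harmony
              memory[worst], fitness[worst] = new, f

      print(fitness.min())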

  17. A novel pseudoderivative-based mutation operator for real-coded adaptive genetic algorithms

    PubMed Central

    Kanwal, Maxinder S; Ramesh, Avinash S; Huang, Lauren A

    2013-01-01

    Recent development of large databases, especially those in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence methods, including pattern recognition (e.g., neural networks) and optimization techniques (e.g., genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that is derived from comparing the fitness values of successive generations. We propose a novel pseudoderivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and successfully continue to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima, but only one global optimum, as well as on the N-queens problem, an applied problem in which the function that maps the curve is implicit. For all tests, the adaptive mutation rate allowed the genetic algorithm to find the global optimal solution, performing significantly better than other search methods, including genetic algorithms that implement fixed mutation rates.
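
    The sketch below shows the general idea of adapting the mutation rate from the change in best fitness across generations (a crude finite-difference signal); it is a schematic stand-in for, not a reproduction of, the paper's pseudoderivative-based operator, and the test function, rates, and thresholds are invented.

      # Real-coded GA whose mutation rate grows when best fitness stagnates and
      # shrinks when it improves, using a finite-difference "slope" between generations.
      import numpy as np

      rng = np.random.default_rng(0)
      def rastrigin(x):                      # many local optima, one global optimum
          return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

      pop_size, dim, gens = 60, 2, 300
      pop = rng.uniform(-5.12, 5.12, (pop_size, dim))
      mut_rate, prev_best = 0.1, None

      for g in range(gens):
          fit = np.array([rastrigin(ind) for ind in pop])
          order = np.argsort(fit)
          best = fit[order[0]]
          if prev_best is not None:
              slope = prev_best - best       # improvement since the last generation
              mut_rate = 0.02 if slope > 1e-6 else min(0.5, mut_rate * 1.5)
          prev_best = best

          parents = pop[order[: pop_size // 2]]          # truncation selection
          children = []
          while len(children) < pop_size:
              a, b = parents[rng.integers(len(parents), size=2)]
              w = rng.random()
              child = w * a + (1 - w) * b                # blend crossover
              mask = rng.random(dim) < mut_rate
              child = np.where(mask, child + rng.normal(0, 0.3, dim), child)
              children.append(np.clip(child, -5.12, 5.12))
          pop = np.array(children)

      print(prev_best)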

  18. Impact of an integrated treatment algorithm based on platelet function testing and clinical risk assessment: results of the TRIAGE Patients Undergoing Percutaneous Coronary Interventions To Improve Clinical Outcomes Through Optimal Platelet Inhibition study.

    PubMed

    Chandrasekhar, Jaya; Baber, Usman; Mehran, Roxana; Aquino, Melissa; Sartori, Samantha; Yu, Jennifer; Kini, Annapoorna; Sharma, Samin; Skurk, Carsten; Shlofmitz, Richard A; Witzenbichler, Bernhard; Dangas, George

    2016-08-01

    Assessment of platelet reactivity alone for thienopyridine selection with percutaneous coronary intervention (PCI) has not been associated with improved outcomes. In TRIAGE, a prospective multicenter observational pilot study, we sought to evaluate the benefit of an integrated algorithm combining clinical risk and platelet function testing to select the type of thienopyridine in patients undergoing PCI. Patients on chronic clopidogrel therapy underwent platelet function testing prior to PCI using the VerifyNow assay to determine high on-treatment platelet reactivity (HTPR, ≥230 P2Y12 reactivity units or PRU). Based on both PRU and clinical (ischemic and bleeding) risks, patients were switched to prasugrel or continued on clopidogrel per the study algorithm. The primary endpoints were (i) 1-year major adverse cardiovascular events (MACE), a composite of death, non-fatal myocardial infarction, or definite or probable stent thrombosis; and (ii) major bleeding, Bleeding Academic Research Consortium type 2, 3 or 5. Out of 318 clopidogrel-treated patients with a mean age of 65.9 ± 9.8 years, HTPR was noted in 33.3 %. Ninety (28.0 %) patients overall were switched to prasugrel and 228 (72.0 %) continued clopidogrel. The prasugrel group had fewer smokers and more patients with heart failure. At 1 year, MACE occurred in 4.4 % of majority-HTPR patients on prasugrel versus 3.5 % of primarily non-HTPR patients on clopidogrel (p = 0.7). Major bleeding (5.6 vs 7.9 %, p = 0.47) was numerically higher with clopidogrel compared with prasugrel. Use of the study clinical risk algorithm for choice and intensity of thienopyridine prescription following PCI resulted in similar ischemic outcomes in HTPR patients receiving prasugrel and primarily non-HTPR patients on clopidogrel without an untoward increase in bleeding with prasugrel. However, the study was prematurely terminated and these findings are therefore hypothesis generating. PMID:27100112

  19. Experimental validation of clock synchronization algorithms

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Graham, R. Lynn

    1992-01-01

    The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
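
    For context, the following minimal sketch shows the fault-tolerant midpoint correction at the heart of algorithms of this family, assuming each node already holds estimates of the other clocks' readings; the circuitry-level details and malicious-failure injection of the experiments are outside its scope, and the example readings are invented.

      # Fault-tolerant midpoint correction: discard the f largest and f smallest
      # perceived clock readings, then adjust toward the midpoint of the remaining
      # extremes. Tolerates up to f arbitrary (even malicious) clocks when n >= 3f + 1.
      def midpoint_correction(readings, own, f):
          trimmed = sorted(readings)[f:len(readings) - f]
          target = (trimmed[0] + trimmed[-1]) / 2.0
          return target - own          # amount by which to adjust the local clock

      # Example: 4 clocks (so f = 1 fault tolerated); one reading is wildly faulty.
      perceived = [100.2, 99.9, 100.1, 350.0]
      print(midpoint_correction(perceived, own=100.1, f=1))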

  20. Algorithms for Contact in a Mulitphysics Environment

    2001-12-19

    Many codes require either a contact capability or a need to determine geometric proximity of non-connected topological entities (which is a subset of what contact requires). ACME is a library to provide services to determine contact forces and/or geometric proximity interactions. This includes generic capabilities such as determining points in Cartesian volumes, finding faces in Cartesian volumes, etc. ACME can be run in single or multi-processor mode (the basic algorithms have been tested up to 4500 processors).

  1. A speech recognition system based on hybrid wavelet network including a fuzzy decision support system

    NASA Astrophysics Data System (ADS)

    Jemai, Olfa; Ejbali, Ridha; Zaied, Mourad; Ben Amar, Chokri

    2015-02-01

    This paper develops a novel approach to speech recognition based on a wavelet network learnt by the fast wavelet transform (FWN) and including a fuzzy decision support system (FDSS). Our first contribution is a novel learning algorithm based on the fast wavelet transform (FWT), which has several advantages over earlier algorithms and resolves a major problem of previous work: connection weights were previously determined by a direct solution requiring matrix inversion, which can be computationally intensive, whereas the new algorithm computes them by iterative application of the FWT. Our second contribution is a new classification scheme for this speech recognition system, which emulates human reasoning by employing an FDSS to compute similarity degrees between test and training signals. Extensive empirical experiments were conducted to compare the proposed approach with other approaches. The results show that the new speech recognition system outperforms previously established ones.

  2. A retrodictive stochastic simulation algorithm

    SciTech Connect

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-05-20

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
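
    As a discrete-time analogue of the inference task described (and emphatically not the authors' retrodictive stochastic simulation algorithm), the sketch below uses Bayes' rule to invert a simple Markov mutation model and recover a posterior over the ancestral nucleotide given an observed final state; the mutation rate and prior are illustrative.

      # Retrodiction for a toy Markov mutation model: posterior over the initial
      # base given the base observed n generations later.
      import numpy as np

      bases = ["A", "C", "G", "T"]
      mu = 0.01                                   # assumed per-generation mutation probability
      T = np.full((4, 4), mu / 3)                 # equal rates to the other three bases
      np.fill_diagonal(T, 1 - mu)

      def ancestral_posterior(final_base, n_generations, prior=None):
          prior = np.full(4, 0.25) if prior is None else prior
          Tn = np.linalg.matrix_power(T, n_generations)
          j = bases.index(final_base)
          post = prior * Tn[:, j]                 # P(x0 = i) * P(xn = j | x0 = i)
          return post / post.sum()

      print(dict(zip(bases, ancestral_posterior("G", 100).round(3))))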

  3. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology, based upon human visual characteristics, for appreciating image restoration accuracy, in addition to comparing the subjective results with predictions by some objective evaluation methods. In total, six different super resolution (SR) algorithms - iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency domain approach - were selected. The restoration accuracy of the SR algorithms was compared both subjectively and objectively. The former methodology relies upon the paired comparison method, which involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods are implemented. Consequently, POCS and non-uniform interpolation outperformed the others in an ideal situation, while restoration-based methods reproduced the HR image more accurately in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of those methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of those SR algorithms.
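
    On the objective side of such an evaluation, a small sketch follows that computes the mean CIEDE2000 difference between a reference high-resolution image and a restored output in CIELAB, assuming scikit-image is available; the random test images are placeholders for real HR/SR pairs.

      # Mean CIEDE2000 colour difference between a reference HR image and a
      # super-resolved estimate, as one objective proxy for restoration accuracy.
      import numpy as np
      from skimage.color import rgb2lab, deltaE_ciede2000

      def mean_ciede2000(reference_rgb, restored_rgb):
          """Both inputs: float RGB arrays in [0, 1] with identical shape (H, W, 3)."""
          lab_ref = rgb2lab(reference_rgb)
          lab_res = rgb2lab(restored_rgb)
          return float(np.mean(deltaE_ciede2000(lab_ref, lab_res)))

      rng = np.random.default_rng(0)
      reference = rng.random((64, 64, 3))
      restored = np.clip(reference + rng.normal(0, 0.02, reference.shape), 0, 1)
      print(mean_ciede2000(reference, restored))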

  4. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

    In this paper, we present a numerical stability analysis of Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare Bareiss algorithm with Levinson algorithm and conclude that the former has superior numerical properties.
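
    For readers who want to experiment, SciPy exposes a Levinson-type O(n^2) solver for Toeplitz systems (it does not ship a Bareiss implementation), which the sketch below compares against a dense solve on an illustrative symmetric positive definite Toeplitz matrix.

      # Levinson-type O(n^2) Toeplitz solve versus a dense O(n^3) solve.
      import numpy as np
      from scipy.linalg import solve, solve_toeplitz, toeplitz

      n = 500
      c = 0.5 ** np.arange(n)          # first column of an SPD Toeplitz matrix
      b = np.ones(n)

      x_fast = solve_toeplitz(c, b)    # Levinson recursion on the defining column
      x_dense = solve(toeplitz(c), b)  # forms the full matrix explicitly
      print(np.max(np.abs(x_fast - x_dense)))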

  5. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
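
    In the spirit of the sampling-based stopping rules described (but without reproducing the dissertation's specific estimators), the following sketch keeps batching i.i.d. estimates of the optimality gap and stops when a one-sided upper confidence bound on the mean gap falls below a tolerance; the gap-generating process here is a synthetic stand-in.

      # Generic sampling-based stopping rule: stop when the upper confidence bound
      # on the estimated optimality gap drops below the requested tolerance.
      import numpy as np
      from scipy import stats

      def stop(gap_samples, tol, alpha=0.05):
          g = np.asarray(gap_samples, dtype=float)
          half_width = stats.t.ppf(1 - alpha, df=g.size - 1) * g.std(ddof=1) / np.sqrt(g.size)
          return g.mean() + half_width <= tol      # one-sided upper confidence bound

      rng = np.random.default_rng(0)
      gaps = []
      for batch in range(100):
          gaps.extend(rng.normal(0.8 / (batch + 1), 0.3, size=30))   # stand-in gap estimates
          if len(gaps) > 30 and stop(gaps, tol=0.05):
              print("stop after", len(gaps), "gap observations")
              break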

  6. Field Testing of LIDAR-Assisted Feedforward Control Algorithms for Improved Speed Control and Fatigue Load Reduction on a 600-kW Wind Turbine: Preprint

    SciTech Connect

    Kumar, Avishek A.; Bossanyi, Ervin A.; Scholbrock, Andrew K.; Fleming, Paul; Boquet, Mathieu; Krishnamurthy, Raghu

    2015-12-14

    A severe challenge in controlling wind turbines is ensuring controller performance in the presence of a stochastic and unknown wind field, relying on the response of the turbine to generate control actions. Recent technologies, such as LIDAR, allow sensing of the wind field before it reaches the rotor. In this work, a field-testing campaign to test LIDAR Assisted Control (LAC) has been undertaken on a 600-kW turbine using a fixed, five-beam LIDAR system. The campaign compared the performance of a baseline controller to four LACs with progressively lower levels of feedback using 35 hours of collected data.

  7. Fast algorithms for combustion kinetics calculations: A comparison

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

    To identify the fastest algorithm currently available for the numerical integration of chemical kinetic rate equations, several algorithms were examined. Findings to date are summarized. The algorithms examined include two general-purpose codes, EPISODE and LSODE, and three special-purpose (for chemical kinetic calculations) codes, CHEMEQ, CREK1D, and GCKP84. In addition, an explicit Runge-Kutta-Merson differential equation solver (IMSL Routine DASCRU) is used to illustrate the problems associated with integrating chemical kinetic rate equations by a classical method. Algorithms were applied to two test problems drawn from combustion kinetics. These problems included all three combustion regimes: induction, heat release and equilibration. Variations of the temperature and species mole fractions with time are given for test problems 1 and 2, respectively. Both test problems were integrated over a time interval of 1 ms in order to obtain near-equilibration of all species and temperature. Of the codes examined in this study, only CREK1D and GCKP84 were written explicitly for integrating exothermic, non-isothermal combustion rate equations. These therefore have built-in procedures for calculating the temperature.
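
    As a small runnable example of the kind of stiff integration these codes perform, the sketch below applies SciPy's implicit BDF method (the same solver family as LSODE) to the classic Robertson kinetics problem over a 1 ms interval; the report's actual combustion test problems are not reproduced.

      # Stiff chemical-kinetics integration with an implicit multistep (BDF) method.
      from scipy.integrate import solve_ivp

      def robertson(t, y):
          y1, y2, y3 = y
          return [-0.04 * y1 + 1e4 * y2 * y3,
                  0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2 ** 2,
                  3e7 * y2 ** 2]

      sol = solve_ivp(robertson, (0.0, 1e-3), [1.0, 0.0, 0.0],
                      method="BDF", rtol=1e-8, atol=1e-12)
      print(sol.y[:, -1])    # species fractions after 1 ms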

  8. The Economic Benefits of Personnel Selection Using Ability Tests: A State of the Art Review Including a Detailed Analysis of the Dollar Benefit of U.S. Employment Service Placements and a Critique of the Low-Cutoff Method of Test Use. USES Test Research Report No. 47.

    ERIC Educational Resources Information Center

    Hunter, John E.

    The economic impact of optimal selection using ability tests is far higher than is commonly known. For small organizations, dollar savings from higher productivity can run into millions of dollars a year. This report estimates the potential savings to the Federal Government as an employer as being 15.61 billion dollars per year if tests were given…

  9. SUBSURFACE RESIDENCE TIMES AS AN ALGORITHM FOR AQUIFER SENSITIVITY MAPPING: TESTING THE CONCEPT WITH ANALYTIC ELEMENT GROUND WATER MODELS IN THE CONTENTNEA CREEK BASIN, NORTH CAROLINA, USA

    EPA Science Inventory

    The objective of this research is to test the utility of simple functions of spatially integrated and temporally averaged ground water residence times in shallow "groundwatersheds" with field observations and more detailed computer simulations. The residence time of water in the...

  10. Enhancing Orthographic Competencies and Reducing Domain-Specific Test Anxiety: The Systematic Use of Algorithmic and Self-Instructional Task Formats in Remedial Spelling Training

    ERIC Educational Resources Information Center

    Faber, Gunter

    2010-01-01

    In this study the effects of a remedial spelling training approach were evaluated, which systematically combines certain visualization and verbalization methods to foster students' spelling knowledge and strategy use. Several achievement and test anxiety data from three measurement times were analyzed. All students displayed severe spelling…

  11. Comparative analysis of PSO algorithms for PID controller tuning

    NASA Astrophysics Data System (ADS)

    Štimac, Goranka; Braut, Sanjin; Žigulić, Roberto

    2014-09-01

    The active magnetic bearing (AMB) suspends the rotating shaft and maintains it in a levitated position by applying controlled electromagnetic forces on the rotor in the radial and axial directions. Although the development of various control methods is rapid, the PID control strategy is still the most widely used in many applications, including AMBs. In order to tune the PID controller, a particle swarm optimization (PSO) method is applied. Therefore, a comparative analysis of particle swarm optimization (PSO) algorithms is carried out, where two PSO algorithms, namely (1) PSO with linearly decreasing inertia weight (LDW-PSO) and (2) PSO with a constriction factor approach (CFA-PSO), are independently tested for different PID structures. The computer simulations are carried out with the aim of minimizing the objective function defined as the integral of time multiplied by the absolute value of error (ITAE). In order to validate the performance of the analyzed PSO algorithms, one-axis and two-axis radial rotor/active magnetic bearing systems are examined. The results show that the PSO algorithms are effective and easily implemented methods, providing stable convergence and good computational efficiency for different PID structures for the rotor/AMB systems. Moreover, the PSO algorithms prove to be easy to use for controller tuning in the case of both SISO and MIMO systems, which consider the system delay and the interference among the horizontal and vertical rotor axes.
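
    The sketch below shows the LDW-PSO/ITAE combination on a deliberately simple first-order plant rather than a rotor/AMB model: the inertia weight decreases linearly, each particle encodes (Kp, Ki, Kd), and the fitness is the ITAE of a simulated step response. Plant, bounds, and PSO coefficients are illustrative choices, not values from the paper.

      # LDW-PSO tuning of PID gains by minimizing ITAE on a simulated step response.
      import numpy as np

      rng = np.random.default_rng(0)
      dt, t_end, tau = 0.002, 2.0, 0.05
      t_grid = np.arange(0.0, t_end, dt)

      def itae(gains):
          kp, ki, kd = gains
          x = integ = prev_e = 0.0
          cost = 0.0
          for t in t_grid:
              e = 1.0 - x                              # unit step setpoint
              integ += e * dt
              deriv = (e - prev_e) / dt
              u = kp * e + ki * integ + kd * deriv
              prev_e = e
              x += dt * (-x + u) / tau                 # first-order plant (stand-in for AMB)
              cost += t * abs(e) * dt                  # integral of time * |error|
          return cost

      n_particles, iters, dim = 20, 60, 3
      lo_b, hi_b = np.zeros(dim), np.array([20.0, 50.0, 1.0])
      pos = rng.uniform(lo_b, hi_b, (n_particles, dim))
      vel = np.zeros_like(pos)
      pbest, pbest_f = pos.copy(), np.array([itae(p) for p in pos])
      gbest = pbest[np.argmin(pbest_f)]

      for it in range(iters):
          w = 0.9 - 0.5 * it / (iters - 1)             # linearly decreasing inertia weight
          r1, r2 = rng.random((2, n_particles, dim))
          vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, lo_b, hi_b)
          f = np.array([itae(p) for p in pos])
          better = f < pbest_f
          pbest[better], pbest_f[better] = pos[better], f[better]
          gbest = pbest[np.argmin(pbest_f)]

      print(gbest, pbest_f.min())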

  12. A High Precision Terahertz Wave Image Reconstruction Algorithm

    PubMed Central

    Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang

    2016-01-01

    With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always supposed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices the quality of images for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, the features of which combine both of the classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performances of PMA are studied in detail in contrast to BPA and RMA. It is demonstrated in our simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269

  13. Algorithm to assess causality after individual adverse events following immunizations.

    PubMed

    Halsey, Neal A; Edwards, Kathryn M; Dekker, Cornelia L; Klein, Nicola P; Baxter, Roger; Larussa, Philip; Marchant, Colin; Slade, Barbara; Vellozzi, Claudia

    2012-08-24

    Assessing individual reports of adverse events following immunizations (AEFI) can be challenging. Most published reviews are based on expert opinions, but the methods and logic used to arrive at these opinions are neither well described nor understood by many health care providers and scientists. We developed a standardized algorithm to assist in collecting and interpreting data, and to help assess causality after individual AEFI. Key questions that should be asked during the assessment of AEFI include: Is the diagnosis of the AEFI correct? Does clinical or laboratory evidence exist that supports possible causes for the AEFI other than the vaccine in the affected individual? Is there a known causal association between the AEFI and the vaccine? Is there strong evidence against a causal association? Is there a specific laboratory test implicating the vaccine in the pathogenesis? An algorithm can assist with addressing these questions in a standardized, transparent manner which can be tracked and reassessed if additional information becomes available. Examples in this document illustrate the process of using the algorithm to determine causality. As new epidemiologic and clinical data become available, the algorithm and guidelines will need to be modified. Feedback from users of the algorithm will be invaluable in this process. We hope that this algorithm approach can assist with educational efforts to improve the collection of key information on AEFI and provide a platform for teaching about causality assessment. PMID:22507656

  14. A High Precision Terahertz Wave Image Reconstruction Algorithm.

    PubMed

    Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang

    2016-01-01

    With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always supposed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices the quality of images for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, the features of which combine both of the classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performances of PMA are studied in detail in contrast to BPA and RMA. It is demonstrated in our simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269

  15. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  16. Algorithm for navigated ESS.

    PubMed

    Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L

    2013-12-01

    ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving surgical outcome of the patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that the NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus not used on a daily basis. This paper presents an algorithm for use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has a shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically. PMID:24260766

  17. Short Time Exposure (STE) test in conjunction with Bovine Corneal Opacity and Permeability (BCOP) assay including histopathology to evaluate correspondence with the Globally Harmonized System (GHS) eye irritation classification of textile dyes.

    PubMed

    Oliveira, Gisele Augusto Rodrigues; Ducas, Rafael do Nascimento; Teixeira, Gabriel Campos; Batista, Aline Carvalho; Oliveira, Danielle Palma; Valadares, Marize Campos

    2015-09-01

    Eye irritation evaluation is mandatory for predicting health risks in consumers exposed to textile dyes. The two dyes, Reactive Orange 16 (RO16) and Reactive Green 19 (RG19), are classified as Category 2A (irritating to eyes) based on the UN Globally Harmonized System for classification (UN GHS), according to the Draize test. On the other hand, animal welfare considerations and the enforcement of a new regulation in the EU are drawing much attention in reducing or replacing animal experiments with alternative methods. This study evaluated the eye irritation of the two dyes RO16 and RG19 by combining the Short Time Exposure (STE) and the Bovine Corneal Opacity and Permeability (BCOP) assays and then comparing them with in vivo data from the GHS classification. The STE test (first level screening) categorized both dyes as GHS Category 1 (severe irritant). In the BCOP, dye RG19 was also classified as GHS Category 1, while dye RO16 was classified as GHS "no prediction can be made". Both dyes caused damage to the corneal tissue as confirmed by histopathological analysis. Our findings demonstrated that the STE test did not contribute to arriving at a better conclusion about the eye irritation potential of the dyes when used in conjunction with the BCOP test. Adding histopathology to the BCOP test could be an appropriate tool for a more meaningful prediction of the eye irritation potential of dyes. PMID:26026500

  18. Intra-and-Inter Species Biomass Prediction in a Plantation Forest: Testing the Utility of High Spatial Resolution Spaceborne Multispectral RapidEye Sensor and Advanced Machine Learning Algorithms

    PubMed Central

    Dube, Timothy; Mutanga, Onisimo; Adam, Elhadi; Ismail, Riyad

    2014-01-01

    The quantification of aboveground biomass using remote sensing is critical for better understanding the role of forests in carbon sequestration and for informed sustainable management. Although remote sensing techniques have been proven useful in assessing forest biomass in general, more is required to investigate their capabilities in predicting intra-and-inter species biomass which are mainly characterised by non-linear relationships. In this study, we tested two machine learning algorithms, Stochastic Gradient Boosting (SGB) and Random Forest (RF) regression trees to predict intra-and-inter species biomass using high resolution RapidEye reflectance bands as well as the derived vegetation indices in a commercial plantation. The results showed that the SGB algorithm yielded the best performance for intra-and-inter species biomass prediction; using all the predictor variables as well as based on the most important selected variables. For example using the most important variables the algorithm produced an R2 of 0.80 and RMSE of 16.93 t·ha−1 for E. grandis; R2 of 0.79, RMSE of 17.27 t·ha−1 for P. taeda and R2 of 0.61, RMSE of 43.39 t·ha−1 for the combined species data sets. Comparatively, RF yielded plausible results only for E. dunii (R2 of 0.79; RMSE of 7.18 t·ha−1). We demonstrated that although the two statistical methods were able to predict biomass accurately, RF produced weaker results as compared to SGB when applied to combined species dataset. The result underscores the relevance of stochastic models in predicting biomass drawn from different species and genera using the new generation high resolution RapidEye sensor with strategically positioned bands. PMID:25140631
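
    To make the comparison of the two learners concrete, the sketch below fits a stochastic gradient boosting model (gradient boosting with subsampling) and a random forest on synthetic regression data and reports R2 and RMSE; the synthetic predictors merely stand in for the RapidEye bands and vegetation indices, and the hyperparameters are illustrative.

      # Stochastic gradient boosting (subsample < 1) versus random forest regression.
      from sklearn.datasets import make_regression
      from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
      from sklearn.metrics import mean_squared_error, r2_score
      from sklearn.model_selection import train_test_split

      X, y = make_regression(n_samples=400, n_features=12, noise=20.0, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      models = {
          "SGB": GradientBoostingRegressor(subsample=0.5, n_estimators=500,
                                           learning_rate=0.05, random_state=0),
          "RF": RandomForestRegressor(n_estimators=500, random_state=0),
      }
      for name, model in models.items():
          pred = model.fit(X_tr, y_tr).predict(X_te)
          rmse = mean_squared_error(y_te, pred) ** 0.5
          print(name, round(r2_score(y_te, pred), 3), round(rmse, 2))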

  19. Algorithmic causets

    NASA Astrophysics Data System (ADS)

    Bolognesi, Tommaso

    2011-07-01

    In the context of quantum gravity theories, several researchers have proposed causal sets as appropriate discrete models of spacetime. We investigate families of causal sets obtained from two simple models of computation - 2D Turing machines and network mobile automata - that operate on 'high-dimensional' supports, namely 2D arrays of cells and planar graphs, respectively. We study a number of quantitative and qualitative emergent properties of these causal sets, including dimension, curvature and localized structures, or 'particles'. We show how the possibility to detect and separate particles from background space depends on the choice between a global or local view at the causal set. Finally, we spot very rare cases of pseudo-randomness, or deterministic chaos; these exhibit a spontaneous phenomenon of 'causal compartmentation' that appears as a prerequisite for the occurrence of anything of physical interest in the evolution of spacetime.

  20. Test plan: Sealing of the Disturbed Rock Zone (DRZ), including Marker Bed 139 (MB139) and the overlying halite, below the repository horizon, at the Waste Isolation Pilot Plant. [Cementitious grout into fractured WIPP rock]

    SciTech Connect

    Ahrens, E.H.

    1992-05-01

    This test plan describes activities intended to demonstrate equipment and techniques for producing, injecting, and evaluating microfine cementitious grout. The grout will be injected in fractured rock located below the repository horizon at the Waste Isolation Pilot Plant (WIPP). These data are intended to support the development of the Alcove Gas Barrier System (AGBS), the design of upcoming, large-scale seal tests, and ongoing laboratory evaluations of grouting efficacy. Degradation of the grout will be studied in experiments conducted in parallel with the underground grouting experiment.