Science.gov

Sample records for algorithmic procedure including

  1. In-Trail Procedure (ITP) Algorithm Design

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high-level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.

  2. Internal labelling problem: an algorithmic procedure

    NASA Astrophysics Data System (ADS)

    Campoamor-Stursberg, Rutwig

    2011-01-01

    Combining the decomposition of Casimir operators induced by the embedding of a subalgebra into a semisimple Lie algebra with the properties of commutators of subgroup scalars, an analytical algorithm for the computation of missing label operators with the commutativity requirement is proposed. Two new criteria for subgroup scalars to commute are given. The algorithm is completed with a recursive method to construct orthonormal bases of states. As examples to illustrate the procedure, four labelling problems are explicitly studied.

  3. Using an admittance algorithm for bone drilling procedures.

    PubMed

    Accini, Fernando; Díaz, Iñaki; Gil, Jorge Juan

    2016-01-01

    Bone drilling is a common procedure in many types of surgeries, including orthopedic, neurological and otologic surgeries. Several technologies and control algorithms have been developed to help the surgeon automatically stop the drill before it goes through the boundary of the tissue being drilled. However, most of them rely on thrust force and cutting torque to detect bone layer transitions, an approach with many drawbacks that affect the reliability of the process. This paper describes in detail a bone-drilling algorithm based only on the position control of the drill bit that overcomes such problems and presents additional advantages. The implication of each component of the algorithm in the drilling procedure is analyzed and the efficacy of the algorithm is experimentally validated with two types of bones.

  4. A computational procedure for multibody systems including flexible beam dynamics

    NASA Technical Reports Server (NTRS)

    Downer, J. D.; Park, K. C.; Chiou, J. C.

    1990-01-01

    A computational procedure suitable for the solution of equations of motion for flexible multibody systems has been developed. The flexible beams are modeled using a fully nonlinear theory which accounts for both finite rotations and large deformations. The present formulation incorporates physical measures of conjugate Cauchy stress and covariant strain increments. As a consequence, the beam model can easily be interfaced with real-time strain measurements and feedback control systems. A distinct feature of the present work is the computational preservation of total energy for undamped systems; this is obtained via an objective strain increment/stress update procedure combined with an energy-conserving time integration algorithm which contains an accurate update of angular orientations. The procedure is demonstrated via several example problems.

  5. Simulation of Accident Sequences Including Emergency Operating Procedures

    SciTech Connect

    Queral, Cesar; Exposito, Antonio; Hortal, Javier

    2004-07-01

    Operator actions play an important role in accident sequences. However, design analysis (Safety Analysis Report, SAR) seldom includes consideration of operator actions, although operators are required by compulsory Emergency Operating Procedures (EOP) to perform some checks and actions from the very beginning of an accident. The basic aim of the project is to develop a procedure validation system consisting of three elements: the plant transient simulation code TRETA (a C-based modular program) developed by the CSN; the computerized procedure system COPMA-III (a Java-based program) developed by the OECD-Halden Reactor Project and adapted for simulation with the contribution of our group; and a software interface that provides the communication between COPMA-III and TRETA. The new combined system is going to be applied in a pilot study in order to analyze sequences initiated by secondary side breaks in a pressurized water reactor (PWR) plant.

  6. Advances in pleural disease management including updated procedural coding.

    PubMed

    Haas, Andrew R; Sterman, Daniel H

    2014-08-01

    Over 1.5 million pleural effusions occur in the United States every year as a consequence of a variety of inflammatory, infectious, and malignant conditions. Although rarely fatal in isolation, pleural effusions are often a marker of a serious underlying medical condition and contribute to significant patient morbidity, quality-of-life reduction, and mortality. Pleural effusion management centers on pleural fluid drainage to relieve symptoms and to investigate pleural fluid accumulation etiology. Many recent studies have demonstrated important advances in pleural disease management for a variety of pleural fluid etiologies, including malignant pleural effusion and complicated parapneumonic effusion and empyema, as well as practical questions such as chest tube size. The last decade has seen greater implementation of real-time imaging assistance for pleural effusion management and increasing use of smaller bore percutaneous chest tubes. This article will briefly review recent pleural effusion management literature and update the latest changes in common procedural terminology billing codes as reflected in the changing landscape of imaging use and percutaneous approaches to pleural disease management.

  7. Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms

    PubMed Central

    Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas

    2016-01-01

    Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context specific reconstruction based on generic genome scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility to distinguish between the signal and the background of non-specific binding of probes in a microarray experiment, and whether distinct sets of input expressed genes corresponding to, for example, different tissues yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640

  8. 78 FR 57639 - Request for Comments on Pediatric Planned Procedure Algorithm

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-19

    ... Procedure Algorithm AGENCY: Agency for Healthcare Research and Quality (AHRQ), HHS. ACTION: Notice of request for comments on pediatric planned procedure algorithm from the members of the public. SUMMARY... from the public on an algorithm for identifying pediatric planned procedures as part of the...

  9. Substructure procedure for including tile flexibility in stress analysis of shuttle thermal protection system

    NASA Technical Reports Server (NTRS)

    Giles, G. L.

    1980-01-01

    A substructure procedure to include the flexibility of the tile in the stress analysis of the shuttle thermal protection system (TPS) is described. In this procedure, the TPS is divided into substructures of (1) the tile, which is modeled by linear finite elements, and (2) the strain isolation pad (SIP), which is modeled as a nonlinear continuum. This procedure was applied for loading cases of uniform pressure, uniform moment, and an aerodynamic shock on various tile thicknesses. The ratios of through-the-thickness stresses in the SIP which were calculated using a flexible tile compared to using a rigid tile were found to be less than 1.05 for the cases considered.

  10. A Re-Usable Algorithm for Teaching Procedural Skills.

    ERIC Educational Resources Information Center

    Jones, Mark K.; And Others

    The design of a re-usable instructional algorithm for computer-based instruction (CBI) is described. The prototype is implemented on IBM PC compatibles running the Windows(TM) graphical environment, using the prototyping tool ToolBook(TM). The algorithm is designed to reduce development and life cycle costs for CBI by providing an authoring…

  11. Should Title 24 Ventilation Requirements Be Amended to include an Indoor Air Quality Procedure?

    SciTech Connect

    Dutton, Spencer M.; Mendell, Mark J.; Chan, Wanyu R.

    2013-05-13

    Minimum outdoor air ventilation rates (VRs) for buildings are specified in standards, including California's Title 24 standards. The ASHRAE ventilation standard includes two options for mechanically ventilated buildings: a prescriptive ventilation rate procedure (VRP) that specifies minimum VRs that vary among occupancy classes, and a performance-based indoor air quality procedure (IAQP) that may result in lower VRs than the VRP, with associated energy savings, if IAQ meeting specified criteria can be demonstrated. The California Energy Commission has been considering the addition of an IAQP to the Title 24 standards. This paper, based on a review of prior data and new analyses of the IAQP, evaluates four future options for Title 24: no IAQP; adding an alternate VRP; adding an equivalent indoor air quality procedure (EIAQP); and adding an improved ASHRAE-like IAQP. Criteria were established for selecting among options, and feedback was obtained in a workshop of stakeholders. Based on this review, the addition of an alternate VRP is recommended. This procedure would allow lower minimum VRs if a specified set of actions were taken to maintain acceptable IAQ. An alternate VRP could also be a valuable supplement to ASHRAE's ventilation standard.

  12. Procedures for Including Secondary Electron Emission in Numerical Simulations of Plasma-Insulator Interactions

    NASA Technical Reports Server (NTRS)

    Beyst, Brian; Rezvani, Ali; Young, Bin; Friauf, Robert J.

    1991-01-01

    Previous Monte Carlo simulations provide a database for properties of secondary electron emission (SEE) from insulators and metals. Incident primary electrons are considered at energies up to 1200 eV. The behavior of secondary electrons is characterized by (1) yield vs. primary energy E_p, (2) distribution vs. secondary energy E_s, and (3) distribution vs. angle of emission theta. Special attention is paid to the low energy range E_p up to 50 eV, where the number and energy of secondary electrons are limited by the finite band gap of the insulator. For primary energies above 50 eV, the SEE yield curve can be conveniently parameterized by a Haffner formula. The energy distribution of secondary electrons is described by an empirical formula with an average energy of about 8.0 eV. The angular distribution of secondaries is slightly more peaked in the forward direction than the customary cos theta distribution. Empirical formulas and parameters are given for all yield and distribution curves. Procedures and algorithms are described for using these results to find the SEE yield, and then to choose the energy and angle of emergence of each secondary electron. These procedures can readily be incorporated into numerical simulations of plasma-solid surface interactions in low Earth orbit.
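
    A minimal sketch of how such tabulated SEE properties can be sampled inside a particle simulation. The Poisson draw for the number of secondaries, the exponential energy spectrum with an 8 eV mean, and the cos^n angular law (n slightly above 1) are illustrative assumptions standing in for the report's fitted Haffner formula and empirical distributions; yield_curve is a placeholder for any fitted yield curve.

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_secondaries(e_primary, yield_curve, n_exp=1.2, e_mean=8.0):
            # Number of secondaries for one primary impact: Poisson around the
            # mean yield delta(E_p) given by the fitted yield curve.
            n_sec = rng.poisson(yield_curve(e_primary))
            # Assumed spectral form: exponential with ~8 eV mean (a stand-in
            # for the report's empirical energy distribution).
            energies = rng.exponential(e_mean, n_sec)
            # Emission angle from pdf proportional to cos^n(theta)*sin(theta);
            # n_exp > 1 gives the slight forward peaking noted above.
            u = rng.random(n_sec)
            theta = np.arccos((1.0 - u) ** (1.0 / (n_exp + 1.0)))
            phi = 2.0 * np.pi * rng.random(n_sec)  # azimuth is uniform
            return energies, theta, phi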

  13. A Procedure for Empirical Initialization of Adaptive Testing Algorithms.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    In constrained adaptive testing, the numbers of constraints needed to control the content of the tests can easily run into the hundreds. Proper initialization of the algorithm becomes a requirement because the presence of large numbers of constraints slows down the convergence of the ability estimator. In this paper, an empirical initialization of…

  14. An Evaluation of a Flight Deck Interval Management Algorithm Including Delayed Target Trajectories

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Underwood, Matthew C.; Barmore, Bryan; Leonard, Robert D.

    2014-01-01

    NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature air traffic management technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise in-trail spacing. During high demand operations, TMA-TM may produce a schedule and corresponding aircraft trajectories that include delay to ensure that a particular aircraft will be properly spaced from other aircraft at each schedule waypoint. These delayed trajectories are not communicated to the automation onboard the aircraft, forcing the IM aircraft to use the published speeds to estimate the target aircraft's time of arrival. As a result, the aircraft performing IM operations may follow an aircraft whose TMA-TM generated trajectories have substantial speed deviations from the speeds expected by the spacing algorithm. Previous spacing algorithms were not designed to handle this magnitude of uncertainty. A simulation was conducted to examine a modified spacing algorithm with the ability to follow aircraft flying delayed trajectories. The simulation investigated the use of the new spacing algorithm with various delayed speed profiles and wind conditions, as well as several other variables designed to simulate real-life variability. The results and conclusions of this study indicate that the new spacing algorithm generally exhibits good performance; however, some types of target aircraft speed profiles can cause the spacing algorithm to command less than optimal speed control behavior.

  15. Clustering algorithm evaluation and the development of a replacement for procedure 1. [for crop inventories

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Johnson, J. K.

    1979-01-01

    An efficient procedure is developed that clusters data using a completely unsupervised clustering algorithm and then uses labeled pixels either to label the resulting clusters or to perform a stratified estimate using the clusters as strata. Three clustering algorithms, CLASSY, AMOEBA, and ISOCLS, are compared for efficiency. Three stratified estimation schemes and three labeling schemes are also considered and compared.
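
    A compact illustration of the cluster-then-label and stratified-estimation ideas, with scikit-learn's KMeans standing in for CLASSY, AMOEBA, or ISOCLS, and a binary crop label (y == 1) as a hypothetical convention.

        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_then_estimate(X_all, X_labeled, y_labeled, n_clusters=8):
            km = KMeans(n_clusters=n_clusters, n_init=10).fit(X_all)
            strata_all = km.labels_                 # cluster id of every pixel
            strata_lab = km.predict(X_labeled)      # cluster id of labeled pixels
            cluster_label, p_crop = {}, 0.0
            for h in range(n_clusters):
                in_h = y_labeled[strata_lab == h]
                if in_h.size:
                    vals, counts = np.unique(in_h, return_counts=True)
                    cluster_label[h] = vals[np.argmax(counts)]  # majority vote
                    p_h = np.mean(in_h == 1)        # crop proportion in stratum h
                else:
                    cluster_label[h], p_h = None, 0.0
                p_crop += np.mean(strata_all == h) * p_h  # weight = stratum size
            return cluster_label, p_crop  # cluster labels and stratified estimate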

  16. Algorithms and Programs for Strong Gravitational Lensing in Kerr Space-time Including Polarization

    SciTech Connect

    Chen, Bin; Maddumage, Prasad; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie

    2015-05-15

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphical user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity-related computations using interpreted languages such as MATLAB and Python.

  17. Algorithms and Programs for Strong Gravitational Lensing In Kerr Space-time Including Polarization

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie; Maddumage, Prasad

    2015-05-01

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphical user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity-related computations using interpreted languages such as MATLAB and Python.

  18. Best Estimate Radiation Flux Value-Added Procedure. Algorithm Operational Details and Explanations

    SciTech Connect

    Shi, Y.; Long, C. N.

    2002-10-01

    This document describes some specifics of the algorithm for best estimate evaluation of radiation fluxes at Southern Great Plains (SGP) Central Facility (CF). It uses the data available from the three co-located surface radiometer platforms at the SGP CF to automatically determine the best estimate of the irradiance measurements available. The Best Estimate Flux (BEFlux) value-added procedure (VAP) was previously named Best Estimate ShortWave (BESW) VAP, which included all of the broadband and spectral shortwave (SW) measurements for the SGP CF. In BESW, multiple measurements of the same quantities were handled simply by designating one as the primary measurement and using all others to merely fill in any gaps. Thus, this “BESW” is better termed “most continuous,” since no additional quality assessment was applied. We modified the algorithm in BESW to use the average of the closest two measurements as the best estimate when possible, if these measurements pass all quality assessment criteria. Furthermore, we included longwave (LW) fields in the best estimate evaluation to include all major components of the surface radiative energy budget, and renamed the VAP to Best Estimate Flux (BEFLUX1LONG).
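
    A minimal sketch of the "average of the closest two" rule described above, assuming three time-aligned radiometer series with precomputed per-instrument QC flags; the max_diff guard is an added illustrative parameter, not part of the VAP.

        import numpy as np

        def best_estimate(m1, m2, m3, qc1, qc2, qc3, max_diff=None):
            m = np.vstack([m1, m2, m3]).astype(float)
            qc = np.vstack([qc1, qc2, qc3])          # True where QC passed
            best = np.full(m.shape[1], np.nan)
            for t in range(m.shape[1]):
                good = np.flatnonzero(qc[:, t])
                if good.size >= 2:
                    # average the pair of good instruments that agree best
                    d = [(abs(m[i, t] - m[j, t]), i, j)
                         for k, i in enumerate(good) for j in good[k + 1:]]
                    diff, i, j = min(d)
                    if max_diff is None or diff <= max_diff:
                        best[t] = 0.5 * (m[i, t] + m[j, t])
                elif good.size == 1:
                    best[t] = m[good[0], t]          # "most continuous" fallback
            return best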

  19. Best Estimate Radiation Flux Value-Added Procedure: Algorithm Operational Details and Explanations

    SciTech Connect

    Shi, Y.; Long, C. N.

    2002-10-01

    This document describes some specifics of the algorithm for best estimate evaluation of radiation fluxes at Southern Great Plains (SGP) Central Facility (CF). It uses the data available from the three co-located surface radiometer platforms at the SGP CF to automatically determine the best estimate of the irradiance measurements available. The Best Estimate Flux (BEFlux) value-added procedure (VAP) was previously named Best Estimate ShortWave (BESW) VAP, which included all of the broadband and spectral shortwave (SW) measurements for the SGP CF. In BESW, multiple measurements of the same quantities were handled simply by designating one as the primary measurement and using all others to merely fill in any gaps. Thus, this “BESW” is better termed “most continuous,” since no additional quality assessment was applied. We modified the algorithm in BESW to use the average of the closest two measurements as the best estimate when possible, if these measurements pass all quality assessment criteria. Furthermore, we included longwave (LW) fields in the best estimate evaluation to include all major components of the surface radiative energy budget, and renamed the VAP to Best Estimate Flux (BEFLUX1LONG).

  20. 34 CFR 222.94 - What provisions must be included in a local educational agency's Indian policies and procedures?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 1 2012-07-01 2012-07-01 false What provisions must be included in a local educational agency's Indian policies and procedures? 222.94 Section 222.94 Education Regulations of the Offices of... Indian Lands Indian Policies and Procedures § 222.94 What provisions must be included in a...

  21. Ohio Guidelines for the Identification of Children with Specific Learning Disabilities (Including Differentiated Referral Procedures).

    ERIC Educational Resources Information Center

    Cuyahoga Special Education Service Center, Maple Heights, OH.

    The guidelines focus on procedures for determining eligibility for services of children with specific learning disabilities. A 13-step process is delineated from the classroom teacher's response to individual learner needs through multifactored evaluation team function to annual review and reevaluation. Throughout the process, special emphasis is…

  22. Why McNemar's Procedure Needs to Be Included in the Business Statistics Curriculum

    ERIC Educational Resources Information Center

    Berenson, Mark L.; Koppel, Nicole B.

    2005-01-01

    In business research situations it is often of interest to examine the differences in the responses in repeated measurements of the same subjects or from among matched or paired subjects. A simple and useful procedure for comparing differences between proportions in two related samples was devised by McNemar (1947) nearly 60 years ago. Although…

  23. An enhanced bacterial foraging algorithm approach for optimal power flow problem including FACTS devices considering system loadability.

    PubMed

    Belwin Edward, J; Rajasekar, N; Sathiyasekar, K; Senthilnathan, N; Sarjila, R

    2013-09-01

    Obtaining an optimal power flow (OPF) solution is a strenuous task for any power system engineer, and the inclusion of FACTS devices in the power system network adds to its complexity. The dual objective of OPF, fuel cost minimization along with FACTS device location, is considered for the IEEE 30-bus system and solved using the proposed Enhanced Bacterial Foraging Algorithm (EBFA). The conventional Bacterial Foraging Algorithm (BFA) has the difficulty of optimal parameter selection; hence, in this paper, BFA is enhanced with the Nelder-Mead (NM) algorithm for better performance. A MATLAB code for EBFA is developed and the problem of optimal power flow with inclusion of FACTS devices is solved. After several runs with different initial values, it is found that the inclusion of FACTS devices such as SVC and TCSC in the network reduces the generation cost and increases voltage stability limits. It is also observed that the proposed algorithm requires less computational time than earlier algorithms.
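
    A toy sketch of the enhancement idea: a plain bacterial-foraging chemotaxis loop whose best member seeds scipy's Nelder-Mead for local refinement. The cost callable and all parameters are placeholders; the actual objective is the OPF fuel cost over SVC/TCSC placement variables with network constraints.

        import numpy as np
        from scipy.optimize import minimize

        def ebfa_sketch(cost, dim, n_bact=20, n_chem=30, n_swim=4, step=0.1, seed=0):
            rng = np.random.default_rng(seed)
            pop = rng.uniform(-1.0, 1.0, (n_bact, dim))   # bacteria positions
            fit = np.array([cost(p) for p in pop])
            for _ in range(n_chem):
                for i in range(n_bact):
                    d = rng.normal(size=dim)
                    d /= np.linalg.norm(d)        # tumble: random unit direction
                    for _ in range(n_swim):       # swim while the move improves
                        trial = pop[i] + step * d
                        f = cost(trial)
                        if f >= fit[i]:
                            break
                        pop[i], fit[i] = trial, f
            best = pop[np.argmin(fit)]
            res = minimize(cost, best, method="Nelder-Mead")  # the NM "enhancement"
            return res.x, res.fun

        # e.g. ebfa_sketch(lambda x: np.sum(x**2), dim=4) homes in on the origin.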

  24. 77 FR 73053 - Comment Request for Information Collection on Administrative Procedures Including Form MA 8-7...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-07

    ... Procedures Including Form MA 8-7, Extension Without Revisions AGENCY: Employment and Training Administration... collection of data consistent with 20 CFR 601, including Form MA 8-7, which expires June 30, 2013. DATES.... The information transmitted by Form MA 8-7 is used by the Secretary to make findings (as specified...

  25. 45 CFR 309.105 - What procedures governing child support guidelines must a Tribe or Tribal organization include in...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... governing child support guidelines must a Tribe or Tribal organization include in a Tribal IV-D plan? (a) A... 45 Public Welfare 2 2013-10-01 2012-10-01 true What procedures governing child support guidelines must a Tribe or Tribal organization include in a Tribal IV-D plan? 309.105 Section 309.105...

  26. 45 CFR 309.105 - What procedures governing child support guidelines must a Tribe or Tribal organization include in...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... governing child support guidelines must a Tribe or Tribal organization include in a Tribal IV-D plan? (a) A... 45 Public Welfare 2 2012-10-01 2012-10-01 false What procedures governing child support guidelines must a Tribe or Tribal organization include in a Tribal IV-D plan? 309.105 Section 309.105...

  27. 45 CFR 309.105 - What procedures governing child support guidelines must a Tribe or Tribal organization include in...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... governing child support guidelines must a Tribe or Tribal organization include in a Tribal IV-D plan? (a) A... 45 Public Welfare 2 2014-10-01 2012-10-01 true What procedures governing child support guidelines must a Tribe or Tribal organization include in a Tribal IV-D plan? 309.105 Section 309.105...

  28. 45 CFR 309.105 - What procedures governing child support guidelines must a Tribe or Tribal organization include in...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... governing child support guidelines must a Tribe or Tribal organization include in a Tribal IV-D plan? (a) A... 45 Public Welfare 2 2011-10-01 2011-10-01 false What procedures governing child support guidelines must a Tribe or Tribal organization include in a Tribal IV-D plan? 309.105 Section 309.105...

  29. A simple procedure to include a free-form measurement capability to standard coordinate measurement machines

    NASA Astrophysics Data System (ADS)

    Schneider, Florian; Rascher, Rolf; Stamp, Richard; Smith, Gordon

    2013-09-01

    The modern optical industry requires objects with complex topographical structures. Free-form shaped objects are of large interest in many branches, especially for size-reduced, modern lifestyle products like digital cameras. State-of-the-art multi-axis coordinate measurement machines (CMMs), like the topographical measurement machine TII-3D, are in principle suitable for measuring free-form shaped objects; the only limitation is the software package. This paper illustrates a simple way to enhance coordinate measurement machines with a free-form measurement function. Besides a coordinate measurement machine, only a state-of-the-art CAD system and a simple piece of software are necessary. For this paper, the CAD software CREO was used. CREO enables the user to develop a 3D object in two different ways. With the first method, the user designs the shape by drawing one or more 2D sketches and putting an envelope around them. With the second method, the user defines one or more formulas in the editor to describe the favoured surface. Both procedures lead to the required three-dimensional shape. Further features of CREO enable the user to export the XYZ coordinates of the created surface. A specially designed software tool, developed with Matlab, converts the XYZ file into a measurement matrix which can be used as a reference file. Finally, the result of the free-form measurement, carried out with a CMM, is loaded into the software tool and both files are computed. The result is an error profile which gives the deviation between the measurement and the target geometry.
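
    A minimal sketch of the final comparison step, assuming the CMM measurement and the CAD-exported reference are both available as N x 3 XYZ arrays; the file names in the usage comment are hypothetical.

        import numpy as np
        from scipy.interpolate import griddata

        def error_profile(meas_xyz, ref_xyz):
            # Interpolate the reference surface z onto the measured (x, y)
            # positions, then subtract: positive = material above target.
            z_ref = griddata(ref_xyz[:, :2], ref_xyz[:, 2],
                             meas_xyz[:, :2], method="cubic")
            return meas_xyz[:, 2] - z_ref

        # meas = np.loadtxt("measurement.xyz"); ref = np.loadtxt("reference.xyz")
        # dev = error_profile(meas, ref)
        # print("PV error:", np.nanmax(dev) - np.nanmin(dev))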

  30. A Review of Optimisation Techniques for Layered Radar Materials Including the Genetic Algorithm

    DTIC Science & Technology

    2004-11-01

    Fragments extracted from the report (DRDC Atlantic TM 2004-260): "4.1.4 Optimisation of Jaumann Layers: Other Methods (Finite Element, FDTD and Taguchi Methods)"; "…that the performance of these devices is not limited by resonant behaviour. The Taguchi method of optimization was used as a means of exploring…"

  31. 32 CFR Appendix B to Part 80 - Procedures for Special Educational Programs (Including Related Services) for Preschool Children...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (Including Related Services) for Preschool Children and Children With Disabilities (3-21 years Inclusive) B Appendix B to Part 80 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE PERSONNEL... ARRANGEMENTS Pt. 80, App. B Appendix B to Part 80—Procedures for Special Educational Programs...

  32. 34 CFR 222.94 - What provisions must be included in a local educational agency's Indian policies and procedures?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false What provisions must be included in a local educational agency's Indian policies and procedures? 222.94 Section 222.94 Education Regulations of the Offices of the Department of Education OFFICE OF ELEMENTARY AND SECONDARY EDUCATION, DEPARTMENT OF...

  33. Learning algorithm in restricted Boltzmann machines using Kullback-Leibler importance estimation procedure

    NASA Astrophysics Data System (ADS)

    Yasuda, Muneki; Sakurai, Tetsuharu; Tanaka, Kazuyuki

    Restricted Boltzmann machines (RBMs) are bipartite structured statistical neural networks consisting of two layers, a layer of visible units and a layer of hidden units; within each layer, units are not connected to each other. RBMs have high flexibility and rich structure and are expected to be applied to various applications, for example, image and pattern recognition, face detection, and so on. However, most computational models in RBMs are intractable and often belong to the class of NP-hard problems. In this paper, in order to construct a practical learning algorithm for them, we apply the Kullback-Leibler Importance Estimation Procedure (KLIEP) to RBMs, and give a new scheme for a practical approximate learning algorithm for RBMs based on the KLIEP.
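
    For readers unfamiliar with KLIEP, a self-contained sketch of the generic procedure (Gaussian-kernel density-ratio estimation by projected gradient ascent). The paper's contribution is plugging such an estimator into RBM learning, which is not reproduced here; the kernel width, step size, and iteration count are illustrative.

        import numpy as np

        def kliep(x_num, x_den, sigma=1.0, n_iter=200, lr=1e-3):
            # Estimate w(x) ~ p_num(x) / p_den(x) with a kernel model
            # w(x) = sum_l alpha_l K(x, x_num[l]).
            def kern(a, b):
                d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
                return np.exp(-d2 / (2.0 * sigma ** 2))
            K_num = kern(x_num, x_num)            # centers at numerator samples
            K_den = kern(x_den, x_num)
            alpha = np.full(len(x_num), 1.0 / len(x_num))
            for _ in range(n_iter):
                w_num = K_num @ alpha + 1e-12
                alpha = alpha + lr * (K_num.T @ (1.0 / w_num))  # ascend sum log w
                alpha = np.maximum(alpha, 0.0)                  # nonnegativity
                alpha /= np.mean(K_den @ alpha)   # unit mean under denominator
            return lambda x: kern(x, x_num) @ alpha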

  34. Cassini VIMS observations of the Galilean satellites including the VIMS calibration procedure

    USGS Publications Warehouse

    McCord, T.B.; Coradini, A.; Hibbitts, C.A.; Capaccioni, F.; Hansen, G.B.; Filacchione, G.; Clark, R.N.; Cerroni, P.; Brown, R.H.; Baines, K.H.; Bellucci, G.; Bibring, J.-P.; Buratti, B.J.; Bussoletti, E.; Combes, M.; Cruikshank, D.P.; Drossart, P.; Formisano, V.; Jaumann, R.; Langevin, Y.; Matson, D.L.; Nelson, R.M.; Nicholson, P.D.; Sicardy, B.; Sotin, C.

    2004-01-01

    The Visual and Infrared Mapping Spectrometer (VIMS) observed the Galilean satellites during the Cassini spacecraft's 2000/2001 flyby of Jupiter, providing compositional and thermal information about their surfaces. The Cassini spacecraft approached the jovian system no closer than about 126 Jupiter radii, about 9 million kilometers, at a phase angle of less than 90°, resulting in only sub-pixel observations by VIMS of the Galilean satellites. Nevertheless, most of the spectral features discovered by the Near Infrared Mapping Spectrometer (NIMS) aboard the Galileo spacecraft during more than four years of observations have been identified in the VIMS data analyzed so far, including a possible 13C absorption. In addition, VIMS made observations in the visible part of the spectrum and at several new phase angles for all the Galilean satellites, and the calculated phase functions are presented. In the process of analyzing these data, the VIMS radiometric and spectral calibrations were better determined in preparation for entry into the Saturn system. Treatment of these data is presented as an example of the VIMS data reduction, calibration, and analysis process, and a detailed explanation is given of the calibration process applied to the Jupiter data.

  35. A procedure for testing the quality of LANDSAT atmospheric correction algorithms

    NASA Technical Reports Server (NTRS)

    Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.

    1982-01-01

    There are two basic methods for testing the quality of an algorithm to minimize atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, initially the image contrast is examined for a series of parameter combinations; the contrast improves for better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably in time. A few examples using this proposed procedure are presented.
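
    A schematic of the parameter-selection loop described above; correct(img, p) stands for the atmospheric-correction routine under test, and scoring by contrast first and temporal correlation second is a simplification of the paper's procedure.

        import numpy as np

        def select_params(correct, img_t1, img_t2, param_grid):
            scores = []
            for p in param_grid:
                c1, c2 = correct(img_t1, p), correct(img_t2, p)
                contrast = c1.std()             # improves with better correction
                r = np.corrcoef(c1.ravel(), c2.ravel())[0, 1]  # temporal correlation
                scores.append((contrast, r, p))
            # favor corrections that raise contrast, breaking ties by correlation
            return max(scores, key=lambda s: (s[0], s[1]))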

  36. A Boundary Condition Relaxation Algorithm for Strongly Coupled, Ablating Flows Including Shape Change

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Johnston, Christopher O.

    2011-01-01

    Implementations of a model for equilibrium, steady-state ablation boundary conditions are tested for the purpose of providing strong coupling with a hypersonic flow solver. The objective is to remove correction factors or film cooling approximations that are usually applied in coupled implementations of the flow solver and the ablation response. Three test cases are considered: the IRV-2, the Galileo probe, and a notional slender, blunted cone launched at 10 km/s from the Earth's surface. A successive substitution scheme is employed, and the order of succession is varied as a function of surface temperature to obtain converged solutions. The implementation is tested on a specified trajectory for the IRV-2 to compute shape change under the approximation of steady-state ablation. Issues associated with stability of the shape change algorithm caused by explicit time step limits are also discussed.

  37. A Novel Spectral Data Processing Procedure on Multi-Object Fiber Spectral Data Based on 2-D Algorithms

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Ye, Z. F.; Xu, X.

    2016-01-01

    The data processing procedures currently used by most multi-object fiber spectroscopic telescopes, such as the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), the Sloan Digital Sky Survey (SDSS), and the Anglo-Australian Telescope (AAT), are based on one-dimensional (1-D) algorithms. In this paper, LAMOST is taken as an example to illustrate the proposed multi-object fiber spectral data processing procedure. In the processing procedure currently used on LAMOST, after the pretreatment process, the two-dimensional (2-D) observed raw data are extracted into 1-D intermediate data based on a simple 1-D model, and the subsequent key steps are all performed by 1-D algorithms. However, this processing procedure does not accord with the formation mechanism of the observed spectra and therefore introduces considerable error at each step. To solve this problem, we propose a novel processing procedure that has not been used on LAMOST or other telescopes. The modules of the procedure are reordered, and the main steps are all based on 2-D algorithms. The principles of the core algorithms are explained in detail, and partial experimental results are shown to demonstrate the effectiveness and superiority of the 2-D algorithms.

  38. BROMOCEA Code: An Improved Grand Canonical Monte Carlo/Brownian Dynamics Algorithm Including Explicit Atoms.

    PubMed

    Solano, Carlos J F; Pothula, Karunakar R; Prajapati, Jigneshkumar D; De Biase, Pablo M; Noskov, Sergei Yu; Kleinekathöfer, Ulrich

    2016-05-10

    All-atom molecular dynamics simulations have a long history of applications studying ion and substrate permeation across biological and artificial pores. While offering unprecedented insights into the underpinning transport processes, MD simulations are limited in time-scales and in their ability to simulate physiological membrane potentials or asymmetric salt solutions, and they require substantial computational power. While several approaches to circumvent all of these limitations have been developed, Brownian dynamics simulations remain an attractive option for the field. The main limitation, however, is an apparent lack of protein flexibility, which is important for the accurate description of permeation events. In the present contribution, we report an extension of the Brownian dynamics scheme which includes conformational dynamics. To achieve this goal, the dynamics of amino-acid residues was incorporated into the many-body potential of mean force and into the Langevin equations of motion. The developed software solution, called BROMOCEA, was applied to ion transport through OmpC as a test case. Compared to fully atomistic simulations, the results show a clear improvement in the ratio of permeating anions and cations. The present tests strongly indicate that pore flexibility can enhance permeation properties, which will become even more important in future applications to substrate translocation.

  39. A Procedure to Determine the Coordinated Chromium and Calcium Isotopic Composition of Astromaterials Including the Chelyabinsk Meteorite

    NASA Technical Reports Server (NTRS)

    Tappa, M. J.; Mills, R. D.; Ware, B.; Simon, J. I.

    2014-01-01

    The isotopic compositions of elements are often used to characterize nucleosynthetic contributions to early Solar System objects. Coordinated measurements of multiple middle-mass elements with differing volatilities may provide information regarding the location of condensation of early Solar System solids. Here we detail new procedures that we have developed to make high-precision multi-isotope measurements of chromium and calcium using thermal ionization mass spectrometry, and we characterize a suite of chondritic and terrestrial material including two fragments of the Chelyabinsk LL-chondrite.

  40. A cross-validation procedure for stopping the EM algorithm and deconvolution of neutron depth profiling spectra

    SciTech Connect

    Coakley, K.J. )

    1991-02-01

    The iterative EM algorithm is used to deconvolve neutron depth profiling spectra. Because of statistical noise in the data, artifacts in the estimated particle emission rate profile appear after too many iterations of the EM algorithm. To avoid artifacts, the EM algorithm is stopped using a cross-validation procedure. The data are split into two independent halves. The EM algorithm is applied to one half of the data to get an estimate of the emission rates. The algorithm is stopped when the conditional likelihood of the other half of the data passes through its maximum. The roles of the two halves of the data are then switched to get a second estimate of the emission rates. The two estimates are then averaged.
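
    A minimal sketch of this stopping rule for Poisson data y ~ Poisson(Ax), using the standard EM (Richardson-Lucy) update; binomial thinning gives two statistically independent halves of the counts.

        import numpy as np

        def em_deconvolve_cv(y, A, n_max=500, seed=0):
            rng = np.random.default_rng(seed)
            y = np.asarray(y, dtype=int)
            y1 = rng.binomial(y, 0.5)            # split the counts 50/50
            y2 = y - y1
            halves = []
            for fit, held in ((y1, y2), (y2, y1)):
                x = np.full(A.shape[1], fit.sum() / A.shape[1])
                best_ll, best_x = -np.inf, x.copy()
                for _ in range(n_max):
                    # EM (Richardson-Lucy) multiplicative update
                    x = x * (A.T @ (fit / np.maximum(A @ x, 1e-12))) / A.sum(axis=0)
                    mu = np.maximum(A @ x, 1e-12)
                    ll = np.sum(held * np.log(mu) - mu)  # held-out log-likelihood
                    if ll <= best_ll:
                        break                    # likelihood passed its maximum
                    best_ll, best_x = ll, x.copy()
                halves.append(best_x)
            # each half-data fit estimates half the emission rate; summing the
            # two estimates combines them into a full-rate profile
            return halves[0] + halves[1]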

  41. A Spatio-Temporal Algorithmic Procedure for Environmental Policymaking in the Municipality of Arkalochori in the Greek Island of Crete

    NASA Astrophysics Data System (ADS)

    Batzias, F. A.; Sidiras, D. K.; Giannopoulos, Ch.; Spetsidis, I.

    2009-08-01

    This work deals with a methodological framework designed/developed in the form of a spatio-temporal algorithmic procedure for environmental policymaking at the local level. The procedure includes 25 activity stages and 9 decision nodes, putting emphasis on (i) mapping of water supply/demand on GIS layers and modeling of aquatic pollution coming from point and non-point sources, (ii) environmental monitoring by periodically measuring the main pollutants in situ and in the laboratory, (iii) design of environmental projects, decomposition of them into sub-projects and combination of the latter to form attainable alternatives, (iv) multicriteria ranking of alternatives, according to a modified Delphi method, using as criteria the expected environmental benefit, the attitude of inhabitants, the priority within the programme of regional development, the capital required for the investment, and the operating cost, and (v) Knowledge Base (KB) operation/enrichment, functioning in combination with a data mining mechanism to extract knowledge/information/data from external bases. An implementation is presented referring to the Municipality of Arkalochori in the Greek island of Crete.

  42. A procedure for the reliability improvement of the oblique ionograms automatic scaling algorithm

    NASA Astrophysics Data System (ADS)

    Ippolito, Alessandro; Scotto, Carlo; Sabbagh, Dario; Sgrigna, Vittorio; Maher, Phillip

    2016-05-01

    A procedure based on the combined use of the Oblique Ionogram Automatic Scaling Algorithm (OIASA) and the Autoscala program is presented. Using Martyn's equivalent path theorem, 384 oblique soundings from a high-quality data set have been converted into vertical ionograms and analyzed by the Autoscala program. The ionograms pertain to the radio link between Curtin W.A. (CUR) and Alice Springs N.T. (MTE), Australia, geographical coordinates (17.60°S; 123.82°E) and (23.52°S; 133.68°E), respectively. The critical frequency foF2 values extracted from the converted vertical ionograms by Autoscala were then compared with the foF2 values derived from the maximum usable frequencies (MUFs) provided by OIASA. A quality factor Q for the MUF values autoscaled by OIASA has been identified. Q represents the difference between the foF2 value scaled by Autoscala from the converted vertical ionogram and the foF2 value obtained by applying the secant law to the MUF provided by OIASA. Using the receiver operating characteristic curve, an appropriate threshold level Qt was chosen for Q to improve the performance of OIASA.
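
    A short sketch of the quality factor Q under the stated definition, assuming the secant law foF2 = MUF * cos(theta), with theta the equivalent vertical-incidence angle implied by the link geometry and reflection height (a hypothetical input here).

        import numpy as np

        def quality_factor(muf_oiasa, fof2_autoscala, theta_deg):
            # foF2 implied by the OIASA MUF through the secant law
            fof2_from_muf = muf_oiasa * np.cos(np.radians(theta_deg))
            return fof2_autoscala - fof2_from_muf

        # Accept an autoscaled MUF only when |Q| is below the ROC-derived threshold:
        # accept = abs(quality_factor(muf, fof2, theta)) <= Qt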

  43. Numerical simulation for horizontal subsurface flow constructed wetlands: A short review including geothermal effects and solution bounding in biodegradation procedures

    NASA Astrophysics Data System (ADS)

    Liolios, K.; Tsihrintzis, V.; Angelidis, P.; Georgiev, K.; Georgiev, I.

    2016-10-01

    Current developments in the modeling of groundwater flow and contaminant transport and removal in the porous media of Horizontal Subsurface Flow Constructed Wetlands (HSF CWs) are first briefly reviewed. The two usual environmental engineering approaches, the black-box one and the process-based one, are briefly presented. Next, recent research results obtained by using these two approaches are briefly discussed as application examples, where emphasis is given to the evaluation of the optimal design and operation parameters for HSF CWs. For the black-box approach, the use of Artificial Neural Networks is discussed for the formulation of models that predict the removal performance of HSF CWs. A novel mathematical proof is presented concerning the dependence of the first-order removal coefficient on the temperature and the hydraulic residence time. For the process-based approach, a first application example concerns procedures to evaluate the optimal range of values of the removal coefficient, dependent on either the temperature or the hydraulic residence time. This evaluation is based on simulating available experimental results from pilot-scale units operated at Democritus University of Thrace, Xanthi, Greece. Further, in a second example, a novel enlargement of the system of partial differential equations is presented in order to include geothermal effects. Finally, in a third example, the case of parameter uncertainty in biodegradation procedures is considered, and a novel approach concerning upper and lower solution bounds for the practical draft design of HSF CWs is presented.
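
    As background for the removal-coefficient discussion, a sketch of the standard first-order plug-flow model commonly calibrated for HSF CWs; k20 and theta below are illustrative textbook-style values, not the coefficients fitted to the Xanthi pilot units.

        import numpy as np

        def first_order_removal(c_in, hrt_days, temp_c, k20=0.3, theta=1.06):
            # modified-Arrhenius temperature correction of the removal coefficient
            k = k20 * theta ** (temp_c - 20.0)
            # plug-flow decay over the hydraulic residence time:
            # C_out = C_in * exp(-k * HRT)
            return c_in * np.exp(-k * hrt_days)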

  44. Development of a computer algorithm for the analysis of variable-frequency AC drives: Case studies included

    NASA Technical Reports Server (NTRS)

    Kankam, M. David; Benjamin, Owen

    1991-01-01

    The development of computer software for performance prediction and analysis of voltage-fed, variable-frequency AC drives for space power applications is discussed. The AC drives discussed include the pulse width modulated inverter (PWMI), a six-step inverter and the pulse density modulated inverter (PDMI), each individually connected to a wound-rotor induction motor. Various d-q transformation models of the induction motor are incorporated for user-selection of the most applicable model for the intended purpose. Simulation results of selected AC drives correlate satisfactorily with published results. Future additions to the algorithm are indicated. These improvements should enhance the applicability of the computer program to the design and analysis of space power systems.

  45. The ATAMM procedure model for concurrent processing of large grained control and signal processing algorithms

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    An overview is presented of a model for describing data and control flow associated with the execution of large-grained, decision-free algorithms in a special distributed computer environment. The ATAMM (Algorithm-To-Architecture Mapping Model) model provides a basis for relating an algorithm to its execution in a dataflow multicomputer environment. The ATAMM model features a marked graph Petri net description of the algorithm behavior with regard to both data and control flow. The model provides an analytical basis for calculating performance bounds on throughput characteristics which are demonstrated here.
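
    A small illustration of the kind of throughput bound obtainable from a timed marked-graph model: the classical cycle-ratio bound, shown as a generic computation rather than ATAMM's exact formulation; the node attribute 'time' and edge attribute 'tokens' are assumed conventions.

        import networkx as nx

        def period_lower_bound(g):
            # The steady-state iteration period of a timed marked graph is
            # bounded below by the max over directed cycles of
            # (total node compute time on the cycle) / (tokens on the cycle).
            worst = 0.0
            for cycle in nx.simple_cycles(g):
                t = sum(g.nodes[v]["time"] for v in cycle)
                m = sum(g.edges[cycle[i], cycle[(i + 1) % len(cycle)]]["tokens"]
                        for i in range(len(cycle)))
                if m > 0:
                    worst = max(worst, t / m)
            return worst    # achievable throughput is at most 1 / worst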

  46. Parameter Trending, Geolocation Quality Control and the Procedures to Support Preparation of Next Versions of the TRMM Reprocessing Algorithm

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz

    2004-01-01

    TRMM has been an eminently successful mission from an engineering standpoint, and even more so from a science standpoint. An important part of this science success has been the careful quality control of the TRMM standard products. This paper will present the quality monitoring efforts that the TRMM Science Data and Information System (TSDIS) conducts on a routine basis. The paper will detail parameter trending, geolocation quality control, and the procedures to support the preparation of next versions of the algorithm used for reprocessing.

  47. Fast mode decision algorithm in MPEG-2 to H.264/AVC transcoding including group of picture structure conversion

    NASA Astrophysics Data System (ADS)

    Lee, Kangjun; Jeon, Gwanggil; Jeong, Jechang

    2009-05-01

    The H.264/AVC baseline profile is used in many applications, including digital multimedia broadcasting, Internet protocol television, and storage devices, while the MPEG-2 main profile is widely used in applications such as high-definition television and digital versatile disks. The MPEG-2 main profile supports B pictures for bidirectional motion prediction. Therefore, transcoding the MPEG-2 main profile to the H.264/AVC baseline profile is necessary for universal multimedia access. In the cascaded pixel domain transcoder architecture, the calculation of the rate distortion cost as part of the mode decision process in the H.264/AVC encoder requires extremely complex computations. To reduce the complexity inherent in the implementation of a real-time transcoder, we propose a fast mode decision algorithm based on complexity information from the reference region that is used for motion compensation. In this study, an adaptive mode decision process was used based on the modes assigned to the reference regions. Simulation results indicated that a significant reduction in complexity was achieved without significant degradation of video quality.
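
    A purely hypothetical sketch to make the pruning idea concrete: the character of the reference region selects a reduced set of H.264 partition modes to rate-distortion test, instead of an exhaustive search. The three-way table and its keys are invented for illustration, not taken from the paper.

        # Candidate H.264 partition modes to test, keyed by a coarse
        # classification of the MPEG-2 reference region (hypothetical).
        CANDIDATES = {
            "smooth":  ["SKIP", "16x16"],     # homogeneous reference area
            "edge":    ["16x8", "8x16"],      # directional structure
            "complex": ["8x8", "intra"],      # busy texture / poor prediction
        }

        def modes_to_test(ref_region_class):
            # Fall back to a broad set when the region is unclassified.
            return CANDIDATES.get(ref_region_class, ["16x16", "8x8", "intra"])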

  48. Retinoids: Literature Review and Suggested Algorithm for Use Prior to Facial Resurfacing Procedures

    PubMed Central

    Buchanan, Patrick J; Gilman, Robert H

    2016-01-01

    Vitamin A-containing products have been used topically since the early 1940s to treat various skin conditions. To date, there are four generations of retinoids, a family of vitamin A-containing compounds. Tretinoin, all-trans-retinoic acid, is a first-generation, naturally occurring retinoid. It is available commercially as a gel or cream. The authors conducted a complete review of all clinical and basic science studies in the literature involving tretinoin treatment recommendations for upcoming facial procedures. The literature currently lacks definitive recommendations for the use of tretinoin-containing products prior to undergoing facial procedures. Tretinoin pretreatment regimens vary greatly in terms of the strength of retinoid used, the length of the pre-procedure treatment, and the ideal time to stop treatment before the procedure. Based on the current literature and personal experience, the authors set forth a set of guidelines for the use of tretinoin prior to various facial procedures. PMID:27761082

  49. An Algorithm for Real-Time Optimal Photocurrent Estimation Including Transient Detection for Resource-Constrained Imaging Applications

    NASA Astrophysics Data System (ADS)

    Zemcov, Michael; Crill, Brendan; Ryan, Matthew; Staniszewski, Zak

    2016-06-01

    Mega-pixel charge-integrating detectors are common in near-IR imaging applications. Optimal signal-to-noise ratio estimates of the photocurrents, which are particularly important in the low-signal regime, are produced by fitting linear models to sequential reads of the charge on the detector. Algorithms that solve this problem have a long history, but can be computationally intensive. Furthermore, the cosmic ray background is appreciable for these detectors in Earth orbit, particularly above the Earth's magnetic poles and the South Atlantic Anomaly, and on-board reduction routines must be capable of flagging affected pixels. In this paper, we present an algorithm that generates optimal photocurrent estimates and flags random transient charge generation from cosmic rays, and is specifically designed to fit on a computationally restricted platform. We take as a case study the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx), a NASA Small Explorer astrophysics experiment concept, and show that the algorithm can easily fit in the resource-constrained environment of such a restricted platform. Detailed simulations of the input astrophysical signals and detector array performance are used to characterize the fitting routines in the presence of complex noise properties and charge transients. We use both Hubble Space Telescope Wide Field Camera 3 and Wide-field Infrared Survey Explorer data to develop an empirical understanding of the susceptibility of near-IR detectors in low Earth orbit and build a model for realistic cosmic ray energy spectra and rates. We show that our algorithm generates an unbiased estimate of the true photocurrent that is identical to that from a standard line fitting package, and characterize the rate, energy, and timing of both detected and undetected transient events. This algorithm has significant potential for imaging with charge-integrating detectors in astrophysics, earth science, and remote sensing.
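
    A minimal sketch of up-the-ramp slope fitting with a simple jump test for cosmic-ray transients; the MAD-based threshold and the fit-the-pre-hit-segment policy are assumptions for illustration, not the SPHEREx flight algorithm.

        import numpy as np

        def fit_ramp(reads, t, jump_sigma=5.0):
            # Flag a transient as a read-to-read jump far outside the
            # robust (MAD-based) scatter of the differences.
            d = np.diff(reads)
            mad = np.median(np.abs(d - np.median(d))) + 1e-12
            hit = np.abs(d - np.median(d)) > jump_sigma * 1.4826 * mad
            if hit.any():                        # keep only the pre-hit segment
                n = np.argmax(hit) + 1
                reads, t = reads[:n], t[:n]
            if len(reads) < 2:
                return np.nan, True              # too few clean reads to fit
            slope = np.polyfit(t, reads, 1)[0]   # photocurrent (charge units / s)
            return slope, bool(hit.any())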

  50. 32 CFR Appendix B to Part 80 - Procedures for Special Educational Programs (Including Related Services) for Preschool Children...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... instructions provided by the producers of the testing device. e. Administered in a manner so that no single.... Current physical status, including perceptual and motor abilities. e. Vocational transitional assessment... child's or child's current academic progress, including a statement of his or her learning style. d....

  51. Including health in transport policy agendas: the role of health impact assessment analyses and procedures in the European experience.

    PubMed Central

    Dora, Carlos; Racioppi, Francesca

    2003-01-01

    From the mid-1990s, research began to highlight the importance of a wide range of health impacts of transport policy decisions. The Third Ministerial Conference on Environment and Health adopted a Charter on Transport, Environment and Health based on four main components: bringing awareness of the nature, magnitude and costs of the health impacts of transport into intergovernmental processes; strengthening the arguments for integration of health into transport policies by developing in-depth analysis of the evidence; developing national case studies; and engaging ministries of environment, health and transport as well as intergovernmental and nongovernmental organizations. Negotiation of the Charter was based on two converging processes: the political process involved the interaction of stakeholders in transport, health and environment in Europe, which helped to frame the issues and the approaches to respond to them; the scientific process involved an international group of experts who produced state-of-the-art reviews of the health impacts resulting from transportation activities, identifying gaps in existing knowledge and methodological tools, specifying the policy implications of their findings, and suggesting possible targets for health improvements. Health arguments were used to strengthen environmental ones, clarify costs and benefits, and raise issues of health equity. The European experience shows that HIA can fulfil the need for simple procedures to be systematically applied to decisions regarding transport strategies at national, regional and local levels. Gaps were identified concerning models for quantifying health impacts and capacity building on how to use such tools. PMID:12894322

  52. Surgical accuracy of three-dimensional virtual planning: a pilot study of bimaxillary orthognathic procedures including maxillary segmentation.

    PubMed

    Stokbro, K; Aagaard, E; Torkov, P; Bell, R B; Thygesen, T

    2016-01-01

    This retrospective study evaluated the precision and positional accuracy of different orthognathic procedures following virtual surgical planning in 30 patients. To date, no studies of three-dimensional virtual surgical planning have evaluated the influence of segmentation on positional accuracy and transverse expansion. Furthermore, only a few have evaluated the precision and accuracy of genioplasty in placement of the chin segment. The virtual surgical plan was compared with the postsurgical outcome by using three linear and three rotational measurements. The influence of maxillary segmentation was analyzed in both superior and inferior maxillary repositioning. In addition, transverse surgical expansion was compared with the postsurgical expansion obtained. Overall, a high degree of linear accuracy between planned and postsurgical outcomes was found, though with a large standard deviation. Rotational differences showed an increase in pitch, mainly affecting the maxilla. Segmentation had no significant influence on maxillary placement. However, a posterior movement was observed in inferior maxillary repositioning. A lack of transverse expansion was observed in the segmented maxilla independent of the degree of expansion.

  13. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

    This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10^-12 is reasonable.
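
    A minimal sketch of the two error-control devices described above, in Python with NumPy; the Newton-Hotelling step is one standard reinversion technique and stands in here for the report's unspecified reinversion procedure (an assumption):

      import numpy as np

      def round_to_zero(M, tol=0.1e-12):
          # Zero out entries whose magnitude falls below the tolerance factor.
          M = M.copy()
          M[np.abs(M) < tol] = 0.0
          return M

      def reinvert(A, X):
          # One Newton-Hotelling step improving an approximate inverse X of A.
          return X @ (2.0 * np.eye(A.shape[0]) - A @ X)

      A = np.array([[4.0, 1.0], [2.0, 3.0]])
      X = np.linalg.inv(A) + 1e-9              # inverse polluted by round-off
      X = round_to_zero(reinvert(A, X))
      print(np.abs(A @ X - np.eye(2)).max())   # residual shrinks after one step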

  14. The classification and diagnostic algorithm for primary lymphatic dysplasia: an update from 2010 to include molecular findings.

    PubMed

    Connell, F C; Gordon, K; Brice, G; Keeley, V; Jeffery, S; Mortimer, P S; Mansour, S; Ostergaard, P

    2013-10-01

    Historically, primary lymphoedema was classified into just three categories depending on the age of onset of swelling: congenital, praecox and tarda. Developments in clinical phenotyping and identification of the genetic cause of some of these conditions have demonstrated that primary lymphoedema is highly heterogeneous. In 2010, we introduced a new classification and diagnostic pathway as a clinical and research tool. This algorithm has been used to delineate specific primary lymphoedema phenotypes, facilitating the discovery of new causative genes. This article reviews the latest molecular findings and provides an updated version of the classification and diagnostic pathway based on this new knowledge.

  15. Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆

    PubMed Central

    López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.

    2014-01-01

    The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874

  16. A weighted reverse Cuthill-McKee procedure for finite element method algorithms to solve strongly anisotropic electrodynamic problems

    SciTech Connect

    Cristofolini, Andrea; Latini, Chiara; Borghi, Carlo A.

    2011-02-01

    This paper presents a technique for improving the convergence rate of a generalized minimum residual (GMRES) algorithm applied for the solution of an algebraic system produced by the discretization of an electrodynamic problem with a tensorial electrical conductivity. The electrodynamic solver considered in this work is part of a magnetohydrodynamic (MHD) code in the low magnetic Reynolds number approximation. The code has been developed for the analysis of MHD interaction during the re-entry phase of a space vehicle, a promising technique intensively investigated for shock mitigation and vehicle control in the upper layers of a planetary atmosphere. The medium in the considered application is a low-density plasma, characterized by a tensorial conductivity. This is a result of the behavior of the free electric charges, which tend to drift in a direction perpendicular both to the electric field and to the magnetic field. In the given approximation, the electrodynamics is described by an elliptic partial differential equation, which is solved by means of a finite element approach. The linear system obtained by discretizing the problem is solved by means of a GMRES iterative method with an incomplete LU factorization threshold preconditioning. The convergence of the solver appears to be strongly affected by the tensorial character of the conductivity. In order to deal with this feature, bandwidth reduction in the coefficient matrix is considered and a novel technique is proposed and discussed. First, the standard reverse Cuthill-McKee (RCM) procedure has been applied to the problem. Then a modification of the RCM procedure (the weighted RCM procedure, WRCM) has been developed. In this approach, the reordering is performed taking into account the relation between the mesh geometry and the magnetic field direction. In order to investigate the effectiveness of the methods, two cases are considered. The RCM and WRCM procedures
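
    The standard RCM reordering is available off the shelf; a minimal Python/SciPy sketch on a toy symmetric pattern (the weighted WRCM variant, which folds in the magnetic field direction, is the paper's contribution and is not reproduced here):

      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.csgraph import reverse_cuthill_mckee

      def bandwidth(M):
          # Largest distance of a nonzero entry from the diagonal.
          i, j = M.nonzero()
          return int(np.abs(i - j).max())

      # Small symmetric sparsity pattern standing in for a FEM coefficient matrix.
      A = csr_matrix(np.array([[4, 0, 0, 1, 0],
                               [0, 4, 1, 0, 1],
                               [0, 1, 4, 0, 0],
                               [1, 0, 0, 4, 1],
                               [0, 1, 0, 1, 4]], dtype=float))

      perm = reverse_cuthill_mckee(A, symmetric_mode=True)
      B = A[perm][:, perm]                     # symmetric permutation of rows/columns
      print(bandwidth(A), bandwidth(B))        # bandwidth before and after reordering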

  17. Including State Excitation in the Fixed-Interval Smoothing Algorithm and Implementation of the Maneuver Detection Method Using Error Residuals

    DTIC Science & Technology

    1990-12-01

    [Scanned DTIC record; only fragments of the thesis survive extraction.] Thesis, Naval Postgraduate School, Monterey, California. The thesis examines the effects of including state excitation (a noise process) in the fixed-interval smoothing algorithm and implements a maneuver detection method using the error residuals. Subject terms: Kalman filter, smoothing, noise process, maneuver detection.

  18. An integrated portfolio optimisation procedure based on data envelopment analysis, artificial bee colony algorithm and genetic programming

    NASA Astrophysics Data System (ADS)

    Hsu, Chih-Ming

    2014-12-01

    Portfolio optimisation is an important issue in the field of investment/financial decision-making and has received considerable attention from both researchers and practitioners. However, besides portfolio optimisation, a complete investment procedure should also include the selection of profitable investment targets and the determination of the optimal timing for buying/selling those targets. In this study, an integrated procedure using data envelopment analysis (DEA), artificial bee colony (ABC) and genetic programming (GP) is proposed to resolve a portfolio optimisation problem. The proposed procedure is evaluated through a case study on investing in stocks in the semiconductor sub-section of the Taiwan stock market over 4 years. The potential average 6-month return on investment of 9.31% from 1 November 2007 to 31 October 2011 indicates that the proposed procedure can be considered a feasible and effective tool for making outstanding investment plans, and thus making profits, in the Taiwan stock market. Moreover, it is a strategy that can help investors to make profits even when the overall stock market suffers a loss.

  19. Optimum procedure for construction of spectral classification algorithms for medical diagnosis

    NASA Astrophysics Data System (ADS)

    Kendall, Catherine A.; Barr, Hugh; Shepherd, Neil; Stone, Nicholas

    2002-03-01

    Optical spectroscopic detection of early malignancy is becoming more widely accepted in academic circles; however, much work remains to be done before full recognition by the medical community is achieved. The majority of published studies to date have demonstrated the potential of optical diagnosis techniques using small sample numbers in a selected patient population. Many are completed without a solid understanding of the shortcomings of histopathology, the 'gold standard' for cancer detection. For the development of a new technique to improve diagnosis, it is vital that more rigorous protocols are employed in large-scale clinical trials. The prospect of utilizing NIR-Raman spectroscopy for the analysis of neoplastic gastrointestinal tissue has been thoroughly explored by a multi-disciplinary team including surgeons, pathologists, and spectroscopists. This study demonstrates the need for rigorous experimental protocols and histopathological analysis by a panel of expert pathologists. Measurements of tissue specimens from nine different pathological groups describing the full spectrum of disease in the oesophagus have been made. Only homogeneous samples with a consensus pathology opinion were used to construct a training data set of Raman spectra. Models were constructed using multivariate analysis techniques and tested using cross-validation.

  20. Minimally invasive myotomy for the treatment of esophageal achalasia: evolution of the surgical procedure and the therapeutic algorithm.

    PubMed

    Bresadola, Vittorio; Feo, Carlo V

    2012-04-01

    Achalasia is a rare disease of the esophagus, characterized by the absence of peristalsis in the esophageal body and incomplete relaxation of the lower esophageal sphincter, which may be hypertensive. The cause of this disease is unknown; therefore, the aim of therapy is to improve esophageal emptying by eliminating the outflow resistance caused by the lower esophageal sphincter. This goal can be accomplished either by pneumatic dilatation or surgical myotomy, which are the only long-term effective therapies for achalasia. Historically, pneumatic dilatation was preferred over surgical myotomy because of the morbidity associated with a thoracotomy or a laparotomy. However, with the development of minimally invasive techniques, the surgical approach has gained widespread acceptance among patients and gastroenterologists and, consequently, the role of surgery has changed. The aim of this study was to review the changes that have occurred in the surgical treatment of achalasia over the last 2 decades; specifically, the development of minimally invasive techniques with the evolution from a thoracoscopic approach without an antireflux procedure to a laparoscopic myotomy with a partial fundoplication, the changes in the length of the myotomy, and the modification of the therapeutic algorithm.

  1. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  2. 40 CFR 51.357 - Test procedures and standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... invalid test condition, unsafe conditions, fast pass/fail algorithms, or, in the case of the on-board... using approved fast pass or fast fail algorithms and multiple pass/fail algorithms may be used during the test cycle to eliminate false failures. The transient test procedure, including algorithms...

  3. Incremental Yield of Including Determine-TB LAM Assay in Diagnostic Algorithms for Hospitalized and Ambulatory HIV-Positive Patients in Kenya

    PubMed Central

    Ferlazzo, Gabriella; Bevilacqua, Paolo; Kirubi, Beatrice; Ardizzoni, Elisa; Wanjala, Stephen; Sitienei, Joseph; Bonnet, Maryline

    2017-01-01

    Background Determine-TB LAM assay is a urine point-of-care test useful for TB diagnosis in HIV-positive patients. We assessed the incremental diagnostic yield of adding LAM to algorithms based on clinical signs, sputum smear-microscopy, chest X-ray and Xpert MTB/RIF in HIV-positive patients with symptoms of pulmonary TB (PTB). Methods Prospective observational cohort of ambulatory (either severely ill or CD4<200cells/μl or with Body Mass Index<17Kg/m2) and hospitalized symptomatic HIV-positive adults in Kenya. Incremental diagnostic yield of adding LAM was the difference in the proportion of confirmed TB patients (positive Xpert or MTB culture) diagnosed by the algorithm with LAM compared to the algorithm without LAM. The multivariable mortality model was adjusted for age, sex, clinical severity, BMI, CD4, ART initiation, LAM result and TB confirmation. Results Among 474 patients included, 44.1% were severely ill, 69.6% had CD4<200cells/μl, 59.9% had initiated ART, 23.2% could not produce sputum. LAM, smear-microscopy, Xpert and culture in sputum were positive in 39.0% (185/474), 21.6% (76/352), 29.1% (102/350) and 39.7% (92/232) of the patients tested, respectively. Of 156 patients with confirmed TB, 65.4% were LAM positive. Of those classified as non-TB, 84.0% were LAM negative. Adding LAM increased the diagnostic yield of the algorithms by 36.6%, from 47.4% (95%CI:39.4–55.6) to 84.0% (95%CI:77.3–89.4%), when using clinical signs and X-ray; by 19.9%, from 62.2% (95%CI:54.1–69.8) to 82.1% (95%CI:75.1–87.7), when using clinical signs and microscopy; and by 13.4%, from 74.4% (95%CI:66.8–81.0) to 87.8% (95%CI:81.6–92.5), when using clinical signs and Xpert. LAM positive patients had an increased risk of 2-months mortality (aOR:2.7; 95%CI:1.5–4.9). Conclusion LAM should be included in TB diagnostic algorithms in parallel to microscopy or Xpert request for HIV-positive patients either ambulatory (severely ill or CD4<200cells/μl) or hospitalized. LAM
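
    The incremental-yield arithmetic can be checked directly; a small Python sketch using the clinical signs + X-ray figures quoted above (patient counts back-calculated from the reported percentages, so the last digit may differ by rounding):

      def incremental_yield(diagnosed_without, diagnosed_with, confirmed):
          # Difference in the proportion of confirmed TB patients detected
          # by the algorithm with LAM versus without LAM.
          return diagnosed_with / confirmed - diagnosed_without / confirmed

      # 47.4% of 156 confirmed TB patients -> 74; 84.0% -> 131.
      print(round(100 * incremental_yield(74, 131, 156), 1))   # ~36.6%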

  4. The Local Minima Problem in Hierarchical Classes Analysis: An Evaluation of a Simulated Annealing Algorithm and Various Multistart Procedures

    ERIC Educational Resources Information Center

    Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin

    2007-01-01

    Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…

  5. Applying a new procedure to assess the controls on aggregate stability - including soil parent material and soil organic carbon concentrations - at the landscape scale

    NASA Astrophysics Data System (ADS)

    Turner, Gren; Rawlins, Barry; Wragg, Joanna; Lark, Murray

    2014-05-01

    Aggregate stability is an important physical indicator of soil quality and influences the potential for erosive losses from the landscape, so methods are required to measure it rapidly and cost-effectively. Previously we demonstrated a novel method for quantifying the stability of soil aggregates using a laser granulometer (Rawlins et al., 2012). We have developed our method further to mimic field conditions more closely by incorporating a procedure for pre-wetting aggregates (for 30 minutes on a filter paper) prior to applying the test. The first measurement of particle-size distribution is made on the water stable aggregates after these have been added to circulating water (aggregate size range 1000 to 2000 µm). The second measurement is made on the disaggregated material after the circulating aggregates have been disrupted with ultrasound (sonication). We then compute the difference between the mean weight diameters (MWD) of these two size distributions; we refer to this value as the disaggregation reduction (DR; µm). Soils with more stable aggregates, which are resistant to both slaking and mechanical breakdown by the hydrodynamic forces during circulation, have larger values of DR. We made repeated analyses of DR using an aggregate reference material (RM; a paleosol with well-characterised disaggregation properties) and used this throughout our analyses to demonstrate our approach was reproducible. We applied our modified technique - and also the previous technique in which dry aggregates were used - to a set of 60 topsoil samples (depth 0-15 cm) from cultivated land across a large region (10 000 km2) of eastern England. We wished to investigate: (i) any differences in aggregate stability (DR measurements) using dry or pre-wet aggregates, and (ii) the dominant controls on the stability of aggregates in water using wet aggregates, including variations in mineralogy and soil organic carbon (SOC) content, and any interaction between them. The sixty soil

  6. A comparison of an optimised sequential extraction procedure and dilute acid leaching of elements in anoxic sediments, including the effects of oxidation on sediment metal partitioning.

    PubMed

    Larner, Bronwyn L; Palmer, Anne S; Seen, Andrew J; Townsend, Ashley T

    2008-02-11

    The effect of oxidation of anoxic sediment upon the extraction of 13 elements (Cd, Sn, Sb, Pb, Al, Cr, Mn, Fe, Co, Ni, Cu, Zn, As) using the optimised Community Bureau of Reference of the European Commission (BCR) sequential extraction procedure and a dilute acid partial extraction procedure (4 h, 1 mol L^-1 HCl) was investigated. Elements commonly associated with the sulfidic phase, Cd, Cu, Pb, Zn and Fe, exhibited the most significant changes under the BCR sequential extraction procedure. Cd, Cu, Zn, and to a lesser extent Pb, were redistributed into the weak acid extractable fraction upon oxidation of the anoxic sediment and Fe was redistributed into the reducible fraction as expected, but an increase was also observed in the residual Fe. For the HCl partial extraction, sediments with moderate acid volatile sulfide (AVS) levels (1-100 µmol g^-1) showed no significant difference in element partitioning following oxidation, whilst sediments containing high AVS levels (>100 µmol g^-1) were significantly different, with elevated concentrations of Cu and Sn noted in the partial extract following oxidation of the sediment. Comparison of the labile metals released using the BCR sequential extraction procedure (Σ steps 1-3) to labile metals extracted using the dilute HCl partial extraction showed that neither method was consistently more aggressive than the other, with the HCl partial extraction extracting more Sn and Sb from the anoxic sediment than the BCR procedure, whilst the BCR procedure extracted more Cr, Co, Cu and As than the HCl extraction.

  7. Comparative efficacy and safety of the left versus right radial approach for percutaneous coronary procedures: a meta-analysis including 6870 patients.

    PubMed

    Xia, S L; Zhang, X B; Zhou, J S; Gao, X

    2015-08-01

    The radial approach is widely used in the treatment of patients with coronary artery disease. We conducted a meta-analysis of published results on the efficacy and safety of the left and right radial approaches in patients undergoing percutaneous coronary procedures. A systematic search of reference databases was conducted, and data from 14 randomized controlled trials involving 6870 participants were analyzed. The left radial approach was associated with significant reductions in fluoroscopy time [standardized mean difference (SMD)=-0.14, 95% confidence interval (CI)=-0.19 to -0.09; P<0.00001] and contrast volume (SMD=-0.07, 95%CI=-0.12 to -0.02; P=0.009). There were no significant differences between the left and the right radial approaches in the rate of procedural failure [risk ratio (RR)=0.98; 95%CI=0.77-1.25; P=0.88] or in procedural time (SMD=-0.05, 95%CI=-0.17 to 0.06; P=0.38). Tortuosity of the subclavian artery (RR=0.27, 95%CI=0.14-0.50; P<0.0001) was reported more frequently with the right radial approach. A greater number of catheters were used with the left than with the right radial approach (SMD=0.25, 95%CI=0.04-0.46; P=0.02). We conclude that the left radial approach is as safe as the right radial approach, and that the left radial approach should be recommended for use in percutaneous coronary procedures, especially in percutaneous coronary angiograms.

  8. Improved Methodology for Surface and Atmospheric Soundings, Error Estimates, and Quality Control Procedures: the AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena

    2014-01-01

    The AIRS Science Team Version-6 AIRS/AMSU retrieval algorithm is now operational at the Goddard DISC. AIRS Version-6 level-2 products are generated in near real-time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. This paper describes some of the significant improvements in retrieval methodology contained in the Version-6 retrieval algorithm compared to that previously used in Version-5. In particular, the AIRS Science Team made major improvements with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the cloud clearing and retrieval procedures; and 3) derive error estimates and use them for Quality Control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, Version-6 also operates in an AIRS-Only (AO) mode, which produces results almost as good as those of the full AIRS/AMSU mode. This paper also demonstrates improvements in some AIRS Version-6 and Version-6 AO products relative to those obtained using Version-5.

  9. Effects of deformable registration algorithms on the creation of statistical maps for preoperative targeting in deep brain stimulation procedures

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; D'Haese, Pierre-Francois; Dawant, Benoit M.

    2014-03-01

    Deep brain stimulation, which is used to treat various neurological disorders, involves implanting a permanent electrode into precise targets deep in the brain. Accurate pre-operative localization of the targets on MRI sequences is challenging, as these are typically located in homogeneous regions with poor contrast. Population-based statistical atlases can assist with this process. Such atlases are created by acquiring the location of efficacious regions from numerous subjects and projecting them onto a common reference image volume using some normalization method. In previous work, we presented results concluding that non-rigid registration provided the best result for such normalization. However, this process could be biased by the choice of the reference image and/or registration approach. In this paper, we have qualitatively and quantitatively compared the performance of six recognized deformable registration methods at normalizing such data in poorly contrasted regions onto three different reference volumes, using a unique set of data from 100 patients. We study various metrics designed to measure the centroid, spread, and shape of the normalized data. This study leads to a total of 1800 deformable registrations, and results show that statistical atlases constructed using different deformable registration methods share comparable centroids and spreads, with marginal differences in their shape. Among the six methods studied, Diffeomorphic Demons produces the largest spreads and centroids that are the furthest apart from the others in general. Among the three atlases, one atlas consistently outperforms the other two with smaller spreads for each algorithm. However, none of the differences in the spreads were found to be statistically significant, across different algorithms or across different atlases.

  10. 45 CFR 309.120 - What intergovernmental procedures must a Tribe or Tribal organization include in a Tribal IV-D plan?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...-D agencies; and (b) That the Tribe or Tribal organization will recognize child support orders issued... Tribal organization include in a Tribal IV-D plan? 309.120 Section 309.120 Public Welfare Regulations Relating to Public Welfare OFFICE OF CHILD SUPPORT ENFORCEMENT (CHILD SUPPORT ENFORCEMENT...

  11. 45 CFR 309.80 - What safeguarding procedures must a Tribe or Tribal organization include in a Tribal IV-D plan?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... organization include in a Tribal IV-D plan? 309.80 Section 309.80 Public Welfare Regulations Relating to Public Welfare OFFICE OF CHILD SUPPORT ENFORCEMENT (CHILD SUPPORT ENFORCEMENT PROGRAM), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES TRIBAL CHILD SUPPORT ENFORCEMENT...

  12. 45 CFR 309.120 - What intergovernmental procedures must a Tribe or Tribal organization include in a Tribal IV-D plan?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-D agencies; and (b) That the Tribe or Tribal organization will recognize child support orders issued... Tribal organization include in a Tribal IV-D plan? 309.120 Section 309.120 Public Welfare Regulations Relating to Public Welfare OFFICE OF CHILD SUPPORT ENFORCEMENT (CHILD SUPPORT ENFORCEMENT...

  13. 45 CFR 309.120 - What intergovernmental procedures must a Tribe or Tribal organization include in a Tribal IV-D plan?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...-D agencies; and (b) That the Tribe or Tribal organization will recognize child support orders issued... Tribal organization include in a Tribal IV-D plan? 309.120 Section 309.120 Public Welfare Regulations Relating to Public Welfare OFFICE OF CHILD SUPPORT ENFORCEMENT (CHILD SUPPORT ENFORCEMENT...

  14. Significance of including field non-uniformities such as the heel effect and beam scatter in the determination of the skin dose distribution during interventional fluoroscopic procedures

    NASA Astrophysics Data System (ADS)

    Rana, Vijay; Gill, Kamaljit; Rudin, Stephen; Bednarek, Daniel R.

    2012-03-01

    The current version of the real-time skin-dose-tracking system (DTS) we have developed assumes the exposure is contained within the collimated beam and is uniform except for inverse-square variation. This study investigates the significance of factors that contribute to beam non-uniformity such as the heel effect and backscatter from the patient to areas of the skin inside and outside the collimated beam. Dose-calibrated Gafchromic film (XR-RV3, ISP) was placed in the beam in the plane of the patient table at a position 15 cm tube-side of isocenter on a Toshiba Infinix C-Arm system. Separate exposures were made with the film in contact with a block of 20-cm solid water providing backscatter and with the film suspended in air without backscatter, both with and without the table in the beam. The film was scanned to obtain dose profiles and comparison of the profiles for the various conditions allowed a determination of field non-uniformity and backscatter contribution. With the solid-water phantom and with the collimator opened completely for the 20-cm mode, the dose profile decreased by about 40% on the anode side of the field. Backscatter falloff at the beam edge was about 10% from the center and extra-beam backscatter decreased slowly with distance from the field, being about 3% of the beam maximum at 6 cm from the edge. Determination of the magnitude of these factors will allow them to be included in the skin-dose-distribution calculation and should provide a more accurate determination of peak-skin dose for the DTS.

  15. Assessment of average of normals (AON) procedure for outlier-free datasets including qualitative values below limit of detection (LoD): an application within tumor markers such as CA 15-3, CA 125, and CA 19-9.

    PubMed

    Usta, Murat; Aral, Hale; Mete Çilingirtürk, Ahmet; Kural, Alev; Topaç, Ibrahim; Semerci, Tuna; Hicri Köseoğlu, Mehmet

    2016-11-01

    Average of normals (AON) is a quality control procedure that is sensitive only to systematic errors that can occur in an analytical process in which patient test results are used. The aim of this study was to develop an alternative model in order to apply the AON quality control procedure to datasets that include qualitative values below the limit of detection (LoD). The reported patient test results for tumor markers, such as CA 15-3, CA 125, and CA 19-9, analyzed by two instruments, were retrieved from the information system over a period of 5 months, using calibrator and control materials with the same lot numbers. The median as a measure of central tendency and the median absolute deviation (MAD) as a measure of dispersion were used for the complementary model of the AON quality control procedure. The u_bias values, which were determined for the bias component of the measurement uncertainty, were partially linked to the percentages of the daily median values of the test results that fall within the control limits. The results for these tumor markers, in which the lower limits of the reference intervals are not medically important for clinical diagnosis and management, showed that the AON quality control procedure, using the MAD around the median, can be applied to datasets including qualitative values below the LoD.
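
    A minimal Python sketch of the median/MAD form of AON; imputing "< LoD" results at LoD/2 is an illustrative assumption, not necessarily the paper's exact handling:

      import numpy as np

      def aon_median_mad(results, lod):
          # Impute qualitative "< LoD" values, then summarize the day's patient
          # results by the median (center) and the MAD (dispersion).
          x = np.asarray(results, dtype=float)
          x = np.where(x < lod, lod / 2.0, x)
          med = np.median(x)
          mad = np.median(np.abs(x - med))
          return med, mad

      daily = [3.1, 8.4, 0.5, 15.2, 7.7, 0.5, 22.9]   # hypothetical CA 19-9 results, kU/L
      med, mad = aon_median_mad(daily, lod=1.0)
      # Flag a systematic shift when the daily median drifts outside
      # reference_median +/- 3 * reference_MAD control limits.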

  16. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial-time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.

  17. Parallel Algorithms and Patterns

    SciTech Connect

    Robey, Robert W.

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of such problems include sorting, searching, optimization, and matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are reductions, prefix scans, and ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
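
    As one concrete instance of such a pattern, a work-efficient (Blelloch) exclusive prefix scan; the operations at each level are mutually independent, which is what makes the pattern parallel-friendly. This is a sequential Python sketch of the data flow, assuming the input length is a power of two:

      def exclusive_scan(a):
          n = len(a)
          t = list(a)
          step = 1
          while step < n:                 # up-sweep (reduce) phase
              for i in range(step * 2 - 1, n, step * 2):
                  t[i] += t[i - step]
              step *= 2
          t[n - 1] = 0
          step = n // 2
          while step >= 1:                # down-sweep phase
              for i in range(step * 2 - 1, n, step * 2):
                  t[i - step], t[i] = t[i], t[i] + t[i - step]
              step //= 2
          return t

      print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))   # [0, 3, 4, 11, 11, 15, 16, 22]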

  18. Comments on "Including the effects of temperature-dependent opacities in the implicit Monte Carlo algorithm" by N.A. Gentile [J. Comput. Phys. 230 (2011) 5100-5114

    NASA Astrophysics Data System (ADS)

    Ghosh, Karabi

    2017-02-01

    We briefly comment on a paper by N.A. Gentile [J. Comput. Phys. 230 (2011) 5100-5114] in which the Fleck factor has been modified to include the effects of temperature-dependent opacities in the implicit Monte Carlo algorithm developed by Fleck and Cummings [1,2]. Instead of the Fleck factor f = 1/(1 + βcΔtσ_P), the author derived the modified Fleck factor g = 1/(1 + βcΔtσ_P − min[σ_P′(aT_r^4 − aT^4)cΔt/(ρC_V), 0]) to be used in the Implicit Monte Carlo (IMC) algorithm in order to obtain more accurate solutions with much larger time steps. Here β = 4aT^3/(ρC_V), σ_P is the Planck opacity, and σ_P′ = dσ_P/dT is the derivative of the Planck opacity with respect to the material temperature.
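
    The two factors transcribe directly into code; a Python sketch with all physical quantities passed explicitly (values would come from the problem at hand):

      def beta(a, T, rho, C_v):
          # beta = 4*a*T^3 / (rho*C_V)
          return 4.0 * a * T**3 / (rho * C_v)

      def fleck_factor(b, c, dt, sigma_p):
          # Standard Fleck factor: f = 1 / (1 + beta*c*dt*sigma_P).
          return 1.0 / (1.0 + b * c * dt * sigma_p)

      def modified_fleck_factor(b, c, dt, sigma_p, dsigma_dT, a, T_r, T, rho, C_v):
          # Gentile's modified factor g: the min(..., 0) term folds the
          # opacity derivative into the effective scattering fraction.
          corr = min(dsigma_dT * (a * T_r**4 - a * T**4) * c * dt / (rho * C_v), 0.0)
          return 1.0 / (1.0 + b * c * dt * sigma_p - corr)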

  19. Pattern Search Ranking and Selection Algorithms for Mixed-Variable Optimization of Stochastic Systems

    DTIC Science & Technology

    2004-09-01

    [Scanned DTIC record; only fragments of the abstract survive extraction.] The work addresses optimization problems with stochastic objective functions and a mixture of design variable types, for which the generalized pattern search (GPS) class of algorithms is combined with ranking and selection (R&S). Implementation alternatives include the use of modern R&S procedures designed to provide computational enhancements to the basic algorithm.

  20. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  1. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
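
    A toy Python version of the shift-and-mask subalgorithm: search (shift, mask) pairs until every key hashes to a distinct value, after which membership tests run in constant time. The full NASA work adds offsets, rotating masks, and a compression criterion not reproduced here:

      def synthesize(keys, max_shift=32, max_mask_bits=8):
          # Return the first (shift, mask) pair that separates all keys.
          for shift in range(max_shift):
              for width in range(1, max_mask_bits + 1):
                  mask = (1 << width) - 1
                  mapped = [(k >> shift) & mask for k in keys]
                  if len(set(mapped)) == len(keys):
                      return shift, mask
          return None

      keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
      print(synthesize(keys))   # e.g. (0, 15): the low nibble already separates them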

  2. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
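
    A generic Python sketch of a sampling-based stopping rule of the kind described: grow the sample until a normal-approximation confidence interval on the estimated optimality gap is tight enough. The sample-growth schedule and tolerance are illustrative assumptions, not Morton's specific rules:

      import math, random

      def ci_stopping_rule(sample_gap, n0=100, growth=1.5, eps=0.5, z=1.96, max_rounds=12):
          n = n0
          for _ in range(max_rounds):
              gaps = [sample_gap() for _ in range(n)]
              mean = sum(gaps) / n
              sd = math.sqrt(sum((g - mean) ** 2 for g in gaps) / (n - 1))
              half = z * sd / math.sqrt(n)      # CI half-width on the gap estimate
              if half <= eps:
                  return mean, half, n          # stop: estimate is tight enough
              n = int(n * growth)               # otherwise enlarge the sample
          return mean, half, n

      random.seed(1)
      print(ci_stopping_rule(lambda: max(0.0, random.gauss(1.0, 2.0))))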

  3. Quality control by HyperSpectral Imaging (HSI) in solid waste recycling: logics, algorithms and procedures

    NASA Astrophysics Data System (ADS)

    Bonifazi, Giuseppe; Serranti, Silvia

    2014-03-01

    In the secondary raw materials and recycling sectors, product quality increasingly represents the key issue to pursue in order to remain competitive in an ever more demanding market, where quality standards and product certification play a preeminent role. These goals assume particular importance when recycling actions are applied: recovered products, resulting from the processing of waste materials and/or dismissed products, are in fact always viewed with a certain suspicion. An adequate response of the industry to the market can only be given through the utilization of equipment and procedures ensuring pure, high-quality production at efficient levels of work and cost. These goals can be reached not only by adopting more efficient equipment and layouts, but also by introducing new processing logics able to realize full control of the handled material flow streams while fulfilling, at the same time, i) easy management of the procedures, ii) efficient use of energy, iii) the definition and set-up of reliable and robust procedures, iv) the possibility of implementing network connectivity capabilities aimed at remote monitoring and control of the processes, and v) full data storage, analysis and retrieval. Furthermore, ongoing legislation and regulation require the implementation of recycling infrastructure characterised by high resource efficiency and low environmental impact, both aspects being strongly linked to the original characteristics of the waste materials and/or dismissed products. For these reasons, an optimal recycling infrastructure design primarily requires full knowledge of the characteristics of the input waste. What was previously outlined requires the introduction of an important new concept in solid waste recycling, recycling-oriented characterization: the set of actions addressed to strategically determining selected attributes, in order to obtain goal-oriented data on waste for the development, implementation or improvement of recycling

  4. The Training Effectiveness Algorithm.

    ERIC Educational Resources Information Center

    Cantor, Jeffrey A.

    1988-01-01

    Describes the Training Effectiveness Algorithm, a systematic procedure for identifying the cause of reported training problems which was developed for use in the U.S. Navy. A two-step review by subject matter experts is explained, and applications of the algorithm to other organizations and training systems are discussed. (Author/LRW)

  5. Algorithm for navigated ESS.

    PubMed

    Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L

    2013-12-01

    ENT navigation has given new opportunities for performing Endoscopic Sinus Surgery (ESS) and improving the surgical outcome of patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus is not used on a daily basis. This paper presents an algorithm for the use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has the shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically.

  6. Quantum algorithms: an overview

    NASA Astrophysics Data System (ADS)

    Montanaro, Ashley

    2016-01-01

    Quantum computers are designed to outperform standard computers by running quantum algorithms. Areas in which quantum algorithms can be applied include cryptography, search and optimisation, simulation of quantum systems and solving large systems of linear equations. Here we briefly survey some known quantum algorithms, with an emphasis on a broad overview of their applications rather than their technical details. We include a discussion of recent developments and near-term applications of quantum algorithms.

  7. Uses of clinical algorithms.

    PubMed

    Margolis, C Z

    1983-02-04

    The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared with decision analysis in terms of clinical usefulness. Three objections to clinical algorithms are answered, including the objection that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.

  8. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2004-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included, allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
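
    A sketch of the masking-array idea in Python: masked-out genes are frozen, which removes them as decision variables (illustrative only, not Holst's implementation):

      import random

      def masked_mutation(genes, mask, rate=0.1, sigma=0.1):
          # Perturb only the genes whose mask entry is True.
          return [g + random.gauss(0.0, sigma) if m and random.random() < rate else g
                  for g, m in zip(genes, mask)]

      genes = [0.5, 1.2, -0.3, 2.0]
      mask = [True, False, True, False]   # genes 2 and 4 eliminated from the design space
      print(masked_mutation(genes, mask, rate=1.0))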

  9. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2005-01-01

    A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  10. Memetic algorithm for community detection in networks.

    PubMed

    Gong, Maoguo; Fu, Bao; Jiao, Licheng; Du, Haifeng

    2011-11-01

    Community structure is one of the most important properties in networks, and community detection has received an enormous amount of attention in recent years. Modularity is by far the most used and best known quality function for measuring the quality of a partition of a network, and many community detection algorithms are developed to optimize it. However, there is a resolution limit problem in modularity optimization methods. In this study, a memetic algorithm, named Meme-Net, is proposed to optimize another quality function, modularity density, which includes a tunable parameter that allows one to explore the network at different resolutions. Our proposed algorithm is a synergy of a genetic algorithm with a hill-climbing strategy as the local search procedure. Experiments on computer-generated and real-world networks show the effectiveness and the multiresolution ability of the proposed method.

  11. Determination of the relative economic impact of different molecular-based laboratory algorithms for respiratory viral pathogen detection, including Pandemic (H1N1), using a secure web based platform

    PubMed Central

    2011-01-01

    Background: During periods of crisis, laboratory planners may be faced with a need to make operational and clinical decisions in the face of limited information. To avoid this dilemma, our laboratory utilizes a secure web-based platform, Data Integration for Alberta Laboratories (DIAL), to make near real-time decisions. This manuscript utilizes the data collected by DIAL as well as laboratory test cost modeling to identify the relative economic impact of four proposed scenarios of testing for Pandemic H1N1 (2009) and other respiratory viral pathogens. Methods: Historical data were collected from the two waves of the pandemic using DIAL. Four proposed molecular testing scenarios were generated: A) Luminex respiratory virus panel (RVP) first with/without US Centers for Disease Control Influenza A Matrix gene assay (CDC-M), B) CDC-M first with/without RVP, C) RVP only, and D) CDC-M only. Relative cost estimates of the different testing algorithms were generated from a review of historical costs in the lab and were based on 2009 Canadian dollars. Results: Scenarios A and B had similar costs when the rate of influenza A was low (< 10%), with higher relative cost in Scenario A as incidence increased. Scenario A provided more information about mixed respiratory virus infection than Scenario B. Conclusions: No one approach is applicable to all conditions. Testing costs will vary depending on the test volume, the prevalence of influenza A strains, as well as other circulating viruses, and a more costly algorithm involving a combination of different tests may be chosen to ensure that test results are returned to the clinician more quickly. Costing should not be the only consideration for determination of laboratory algorithms. PMID:21645365

  12. Single-cluster algorithm for the site-bond-correlated Ising model

    NASA Astrophysics Data System (ADS)

    Campos, P. R. A.; Onody, R. N.

    1997-12-01

    We extend the Wolff algorithm to include correlated spin interactions in diluted magnetic systems. This algorithm is applied to study the site-bond-correlated Ising model on a two-dimensional square lattice. We use a finite-size scaling procedure to obtain the phase diagram in the temperature-concentration space. We also have verified that the autocorrelation time diminishes in the presence of dilution and correlation, showing that the Wolff algorithm performs even better in such situations.
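
    For reference, one standard Wolff single-cluster update for the pure 2D Ising model (J = kB = 1) in Python; the site-bond-correlated extension modifies the bond-activation probability and is not reproduced here:

      import math, random

      def wolff_step(spin, L, T):
          # Grow a cluster of aligned spins with bond probability
          # p = 1 - exp(-2J/kT), then flip the whole cluster at once.
          p_add = 1.0 - math.exp(-2.0 / T)
          seed = (random.randrange(L), random.randrange(L))
          s0 = spin[seed]
          cluster, stack = {seed}, [seed]
          while stack:
              x, y = stack.pop()
              for n in (((x + 1) % L, y), ((x - 1) % L, y),
                        (x, (y + 1) % L), (x, (y - 1) % L)):
                  if n not in cluster and spin[n] == s0 and random.random() < p_add:
                      cluster.add(n)
                      stack.append(n)
          for site in cluster:
              spin[site] = -s0

      L = 16
      spin = {(i, j): random.choice((-1, 1)) for i in range(L) for j in range(L)}
      wolff_step(spin, L, T=2.27)   # one update near the critical temperature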

  13. Exact Algorithms for Coloring Graphs While Avoiding Monochromatic Cycles

    NASA Astrophysics Data System (ADS)

    Talla Nobibon, Fabrice; Hurkens, Cor; Leus, Roel; Spieksma, Frits C. R.

    We consider the problem of deciding whether a given directed graph can be vertex partitioned into two acyclic subgraphs. Applications of this problem include testing rationality of collective consumption behavior, a subject in micro-economics. We identify classes of directed graphs for which the problem is easy and prove that the existence of a constant factor approximation algorithm is unlikely for an optimization version which maximizes the number of vertices that can be colored using two colors while avoiding monochromatic cycles. We present three exact algorithms, namely an integer-programming algorithm based on cycle identification, a backtracking algorithm, and a branch-and-check algorithm. We compare these three algorithms both on real-life instances and on randomly generated graphs. We find that for the latter set of graphs, every algorithm solves instances of considerable size within a few seconds; however, the CPU time of the integer-programming algorithm increases with the number of vertices in the graph while that of the two other procedures does not. For every algorithm, we also study empirically the transition from a high to a low probability of a YES answer as a function of a parameter of the problem. For real-life instances, the integer-programming algorithm fails to solve the largest instance after one hour while the other two algorithms solve it in about ten minutes.
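
    The feasibility check shared by all three exact algorithms is cheap: a two-colouring is valid iff each colour class induces an acyclic subgraph, which Kahn's algorithm tests in linear time. A minimal Python sketch:

      def is_acyclic(vertices, edges):
          # Kahn's algorithm on the subgraph induced by `vertices`.
          vs = set(vertices)
          adj = {v: [] for v in vs}
          indeg = {v: 0 for v in vs}
          for u, v in edges:
              if u in vs and v in vs:
                  adj[u].append(v)
                  indeg[v] += 1
          queue = [v for v in vs if indeg[v] == 0]
          seen = 0
          while queue:
              u = queue.pop()
              seen += 1
              for w in adj[u]:
                  indeg[w] -= 1
                  if indeg[w] == 0:
                      queue.append(w)
          return seen == len(vs)

      def valid_bipartition(part_a, part_b, edges):
          return is_acyclic(part_a, edges) and is_acyclic(part_b, edges)

      edges = [(1, 2), (2, 3), (3, 1), (3, 4)]          # a 3-cycle plus a tail
      print(valid_bipartition({1, 2}, {3, 4}, edges))   # True: the cycle is split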

  14. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  15. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamoy

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  16. Pyroshock prediction procedures

    NASA Astrophysics Data System (ADS)

    Piersol, Allan G.

    2002-05-01

    Given sufficient effort, pyroshock loads can be predicted by direct analytical procedures using Hydrocodes that analytically model the details of the pyrotechnic explosion and its interaction with adjacent structures, including nonlinear effects. However, it is more common to predict pyroshock environments using empirical procedures based upon extensive studies of past pyroshock data. Various empirical pyroshock prediction procedures are discussed, including those developed by the Jet Propulsion Laboratory, Lockheed-Martin, and Boeing.

  17. Algorithmic Procedure for Finding Semantically Related Journals.

    ERIC Educational Resources Information Center

    Pudovkin, Alexander I.; Garfield, Eugene

    2002-01-01

    Using citations, papers and references as parameters a relatedness factor (RF) is computed for a series of journals. Sorting these journals by the RF produces a list of journals most closely related to a specified starting journal. The method appears to select a set of journals that are semantically most similar to the target journal. The…

  18. Bypass surgery in limb salvage: inflow procedures.

    PubMed

    Bismuth, Jean; Duran, Cassidy

    2013-04-01

    Proper management of lower-extremity inflow vessel disease is critical to the success of distal interventions. Aortobifemoral bypass is the most effective means of treating aortoiliac disease, but this invasive procedure is not always ideal for a patient population that often has diffuse vascular disease and multiple comorbidities. Technologic advances and increasing experience have fundamentally altered the management algorithm for lower-extremity vascular lesions, and endovascular options have become the first-line therapy for Trans-Atlantic Inter-Society Guidelines (TASC) class A and B lesions. In fact, an endovascular first approach is being endorsed even for highly complex TASC C and even TASC D lesions. Other alternatives include minimally invasive (laparoscopic or robotic) options or extra-anatomic bypass procedures. Inadequate outflow can compromise any inflow procedure, but inflow treatment failures are the crux of all limb salvage in patients with lower-extremity vascular disease.

  19. Algorithmic commonalities in the parallel environment

    NASA Technical Reports Server (NTRS)

    Mcanulty, Michael A.; Wainer, Michael S.

    1987-01-01

    The ultimate aim of this project was to analyze procedures from substantially different application areas to discover what is either common or peculiar in the process of conversion to the Massively Parallel Processor (MPP). Three areas were identified: molecular dynamic simulation, production systems (rule systems), and various graphics and vision algorithms. To date, only selected graphics procedures have been investigated. They are the most readily available, and produce the most visible results. These include simple polygon patch rendering, raycasting against a constructive solid geometric model, and stochastic or fractal based textured surface algorithms. Only the simplest of conversion strategies, mapping a major loop to the array, has been investigated so far. It is not entirely satisfactory.

  20. FOHI-D: An iterative Hirshfeld procedure including atomic dipoles

    SciTech Connect

    Geldof, D.; Blockhuys, F.; Van Alsenoy, C.; Krishtal, A.

    2014-04-14

    In this work, a new partitioning method based on the FOHI method (fractional occupation Hirshfeld-I method) will be discussed. The new FOHI-D method uses an iterative scheme in which both the atomic charge and atomic dipole are calculated self-consistently. In order to induce the dipole moment on the atom, an electric field is applied during the atomic SCF calculations. Based on two sets of molecules, the atomic charge and intrinsic atomic dipole moment of hydrogen and chlorine atoms are compared using the iterative Hirshfeld (HI) method, the iterative Stockholder atoms (ISA) method, the FOHI method, and the FOHI-D method. The results obtained are further analyzed as a function of the group electronegativity of Boyd et al. [J. Am. Chem. Soc. 110, 4182 (1988); Boyd et al., J. Am. Chem. Soc. 114, 1652 (1992)] and De Proft et al. [J. Phys. Chem. 97, 1826 (1993)]. The molecular electrostatic potential (ESP) based on the HI, ISA, FOHI, and FOHI-D charges is compared with the ab initio ESP. Finally, the effect of adding HI, ISA, FOHI, and FOHI-D atomic dipoles to the multipole expansion as a function of the precision of the ESP is analyzed.

  1. Refraction, including prisms.

    PubMed

    Hiatt, R L

    1991-02-01

    The literature in the past year on refraction is replete with several isolated but very important topics that have been of interest to strabismologists and refractionists for many decades. The refractive changes in scleral buckling procedures include an increase in axial length as well as an increase in myopia, as would be expected. Tinted lenses in dyslexia show little positive effect in the nonasthmatic patients in one study. The use of spectacles or bifocals as a way to control increase in myopia is refuted in another report. It has been shown that in accommodative esotropia not all patients will be able to escape the use of bifocals in the teenage years, even though surgery might be performed. The hope that disposable contact lenses would cut down on the incidence of giant papillary conjunctivitis and keratitis has been given some credence, and the conventional theory that sclerosis alone is the cause of presbyopia is attacked. Also, gas permeable bifocal contact lenses are reviewed and the difficulties of correcting presbyopia by this method outlined. The practice of giving an aphakic less bifocal addition than a nonaphakic, based on the presumption of increased effective power, is challenged. In the review of prisms, the majority of articles concern prism adaptation. The most significant report is that of the Prism Adaptation Study Research Group (Arch Ophthalmol 1990, 108:1248-1256), showing that acquired esotropia in particular has an increased incidence of stable and full corrections surgically in the prism adaptation group versus the control group.(ABSTRACT TRUNCATED AT 250 WORDS)

  2. Fast autodidactic adaptive equalization algorithms

    NASA Astrophysics Data System (ADS)

    Hilal, Katia

    Autodidactic (blind) equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, based on an adaptive stochastic-gradient Bussgang-type algorithm, is given for deriving two low-computation-cost algorithms: one equivalent to the initial algorithm and the other with improved convergence properties thanks to a block criterion minimization. Two starting algorithms are reworked: the Godard algorithm and the decision-directed algorithm. Using a normalization procedure and block normalization, their performance is improved and their common points are identified. These common points are used to propose an algorithm retaining the advantages of both initial algorithms: it inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-directed algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the initial and normalized Godard algorithms. Simulations of these algorithms, carried out in a mobile radio context under severe propagation-channel conditions, gave a 75% reduction in the number of samples required for processing relative to the initial algorithms. The improvement in residual error was much smaller. This performance comes close to making autodidactic equalization usable in mobile radio systems.
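
    For context, a minimal numpy sketch of a normalized Godard (p = 2, constant-modulus) update, the Bussgang-type starting point discussed above; the tap count, step size, and dispersion constant are illustrative assumptions, not the thesis's tuned values.

```python
import numpy as np

# Normalized Godard (p = 2, constant-modulus) blind equalizer sketch.
# n_taps, mu, and r2 are illustrative assumptions.
def cma_equalize(x, n_taps=11, mu=1e-2, r2=1.0):
    w = np.zeros(n_taps, complex)
    w[n_taps // 2] = 1.0                       # center-spike initialization
    y = np.empty(len(x) - n_taps, complex)
    for n in range(len(y)):
        u = x[n:n + n_taps][::-1]              # regressor, most recent first
        y[n] = w.conj() @ u                    # equalizer output
        e = y[n] * (abs(y[n])**2 - r2)         # Godard p = 2 error term
        w -= mu / ((u.conj() @ u).real + 1e-12) * e.conj() * u  # normalized step
    return y, w

# Example: blind equalization of unit-modulus QPSK through a mild FIR channel.
rng = np.random.default_rng(0)
s = ((rng.integers(0, 2, 4000) * 2 - 1)
     + 1j * (rng.integers(0, 2, 4000) * 2 - 1)) / np.sqrt(2)
x = np.convolve(s, [1.0, 0.35, -0.2], mode="same")
y, _ = cma_equalize(x)
print(np.mean((np.abs(y[-1000:])**2 - 1)**2))  # residual dispersion shrinks
```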

  3. Procedural pediatric dermatology.

    PubMed

    Metz, Brandie J

    2013-04-01

    Due to many factors, including parental anxiety, a child's inability to understand the necessity of a procedure and a child's unwillingness to cooperate, it can be much more challenging to perform dermatologic procedures in children. This article reviews pre-procedural preparation of patients and parents, techniques for minimizing injection-related pain and optimal timing of surgical intervention. The risks and benefits of general anesthesia in the setting of pediatric dermatologic procedures are discussed. Additionally, the surgical approach to a few specific types of birthmarks is addressed.

  4. [Clinical algorithms in the treatment of status epilepticus in children].

    PubMed

    Zubcević, S; Buljina, A; Gavranović, M; Uzicanin, S; Catibusić, F

    1999-01-01

    The clinical algorithm is a text format that is specially suited to presenting a sequence of clinical decisions, teaching clinical decision making, and guiding patient care. Clinical algorithms are compared with decision analysis as to their clinical usefulness. We have tried to construct a clinical algorithm for managing status epilepticus in children that is applicable to our conditions. Most published algorithms on this subject include drugs and procedures that are not available at our hospital. We identified the performance requirements, defined the set of problems to be solved as well as who would solve them, developed drafts in several versions, and put them up for discussion with experts in this field. The algorithm was tested and revised, and graphical acceptability was achieved. In the algorithm we tried to define clearly how the clinician should make decisions and to provide appropriate feedback. In a one-year period of use we found this algorithm very useful in managing status epilepticus in children, as well as in teaching young doctors the specifics of algorithms and of this particular issue. Their feedback is that it provides a framework that facilitates thinking about clinical problems. Sometimes we hear the objection that algorithms may not apply to a specific patient. This objection is based on a misunderstanding of how algorithms are used and should be corrected by a proper explanation of their use. We conclude that methods should be sought for writing clinical algorithms that represent expert consensus. Clinical algorithms can then be written for many areas of medical decision making that can be standardized. Medical practice would then be presented to students more effectively and accurately, and understood better.

  5. Algorithms Could Automate Cancer Diagnosis

    NASA Technical Reports Server (NTRS)

    Baky, A. A.; Winkler, D. G.

    1982-01-01

    Five new algorithms constitute a complete statistical procedure for quantifying cell abnormalities from digitized images. The procedure could be the basis for automated detection and diagnosis of cancer. Its objective is to assign each cell an atypia status index (ASI), which quantifies the level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.

  6. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.
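
    To illustrate the Pareto ranking at the core of such an algorithm, a small sketch follows; the two objectives (flight time and propellant mass) and their values are invented stand-ins, not mission data.

```python
# Illustrative sketch of the Pareto-ranking step at the heart of a Pareto
# genetic algorithm: individuals are kept if no other individual dominates
# them on all objectives (both objectives are minimized here).

def pareto_front(population, objectives):
    scores = [tuple(f(p) for f in objectives) for p in population]
    front = []
    for i, si in enumerate(scores):
        dominated = any(all(a <= b for a, b in zip(sj, si)) and sj != si
                        for j, sj in enumerate(scores) if j != i)
        if not dominated:
            front.append(population[i])
    return front

# Toy trade-off: (flight time in years, propellant mass in kg).
missions = [(3.1, 420.0), (2.8, 500.0), (3.5, 410.0), (3.1, 430.0)]
print(pareto_front(missions, [lambda m: m[0], lambda m: m[1]]))
```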

  7. Performance evaluation of image processing algorithms on the GPU.

    PubMed

    Castaño-Díez, Daniel; Moser, Dominik; Schoenegger, Andreas; Pruggnaller, Sabine; Frangakis, Achilleas S

    2008-10-01

    The graphics processing unit (GPU), which was originally used exclusively for visualization purposes, has evolved into an extremely powerful co-processor. In the meantime, through the development of elaborate interfaces, the GPU can be used to process data and deal with computationally intensive applications. The speed-up factors attained compared to the central processing unit (CPU) are dependent on the particular application, as the GPU architecture gives the best performance for algorithms that exhibit high data parallelism and high arithmetic intensity. Here, we evaluate the performance of the GPU on a number of common algorithms used for three-dimensional image processing. The algorithms were developed on a new software platform called "CUDA", which allows a direct translation from C code to the GPU. The implemented algorithms include spatial transformations, real-space and Fourier operations, as well as pattern recognition procedures, reconstruction algorithms and classification procedures. In our implementation, the direct porting of C code to the GPU achieves typical acceleration values on the order of 10-20 times compared to a state-of-the-art conventional processor, but they vary depending on the type of algorithm. The gained speed-up comes with no additional costs, since the software runs on the GPU of the graphics card of common workstations.

  8. Dental Procedures.

    PubMed

    Ramponi, Denise R

    2016-01-01

    Dental problems are a common complaint in emergency departments in the United States. There is a wide variety of dental issues addressed in emergency department visits, such as dental caries, loose teeth, dental trauma, gingival infections, and dry socket syndrome. Review of the most common dental blocks and dental procedures will allow the practitioner the opportunity to make the patient more comfortable and reduce the amount of analgesia the patient will need upon discharge. Familiarity with the dental equipment and with tooth and mouth anatomy will help prepare the practitioner to perform these dental procedures.

  9. A File Organization and Maintenance Procedure for Dynamic Document Collections

    ERIC Educational Resources Information Center

    Crouch, Donald B.

    1975-01-01

    Describes a clustering algorithm designed for dynamic data bases and presents an update procedure which maintains an effective document classification without reclustering. The effectiveness of the algorithms is demonstrated for a subset of the Cranfield collection. (Author)

  10. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
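
    The half-interval (bisection) search named above is short enough to sketch directly; this Python rendering assumes only that the root is bracketed by a sign change.

```python
# A minimal half-interval (bisection) root search of the kind the article
# verifies; f(a) and f(b) must bracket a root with opposite signs.

def half_interval_root(f, a, b, tol=1e-10):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m            # root lies in [a, m]
        else:
            a, fa = m, f(m)  # root lies in [m, b]
    return (a + b) / 2

print(half_interval_root(lambda x: x**2 - 2, 0.0, 2.0))  # ~1.41421356
```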

  11. The E-MS Algorithm: Model Selection with Incomplete Data.

    PubMed

    Jiang, Jiming; Nguyen, Thuan; Rao, J Sunil

    2015-04-04

    We propose a procedure associated with the idea of the E-M algorithm for model selection in the presence of missing data. The idea extends the concept of parameters to include both the model and the parameters under the model, and thus allows the model to be part of the E-M iterations. We develop the procedure, known as the E-MS algorithm, under the assumption that the class of candidate models is finite. Some special cases of the procedure are considered, including E-MS with the generalized information criteria (GIC), and E-MS with the adaptive fence (AF; Jiang et al. 2008). We prove numerical convergence of the E-MS algorithm as well as consistency in model selection of the limiting model of the E-MS convergence, for E-MS with GIC and E-MS with AF. We study the impact on model selection of different missing data mechanisms. Furthermore, we carry out extensive simulation studies on the finite-sample performance of the E-MS with comparisons to other procedures. The methodology is also illustrated on a real data analysis involving QTL mapping for an agricultural study on barley grains.
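
    A hedged sketch of the E-MS idea on a toy problem: linear regression with missing responses, alternating an imputation step under the current model with model re-selection by BIC (standing in here for the GIC); all names and data below are illustrative, not the authors' implementation.

```python
import itertools
import numpy as np

def fit(X, y):
    """Least squares; returns coefficients and residual variance."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return beta, max(r @ r / len(y), 1e-12)

def e_ms(X, y, miss, models, n_iter=20):
    y = y.copy()
    y[miss] = y[~miss].mean()                 # crude initial imputation
    best = None
    for _ in range(n_iter):
        crit = []
        for m in models:                      # MS-step: minimize BIC
            beta, s2 = fit(X[:, m], y)
            bic = len(y) * np.log(s2) + len(m) * np.log(len(y))
            crit.append((bic, m, beta))
        bic, m, beta = min(crit, key=lambda t: t[0])
        y[miss] = X[np.ix_(miss, m)] @ beta   # E-step: re-impute under model
        if best == m:                         # selected model stabilized
            break
        best = m
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=200)
miss = rng.random(200) < 0.2
models = [list(m) for k in (1, 2, 3)
          for m in itertools.combinations(range(3), k)]
print(e_ms(X, y, miss, models))   # expect [0, 2]
```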

  12. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature-corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
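
    A toy sketch of the mixing and conversion steps described above; the 6 GHz ice and water emissivities are illustrative placeholders, not calibrated values.

```python
# Effective surface emissivity from ice concentration C and the (assumed)
# emissivities of ice and open water at 6 GHz, then conversion of a
# brightness temperature TB to an emissivity via TB = e * Ts.

E_ICE_6GHZ, E_WATER_6GHZ = 0.95, 0.60   # illustrative values only

def effective_emissivity(conc):
    return conc * E_ICE_6GHZ + (1.0 - conc) * E_WATER_6GHZ

def surface_temperature(tb_6ghz, conc):
    return tb_6ghz / effective_emissivity(conc)   # Ts = TB / e_eff

def emissivity(tb, ts):
    return tb / ts                                # e = TB / Ts

ts = surface_temperature(tb_6ghz=240.0, conc=0.9)
print(ts, emissivity(230.0, ts))  # a 37 GHz channel converted to emissivity
```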

  13. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) using grouping selection of input vectors. To improve on the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm has smaller steady-state estimation errors compared with existing algorithms.
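
    For reference, a sketch of the standard affine projection update applied to a set of selected input vectors; the paper's grouping/selection logic is not reproduced, so which columns enter U is left to the caller, and the step size and regularization are assumptions.

```python
import numpy as np

# Standard affine projection update over K selected input vectors
# (columns of U); mu and delta are illustrative assumptions.
def apa_update(w, U, d, mu=0.5, delta=1e-6):
    """U: (n_taps, K) selected input vectors; d: (K,) desired samples."""
    e = d - U.T @ w                                   # a priori errors
    g = np.linalg.solve(U.T @ U + delta * np.eye(U.shape[1]), e)
    return w + mu * U @ g, e

# Identification of an 8-tap system from noiseless data.
rng = np.random.default_rng(1)
w_true = rng.normal(size=8)
w = np.zeros(8)
for _ in range(500):
    U = rng.normal(size=(8, 4))
    d = U.T @ w_true
    w, _ = apa_update(w, U, d)
print(np.round(w - w_true, 4))   # should approach zero
```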

  14. Treatment algorithms in refractory partial epilepsy.

    PubMed

    Jobst, Barbara C

    2009-09-01

    An algorithm is a "step-by-step procedure for solving a problem or accomplishing some end....in a finite number of steps." (Merriam-Webster, 2009). Medical algorithms are decision trees to help with diagnostic and therapeutic decisions. For the treatment of epilepsy there is no generally accepted treatment algorithm, as individual epilepsy centers follow different diagnostic and therapeutic guidelines. This article presents two algorithms to guide decisions in the treatment of refractory partial epilepsy. The treatment algorithm describes a stepwise diagnostic and therapeutic approach to intractable medial temporal and neocortical epilepsy. The surgical algorithm guides decisions in the surgical treatment of neocortical epilepsy.

  15. Pump apparatus including deconsolidator

    DOEpatents

    Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew

    2014-10-07

    A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.

  16. Minimally invasive procedures

    PubMed Central

    Baltayiannis, Nikolaos; Michail, Chandrinos; Lazaridis, George; Anagnostopoulos, Dimitrios; Baka, Sofia; Mpoukovinas, Ioannis; Karavasilis, Vasilis; Lampaki, Sofia; Papaiwannou, Antonis; Karavergou, Anastasia; Kioumis, Ioannis; Pitsiou, Georgia; Katsikogiannis, Nikolaos; Tsakiridis, Kosmas; Rapti, Aggeliki; Trakada, Georgia; Zissimopoulos, Athanasios; Zarogoulidis, Konstantinos

    2015-01-01

    Minimally invasive procedures, which include laparoscopic surgery, use state-of-the-art technology to reduce the damage to human tissue when performing surgery. Minimally invasive procedures require small “ports” from which the surgeon inserts thin tubes called trocars. Carbon dioxide gas may be used to inflate the area, creating a space between the internal organs and the skin. Then a miniature camera (usually a laparoscope or endoscope) is placed through one of the trocars so the surgical team can view the procedure as a magnified image on video monitors in the operating room. Specialized equipment is inserted through the trocars based on the type of surgery. There are some advanced minimally invasive surgical procedures that can be performed almost exclusively through a single point of entry, meaning only one small incision, like the “uniport” video-assisted thoracoscopic surgery (VATS). Not only do these procedures usually provide equivalent outcomes to traditional “open” surgery (which sometimes requires a large incision), but minimally invasive procedures (using small incisions) may offer significant benefits as well: (I) faster recovery; (II) the patient remains hospitalized for fewer days; (III) less scarring; and (IV) less pain. In our current mini review we will present the minimally invasive procedures for thoracic surgery. PMID:25861610

  17. Self-adaptive incremental Newton-Raphson algorithms

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1980-01-01

    Multilevel self-adaptive Newton-Raphson type strategies are developed to improve the solution efficiency of nonlinear finite element simulations of statically loaded structures. The overall strategy involves three basic levels. The first level involves preliminary solution tunneling via primitive operators. Secondly, the solution is constantly monitored via quality/convergence/nonlinearity tests. Lastly, the third level involves self-adaptive algorithmic update procedures aimed at improving the convergence characteristics of the Newton-Raphson strategy. Numerical experiments are included to illustrate the results of the procedure.
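
    A minimal sketch of the monitor-and-adapt idea: a Newton-Raphson iteration whose residual is checked at every step, with the update damped whenever convergence stalls; the damping factors are assumptions, not the paper's multilevel operators.

```python
import numpy as np

# Newton-Raphson with convergence monitoring and self-adaptive step damping
# (a simple stand-in for the multilevel strategy described above).
def adaptive_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, float)
    r = residual(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(jacobian(x), -r)
        lam = 1.0
        while lam > 1e-4:                     # monitor: insist on descent
            r_new = residual(x + lam * dx)
            if np.linalg.norm(r_new) < np.linalg.norm(r):
                break
            lam *= 0.5                        # adapt: damp the update
        x, r = x + lam * dx, r_new
        if np.linalg.norm(r) < tol:
            return x
    return x

# Toy nonlinear system: x0^2 + x1 = 3, x0 + x1^2 = 5 (root at (1, 2)).
f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
print(adaptive_newton(f, J, [1.0, 1.0]))
```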

  18. Unstructured mesh quality assessment and upwind Euler solution algorithm validation

    NASA Astrophysics Data System (ADS)

    Woodard, Paul R.; Batina, John T.; Yang, Henry T. Y.

    1994-05-01

    Quality assessment procedures are described for two and three dimensional unstructured meshes. The procedures include measurement of minimum angles, element aspect ratios, stretching, and element skewness. Meshes about the ONERA M6 wing and the Boeing 747 transport configuration are generated using an advancing front method grid generation package of programs. Solutions of the Euler equations for these meshes are obtained at low angle of attack, transonic conditions. Results for these cases, obtained as part of a validation study, investigate accuracy of an implicit upwind Euler solution algorithm.
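
    Two of the named measures are easy to sketch for a triangle element; the exact definitions used in the paper may differ, so treat these as generic forms.

```python
import numpy as np

# Generic triangle quality measures: minimum interior angle (degrees) and
# an aspect ratio (longest edge / shortest altitude).
def triangle_quality(p0, p1, p2):
    p = [np.asarray(v, float) for v in (p0, p1, p2)]
    e = [p[(i + 1) % 3] - p[i] for i in range(3)]          # edge vectors
    L = [np.linalg.norm(v) for v in e]
    area = 0.5 * abs(e[0][0] * e[1][1] - e[0][1] * e[1][0])
    # Angle at vertex i lies between e[i] and -e[i-1].
    angles = [np.degrees(np.arccos(np.clip(
        np.dot(e[i], -e[(i - 1) % 3]) / (L[i] * L[(i - 1) % 3]), -1, 1)))
        for i in range(3)]
    aspect = max(L) / (2.0 * area / max(L))                # edge / altitude
    return min(angles), aspect

print(triangle_quality((0, 0), (1, 0), (0, 1)))   # (45.0, 2.0)
```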

  19. Optical modulator including graphene

    DOEpatents

    Liu, Ming; Yin, Xiaobo; Zhang, Xiang

    2016-06-07

    The present invention provides for a one or more layer graphene optical modulator. In a first exemplary embodiment the optical modulator includes an optical waveguide, a nanoscale oxide spacer adjacent to a working region of the waveguide, and a monolayer graphene sheet adjacent to the spacer. In a second exemplary embodiment, the optical modulator includes at least one pair of active media, where the pair includes an oxide spacer, a first monolayer graphene sheet adjacent to a first side of the spacer, and a second monolayer graphene sheet adjacent to a second side of the spacer, and at least one optical waveguide adjacent to the pair.

  20. Surgical Procedures Needed to Eradicate Infection in Knee Septic Arthritis.

    PubMed

    Dave, Omkar H; Patel, Karan A; Andersen, Clark R; Carmichael, Kelly D

    2016-01-01

    Septic arthritis of the knee is encountered on a regular basis by orthopedists and nonorthopedists. No established therapeutic algorithm exists for septic arthritis of the knee, and there is much variability in management. This study assessed the number of surgical procedures, arthroscopic or open, required to eradicate infection. The study was a retrospective analysis of 79 patients who were treated for septic knee arthritis from 1995 to 2011. Patients who were included in the study had native septic knee arthritis that had resolved with treatment consisting of irrigation and debridement, either open or arthroscopic. Logistic regression analysis was used to explore the relation between the interval between onset of symptoms and index surgery and the use of arthroscopy and the need for multiple procedures. Fifty-two patients met the inclusion criteria, and 53% were male, with average follow-up of 7.2 years (range, 1-16.2 years). Arthroscopic irrigation and debridement was performed in 70% of cases. On average, successful treatment required 1.3 procedures (SD, 0.6; range, 1-4 procedures). A significant relation (P=.012) was found between time from presentation to surgery and the need for multiple procedures. With arthroscopic irrigation and debridement, most patients with septic knee arthritis require only 1 surgical procedure to eradicate infection. The need for multiple procedures increases with time from onset of symptoms to surgery.

  1. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.

  2. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  3. YAMPA: Yet Another Matching Pursuit Algorithm for compressive sensing

    NASA Astrophysics Data System (ADS)

    Lodhi, Muhammad A.; Voronin, Sergey; Bajwa, Waheed U.

    2016-05-01

    State-of-the-art sparse recovery methods often rely on the restricted isometry property for their theoretical guarantees. However, they cannot explicitly incorporate metrics such as restricted isometry constants within their recovery procedures due to the computational intractability of calculating such metrics. This paper formulates an iterative algorithm, termed yet another matching pursuit algorithm (YAMPA), for recovery of sparse signals from compressive measurements. YAMPA differs from other pursuit algorithms in that: (i) it adapts to the measurement matrix using a threshold that is explicitly dependent on two computable coherence metrics of the matrix, and (ii) it does not require knowledge of the signal sparsity. Performance comparisons of YAMPA against other matching pursuit and approximate message passing algorithms are made for several types of measurement matrices. These results show that while state-of-the-art approximate message passing algorithms outperform other algorithms (including YAMPA) in the case of well-conditioned random matrices, they completely break down in the case of ill-conditioned measurement matrices. On the other hand, YAMPA and comparable pursuit algorithms not only result in reasonable performance for well-conditioned matrices, but their performance also degrades gracefully for ill-conditioned matrices. The paper also shows that YAMPA uniformly outperforms other pursuit algorithms for the case of thresholding parameters chosen in a clairvoyant fashion. Further, when combined with a simple and fast technique for selecting thresholding parameters in the case of ill-conditioned matrices, YAMPA outperforms other pursuit algorithms in the regime of low undersampling, although some of these algorithms can outperform YAMPA in the regime of high undersampling in this setting.
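
    For context, the shared skeleton of matching pursuit methods (here in its orthogonal variant); YAMPA's distinguishing features, the coherence-dependent threshold and independence from the sparsity level, are not reproduced, so the stopping rule below (a fixed number of atoms) is a simplification.

```python
import numpy as np

# Bare-bones orthogonal matching pursuit: greedily pick the column most
# correlated with the residual, then re-project onto the chosen support.
def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-correlated column
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s           # orthogonal re-projection
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
print(np.nonzero(omp(A, A @ x_true, 3))[0])          # expect [5, 50, 120]
```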

  4. Local flow management/profile descent algorithm. Fuel-efficient, time-controlled profiles for the NASA TSRV airplane

    NASA Technical Reports Server (NTRS)

    Groce, J. L.; Izumi, K. H.; Markham, C. H.; Schwab, R. W.; Thompson, J. L.

    1986-01-01

    The Local Flow Management/Profile Descent (LFM/PD) algorithm designed for the NASA Transport System Research Vehicle program is described. The algorithm provides fuel-efficient altitude and airspeed profiles consistent with ATC restrictions in a time-based metering environment over a fixed ground track. The model design constraints include accommodation of both published profile descent procedures and unpublished profile descents, incorporation of fuel efficiency as a flight profile criterion, operation within the performance capabilities of the Boeing 737-100 airplane with JT8D-7 engines, and conformity to standard air traffic navigation and control procedures. Holding and path stretching capabilities are included for long delay situations.

  5. Component evaluation testing and analysis algorithms.

    SciTech Connect

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  6. Classification procedure in limited angle tomography system

    SciTech Connect

    Chlewicki, W.; Baniukiewicz, P.; Chady, T.; Brykalski, A.

    2011-06-23

    In this work we propose the use of limited angle reconstruction algorithms combined with a procedure for defect detection and feature evaluation in three dimensions. The procedure consists of the following steps: acquisition of the X-ray projections, approximated limited angle 3D image reconstruction, and image preprocessing and classification.

  7. A parallel algorithm for the non-symmetric eigenvalue problem

    SciTech Connect

    Dongarra, J.; Sidani, M.

    1991-12-01

    This paper describes a parallel algorithm for computing the eigenvalues and eigenvectors of a non-symmetric matrix. The algorithm is based on a divide-and-conquer procedure and uses an iterative refinement technique.

  8. Including Jews in Multiculturalism.

    ERIC Educational Resources Information Center

    Langman, Peter F.

    1995-01-01

    Discusses reasons for the lack of attention to Jews as an ethnic minority within multiculturalism both by Jews and non-Jews; why Jews and Jewish issues need to be included; and addresses some of the issues involved in the ethical treatment of Jewish clients. (Author)

  9. Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and also descriptions and flowcharts for subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms have a nonlinear gain algorithm implemented that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree-of-freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.
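
    The third-order polynomial scaling is simple to sketch; the coefficients below are illustrative, not the tuned NASA coefficient sets.

```python
# Nonlinear gain of the kind described: each degree-of-freedom input is
# scaled by a third-order polynomial so small inputs pass nearly unscaled
# while large inputs are attenuated toward the motion envelope.
# Coefficients are illustrative assumptions.

def nonlinear_gain(u, c1=1.0, c2=0.0, c3=-0.15):
    return c1 * u + c2 * u**2 + c3 * u**3

for u in (0.1, 0.5, 1.0, 1.5):
    print(u, round(nonlinear_gain(u), 4))
```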

  10. Quarantine document system indexing procedure

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Quarantine Document System (QDS) is described, including the indexing procedures and the thesaurus of indexing terms. The QDS consists of these functional elements: acquisition, cataloging, indexing, storage, and retrieval. A complete listing of the collection and the thesaurus are included.

  11. Procedural knowledge

    NASA Technical Reports Server (NTRS)

    Georgeff, Michael P.; Lansky, Amy L.

    1986-01-01

    Much of commonsense knowledge about the real world is in the form of procedures or sequences of actions for achieving particular goals. In this paper, a formalism is presented for representing such knowledge using the notion of process. A declarative semantics for the representation is given, which allows a user to state facts about the effects of doing things in the problem domain of interest. An operational semantics is also provided, which shows how this knowledge can be used to achieve particular goals or to form intentions regarding their achievement. Given both semantics, the formalism additionally serves as an executable specification language suitable for constructing complex systems. A system based on this formalism is described, and examples involving control of an autonomous robot and fault diagnosis for NASA's Space Shuttle are provided.

  12. Numerical Boundary Condition Procedures

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.

  13. 47 CFR 1.9005 - Included services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Included services. 1.9005 Section 1.9005 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Spectrum Leasing Scope and Authority § 1.9005 Included services. The spectrum leasing policies and rules of this subpart apply to...

  14. 47 CFR 1.9005 - Included services.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Included services. 1.9005 Section 1.9005 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by Random Selection Spectrum Leasing Scope and Authority § 1.9005 Included services. The spectrum leasing policies and rules...

  15. 47 CFR 1.9005 - Included services.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Included services. 1.9005 Section 1.9005 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by Random Selection Spectrum Leasing Scope and Authority § 1.9005 Included services. The spectrum leasing policies and rules...

  16. 47 CFR 1.9005 - Included services.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Included services. 1.9005 Section 1.9005 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Spectrum Leasing Scope and Authority § 1.9005 Included services. The spectrum leasing policies and rules of this subpart apply to...

  17. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  18. Object-oriented algorithmic laboratory for ordering sparse matrices

    SciTech Connect

    Kumfert, Gary Karl

    2000-05-01

    We focus on two known NP-hard problems that have applications in sparse matrix computations: the envelope/wavefront reduction problem and the fill reduction problem. Envelope/wavefront reducing orderings have a wide range of applications including profile and frontal solvers, incomplete factorization preconditioning, graph reordering for cache performance, gene sequencing, and spatial databases. Fill reducing orderings are generally limited to--but an inextricable part of--sparse matrix factorization. Our major contribution to this field is the design of new and improved heuristics for these NP-hard problems and their efficient implementation in a robust, cross-platform, object-oriented software package. In this body of research, we (1) examine current ordering algorithms, analyze their asymptotic complexity, and characterize their behavior in model problems, (2) introduce new and improved algorithms that address deficiencies found in previous heuristics, (3) implement an object-oriented library of these algorithms in a robust, modular fashion without significant loss of efficiency, and (4) extend our algorithms and software to address both generalized and constrained problems. We stress that the major contribution is the algorithms and the implementation; the whole being greater than the sum of its parts. The initial motivation for implementing our algorithms in object-oriented software was to manage the inherent complexity. During our research came the realization that the object-oriented implementation enabled new possibilities: augmented algorithms that would not have been as natural to generalize from a procedural implementation. Some extensions are constructed from a family of related algorithmic components, thereby creating a poly-algorithm that can adapt its strategy to the properties of the specific problem instance dynamically. Other algorithms are tailored for special constraints by aggregating algorithmic components and having them collaboratively
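
    The thesis's own heuristics are not in standard libraries, but the classic envelope/bandwidth-reducing baseline they build on, reverse Cuthill-McKee, is available in scipy, which shows the kind of reduction being measured.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Envelope/bandwidth reduction with the classic reverse Cuthill-McKee
# ordering; the matrix below is a toy stand-in.
def bandwidth(A):
    r, c = A.nonzero()
    return int(np.max(np.abs(r - c)))

A = csr_matrix(np.array([[4, 0, 0, 1],
                         [0, 4, 1, 0],
                         [0, 1, 4, 0],
                         [1, 0, 0, 4]], float))
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm]                 # symmetrically permuted matrix
print(bandwidth(A), bandwidth(B))    # bandwidth shrinks after reordering
```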

  19. Coding for urologic office procedures.

    PubMed

    Dowling, Robert A; Painter, Mark

    2013-11-01

    This article summarizes current best practices for documenting, coding, and billing common office-based urologic procedures. Topics covered include general principles, basic and advanced urologic coding, creation of medical records that support compliant coding practices, bundled codes and unbundling, global periods, modifiers for procedure codes, when to bill for evaluation and management services during the same visit, coding for supplies, and laboratory and radiology procedures pertinent to urology practice. Detailed information is included for the most common urology office procedures, and suggested resources and references are provided. This information is of value to physicians, office managers, and their coding staff.

  20. New correction procedures for the fast field program which extend its range

    NASA Technical Reports Server (NTRS)

    West, M.; Sack, R. A.

    1990-01-01

    A fast field program (FFP) algorithm was developed, based on the method of Lee et al., for the prediction of sound pressure level from low-frequency, high-intensity sources. In order to permit accurate predictions at distances greater than 2 km, new correction procedures have had to be included in the algorithm. Certain functions, whose Hankel transforms can be determined analytically, are subtracted from the depth-dependent Green's function. The distance response is then obtained as the sum of these transforms and the fast Fourier transform (FFT) of the residual k-dependent function. One procedure, which permits the elimination of most complex exponentials, has allowed significant changes in the structure of the FFP algorithm, resulting in a substantial reduction in computation time.

  1. Nutritional therapies (including fosteum).

    PubMed

    Nieves, Jeri W

    2009-03-01

    Nutrition is important in promoting bone health and in managing an individual with low bone mass or osteoporosis. In adult women and men, known losses of bone mass and microarchitecture occur, and nutrition can help minimize these losses. In every patient, a healthy diet with adequate protein, fruits, vegetables, calcium, and vitamin D is required to maintain bone health. Recent reports on nutritional remedies for osteoporosis have highlighted the importance of calcium in youth and continued importance in conjunction with vitamin D as the population ages. It is likely that a calcium intake of 1200 mg/d is ideal, and there are some concerns about excessive calcium intakes. However, vitamin D intake needs to be increased in most populations. The ability of soy products, particularly genistein aglycone, to provide skeletal benefit has been recently studied, including some data that support a new medical food marketed as Fosteum (Primus Pharmaceuticals, Scottsdale, AZ).

  2. Performance of a cavity-method-based algorithm for the prize-collecting Steiner tree problem on graphs

    NASA Astrophysics Data System (ADS)

    Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo

    2012-08-01

    We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristic and the dhea solver, a branch-and-cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally, we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.

  3. A cloud masking algorithm for EARLINET lidar systems

    NASA Astrophysics Data System (ADS)

    Binietoglou, Ioannis; Baars, Holger; D'Amico, Giuseppe; Nicolae, Doina

    2015-04-01

    Cloud masking is an important first step in any aerosol lidar processing chain, as most data processing algorithms can only be applied to cloud-free observations. Up to now, the selection of a cloud-free time interval for data processing is typically performed manually, and this is one of the outstanding problems for automatic processing of lidar data in networks such as EARLINET. In this contribution we present initial developments of a cloud masking algorithm that permits the selection of the appropriate time intervals for lidar data processing based on uncalibrated lidar signals. The algorithm is based on a signal normalization procedure using the range of observed values of lidar returns, designed to work with different lidar systems with minimal user input. This normalization procedure can be applied to measurement periods of only a few hours, even if no suitable cloud-free interval exists, and thus can be used even when only a short period of lidar measurements is available. Clouds are detected based on a combination of criteria, including the magnitude of the normalized lidar signal and time-space edge detection performed using the Sobel operator. In this way the algorithm avoids misclassifying strong aerosol layers as clouds. Cloud detection is performed using the highest available time and vertical resolution of the lidar signals, allowing the effective detection of low-level clouds (e.g. cumulus humilis). Special attention is given to suppressing false cloud detection due to signal noise that can affect the algorithm's performance, especially during daytime. We present the details of the algorithm, the effect of lidar characteristics (space-time resolution, available wavelengths, signal-to-noise ratio) on detection performance, and the current strengths and limitations of the algorithm using lidar scenes from different lidar systems at different locations across Europe.
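
    A toy version of the two criteria described, assuming scipy is available; the normalization and thresholds below are illustrative, not the tuned EARLINET values.

```python
import numpy as np
from scipy.ndimage import sobel

# Normalize the signal by its observed range, then combine a magnitude
# threshold with Sobel edge strength in time-height space.
def cloud_mask(rcs, mag_thresh=0.75, edge_thresh=0.5):
    norm = (rcs - rcs.min()) / (rcs.max() - rcs.min() + 1e-12)  # to [0, 1]
    edges = np.hypot(sobel(norm, axis=0), sobel(norm, axis=1))
    strong_edge = edges / (edges.max() + 1e-12) > edge_thresh
    return (norm > mag_thresh) | strong_edge

rcs = np.full((50, 100), 0.1)        # weak background aerosol signal
rcs[20:25, 40:60] = 5.0              # a bright, sharply bounded "cloud"
print(cloud_mask(rcs).sum())         # number of pixels flagged as cloud
```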

  4. Problem solving with genetic algorithms and Splicer

    NASA Technical Reports Server (NTRS)

    Bayer, Steven E.; Wang, Lui

    1991-01-01

    Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
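
    For readers new to the concepts, a bare-bones genetic algorithm on the classic one-max problem follows; Splicer itself is not reproduced here, and all parameters are illustrative.

```python
import random

# Minimal genetic algorithm: bit-string individuals, fitness-proportional
# (roulette-wheel) selection, one-point crossover, and mutation, maximizing
# the number of 1-bits ("one-max").
def ga(n_bits=32, pop_size=40, gens=60, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(gens):
        fits = [sum(ind) for ind in pop]
        def pick():
            return random.choices(pop, weights=fits, k=1)[0]
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = random.randrange(1, n_bits)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=sum)

print(sum(ga()))   # close to 32 after 60 generations
```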

  5. Algorithm for Identifying Erroneous Rain-Gauge Readings

    NASA Technical Reports Server (NTRS)

    Rickman, Doug

    2005-01-01

    An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed to be erroneous readings. Heretofore, analyses of this type have been performed in burdensome manual procedures that have involved subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.
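
    A simplified stand-in for the approach: compare each gauge with its spatial neighbours and flag readings nonparametrically via the median and MAD rather than by fixed limits; the radius and cutoff are assumptions, not the article's calibrated values.

```python
import numpy as np

# Flag gauges whose readings deviate strongly from the robust (median/MAD)
# statistics of their spatial neighbours.
def flag_outliers(xy, rain, radius=10.0, z_cut=5.0):
    flags = np.zeros(len(rain), bool)
    for i in range(len(rain)):
        d = np.hypot(*(xy - xy[i]).T)
        nbr = rain[(d > 0) & (d < radius)]
        if len(nbr) < 3:
            continue                       # too few neighbours to judge
        med = np.median(nbr)
        mad = np.median(np.abs(nbr - med)) + 1e-9
        flags[i] = abs(rain[i] - med) / (1.4826 * mad) > z_cut
    return flags

xy = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], float)
rain = np.array([5.0, 5.2, 4.9, 5.1, 55.0])   # last gauge looks erroneous
print(flag_outliers(xy, rain))
```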

  6. Listless zerotree image compression algorithm

    NASA Astrophysics Data System (ADS)

    Lian, Jing; Wang, Ke

    2006-09-01

    In this paper, an improved zerotree structure and a new coding procedure are adopted, which improve the reconstructed image quality. Moreover, the lists in SPIHT are replaced by flag maps, and a lifting scheme is adopted to realize the wavelet transform, which lowers the memory requirements and speeds up the coding process. Experimental results show that the algorithm is more effective and efficient compared with SPIHT.

  7. Environmental Test Screening Procedure

    NASA Technical Reports Server (NTRS)

    Zeidler, Janet

    2000-01-01

    This procedure describes the methods to be used for environmental stress screening (ESS) of the Lightning Mapper Sensor (LMS) lens assembly. Unless otherwise specified, the procedures shall be completed in the order listed, prior to performance of the Acceptance Test Procedure (ATP). The first unit, S/N 001, will be subjected to the Qualification Vibration Levels, while the remainder will be tested at the Operational Level. Prior to ESS, all units will undergo Pre-ESS Functional Testing that includes measuring the on-axis and plus or minus 0.95 full field Modulation Transfer Function and Back Focal Length. Next, all units will undergo ESS testing, and then Acceptance testing per PR 460.

  8. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.

  9. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  10. Approximation algorithms

    PubMed Central

    Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.

    1997-01-01

    Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525

  11. Algorithms and the Teaching of Grammar.

    ERIC Educational Resources Information Center

    Edwards, K. Ffoulkes

    1967-01-01

    The construction of algorithms to present grammatical rules is advocated on the basis of clarity and ease of memorization. Algorithmic procedure is demonstrated for the introduction of subordinate clauses by conjunctions in German, and the formation of plural nouns in English. (AF)

  12. Computational and performance aspects of PCA-based face-recognition algorithms.

    PubMed

    Moon, H; Phillips, P J

    2001-01-01

    Algorithms based on principal component analysis (PCA) form the basis of numerous studies in the psychological and algorithmic face-recognition literature. PCA is a statistical technique and its incorporation into a face-recognition algorithm requires numerous design decisions. We explicitly state the design decisions by introducing a generic modular PCA-algorithm. This allows us to investigate these decisions, including those not documented in the literature. We experimented with different implementations of each module, and evaluated the different implementations using the September 1996 FERET evaluation protocol (the de facto standard for evaluating face-recognition algorithms). We experimented with (i) changing the illumination normalization procedure; (ii) studying effects on algorithm performance of compressing images with JPEG and wavelet compression algorithms; (iii) varying the number of eigenvectors in the representation; and (iv) changing the similarity measure in the classification process. We performed two experiments. In the first experiment, we obtained performance results on the standard September 1996 FERET large-gallery image sets. In the second experiment, we examined the variability in algorithm performance on different sets of facial images. The study was performed on 100 randomly generated image sets (galleries) of the same size. Our two most significant results are (i) changing the similarity measure produced the greatest change in performance, and (ii) that a difference in performance of ±10% is needed to distinguish between algorithms.
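
    A sketch of the classification stage the study varies, projecting onto PCA eigenvectors and comparing nearest-neighbour decisions under two similarity measures (L2 and cosine); the data are random stand-ins for face images.

```python
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(50, 400))          # 50 "images", 400 pixels each
probe = gallery[7] + 0.1 * rng.normal(size=400)

# PCA basis from the mean-centered gallery via SVD.
mean = gallery.mean(axis=0)
_, _, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
E = Vt[:20]                                   # top-20 eigenvectors
g = (gallery - mean) @ E.T                    # gallery coefficients
p = (probe - mean) @ E.T                      # probe coefficients

# Nearest neighbour under two similarity measures.
l2 = np.argmin(np.linalg.norm(g - p, axis=1))
cos = np.argmax(g @ p / (np.linalg.norm(g, axis=1) * np.linalg.norm(p)))
print(l2, cos)                                # both should report 7
```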

  13. Algorithms and Libraries

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our inquiry into algorithms and applications that would benefit by latency tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks a long and unpredictable latency due to remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, latency etc. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault-tolerance and computation migration of numerical algorithms and software. During the initial phase we tried to increase our understanding of the bottlenecks in single processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results we achieved in this study we are planning to study other architectures of interest, including development of cost models, and developing code generators appropriate to these architectures.

  14. Optimization of the double dosimetry algorithm for interventional cardiologists

    NASA Astrophysics Data System (ADS)

    Chumak, Vadim; Morgun, Artem; Bakhanova, Elena; Voloskiy, Vitalii; Borodynchik, Elena

    2014-11-01

    A double dosimetry method is recommended in interventional cardiology (IC) to assess occupational exposure; yet currently there is no common and universal algorithm for effective dose estimation. In this work, a flexible and adaptive algorithm-building methodology was developed and a specific algorithm applicable to typical irradiation conditions of IC procedures was obtained. It was shown that the obtained algorithm agrees well with experimental measurements and is less conservative than other known algorithms.
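
    For scale, one widely cited fixed-coefficient double-dosimetry combination (commonly attributed to Webster) is shown below; the paper's point is precisely that such coefficients should be adapted to the irradiation conditions, so these numbers are only an example, not the authors' optimized algorithm.

```python
# Fixed-coefficient double-dosimetry estimate of effective dose, commonly
# attributed to Webster; the coefficients are an example, not the paper's.

def effective_dose_webster(h_under, h_over):
    """h_under: dose at the waist under the apron; h_over: dose at the collar over it."""
    return 0.5 * h_under + 0.025 * h_over

print(effective_dose_webster(h_under=0.8, h_over=12.0))  # mSv, toy numbers
```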

  15. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  16. Improved Contact Algorithms for Implicit FE Simulation of Sheet Forming

    NASA Astrophysics Data System (ADS)

    Zhuang, S.; Lee, M. G.; Keum, Y. T.; Wagoner, R. H.

    2007-05-01

    Implicit finite element simulations of sheet forming processes do not always converge, particularly for complex tool geometries and rapidly changing contact. The SHEET-3 program exhibits remarkable stability and strong convergence by use of its special N-CFS algorithm and a sheet normal defined by the mesh, but these features alone do not always guarantee convergence and accuracy. An improved contact capability within the N-CFS algorithm is formulated taking into account sheet thickness within the framework of shell elements. Two imaginary surfaces offset from the mid-plane of shell elements are implemented along the mesh normal direction. An efficient contact searching algorithm based on the mesh-patch tool description is formulated along the mesh normal direction. The contact search includes a general global searching procedure and a new local searching procedure enforcing the contact condition along the mesh normal direction. The processes of unconstrained cylindrical bending and drawing through a drawbead are simulated to verify the accuracy and convergence of the improved contact algorithm.

  17. A subzone reconstruction algorithm for efficient staggered compatible remapping

    SciTech Connect

    Starinshak, D.P.; Owen, J.M.

    2015-09-01

    Staggered-grid Lagrangian hydrodynamics algorithms frequently make use of subzonal discretization of state variables for the purposes of improved numerical accuracy, generality to unstructured meshes, and exact conservation of mass, momentum, and energy. For Arbitrary Lagrangian–Eulerian (ALE) methods using a geometric overlay, it is difficult to remap subzonal variables in an accurate and efficient manner due to the number of subzone–subzone intersections that must be computed. This becomes prohibitive in the case of 3D, unstructured, polyhedral meshes. A new procedure is outlined in this paper to avoid direct subzonal remapping. The new algorithm reconstructs the spatial profile of a subzonal variable using remapped zonal and nodal representations of the data. The reconstruction procedure is cast as an under-constrained optimization problem. Enforcing conservation at each zone and node on the remapped mesh provides the set of equality constraints; the objective function corresponds to a quadratic variation per subzone between the values to be reconstructed and a set of target reference values. Numerical results for various pure-remapping and hydrodynamics tests are provided. Ideas for extending the algorithm to staggered-grid radiation-hydrodynamics are discussed as well as ideas for generalizing the algorithm to include inequality constraints.
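
    The reconstruction step can be abstracted as an equality-constrained quadratic program solved through its KKT system; the sketch below uses toy matrices, not a hydrodynamics discretization.

```python
import numpy as np

# Choose subzonal values x close to target reference values t (quadratic
# objective) subject to linear conservation constraints A x = b, by solving
# the KKT system  [I A^T; A 0] [x; lam] = [t; b].
def reconstruct(t, A, b):
    n, m = len(t), len(b)
    K = np.block([[np.eye(n), A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([t, b])
    x_lam = np.linalg.solve(K, rhs)
    return x_lam[:n]

# Four subzone values with two conservation constraints (two "zones" whose
# subzone sums are fixed).
t = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
b = np.array([4.0, 6.0])
x = reconstruct(t, A, b)
print(x, A @ x)   # constraints hold exactly
```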

  18. Combined procedures in laparoscopic surgery.

    PubMed

    Wadhwa, Atul; Chowbey, Pradeep K; Sharma, Anil; Khullar, Rajesh; Soni, Vandana; Baijal, Manish

    2003-12-01

    With advancements in minimal access surgery, combined laparoscopic procedures are now being performed for treating coexisting abdominal pathologies at the same surgery. In our center, we performed 145 combined surgical procedures from January 1999 to December 2002. Of the 145 procedures, 130 were combined laparoscopic/endoscopic procedures and 15 were open procedures combined with endoscopic procedures. The combinations included laparoscopic cholecystectomy, various hernia repairs, gynecological procedures like hysterectomy, salpingectomy, ovarian cystectomy, and tubal ligation, urological procedures, fundoplication, splenectomy, hemicolectomy, and cystogastrostomy. In the same period, 40 patients who had undergone laparoscopic cholecystectomy and 40 patients who had undergone ventral hernia repair were randomly selected for comparison of intraoperative outcomes with the combined procedure group. All the combined surgical procedures were performed successfully. The most common procedure was laparoscopic cholecystectomy with another endoscopic procedure, in 129 patients. The mean operative time was 100 minutes (range 30-280 minutes). The longest time was taken for the patient who had undergone laparoscopic splenectomy with renal transplant (280 minutes). The mean hospital stay was 3.2 days (range 1-21 days). The pain experienced in the postoperative period, measured on the visual analogue scale, ranged from 2 to 5 with a mean of 3.1. Of 145 patients who underwent combined surgical procedures, 5 patients developed fever in the immediate postoperative period, 7 patients had port site hematoma, 5 patients developed wound sepsis, and 10 patients had urinary retention. As long as the basic surgical principles and indications for combined procedures are adhered to, more patients with concomitant pathologies can enjoy the benefit of minimal access surgery. Minimal access surgery is feasible and appears to have several advantages in the simultaneous management of two different pathologies.

  19. Training for advanced endoscopic procedures.

    PubMed

    Feurer, Matthew E; Draganov, Peter V

    2016-06-01

    Advanced endoscopy has evolved from diagnostic ERCP to an ever-increasing array of therapeutic procedures including EUS with FNA, ablative therapies, deep enteroscopy, luminal stenting, endoscopic suturing and endoscopic mucosal resection among others. As these procedures have become increasingly more complex, the risk of potential complications has also risen. Training in advanced endoscopy involves more than obtaining a minimum number of therapeutic procedures. The means of assessing a trainee's competence level and ability to practice independently continues to be a matter of debate. The use of quality indicators to measure performance levels may be beneficial as more advanced techniques and procedures become available.

  20. Improved Chaff Solution Algorithm

    DTIC Science & Technology

    2009-03-01

    As part of the Technology Demonstration Project (TDP) on Shipboard Integration of Sensors and Weapon Systems (SISWS), an algorithm was developed to automatically determine…

  1. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  2. Proposed first-generation WSQ bit allocation procedure

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1993-09-08

    The Wavelet/Scalar Quantization (WSQ) gray-scale fingerprint image compression algorithm involves a symmetric wavelet transform (SWT) image decomposition followed by uniform scalar quantization of each subband. The algorithm is adaptive insofar as the bin widths for the scalar quantizers are image-specific and are included in the compressed image format. Since the decoder requires only the actual bin width values -- but not the method by which they were computed -- the standard allows for future refinements of the WSQ algorithm by improving the method used to select the scalar quantizer bin widths. This report proposes a bit allocation procedure for use with the first-generation WSQ encoder. In previous work a specific formula is provided for the relative sizes of the scalar quantizer bin widths in terms of the variances of the SWT subbands. An explicit specification for the constant of proportionality, q, that determines the absolute bin widths was not given. The actual compression ratio produced by the WSQ algorithm will generally vary from image to image depending on the amount of coding gain obtained by the run-length and Huffman coding stages of the algorithm, but testing performed by the FBI established that WSQ compression produces archival quality images at compression ratios of around 20 to 1. The bit allocation procedure described in this report possesses a control parameter, r, that can be set by the user to achieve a predetermined amount of lossy compression, effectively giving the user control over the amount of distortion introduced by quantization noise. The variability observed in final compression ratios is thus due only to differences in lossless coding gain from image to image, chiefly a result of the varying amounts of blank background surrounding the print area in the images. Experimental results are presented that demonstrate the proposed method's effectiveness.
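
    To make the role of a rate control parameter concrete, here is a heavily simplified, hypothetical sketch (not the FBI/WSQ specification): bin widths are taken proportional to each subband's standard deviation divided by a global scale q, and q is found by bisection so that the estimated entropy of the quantized indices matches a user-specified rate r in bits per pixel.

    import numpy as np

    def entropy_bits(indices):
        _, counts = np.unique(indices, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def rate_for_scale(subbands, q):
        bits, px = 0.0, 0
        for sb in subbands:
            width = max(sb.std() / q, 1e-9)          # subband bin width
            idx = np.round(sb / width).astype(int)   # uniform scalar quantizer
            bits += entropy_bits(idx) * sb.size
            px += sb.size
        return bits / px

    def find_scale(subbands, r, lo=1e-3, hi=1e3, iters=60):
        mid = np.sqrt(lo * hi)
        for _ in range(iters):                       # rate increases with q
            mid = np.sqrt(lo * hi)
            if rate_for_scale(subbands, mid) < r:
                lo = mid
            else:
                hi = mid
        return mid

    subbands = [np.random.randn(4096) * s for s in (8.0, 4.0, 2.0, 1.0)]
    q = find_scale(subbands, r=1.5)
    print(q, rate_for_scale(subbands, q))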

  3. Experimental validation of clock synchronization algorithms

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Graham, R. Lynn

    1992-01-01

    The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.

  4. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
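
    For reference, here is the classic activity-selection greedy mentioned above, with the dominance reading noted in a comment: among the remaining compatible activities, the one with the earliest finish time dominates the rest, so all other branches of the search can be pruned. Sketch only; the pair encoding is an assumption.

    # Among compatible remaining activities, earliest finish time dominates.
    def select_activities(activities):
        """activities: iterable of (start, finish); returns a maximum-size
        pairwise-compatible subset."""
        chosen, last_finish = [], float("-inf")
        for start, finish in sorted(activities, key=lambda a: a[1]):
            if start >= last_finish:        # compatible with all chosen so far
                chosen.append((start, finish))
                last_finish = finish
        return chosen

    print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10)]))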

  5. Kernel Affine Projection Algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Weifeng; Príncipe, José C.

    2008-12-01

    The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
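
    As a point of reference for the family discussed here, the sketch below implements the KLMS special case (KAPA with a single projection per step) with a Gaussian kernel: the prediction is a kernel expansion over past inputs, and each step appends the current input as a new center with coefficient eta times the prediction error. Parameter values are arbitrary.

    import numpy as np

    def gauss_kernel(a, b, width=1.0):
        return np.exp(-np.sum((a - b) ** 2) / (2 * width ** 2))

    def klms(stream, eta=0.2):
        """stream: iterable of (x, y) pairs; returns kernel centers and
        coefficients of the learned expansion f(x) = sum_i a_i k(c_i, x)."""
        centers, coeffs = [], []
        for x, y in stream:
            y_hat = sum(a * gauss_kernel(c, x) for c, a in zip(centers, coeffs))
            centers.append(x)
            coeffs.append(eta * (y - y_hat))   # scaled instantaneous error
        return centers, coeffs

    # Learn y = sin(x) online from noisy samples.
    rng = np.random.default_rng(0)
    stream = [(np.array([x]), np.sin(x) + 0.05 * rng.standard_normal())
              for x in rng.uniform(-3, 3, 200)]
    centers, coeffs = klms(stream)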

  6. A hybrid algorithm with GA and DAEM

    NASA Astrophysics Data System (ADS)

    Wan, HongJie; Deng, HaoJiang; Wang, XueWei

    2013-03-01

    Although the expectation-maximization (EM) algorithm has been widely used for finding maximum likelihood estimates of parameters in probabilistic models, it can become trapped at local maxima. To overcome this problem, the deterministic annealing EM (DAEM) algorithm was proposed and achieved better performance than the EM algorithm, but it is still not very effective at avoiding local maxima. In this paper, a solution is proposed by integrating GA and DAEM into one procedure to further improve solution quality. The population-based search of the genetic algorithm produces diverse solutions and thus increases the search space of DAEM; the proposed algorithm therefore reaches better solutions than DAEM alone, retaining the properties of DAEM while improving solutions through genetic operations. Experimental results on Gaussian mixture model parameter estimation demonstrate that the proposed algorithm achieves better performance.

  7. Revised Unfilling Procedure for Solid Lithium Lenses

    SciTech Connect

    Leveling, A.; /Fermilab

    2003-06-03

    A procedure for unfilling used lithium lenses has been described in Pbar Note 664. To date, the procedure has been used to disassemble lenses 20, 21, 17, 18, and 16. As a result of this work, some parts of the original procedure were found to be time consuming and ineffective. Modifications made to streamline the original procedure are discussed in this note, and the revised procedure is included.

  8. Optimizing remediation of an unconfined aquifer using a hybrid algorithm.

    PubMed

    Hsiao, Chin-Tsai; Chang, Liang-Cheng

    2005-01-01

    We present a novel hybrid algorithm, integrating a genetic algorithm (GA) and constrained differential dynamic programming (CDDP), to achieve remediation planning for an unconfined aquifer. The objective function includes both fixed and dynamic operation costs. GA determines the primary structure of the proposed algorithm, and a chromosome therein implemented by a series of binary digits represents a potential network design. The time-varying optimal operation cost associated with the network design is computed by the CDDP, in which is embedded a numerical transport model. Several computational approaches, including a chromosome bookkeeping procedure, are implemented to alleviate computational loading. Additionally, case studies that involve fixed and time-varying operating costs for confined and unconfined aquifers, respectively, are discussed to elucidate the effectiveness of the proposed algorithm. Simulation results indicate that the fixed costs markedly affect the optimal design, including the number and locations of the wells. Furthermore, the solution obtained using the confined approximation for an unconfined aquifer may be infeasible, as determined by an unconfined simulation.

  9. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  10. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.

  11. Non-intrusive parameter identification procedure user's guide

    NASA Technical Reports Server (NTRS)

    Hanson, G. D.; Jewell, W. F.

    1983-01-01

    Written in standard FORTRAN, NAS is capable of identifying linear as well as nonlinear relations between input and output parameters; the only restriction is that the input/output relation be linear with respect to the unknown coefficients of the estimation equations. The output of the identification algorithm can be specified to be in either the time domain (i.e., the estimation equation coefficients) or in the frequency domain (i.e., a frequency response of the estimation equation). The frame length ("window") over which the identification procedure is to take place can be specified to be any portion of the input time history, thereby allowing the freedom to start and stop the identification procedure within a time history. There also is an option which allows a sliding window, which gives a moving average over the time history. The NAS software also includes the ability to identify several assumed solutions simultaneously for the same or different input data.
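
    The estimation step is a linear least-squares fit repeated over a moving window. The following sketch (hypothetical names; not the NAS source) shows the idea for a model that is linear in the unknown coefficients, y ~ Phi * theta:

    import numpy as np

    def sliding_identify(phi, y, window, step=1):
        """phi: (N, p) regressor history; y: (N,) output history.
        Returns (start_index, theta_hat) pairs, one per window position."""
        estimates = []
        for start in range(0, len(y) - window + 1, step):
            sl = slice(start, start + window)
            theta, *_ = np.linalg.lstsq(phi[sl], y[sl], rcond=None)
            estimates.append((start, theta))
        return estimates

    # Example: identify a, b in y = a*u + b*u**2 from noisy data.
    rng = np.random.default_rng(1)
    u = rng.uniform(-1, 1, 500)
    y = 2.0 * u - 0.5 * u**2 + 0.01 * rng.standard_normal(500)
    phi = np.column_stack([u, u**2])
    for start, theta in sliding_identify(phi, y, window=100, step=200):
        print(start, theta)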

  12. A universal symmetry detection algorithm.

    PubMed

    Maurer, Peter M

    2015-01-01

    Research on symmetry detection focuses on identifying and detecting new types of symmetry. The paper presents an algorithm that is capable of detecting any type of permutation-based symmetry, including many types for which there are no existing algorithms. General symmetry detection is library-based, but symmetries that can be parameterized (i.e., total, partial, rotational, and dihedral symmetry) can be detected without using libraries. In many cases it is faster than existing techniques. Furthermore, it is simpler than most existing techniques and can easily be incorporated into existing software. The algorithm can also be used with virtually any type of matrix-based symmetry, including conjugate symmetry.

  13. Efficient estimation algorithms for a satellite-aided search and rescue mission

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Garza-Robles, R.

    1977-01-01

    The establishment of a search and rescue orbiting satellite system has been suggested as a means of locating distress signals from downed aircraft, small boats, and overland expeditions. Emissions from Emergency Locator Transmitters (ELTs), now available in most U.S. aircraft, are to be utilized in the positioning procedure. A description is presented of a set of Doppler navigation algorithms for extracting ELT position coordinates from Doppler data. The algorithms have been programmed for a small computing machine, and the resulting system has successfully processed both real and simulated Doppler data. A software system for solving the Doppler navigation problem must include an orbit propagator, a first-guess algorithm, and an algorithm for estimating longitude and latitude from Doppler data. Each of these components is considered.

  14. Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1981-01-01

    A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pro/post buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upper-bound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.

  15. Development and Testing of Data Mining Algorithms for Earth Observation

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    The new algorithms developed under this project included a principled procedure for classification of objects, events, or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables -- called the Markov Blanket -- sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented, and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD-style algorithms to the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer-duration climate measurements of temperature teleconnections.

  16. The global Minmax k-means algorithm.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and its initial positions are sometimes poor; after a bad initialization, the k-means algorithm easily settles into a poor local optimum. In this paper, we first modify the global k-means algorithm to eliminate singleton clusters, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global Minmax k-means algorithm. The proposed clustering method is tested on several popular data sets and compared to the k-means algorithm, the global k-means algorithm, and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.
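
    For orientation, here is a bare-bones sketch of the incremental global k-means skeleton that both the original and the proposed variant share; the MinMax weighting and singleton-elimination steps of the paper are omitted, and every data point is tried as the candidate for the new center.

    import numpy as np

    def kmeans(X, centers, iters=50):
        for _ in range(iters):
            labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                                else centers[j] for j in range(len(centers))])
        err = ((X - centers[labels]) ** 2).sum()
        return centers, err

    def global_kmeans(X, K):
        centers = X.mean(0, keepdims=True)        # k = 1: the global mean
        for _ in range(2, K + 1):
            best = None
            for cand in X:                        # deterministic global search
                trial, err = kmeans(X, np.vstack([centers, cand]))
                if best is None or err < best[1]:
                    best = (trial, err)
            centers = best[0]
        return centers

    X = np.random.default_rng(2).normal(size=(200, 2))
    print(global_kmeans(X, 3))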

  17. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  18. Bariatric Surgery Procedures

    MedlinePlus

    Bariatric surgical procedures cause weight loss by ... minimally invasive techniques (laparoscopic surgery). The most common bariatric surgery procedures are gastric bypass, sleeve gastrectomy, adjustable gastric ...

  19. Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features are included, such as a binning selection algorithm and a gene-space transformation procedure. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for GA convergence, but always converge.
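
    The notion of Pareto optimality used throughout can be stated in a few lines; the following sketch (minimization assumed) shows the dominance test and a naive front extraction of the kind any multi-objective GA builds on.

    def dominates(a, b):
        """a dominates b: no worse in every objective, better in at least one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(points):
        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]

    print(pareto_front([(1, 5), (2, 3), (3, 4), (4, 1), (2, 2)]))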

  20. Interventional radiology neck procedures.

    PubMed

    Zabala Landa, R M; Korta Gómez, I; Del Cura Rodríguez, J L

    2016-05-01

    Ultrasonography has become extremely useful in the evaluation of masses in the head and neck. It enables us to determine the anatomic location of the masses as well as the characteristics of the tissues that compose them, thus making it possible to orient the differential diagnosis toward inflammatory, neoplastic, congenital, traumatic, or vascular lesions, although it is necessary to use computed tomography or magnetic resonance imaging to determine the complete extension of certain lesions. The growing range of interventional procedures, mostly guided by ultrasonography, now includes biopsies, drainages, infiltrations, sclerosing treatments, and tumor ablation.

  1. An ROLAP Aggregation Algorithm with the Rules Being Specified

    NASA Astrophysics Data System (ADS)

    Zhengqiu, Weng; Tai, Kuang; Lina, Zhang

    This paper introduces the underlying theory of data warehouses and ROLAP, and presents a new kind of ROLAP aggregation algorithm that supports user-specified calculation rules. It addresses the shortcoming of the traditional aggregation algorithm, whose accuracy is low because it aggregates only by addition; by aggregating according to business rules, the proposed algorithm improves accuracy. Key designs and procedures are presented, and an experiment demonstrates its efficiency compared with the traditional method.

  2. Abstract models for the synthesis of optimization algorithms.

    NASA Technical Reports Server (NTRS)

    Meyer, G. G. L.; Polak, E.

    1971-01-01

    Systematic approach to the problem of synthesis of optimization algorithms. Abstract models for algorithms are developed which guide the inventive process toward 'conceptual' algorithms which may consist of operations that are inadmissible in a practical method. Once the abstract models are established, a set of methods for converting 'conceptual' algorithms falling into the class defined by the abstract models into 'implementable' iterative procedures is presented.

  3. Using DFX for Algorithm Evaluation

    SciTech Connect

    Beiriger, J.I.; Funkhouser, D.R.; Young, C.J.

    1998-10-20

    Evaluating whether or not a new seismic processing algorithm can improve the performance of the operational system can be problematic: it may be difficult to isolate the comparable piece of the operational system; it may be necessary to duplicate ancillary functions; and comparing results to the tuned, full-featured operational system may be an unsatisfactory basis on which to draw conclusions. Algorithm development and evaluation in an environment that more closely resembles the operational system can be achieved by integrating the algorithm with the custom user library of the Detection and Feature Extraction (DFX) code, developed by Science Applications International Corporation. This integration gives the seismic researcher access to all of the functionality of DFX, such as database access, waveform quality control, and station-specific tuning, and provides a more meaningful basis for evaluation. The goal of this effort is to make the DFX environment more accessible to seismic researchers for algorithm evaluation. Typically, a new algorithm will be developed as a C-language program with an ASCII test parameter file. The integration process should allow the researcher to focus on the new algorithm development, with minimum attention to integration issues. Customizing DFX, however, requires software engineering expertise, knowledge of the Scheme and C programming languages, and familiarity with the DFX source code. We use a C-language spatial coherence processing algorithm with a parameter and recipe file to develop a general process for integrating and evaluating a new algorithm in the DFX environment. To aid in configuring and managing the DFX environment, we develop a simple parameter management tool. We also identify and examine capabilities that could simplify the process further, thus reducing the barriers facing researchers in using DFX. These capabilities include additional parameter management features, a Scheme-language template for algorithm testing, a…

  4. Evaluation of Mechanical Losses in Piezoelectric Plates using Genetic algorithm

    NASA Astrophysics Data System (ADS)

    Arnold, F. J.; Gonçalves, M. S.; Massaro, F. R.; Martins, P. S.

    Numerical methods are used for the characterization of piezoelectric ceramics. A procedure based on a genetic algorithm is applied to find the physical coefficients and mechanical losses; the coefficients are estimated by minimizing a cost function. Electric impedances are calculated from Mason's model, with mechanical losses either constant or depending linearly on frequency. The results show that the percentage error in electric impedance over the investigated frequency interval decreases when frequency-dependent mechanical losses are included in the model. A more accurate characterization of piezoelectric ceramics should therefore treat mechanical losses as frequency dependent.
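
    A minimal real-coded GA of the kind described might look as follows; impedance_model is a hypothetical stand-in for the Mason-model computation, not the authors' code, and the parameter bounds are arbitrary.

    import numpy as np

    def impedance_model(freqs, params):
        r0, loss = params                      # toy damped-resonator model
        return r0 / np.abs(1 - (freqs / 1e6) ** 2 + 1j * loss * freqs / 1e6)

    def cost(params, freqs, z_meas):
        return np.mean(np.abs(impedance_model(freqs, params) - z_meas))

    def ga_fit(freqs, z_meas, pop=40, gens=100, sigma=0.05, seed=0):
        rng = np.random.default_rng(seed)
        P = rng.uniform([1.0, 0.01], [100.0, 1.0], size=(pop, 2))
        for _ in range(gens):
            scores = np.array([cost(p, freqs, z_meas) for p in P])
            parents = P[np.argsort(scores)][: pop // 2]   # truncation selection
            kids = parents[rng.integers(len(parents), size=pop - len(parents))]
            kids = kids * (1 + sigma * rng.standard_normal(kids.shape))  # mutate
            P = np.vstack([parents, kids])
        return min(P, key=lambda p: cost(p, freqs, z_meas))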

  5. 34 CFR 674.43 - Billing procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Billing procedures. 674.43 Section 674.43 Education..., DEPARTMENT OF EDUCATION FEDERAL PERKINS LOAN PROGRAM Due Diligence § 674.43 Billing procedures. (a) The term billing procedures, as used in this subpart, includes that series of actions routinely performed to...

  6. 34 CFR 674.43 - Billing procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 3 2011-07-01 2011-07-01 false Billing procedures. 674.43 Section 674.43 Education..., DEPARTMENT OF EDUCATION FEDERAL PERKINS LOAN PROGRAM Due Diligence § 674.43 Billing procedures. (a) The term billing procedures, as used in this subpart, includes that series of actions routinely performed to...

  7. Medical Service Clinical Laboratory Procedures--Bacteriology.

    ERIC Educational Resources Information Center

    Department of the Army, Washington, DC.

    This manual presents laboratory procedures for the differentiation and identification of disease agents from clinical materials. Included are procedures for the collection of specimens, preparation of culture media, pure culture methods, cultivation of the microorganisms in natural and simulated natural environments, and procedures in…

  8. Consent procedures in pediatric biobanks

    PubMed Central

    Giesbertz, Noor AA; Bredenoord, Annelien L; van Delden, Johannes JM

    2015-01-01

    The inclusion of children's samples in biobanks brings forward specific ethical issues. Guidelines indicate that children should be involved in the consent procedure. It is, however, unclear how to allocate an appropriate role for children. Knowledge of current practice will be helpful in addressing this issue. Therefore, we conducted an international multiple-case study on the child's role in consent procedures in pediatric biobanks. Four biobanks were included: (1) LifeLines, (2) Prevention and Incidence of Asthma and Mite Allergy (PIAMA), (3) Young-HUNT3 and (4) the Oxford Radcliffe Biobank contribution to the Children's Cancer and Leukaemia Group tissue bank (ORB/CCLG). Four themes linked to the child's role in the consent procedure emerged from the multiple-case study: (1) motives to involve the child, (2) informing the child, (3) the role of dissent, assent and consent and (4) voluntariness of children to participate. We conclude that biobank characteristics influence the biobank's motives to include children in the consent procedure. Moreover, the motives to include children influence how the children are involved in the consent procedure, and the extent to which children are able to make voluntary decisions as part of the consent procedure. This insight is valuable when designing pediatric biobank governance. PMID:25537361

  9. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  10. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data

  11. 77 FR 56698 - Air Traffic Procedures Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-13

    ... practices for standardization, revision, clarification, and upgrading of terminology and procedures. DATES... air traffic control procedures and practices for standardization, revision, clarification, and upgrading of terminology and procedures. It will also include: 1. Approval of Minutes; 2. Submission...

  12. The Conceptual Design Algorithm of Inland LNG Barges

    NASA Astrophysics Data System (ADS)

    Łozowicka, Dorota; Kaup, Magdalena

    2017-03-01

    The article concerns the problem of inland waterway transport of LNG. Its aim is to present an algorithm for the conceptual design of inland barges for LNG transport intended for operation on European waterways. The article describes the areas where LNG barges can operate, depending on the allowable operating parameters of the waterways. It presents existing architectural and construction solutions of barges for inland LNG transport, as well as the necessary equipment, given the nature of the cargo. The article then presents the procedure of the conceptual design of LNG barges, including navigation restrictions and functional and economic criteria. The conceptual design algorithm of LNG barges presented in the article allows preliminary design calculations, from which the main dimensions and parameters of the unit are obtained, depending on the transport task and the class of inland waterways on which the transport will be realized.

  13. 48 CFR 2805.503-70 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Acquisition Planning PUBLICIZING CONTRACT ACTIONS Paid Advertisements 2805.503-70 Procedures. (a) Agency... includes the names of newspapers or journals concerned, frequency and dates of proposed...

  14. Progress on the development of automated data analysis algorithms and software for ultrasonic inspection of composites

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Coughlin, Chris; Forsyth, David S.; Welter, John T.

    2014-02-01

    Progress is presented on the development and implementation of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. ADA processing results are presented for test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions.
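
    The two per-A-scan checks named above have a simple generic form; the sketch below is a schematic illustration with hypothetical gate limits and thresholds, not the authors' software.

    import numpy as np

    def analyze_ascan(signal, fs, gate=(5e-6, 40e-6), flaw_thresh=0.3,
                      backwall=(40e-6, 45e-6), dropout_thresh=0.5):
        t = np.arange(signal.size) / fs
        env = np.abs(signal)                          # crude envelope
        in_gate = (t >= gate[0]) & (t < gate[1])
        hits = np.flatnonzero(in_gate & (env > flaw_thresh))
        tof = t[hits[0]] if hits.size else None       # first threshold crossing
        bw = (t >= backwall[0]) & (t < backwall[1])
        dropout = env[bw].max(initial=0.0) < dropout_thresh
        return {"tof_indication": tof, "backwall_dropout": bool(dropout)}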

  15. Solar Eclipse Monitoring for Solar Energy Applications Using the Solar and Moon Position Algorithms

    SciTech Connect

    Reda, I.

    2010-03-01

    This report includes a procedure for implementing an algorithm (described by Jean Meeus) to calculate the Moon's zenith angle with an uncertainty of +/-0.001 degrees and azimuth angle with an uncertainty of +/-0.003 degrees. The step-by-step format presented here simplifies the complicated steps Meeus describes for calculating the Moon's position, and focuses on the Moon instead of the planets and stars. It also introduces some changes to accommodate solar radiation applications.

  16. Modular algorithm concept evaluation tool (MACET) sensor fusion algorithm testbed

    NASA Astrophysics Data System (ADS)

    Watson, John S.; Williams, Bradford D.; Talele, Sunjay E.; Amphay, Sengvieng A.

    1995-07-01

    Target acquisition in a high clutter environment in all-weather at any time of day represents a much needed capability for the air-to-surface strike mission. A considerable amount of the research at the Armament Directorate at Wright Laboratory, Advanced Guidance Division WL/MNG, has been devoted to exploring various seeker technologies, including multi-spectral sensor fusion, that may yield a cost efficient system with these capabilities. Critical elements of any such seekers are the autonomous target acquisition and tracking algorithms. These algorithms allow the weapon system to operate independently and accurately in realistic battlefield scenarios. In order to assess the performance of the multi-spectral sensor fusion algorithms being produced as part of the seeker technology development programs, the Munition Processing Technology Branch of WL/MN is developing an algorithm testbed. This testbed consists of the Irma signature prediction model, data analysis workstations, such as the TABILS Analysis and Management System (TAMS), and the Modular Algorithm Concept Evaluation Tool (MACET) algorithm workstation. All three of these components are being enhanced to accommodate multi-spectral sensor fusion systems. MACET is being developed to provide a graphical interface driven simulation by which to quickly configure algorithm components and conduct performance evaluations. MACET is being developed incrementally with each release providing an additional channel of operation. To date MACET 1.0, a passive IR algorithm environment, has been delivered. The second release, MACET 1.1 is presented in this paper using the MMW/IR data from the Advanced Autonomous Dual Mode Seeker (AADMS) captive flight demonstration. Once completed, the delivered software from past algorithm development efforts will be converted to the MACET library format, thereby providing an on-line database of the algorithm research conducted to date.

  17. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning

    SciTech Connect

    Chen Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.

    2010-09-15

    Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.

  18. A retrodictive stochastic simulation algorithm

    SciTech Connect

    Vaughan, T.G. Drummond, P.D.; Drummond, A.J.

    2010-05-20

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
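
    For context, the predictive counterpart being complemented is the Gillespie direct method; a minimal sketch:

    import numpy as np

    def ssa(x0, stoich, rates, t_end, seed=None):
        """x0: initial counts; stoich: (n_reactions, n_species) updates;
        rates(x) -> propensity vector. Returns the sampled trajectory."""
        rng = np.random.default_rng(seed)
        t, x = 0.0, np.array(x0, dtype=float)
        traj = [(t, x.copy())]
        while t < t_end:
            a = rates(x)
            a0 = a.sum()
            if a0 <= 0:                       # no reaction can fire
                break
            t += rng.exponential(1.0 / a0)    # waiting time to next event
            j = rng.choice(len(a), p=a / a0)  # which reaction fires
            x += stoich[j]
            traj.append((t, x.copy()))
        return traj

    # Example: decay A -> 0 with propensity 0.5 * A.
    traj = ssa([100], np.array([[-1.0]]), lambda x: np.array([0.5 * x[0]]), 10.0)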

  19. Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"

    SciTech Connect

    Petzold, Linda R.

    2012-10-25

    Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fundamental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) theoretical and practical foundations for accelerated discrete stochastic simulation (tau-leaping); (2) dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; (4) development of high-performance SSA algorithms.

  20. 42 CFR 493.1251 - Standard: Procedure manual.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Procedure manual. 493.1251 Section 493... Systems § 493.1251 Standard: Procedure manual. (a) A written procedure manual for all tests, assays, and.... (b) The procedure manual must include the following when applicable to the test procedure:...

  1. Deep Attack Map Exercise (DAME) Game Rules and Operating Procedures

    DTIC Science & Technology

    1983-02-01

    wargame for use in the Close Combat (Heavy) Mission Area Analysis. Using a map board, a set of computer algorithms, and manual rules, the wargame...logistics, and command and control. This report documents the rules, operating procedures, and computer algorithms which were used in the map game.

  2. A proof of convergence of the concave-convex procedure using Zangwill's theory.

    PubMed

    Sriperumbudur, Bharath K; Lanckriet, Gert R G

    2012-06-01

    The concave-convex procedure (CCCP) is an iterative algorithm that solves d.c. (difference of convex functions) programs as a sequence of convex programs. In machine learning, CCCP is extensively used in many learning algorithms, including sparse support vector machines (SVMs), transductive SVMs, and sparse principal component analysis. Though CCCP is widely used in many applications, its convergence behavior has received little specific attention. Yuille and Rangarajan analyzed its convergence in their original paper; however, we believe the analysis is not complete. The convergence of CCCP can be derived from the convergence of the d.c. algorithm (DCA), proposed in the global optimization literature to solve general d.c. programs, whose proof relies on d.c. duality. In this note, we follow a different reasoning and show how Zangwill's global convergence theory of iterative algorithms provides a natural framework to prove the convergence of CCCP. This underlines Zangwill's theory as a powerful and general framework for dealing with the convergence issues of iterative algorithms, having also been used to prove the convergence of algorithms such as expectation-maximization and generalized alternating minimization. We provide a rigorous analysis of the convergence of CCCP by addressing two questions: when does CCCP find a local minimum or a stationary point of the d.c. program under consideration, and when does the sequence generated by CCCP converge? We also present an open problem on the issue of local convergence of CCCP.
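
    A one-dimensional toy example makes the CCCP iteration concrete. Take f(x) = x**4 - 4*x**2, a d.c. function with convex part x**4 and concave part -4*x**2. CCCP linearizes the concave part at the current iterate x_k and minimizes the convex surrogate x**4 - 4*x_k**2 - 8*x_k*(x - x_k), whose minimizer is available in closed form. (Illustration only; not from the note under review.)

    import numpy as np

    def cccp(x0, iters=30):
        x = x0
        for _ in range(iters):
            # argmin of the surrogate: 4x**3 - 8*x_k = 0  =>  x = cbrt(2*x_k)
            x = np.cbrt(2.0 * x)
        return x

    print(cccp(0.5))   # converges to sqrt(2) ~ 1.4142, a stationary point of f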

  3. Improved algorithm for calculating the Chandrasekhar function

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2013-02-01

    Theoretical models of electron transport in condensed matter require an effective source of the Chandrasekhar H(x,omega) function. A code providing the H(x,omega) function has to be both accurate and very fast. The current revision of the code published earlier [A. Jablonski, Comput. Phys. Commun. 183 (2012) 1773] decreased the running time, averaged over different pairs of arguments x and omega, by a factor of more than 20. The decrease of the running time in the range of small values of the argument x, less than 0.05, is even more pronounced, reaching a factor of 30. The accuracy of the current code is not affected, and is typically better than 12 decimal places. New version program summary: Program title: CHANDRAS_v2. Catalogue identifier: AEMC_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMC_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 976. No. of bytes in distributed program, including test data, etc.: 11416. Distribution format: tar.gz. Programming language: Fortran 90. Computer: Any computer with a Fortran 90 compiler. Operating system: Windows 7, Windows XP, Unix/Linux. RAM: 0.7 MB. Classification: 2.4, 7.2. Catalogue identifier of previous version: AEMC_v1_0. Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 1773. Does the new version supersede the old program: Yes. Nature of problem: An attempt has been made to develop a subroutine that calculates the Chandrasekhar function with high accuracy, of at least 10 decimal places; simultaneously, this subroutine should be very fast. Both requirements stem from the theory of electron transport in condensed matter. Solution method: Two algorithms were developed, each based on a different integral representation of the Chandrasekhar function. The final algorithm is obtained by mixing these two algorithms.

  4. Proper bibeta ROC model: algorithm, software, and performance evaluation

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Hu, Nan

    2016-03-01

    Semi-parametric models are often used to fit data collected in receiver operating characteristic (ROC) experiments to obtain a smooth ROC curve and ROC parameters for statistical inference purposes. The proper bibeta model as recently proposed by Mossman and Peng enjoys several theoretical properties. In addition to having explicit density functions for the latent decision variable and an explicit functional form of the ROC curve, the two parameter bibeta model also has simple closed-form expressions for true-positive fraction (TPF), false-positive fraction (FPF), and the area under the ROC curve (AUC). In this work, we developed a computational algorithm and R package implementing this model for ROC curve fitting. Our algorithm can deal with any ordinal data (categorical or continuous). To improve accuracy, efficiency, and reliability of our software, we adopted several strategies in our computational algorithm including: (1) the LABROC4 categorization to obtain the true maximum likelihood estimation of the ROC parameters; (2) a principled approach to initializing parameters; (3) analytical first-order and second-order derivatives of the likelihood function; (4) an efficient optimization procedure (the L-BFGS algorithm in the R package "nlopt"); and (5) an analytical delta method to estimate the variance of the AUC. We evaluated the performance of our software with intensive simulation studies and compared with the conventional binormal and the proper binormal-likelihood-ratio models developed at the University of Chicago. Our simulation results indicate that our software is highly accurate, efficient, and reliable.

  5. A general construction for parallelizing Metropolis−Hastings algorithms

    PubMed Central

    Calderhead, Ben

    2014-01-01

    Markov chain Monte Carlo methods (MCMC) are essential tools for solving many modern-day statistical and computational problems; however, a major limitation is the inherently sequential nature of these algorithms. In this paper, we propose a natural generalization of the Metropolis−Hastings algorithm that allows for parallelizing a single chain using existing MCMC methods. We do so by proposing multiple points in parallel, then constructing and sampling from a finite-state Markov chain on the proposed points such that the overall procedure has the correct target density as its stationary distribution. Our approach is generally applicable and straightforward to implement. We demonstrate how this construction may be used to greatly increase the computational speed and statistical efficiency of a variety of existing MCMC methods, including Metropolis-Adjusted Langevin Algorithms and Adaptive MCMC. Furthermore, we show how it allows for a principled way of using every integration step within Hamiltonian Monte Carlo methods; our approach increases robustness to the choice of algorithmic parameters and results in increased accuracy of Monte Carlo estimates with little extra computational cost. PMID:25422442
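
    For contrast with the parallel construction, here is the sequential random-walk Metropolis-Hastings baseline being generalized; the target and step size are arbitrary choices for illustration.

    import numpy as np

    def metropolis_hastings(log_pi, x0, steps, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        lp = log_pi(x)
        chain = [x.copy()]
        for _ in range(steps):
            prop = x + scale * rng.standard_normal(x.shape)  # symmetric proposal
            lp_prop = log_pi(prop)
            if np.log(rng.uniform()) < lp_prop - lp:         # accept/reject
                x, lp = prop, lp_prop
            chain.append(x.copy())
        return np.array(chain)

    # Sample a standard normal target.
    samples = metropolis_hastings(lambda x: -0.5 * np.sum(x**2), [0.0], 5000)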

  6. A Frequency-Domain Substructure System Identification Algorithm

    NASA Technical Reports Server (NTRS)

    Blades, Eric L.; Craig, Roy R., Jr.

    1996-01-01

    A new frequency-domain system identification algorithm is presented for system identification of substructures, such as payloads to be flown aboard the Space Shuttle. In the vibration test, all interface degrees of freedom where the substructure is connected to the carrier structure are either subjected to active excitation or are supported by a test stand with the reaction forces measured. The measured frequency-response data is used to obtain a linear, viscous-damped model with all interface-degree of freedom entries included. This model can then be used to validate analytical substructure models. This procedure makes it possible to obtain not only the fixed-interface modal data associated with a Craig-Bampton substructure model, but also the data associated with constraint modes. With this proposed algorithm, multiple-boundary-condition tests are not required, and test-stand dynamics is accounted for without requiring a separate modal test or finite element modeling of the test stand. Numerical simulations are used in examining the algorithm's ability to estimate valid reduced-order structural models. The algorithm's performance when frequency-response data covering narrow and broad frequency bandwidths is used as input is explored. Its performance when noise is added to the frequency-response data and the use of different least squares solution techniques are also examined. The identified reduced-order models are also compared for accuracy with other test-analysis models and a formulation for a Craig-Bampton test-analysis model is also presented.

  7. Development of administrative data algorithms to identify patients with critical limb ischemia.

    PubMed

    Bekwelem, Wobo; Bengtson, Lindsay G S; Oldenburg, Niki C; Winden, Tamara J; Keo, Hong H; Hirsch, Alan T; Duval, Sue

    2014-12-01

    Administrative data have been used to identify patients with various diseases, yet no prior study has determined the utility of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM)-based codes to identify CLI patients. CLI cases (n=126), adjudicated by a vascular specialist, were carefully defined and enrolled in a hospital registry. Controls were frequency matched to cases on age, sex and admission date in a 2:1 ratio. ICD-9-CM codes for all patients were extracted. Algorithms were developed using frequency distributions of these codes, risk factors and procedures prevalent in CLI. The sensitivity for each algorithm was calculated and applied within the hospital system to identify CLI patients not included in the registry. Sensitivity ranged from 0.29 to 0.92. An algorithm based on diagnosis and procedure codes exhibited the best overall performance (sensitivity of 0.92). Each algorithm had differing CLI identification characteristics based on patient location. Administrative data can be used to identify CLI patients within a health system. The algorithms, developed from these data, can serve as a tool to facilitate clinical care, research, quality improvement, and population surveillance.
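
    The best-performing algorithm combines diagnosis and procedure codes; schematically it has the following shape. The code sets below are hypothetical placeholders, not the study's validated lists.

    CLI_DX = {"440.22", "440.23", "440.24"}        # example diagnosis codes
    CLI_PROC = {"39.50", "39.29", "84.15"}         # example procedure codes

    def flags_cli(dx_codes, proc_codes):
        return bool(CLI_DX & set(dx_codes)) and bool(CLI_PROC & set(proc_codes))

    def sensitivity_and_ppv(patients):
        """patients: iterable of (dx_codes, proc_codes, is_true_case)."""
        tp = sum(1 for d, p, y in patients if y and flags_cli(d, p))
        fp = sum(1 for d, p, y in patients if not y and flags_cli(d, p))
        fn = sum(1 for d, p, y in patients if y and not flags_cli(d, p))
        return tp / (tp + fn), tp / (tp + fp)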

  8. A unified treatment of some iterative algorithms in signal processing and image reconstruction

    NASA Astrophysics Data System (ADS)

    Byrne, Charles

    2004-02-01

    Let T be a (possibly nonlinear) continuous operator on a Hilbert space \(\mathcal{H}\). If, for some starting vector x, the orbit sequence \(\{T^k x,\ k = 0, 1, \ldots\}\) converges, then the limit z is a fixed point of T; that is, Tz = z. An operator N on a Hilbert space \(\mathcal{H}\) is nonexpansive (ne) if, for each x and y in \(\mathcal{H}\), \(\|Nx - Ny\| \leq \|x - y\|\). Even when N has fixed points, the orbit sequence \(\{N^k x\}\) need not converge; consider the example N = -I, where I denotes the identity operator. However, for any \(\alpha \in (0,1)\) the iterative procedure defined by \(x^{k+1} = (1-\alpha)x^k + \alpha N x^k\) converges (weakly) to a fixed point of N whenever such points exist. This is the Krasnoselskii-Mann (KM) approach to finding fixed points of ne operators. A wide variety of iterative procedures used in signal processing and image reconstruction and elsewhere are special cases of the KM iterative procedure, for particular choices of the ne operator N. These include the Gerchberg-Papoulis method for bandlimited extrapolation, the SART algorithm of Anderson and Kak, the Landweber and projected Landweber algorithms, simultaneous and sequential methods for solving the convex feasibility problem, the ART and Cimmino methods for solving linear systems of equations, the CQ algorithm for solving the split feasibility problem, and Dolidze's procedure for the variational inequality problem for monotone operators.
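
    The averaged iteration is two lines of code. A small numerical sketch using the abstract's own example N = -I, plus a projection operator, both nonexpansive:

      import numpy as np

      def km_iterate(N, x, alpha=0.5, n_iter=100):
          """Krasnoselskii-Mann iteration: x <- (1 - alpha) x + alpha N(x)."""
          for _ in range(n_iter):
              x = (1 - alpha) * x + alpha * N(x)
          return x

      x0 = np.array([3.0, -4.0])
      # The abstract's example: N = -I has the fixed point 0, but the plain
      # orbit alternates x, -x, ...; the averaged iteration converges.
      print(km_iterate(lambda v: -v, x0))  # -> [0. 0.]
      # Projection onto the unit ball is also nonexpansive; KM converges to
      # the nearest fixed point along the ray, here (0.6, -0.8).
      proj = lambda v: v / max(1.0, float(np.linalg.norm(v)))
      print(km_iterate(proj, x0))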

  9. Algorithm Animation with Galant.

    PubMed

    Stallmann, Matthias F

    2017-01-01

    Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.

  10. Chiari malformation Type I surgery in pediatric patients. Part 1: validation of an ICD-9-CM code search algorithm

    PubMed Central

    Ladner, Travis R.; Greenberg, Jacob K.; Guerrero, Nicole; Olsen, Margaret A.; Shannon, Chevis N.; Yarbrough, Chester K.; Piccirillo, Jay F.; Anderson, Richard C. E.; Feldstein, Neil A.; Wellons, John C.; Smyth, Matthew D.; Park, Tae Sung; Limbrick, David D.

    2016-01-01

    Objective Administrative billing data may facilitate large-scale assessments of treatment outcomes for pediatric Chiari malformation Type I (CM-I). Validated International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code algorithms for identifying CM-I surgery are critical prerequisites for such studies but are currently only available for adults. The objective of this study was to validate two ICD-9-CM code algorithms using hospital billing data to identify pediatric patients undergoing CM-I decompression surgery. Methods The authors retrospectively analyzed the validity of two ICD-9-CM code algorithms for identifying pediatric CM-I decompression surgery performed at 3 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-I), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression or laminectomy). Algorithm 2 restricted this group to the subset of patients with a primary discharge diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. Results Among 625 first-time admissions identified by Algorithm 1, the overall PPV for CM-I decompression was 92%. Among the 581 admissions identified by Algorithm 2, the PPV was 97%. The PPV for Algorithm 1 was lower in one center (84%) compared with the other centers (93%–94%), whereas the PPV of Algorithm 2 remained high (96%–98%) across all subgroups. The sensitivity of Algorithms 1 (91%) and 2 (89%) was very good and remained so across subgroups (82%–97%). Conclusions An ICD-9-CM algorithm requiring a primary diagnosis of CM-I has excellent PPV and very good sensitivity for identifying CM-I decompression surgery in pediatric patients. These results establish a basis for utilizing administrative billing data to assess pediatric CM-I treatment outcomes. PMID:26799412

  11. Chiari malformation Type I surgery in pediatric patients. Part 1: validation of an ICD-9-CM code search algorithm.

    PubMed

    Ladner, Travis R; Greenberg, Jacob K; Guerrero, Nicole; Olsen, Margaret A; Shannon, Chevis N; Yarbrough, Chester K; Piccirillo, Jay F; Anderson, Richard C E; Feldstein, Neil A; Wellons, John C; Smyth, Matthew D; Park, Tae Sung; Limbrick, David D

    2016-05-01

    OBJECTIVE Administrative billing data may facilitate large-scale assessments of treatment outcomes for pediatric Chiari malformation Type I (CM-I). Validated International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code algorithms for identifying CM-I surgery are critical prerequisites for such studies but are currently only available for adults. The objective of this study was to validate two ICD-9-CM code algorithms using hospital billing data to identify pediatric patients undergoing CM-I decompression surgery. METHODS The authors retrospectively analyzed the validity of two ICD-9-CM code algorithms for identifying pediatric CM-I decompression surgery performed at 3 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-I), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression or laminectomy). Algorithm 2 restricted this group to the subset of patients with a primary discharge diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. RESULTS Among 625 first-time admissions identified by Algorithm 1, the overall PPV for CM-I decompression was 92%. Among the 581 admissions identified by Algorithm 2, the PPV was 97%. The PPV for Algorithm 1 was lower in one center (84%) compared with the other centers (93%-94%), whereas the PPV of Algorithm 2 remained high (96%-98%) across all subgroups. The sensitivity of Algorithms 1 (91%) and 2 (89%) was very good and remained so across subgroups (82%-97%). CONCLUSIONS An ICD-9-CM algorithm requiring a primary diagnosis of CM-I has excellent PPV and very good sensitivity for identifying CM-I decompression surgery in pediatric patients. These results establish a basis for utilizing administrative billing data to assess pediatric CM-I treatment outcomes.
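
    For reference, the two validation metrics reported above are simple functions of the confusion counts; a small sketch with made-up numbers (illustrative only, not the study's data):

      def ppv_and_sensitivity(tp, fp, fn):
          """PPV = TP / (TP + FP); sensitivity = TP / (TP + FN)."""
          return tp / (tp + fp), tp / (tp + fn)

      # Hypothetical counts for a code-based algorithm vs. a chart-review
      # gold standard
      ppv, sens = ppv_and_sensitivity(tp=575, fp=50, fn=64)
      print(f"PPV = {ppv:.2f}, sensitivity = {sens:.2f}")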

  12. Pipe Cleaning Operating Procedures

    SciTech Connect

    Clark, D.; Wu, J.; /Fermilab

    1991-01-24

    This cleaning procedure outlines the steps involved in cleaning the high purity argon lines associated with the DO calorimeters. The procedure is broken down into 7 cycles: system setup, initial flush, wash, first rinse, second rinse, final rinse and drying. The system setup involves preparing the pump cart, line to be cleaned, distilled water, and interconnecting hoses and fittings. The initial flush is an off-line flush of the pump cart and its plumbing in order to preclude contaminating the line. The wash cycle circulates the detergent solution (Micro) at 180 degrees Fahrenheit through the line to be cleaned. The first rinse is then intended to rid the line of the majority of detergent and only needs to run for 30 minutes and at ambient temperature. The second rinse (if necessary) should eliminate the remaining soap residue. The final rinse is then intended to be a check that there is no remaining soap or other foreign particles in the line, particularly metal 'chips.' The final rinse should be run at 180 degrees Fahrenheit for at least 90 minutes. The filters should be changed after each cycle, paying particular attention to the wash cycle and the final rinse cycle return filters. These filters, which should be bagged and labeled, prove that the pipeline is clean. Only distilled water should be used for all cycles, especially rinsing. The level in the tank need not be excessive, merely enough to cover the heater float switch. The final rinse, however, may require a full 50 gallons. Note that most of the details of the procedure are included in the initial flush description. This section should be referred to if problems arise in the wash or rinse cycles.

  13. NWRA AVOSS Wake Vortex Prediction Algorithm. 3.1.1

    NASA Technical Reports Server (NTRS)

    Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)

    2002-01-01

    This report provides a detailed description of the wake vortex prediction algorithm used in the Demonstration Version of NASA's Aircraft Vortex Spacing System (AVOSS). The report includes all equations used in the algorithm, an explanation of how to run the algorithm, and a discussion of how the source code for the algorithm is organized. Several appendices contain important supplementary information, including suggestions for enhancing the algorithm and results from test cases.

  14. Promoting Understanding of Linear Equations with the Median-Slope Algorithm

    ERIC Educational Resources Information Center

    Edwards, Michael Todd

    2005-01-01

    Preliminary findings from the use of an invented algorithm with entry-level students when introducing linear equations are described. Because its calculations are accessible, the algorithm is preferable to more rigorous statistical procedures in entry-level classrooms.
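
    The median-slope idea can be illustrated compactly. The source does not spell out the exact procedure, so the sketch below assumes the common Theil-Sen-style variant: take the median of all pairwise slopes, then the median residual as the intercept.

      from itertools import combinations
      from statistics import median

      def median_slope_fit(points):
          """Fit y = m*x + b using the median of all pairwise slopes."""
          slopes = [(y2 - y1) / (x2 - x1)
                    for (x1, y1), (x2, y2) in combinations(points, 2)
                    if x2 != x1]
          m = median(slopes)
          b = median(y - m * x for x, y in points)
          return m, b

      pts = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 10.3)]
      print(median_slope_fit(pts))  # slope near 2, intercept near 0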

  15. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment or modify the data stream (e.g., injecting simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  16. The Rational Hybrid Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Clark, Michael

    2006-12-01

    The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, in which Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.

  17. Computerized procedures system

    DOEpatents

    Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.

    2010-10-12

    An online, data-driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data, and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users, and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges, and revisions are version-controlled. The procedures run on a server that is platform-independent of the user workstations it interfaces with, and the user interface supports diverse procedural views.

  18. Designing Flightdeck Procedures

    NASA Technical Reports Server (NTRS)

    Barshi, Immanuel; Mauro, Robert; Degani, Asaf; Loukopoulou, Loukia

    2016-01-01

    The primary goal of this document is to provide guidance on how to design, implement, and evaluate flight deck procedures. It provides a process for developing procedures that meet clear and specific requirements. This document provides a brief overview of: 1) the requirements for procedures, 2) a process for the design of procedures, and 3) a process for the design of checklists. The brief overview is followed by amplified procedures that follow the above steps and provide details for the proper design, implementation and evaluation of good flight deck procedures and checklists.

  19. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  20. Dynamic Analyses Including Joints Of Truss Structures

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith

    1991-01-01

    Method for mathematically modeling joints to assess influences of joints on dynamic response of truss structures developed in study. Only structures with low-frequency oscillations considered; only Coulomb friction and viscous damping included in analysis. Focus of effort to obtain finite-element mathematical models of joints exhibiting load-vs.-deflection behavior similar to measured load-vs.-deflection behavior of real joints. Experiments performed to determine stiffness and damping nonlinearities typical of joint hardware. Algorithm for computing coefficients of analytical joint models based on test data developed to enable study of linear and nonlinear effects of joints on global structural response. Besides intended application to large space structures, applications in nonaerospace community include ground-based antennas and earthquake-resistant steel-framed buildings.

  1. Generalizability Analyses: Principles and Procedures. ACT Technical Bulletin No. 26.

    ERIC Educational Resources Information Center

    Brennan, Robert L.

    Rules, procedures, and algorithms intended to aid researchers and practitioners in the application of generalizability theory to a broad range of measurement problems are presented. Two examples of measurement research are G studies, which examine the dependability of some general measurement procedure; and D studies, which provide the data for…

  2. Post-processing procedure for industrial quantum key distribution systems

    NASA Astrophysics Data System (ADS)

    Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey

    2016-08-01

    We present algorithmic solutions aimed at the post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of the classical public communication channel is also considered.
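
    Of the steps listed, privacy amplification is the most compact to sketch. The abstract does not commit to a specific hash family, so the example below assumes the common choice of a random Toeplitz matrix over GF(2):

      import numpy as np

      def toeplitz_hash(key_bits, out_len, rng):
          """Compress a partially secret bit string with a random Toeplitz
          matrix over GF(2), a standard 2-universal hash family."""
          n = len(key_bits)
          seed = rng.integers(0, 2, size=n + out_len - 1)  # defines the matrix
          rows = np.stack([seed[i:i + n][::-1] for i in range(out_len)])
          return rows.dot(np.asarray(key_bits)) % 2

      rng = np.random.default_rng(7)
      sifted = rng.integers(0, 2, size=256)   # error-corrected sifted key
      final_key = toeplitz_hash(sifted, out_len=128, rng=rng)
      print(final_key[:16])

    In practice the seed bits are drawn from authenticated shared randomness, and out_len is set by the parameter-estimation step.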

  3. A Short Survey of Document Structure Similarity Algorithms

    SciTech Connect

    Buttler, D

    2004-02-27

    This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
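
    To make the shingle technique concrete, here is a generic sketch (not the paper's implementation): represent a document's structure as its sequence of tag names, form the set of k-length shingles, and compare shingle sets with Jaccard similarity.

      def shingles(tags, k=3):
          """Set of k-grams ('shingles') over a document's tag sequence."""
          return {tuple(tags[i:i + k]) for i in range(len(tags) - k + 1)}

      def structural_similarity(tags_a, tags_b, k=3):
          """Jaccard similarity between two documents' shingle sets."""
          a, b = shingles(tags_a, k), shingles(tags_b, k)
          return len(a & b) / len(a | b) if (a or b) else 1.0

      doc1 = ["html", "body", "div", "ul", "li", "li", "li", "div", "p"]
      doc2 = ["html", "body", "div", "ul", "li", "li", "div", "p", "p"]
      print(structural_similarity(doc1, doc2))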

  4. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment that contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To distinguish these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions; reasoning with mutually exclusive assertions; reasoning with assertions that exhibit minimum overlap within the state space; reasoning with assertions that exhibit maximum overlap within the state space (i.e., fuzzy logic); pessimistic reasoning (i.e., worst-case analysis); optimistic reasoning (i.e., best-case analysis); and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions (see the sketch after this abstract). A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
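
    The conditions named above translate directly into different combination rules; a sketch of the standard formulas they correspond to (an interpretation, not code from the report):

      def and_independent(p, q):        # statistically independent assertions
          return p * q

      def or_mutually_exclusive(p, q):  # mutually exclusive assertions
          return p + q                  # requires p + q <= 1

      def and_min_overlap(p, q):        # minimum overlap: Frechet lower bound,
          return max(0.0, p + q - 1.0)  # the pessimistic (worst-case) AND

      def and_max_overlap(p, q):        # maximum overlap, i.e. fuzzy-logic AND
          return min(p, q)

      def or_max_overlap(p, q):         # fuzzy-logic OR
          return max(p, q)

      print(and_independent(0.8, 0.9), and_max_overlap(0.8, 0.9),
            and_min_overlap(0.8, 0.9))  # 0.72 0.8 0.7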

  5. Alternative Refractive Surgery Procedures

    MedlinePlus

    Dec. 12, 2015. Today's refractive ... that releases controlled amounts of radio frequency (RF) energy, instead of a laser, to apply heat to ...

  6. Cosmetic Procedure Questions

    MedlinePlus

    Topics covered include stretch marks, sun-damaged skin, unwanted hair, unwanted tattoos, varicose veins, vitiligo, and wrinkles, along with related treatments and procedures.

  7. Online processing in the ALICE DAQ The detector algorithms

    NASA Astrophysics Data System (ADS)

    Chapeland, S.; Altini, V.; Carena, F.; Carena, W.; Chibante Barroso, V.; Costa, F.; Divià, R.; Fuchs, U.; Makhlyueva, I.; Roukoutakis, F.; Schossmaier, K.; Soós, C.; Vande Vyvre, P.; von Haller, B.; ALICE Collaboration

    2010-04-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Some specific calibration tasks are performed regularly for each of the 18 ALICE sub-detectors in order to achieve the most accurate physics measurements. These procedures involve event analysis under a wide range of experimental conditions, with various trigger types, data throughputs, electronics settings, and algorithms, both during short sub-detector standalone runs and long global physics runs. A framework was designed to collect statistics and compute some of the calibration parameters directly online, using resources of the Data Acquisition System (DAQ) and benefiting from its inherent parallel architecture to process events. This system has been used at the experimental area for one year and includes more than 30 calibration routines in production. This paper describes the framework architecture and the synchronization mechanisms involved at the level of the Experiment Control System (ECS) of ALICE. The software libraries interfacing detector algorithms (DA) to the online data flow, configuration database, experiment logbook, and offline system are reviewed. The test protocols followed to integrate and validate each sub-detector component are also discussed, including the automatic build system and validation procedures used to ensure a smooth deployment. The offline post-processing and archiving of the DA results is covered in a separate paper.

  8. Certification procedure of building thermographers

    NASA Astrophysics Data System (ADS)

    Kauppinen, Timo T.; Paloniitty, Sauli; Krankka, Juha

    2005-03-01

    Thermography has been used for building surveys in Finland since the late 1970s. The service has been provided by consultants with varied backgrounds. As the technology and devices have improved and prices have increased, more and more practitioners have entered the market. At the same time, building developers and contractors have begun to use thermography for quality control in new buildings, and thermography has also been used in renovation planning. The problem is that there are no established procedures for building thermography: no guidelines for ordering thermography services and no instructions on how to scan, how to report, and, most importantly, how to interpret the results. This has caused many problems and has damaged the reputation and reliability of the method. In 2004, various organizations in the building trade launched a pilot project to certify building thermographers. The procedure is divided into two parts: Part I covers Level I (the basics of thermography), and Part II (divided into two periods) covers thermography applications in buildings, including information on building physics, heat and mass transfer, and structures. Each part takes a week, for two weeks in total including the examinations. The procedure follows the model of the building moisture measurement procedure; certification of building moisture measurements started a couple of years ago. This paper introduces the procedure, its problems, and future plans.

  9. Collected radiochemical and geochemical procedures

    SciTech Connect

    Kleinberg, J

    1990-05-01

    This revision of LA-1721, 4th Ed., Collected Radiochemical Procedures, reflects the activities of two groups in the Isotope and Nuclear Chemistry Division of the Los Alamos National Laboratory: INC-11, Nuclear and radiochemistry; and INC-7, Isotope Geochemistry. The procedures fall into five categories: I. Separation of Radionuclides from Uranium, Fission-Product Solutions, and Nuclear Debris; II. Separation of Products from Irradiated Targets; III. Preparation of Samples for Mass Spectrometric Analysis; IV. Dissolution Procedures; and V. Geochemical Procedures. With one exception, the first category of procedures is ordered by the positions of the elements in the Periodic Table, with separate parts on the Representative Elements (the A groups); the d-Transition Elements (the B groups and the Transition Triads); and the Lanthanides (Rare Earths) and Actinides (the 4f- and 5f-Transition Elements). The members of Group IIIB-- scandium, yttrium, and lanthanum--are included with the lanthanides, elements they resemble closely in chemistry and with which they occur in nature. The procedures dealing with the isolation of products from irradiated targets are arranged by target element.

  10. Manual of General Searching Procedures.

    ERIC Educational Resources Information Center

    Cornell Univ., Ithaca, NY. Univ. Libraries

    A training and reference tool for searchers in the Preorder Section of Cornell's Olin Library Acquisitions Department, this manual establishes the rationale for searching operations and includes illustrations as well as detailed explanations of searching procedures and problems. The information given applies only to the searching of monographs,…

  11. Pollutant Assessments Group Procedures Manual: Volume 1, Administrative and support procedures

    SciTech Connect

    Not Available

    1992-03-01

    This manual describes procedures currently in use by the Pollutant Assessments Group. The manual is divided into two volumes: Volume 1 includes administrative and support procedures, and Volume 2 includes technical procedures. These procedures are revised in an ongoing process to incorporate new developments in hazardous waste assessment technology and changes in administrative policy. Format inconsistencies will be corrected in subsequent revisions of individual procedures. The purpose of the Pollutant Assessments Group Procedures Manual is to provide a standardized set of procedures documenting, in an auditable manner, the activities performed by the Pollutant Assessments Group (PAG) of the Health and Safety Research Division (HASRD) of the Environmental Measurements and Applications Section (EMAS) at Oak Ridge National Laboratory (ORNL). The Procedures Manual ensures that the organizational, administrative, and technical activities of PAG conform to the protocols outlined by funding organizations. This manual also ensures that the techniques and procedures used by PAG and other contractor personnel meet the requirements of applicable governmental, scientific, and industrial standards. The Procedures Manual is sufficiently comprehensive for use by PAG and contractor personnel in the planning, performance, and reporting of project activities and measurements. It provides procedures for conducting field measurements and includes program planning, equipment operation, and quality assurance elements. Successive revisions of this manual will be archived in the PAG Document Control Department to facilitate tracking of the development of specific procedures.

  12. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  13. A comparative study of algorithms for radar imaging from gapped data

    NASA Astrophysics Data System (ADS)

    Xu, Xiaojian; Luan, Ruixue; Jia, Li; Huang, Ying

    2007-09-01

    In ultra-wideband (UWB) radar imagery, there are often cases where the radar's operating bandwidth is interrupted for various reasons, either periodically or randomly. Such interruptions produce gaps in the phase history data, which in turn result in artifacts in the image if conventional image reconstruction techniques are used. Higher-level artifacts severely degrade the radar images. In this work, several novel techniques for artifact suppression in gapped-data imaging are discussed. These include: (1) a maximum-entropy-based gap-filling technique using a modified Burg algorithm (MEBGFT); (2) an alternative iterative deconvolution based on minimum entropy (AIDME) and its modified version, a hybrid max-min entropy procedure; (3) a windowed coherent CLEAN algorithm; and (4) two-dimensional (2-D) periodically-gapped Capon (PG-Capon) and APES (PG-APES) algorithms. The performance of the various techniques is comparatively studied.

  14. A High-Order Finite-Volume Algorithm for Fokker-Planck Collisions in Magnetized Plasmas

    SciTech Connect

    Xiong, Z; Cohen, R H; Rognlien, T D; Xu, X Q

    2007-04-18

    A high-order finite volume algorithm is developed for the Fokker-Planck Operator (FPO) describing Coulomb collisions in strongly magnetized plasmas. The algorithm is based on a general fourth-order reconstruction scheme for an unstructured grid in the velocity space spanned by parallel velocity and magnetic moment. The method provides density conservation and high-order-accurate evaluation of the FPO independent of the choice of the velocity coordinates. As an example, a linearized FPO in constant-of-motion coordinates, i.e. the total energy and the magnetic moment, is developed using the present algorithm combined with a cut-cell merging procedure. Numerical tests include the Spitzer thermalization problem and the return to isotropy for distributions initialized with velocity space loss cones. Utilization of the method for a nonlinear FPO is straightforward but requires evaluation of the Rosenbluth potentials.

  15. Crew procedures development techniques

    NASA Technical Reports Server (NTRS)

    Arbet, J. D.; Benbow, R. L.; Hawk, M. L.; Mangiaracina, A. A.; Mcgavern, J. L.; Spangler, M. C.

    1975-01-01

    The study developed requirements, designed, developed, checked out and demonstrated the Procedures Generation Program (PGP). The PGP is a digital computer program which provides a computerized means of developing flight crew procedures based on crew action in the shuttle procedures simulator. In addition, it provides a real time display of procedures, difference procedures, performance data and performance evaluation data. Reconstruction of displays is possible post-run. Data may be copied, stored on magnetic tape and transferred to the document processor for editing and documentation distribution.

  16. Improved pulse laser ranging algorithm based on high speed sampling

    NASA Astrophysics Data System (ADS)

    Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang

    2016-10-01

    Narrow-pulse laser ranging achieves long-range target detection using laser pulses with low beam divergence, and is widely used in the military, industrial, civil, engineering, and transportation fields. In this paper, an improved narrow-pulse laser ranging algorithm based on high-speed sampling is studied. First, theoretical simulation models of the laser emission and the pulse laser ranging algorithm are built and analyzed, and an improved ranging algorithm is developed that combines the matched-filter algorithm with the constant fraction discrimination (CFD) algorithm. After simulation of the algorithm, a laser ranging hardware system is set up to implement it, consisting of a laser diode, a laser detector, and a high-sample-rate data-logging circuit. The improved algorithm, a fusion of the matched-filter and CFD algorithms, is then implemented in an FPGA chip using the Verilog HDL language. Finally, a laser ranging experiment is carried out on the hardware system to compare the ranging performance of the improved algorithm against the matched-filter and CFD algorithms alone. The tests demonstrate that the hardware system achieves high-speed processing and high-speed sampled-data transmission, and that the improved algorithm attains 0.3 m ranging precision, consistent with the theoretical simulation.
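
    The constant fraction discrimination step is easy to sketch in software (the classical CFD timing method, not the paper's FPGA implementation): subtract an attenuated copy of the pulse from a delayed copy and locate the zero crossing, which is independent of pulse amplitude.

      import numpy as np

      def cfd_timing(pulse, delay=6, fraction=0.3):
          """Zero crossing of delayed(pulse) - fraction * pulse: an
          amplitude-independent timing mark (classical CFD)."""
          delayed = np.concatenate([np.zeros(delay), pulse[:-delay]])
          bipolar = delayed - fraction * pulse
          i = np.where((bipolar[:-1] < 0) & (bipolar[1:] >= 0))[0][0]
          # linear interpolation between samples for sub-sample timing
          return i + bipolar[i] / (bipolar[i] - bipolar[i + 1])

      t = np.arange(200)
      pulse = np.exp(-0.5 * ((t - 80) / 8.0) ** 2)  # Gaussian return pulse
      print(cfd_timing(pulse), cfd_timing(5 * pulse))  # same timing mark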

  17. On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hsieh, Shih-Fu

    1990-01-01

    In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect the changes in the system and take appropriate adjustments to achieve optimum performance. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve a high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD is considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, are investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in detail; these include the Householder reflector, the Gram-Schmidt procedure, and Givens rotations. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for RLS filtering problems. In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of any of the new method depends
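
    Among the update methods compared, the Givens-rotation update is the most compact to illustrate: each new data row is rotated into the triangular factor R one element at a time. A generic QRD-RLS sketch (not the thesis code):

      import numpy as np

      def givens_update(R, z, row, rhs):
          """Rotate a new data row (and its right-hand side) into the
          triangular factor R, as in QR-decomposition-based RLS."""
          row, rhs = row.astype(float).copy(), float(rhs)
          for j in range(len(row)):
              r = np.hypot(R[j, j], row[j])
              if r == 0.0:
                  continue
              c, s = R[j, j] / r, row[j] / r
              R[j, j:], row[j:] = c * R[j, j:] + s * row[j:], -s * R[j, j:] + c * row[j:]
              z[j], rhs = c * z[j] + s * rhs, -s * z[j] + c * rhs
          return R, z

      rng = np.random.default_rng(1)
      w_true = np.array([1.0, -2.0, 3.0])
      R, z = np.zeros((3, 3)), np.zeros(3)
      for _ in range(200):
          x = rng.standard_normal(3)
          R, z = givens_update(R, z, x, x @ w_true + 0.01 * rng.standard_normal())
      print(np.linalg.solve(R, z))  # close to w_true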

  18. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems, unconstrained or constrained, uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We also stress the need for such a preprocessor, both for quality (error) and for cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature and character of the function or system, the search space, physical or laboratory experimentation (if already done or available), and the physical environment. It also includes information that can be generated through any means: deterministic, nondeterministic, or graphical. Instead of attempting a solution of the problem straightaway through a GA without having or using information on the character of the system, we do a consciously much better job of producing a solution by using the information generated or created in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
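
    For context, a bare-bones real-coded GA for unconstrained minimization is sketched below (a generic illustration, not the authors' code); the population size, mutation probability, bounds, and mutation scale are exactly the kind of parameters such a preprocessor would determine.

      import numpy as np

      def genetic_algorithm(f, bounds, pop_size=40, p_mut=0.1, n_gen=200, rng=None):
          """Bare-bones real-coded GA: tournament selection, uniform
          crossover, Gaussian mutation clipped to the search space."""
          rng = rng or np.random.default_rng(0)
          lo, hi = np.asarray(bounds, dtype=float).T
          pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
          for _ in range(n_gen):
              fit = np.array([f(ind) for ind in pop])
              i, j = rng.integers(pop_size, size=(2, pop_size))
              parents = pop[np.where(fit[i] < fit[j], i, j)]  # tournaments
              mask = rng.random(pop.shape) < 0.5              # uniform crossover
              children = np.where(mask, parents, np.roll(parents, 1, axis=0))
              mutate = rng.random(pop.shape) < p_mut          # Gaussian mutation
              children += mutate * rng.normal(0.0, 0.1 * (hi - lo), pop.shape)
              pop = np.clip(children, lo, hi)
          return pop[np.argmin([f(ind) for ind in pop])]

      sphere = lambda x: float(np.sum(x ** 2))
      print(genetic_algorithm(sphere, bounds=[(-5, 5)] * 3))  # near the origin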

  19. A spreadsheet algorithm for stagewise solvent extraction

    SciTech Connect

    Leonard, R.A.; Regalbuto, M.C.

    1993-01-01

    Part of the novelty of this work is the way in which the problem is organized in the spreadsheet. In addition, to facilitate spreadsheet setup, a new calculational procedure has been developed. The resulting Spreadsheet Algorithm for Stagewise Solvent Extraction (SASSE) can be used with either IBM or Macintosh personal computers as a simple yet powerful tool for analyzing solvent extraction flowsheets.

  20. Efficient Algorithm for Optimizing Adaptive Quantum Metrology Processes

    NASA Astrophysics Data System (ADS)

    Hentschel, Alexander; Sanders, Barry C.

    2011-12-01

    Quantum-enhanced metrology infers an unknown quantity with accuracy beyond the standard quantum limit (SQL). Feedback-based metrological techniques are promising for beating the SQL but devising the feedback procedures is difficult and inefficient. Here we introduce an efficient self-learning swarm-intelligence algorithm for devising feedback-based quantum metrological procedures. Our algorithm can be trained with simulated or real-world trials and accommodates experimental imperfections, losses, and decoherence.

  1. Statistical Signal Models and Algorithms for Image Analysis

    DTIC Science & Technology

    1984-10-25

    In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction

  2. A Parallel Algorithm for the Vehicle Routing Problem

    SciTech Connect

    Groer, Christopher S; Golden, Bruce; Edward, Wasil

    2011-01-01

    The vehicle routing problem (VRP) is a difficult and well-studied combinatorial optimization problem. We develop a parallel algorithm for the VRP that combines a heuristic local search improvement procedure with integer programming. We run our parallel algorithm with as many as 129 processors and are able to quickly find high-quality solutions to standard benchmark problems. We assess the impact of parallelism by analyzing our procedure's performance under a number of different scenarios.

  3. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
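
    For contrast with the recursive-branching variant, the conventional SA loop described above can be sketched as follows (a generic implementation, not the innovation itself):

      import math, random

      def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, n_iter=20000):
          """Conventional SA: always accept improvements, accept worse moves
          with probability exp(-delta/T), and cool T geometrically."""
          random.seed(0)
          x, fx, t = list(x0), f(x0), t0
          best, fbest = x, fx
          for _ in range(n_iter):
              y = [xi + random.uniform(-step, step) for xi in x]
              fy = f(y)
              if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                  x, fx = y, fy
                  if fx < fbest:
                      best, fbest = x, fx
              t *= cooling  # the cooling (annealing) schedule
          return best, fbest

      rastrigin = lambda v: 10 * len(v) + sum(
          xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in v)
      print(simulated_annealing(rastrigin, x0=[3.0, -2.0]))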

  4. An algorithm for the detection of the white-tide ('mucilage') phenomenon in the Adriatic Sea using AVHRR data

    SciTech Connect

    Tassan, S. )

    1993-06-01

    An algorithm using AVHRR data has been set up for the detection of a white tide consisting of algae secretion ('mucilage'), an event occurring in the Adriatic Sea under particular meteorological conditions. The algorithm, which includes an ad hoc procedure for cloud masking, has been tested with reference to the mucilage map obtained from the analysis of contemporary Thematic Mapper data, as well as by comparing consecutive AVHRR scenes. The main features of the exceptional mucilage phenomenon that took place in the northern basin of the Adriatic Sea in summer 1989 are shown by a time series of maps.

  5. The indications for and techniques and outcomes of ablative procedures of the distal ulna. The Darrach resection, hemiresection, matched resection, and Sauvé-Kapandji procedure.

    PubMed

    Lichtman, D M; Ganocy, T K; Kim, D C

    1998-05-01

    Several ablative procedures exist for the treatment of distal radio-ulnar joint arthritis. This article describes the indications, techniques, pitfalls, and outcomes for the four most popular procedures: Darrach, hemiresection-interposition, Sauvé-Kapandji, and matched ulnar resection. The authors explain their personal algorithm for treatment selection, emphasizing patient requirements versus the physiologic characteristics of each procedure.

  6. Algorithms for skiascopy measurement automatization

    NASA Astrophysics Data System (ADS)

    Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta

    2014-10-01

    An automatic dynamic infrared retinoscope was developed that allows the procedure to run at a much higher rate. The system uses a USB image sensor with up to a 180 Hz refresh rate, equipped with a long-focus objective, and an 850 nm infrared light-emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the pupillary reflex is captured in software and analyzed along the horizontal plane. An algorithm for automatic analysis of the accommodative state was developed based on the intensity changes of the fundus reflex.

  7. Writer's guide for technical procedures

    SciTech Connect

    1998-12-01

    A primary objective of operations conducted in the US Department of Energy (DOE) complex is safety. Procedures are a critical element of maintaining a safety envelope to ensure safe facility operation. This DOE Writer's Guide for Technical Procedures addresses the content, format, and style of technical procedures that prescribe production, operation of equipment and facilities, and maintenance activities. The DOE Writer's Guide for Management Control Procedures and DOE Writer's Guide for Emergency and Alarm Response Procedures are being developed to assist writers in developing nontechnical procedures. DOE is providing this guide to assist writers across the DOE complex in producing accurate, complete, and usable procedures that promote safe and efficient operations that comply with DOE orders, including DOE Order 5480.19, Conduct of Operations for DOE Facilities, and 5480.6, Safety of Department of Energy-Owned Nuclear Reactors.

  8. Genetic Algorithms Viewed as Anticipatory Systems

    NASA Astrophysics Data System (ADS)

    Mocanu, Irina; Kalisz, Eugenia; Negreanu, Lorina

    2010-11-01

    This paper proposes a new version of genetic algorithms: the anticipatory genetic algorithm (AGA). The performance evaluation included in the paper shows that AGA is superior to the traditional genetic algorithm in terms of both speed and accuracy. The paper also presents how this algorithm can be applied to solve a complex problem, image annotation, intended for use in content-based image retrieval systems.

  9. SSME structural computer program development: BOPACE theoretical manual, addendum. [algorithms

    NASA Technical Reports Server (NTRS)

    1975-01-01

    An algorithm developed and incorporated into BOPACE for improving the convergence and accuracy of the inelastic stress-strain calculations is discussed. The implementation of separation of strains in the residual-force iterative procedure is defined. The elastic-plastic quantities used in the strain-space algorithm are defined and compared with previous quantities.

  10. Teaching Computation in Primary School without Traditional Written Algorithms

    ERIC Educational Resources Information Center

    Hartnett, Judy

    2015-01-01

    Concerns regarding the dominance of the traditional written algorithms in schools have been raised by many mathematics educators, yet the teaching of these procedures remains a dominant focus in primary schools. This paper reports on a project in one school where the staff agreed to put the teaching of the traditional written algorithm aside,…

  11. Making Sense of the Traditional Long Division Algorithm

    ERIC Educational Resources Information Center

    Lee, Ji-Eun

    2007-01-01

    This classroom scholarship report presents a group of elementary students' experiences learning the traditional long division algorithm. The traditional long division algorithm is often taught mechanically, resulting in the student's performance of step-by-step procedures with no or weak understanding of the concept. While noting some initial…

  12. Fluorometric procedures for dye tracing

    USGS Publications Warehouse

    Wilson, James F.

    1968-01-01

    This manual describes the current fluorometric procedures used by the U.S. Geological Survey in dye tracer studies such as time of travel, dispersion, reaeration, and dilution-type discharge measurements. The advantages of dye tracing are (1) low detection and measurement limits and (2) simplicity and accuracy in measuring dye tracer concentrations using fluorometric techniques. The manual contains necessary background information about fluorescence, dyes, and fluorometers and a description of fluorometric operation and calibration procedures as a guide for laboratory and field use. The background information should be useful to anyone wishing to experiment with dyes, fluorometer components, or procedures different from those described. In addition, a brief section on aerial photography is included because of its possible use to supplement ground-level fluorometry.

  13. Fluorometric procedures for dye tracing

    USGS Publications Warehouse

    Wilson, James E.; Cobb, E.D.; Kilpatrick, F.A.

    1984-01-01

    This manual describes the current fluorometric procedures used by the U.S. Geological Survey in dye tracer studies such as time of travel, dispersion, reaeration, and dilution-type discharge measurements. The outstanding characteristics of dye tracing are: (1) the low detection and measurement limits, and (2) the simplicity and accuracy of measuring dye tracer concentrations using fluorometric techniques. The manual contains necessary background information about fluorescence, dyes, and fluorometers and a description of fluorometric operation and calibration procedures as a general guide for laboratory and field use. The background information should be useful to anyone wishing to experiment with dyes, fluorometer components, or procedures different from those described. In addition, a brief section is included on aerial photography because of its possible use to supplement ground-level fluorometry. (USGS)

  14. Fluorometric procedures for dye tracing

    USGS Publications Warehouse

    Wilson, James F.; Cobb, Ernest D.; Kilpatrick, F.A.

    1986-01-01

    This manual describes the current fluorometric procedures used by the U.S. Geological Survey in dye tracer studies such as time of travel, dispersion, reaeration, and dilution-type discharge measurements. The advantages of dye tracing are (1) low detection and measurement limits and (2) simplicity and accuracy in measuring dye tracer concentrations using fluorometric techniques. The manual contains necessary background information about fluorescence, dyes, and fluorometers and a description of fluorometric operation and calibration procedures as a guide for laboratory and field use. The background information should be useful to anyone wishing to experiment with dyes, fluorometer components, or procedures different from those described. In addition, a brief section on aerial photography is included because of its possible use to supplement ground-level fluorometry.

  15. Noise-enhanced clustering and competitive learning algorithms.

    PubMed

    Osoba, Osonde; Kosko, Bart

    2013-01-01

    Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning.
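
    The k-means noise benefit is easy to experiment with: inject zero-mean noise into the centroid updates and let it decay across iterations, so that the procedure reduces to plain k-means in the limit. A sketch under an assumed 1/t decay schedule (the paper's exact schedule is not given in this abstract):

      import numpy as np

      def noisy_kmeans(X, k, n_iter=50, noise0=0.5, rng=None):
          """k-means with annealed noise in the centroid update; the noise
          scale decays as noise0/(t+1), so late iterations are plain k-means."""
          rng = rng or np.random.default_rng(0)
          centers = X[rng.choice(len(X), size=k, replace=False)]
          for it in range(n_iter):
              d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
              labels = d2.argmin(axis=1)
              for j in range(k):
                  pts = X[labels == j]
                  if len(pts):
                      centers[j] = pts.mean(axis=0)
              centers += (noise0 / (it + 1)) * rng.standard_normal(centers.shape)
          return centers, labels

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (0.0, 3.0, 6.0)])
      centers, labels = noisy_kmeans(X, k=3)
      print(np.round(centers, 2))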

  16. Research on numerical algorithms for large space structures

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1981-01-01

    Numerical algorithms for the analysis and design of large space structures are investigated. The sign algorithm and its application to the decoupling of differential equations are presented. The generalized sign algorithm is given and its application to several problems discussed. The Laplace transforms of matrix functions and the diagonalization procedure for a finite element equation are discussed. The diagonalization of matrix polynomials is considered. The quadrature method and Laplace transforms are discussed, and the identification of linear systems by the quadrature method is investigated.
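
    The matrix sign function underlying the sign algorithm can be computed with the classical Newton iteration S <- (S + S^-1)/2, which converges for matrices with no purely imaginary eigenvalues; a generic sketch (not the report's implementation):

      import numpy as np

      def matrix_sign(A, tol=1e-12, max_iter=100):
          """Newton iteration S <- (S + inv(S))/2 for the matrix sign
          function; sign(A) has eigenvalues +/-1 on the invariant subspaces
          of A, which is what enables decoupling of differential equations."""
          S = np.array(A, dtype=float)
          for _ in range(max_iter):
              S_next = 0.5 * (S + np.linalg.inv(S))
              if np.linalg.norm(S_next - S, 1) < tol * np.linalg.norm(S_next, 1):
                  return S_next
              S = S_next
          return S

      A = np.array([[4.0, 1.0], [2.0, -3.0]])  # eigenvalues of opposite sign
      S = matrix_sign(A)
      print(np.round(S @ S, 10))  # sign(A)^2 = I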

  17. Adaptive Estimation and Parameter Identification Using Multiple Model Estimation Algorithm

    DTIC Science & Technology

    1976-06-23

    Keywords (from the report documentation page): adaptive estimation, Kalman filter algorithms, multiple model smoothing algorithm. Only OCR fragments of the report documentation form and its cited references survive in the source record.

  18. Analysis and Evaluation of GPM Pre-launch Algorithms

    NASA Astrophysics Data System (ADS)

    Chandrasekar, Venkatachalam; Le, Minda

    2014-05-01

    The Global Precipitation Measurement (GPM) mission is the next satellite mission to obtain global precipitation measurements, following the success of TRMM (Tropical Rainfall Measuring Mission). GPM will be launched on February 28, 2014. The GPM mission architecture consists of satellite instruments flying within a constellation to provide accurate precipitation measurements around the globe every 2 to 4 hours, with orbits covering up to 65 degrees latitude. The GPM core satellite will be equipped with a dual-frequency precipitation radar (DPR) operating at Ku- (13.6 GHz) and Ka- (35.5 GHz) band. DPR aboard the GPM core satellite is expected to improve our knowledge of precipitation processes relative to the single-frequency (Ku-band) radar used in TRMM by providing greater dynamic range, more detailed information on microphysics, and better accuracy in rainfall and liquid water content retrievals. The new Ka-band channel of DPR will help improve the detection thresholds for light rain and snow relative to the TRMM PR. The dual-frequency signals will allow us to distinguish regions of liquid, frozen, and mixed-phase precipitation. The GPM-DPR level 2 pre-launch algorithms include seven modules. The classification module plays a critical role in the DPR retrieval system: its outputs determine the nature of the microphysical models and algorithms to be used in the retrievals. The classification module involves two main aspects: 1) precipitation type classification, including classifying stratiform, convective, and other rain types; and 2) hydrometeor profile characterization, or hydrometeor phase state detection. DPR offers dual-frequency observations along the vertical profile, which provide additional information for investigating microphysical properties using the difference in measured radar reflectivities at the two frequencies, a quantity often called the measured dual-frequency ratio (DFRm). The vertical profile

  19. A linear-time algorithm for Gaussian and non-Gaussian trait evolution models.

    PubMed

    Ho, Lam si Tung; Ané, Cécile

    2014-05-01

    We developed a linear-time algorithm applicable to a large class of trait evolution models, for efficient likelihood calculations and parameter inference on very large trees. Our algorithm solves the traditional computational burden associated with two key terms, namely the determinant of the phylogenetic covariance matrix V and quadratic products involving the inverse of V. Applications include Gaussian models such as Brownian motion-derived models like Pagel's lambda, kappa, delta, and the early-burst model; Ornstein-Uhlenbeck models to account for natural selection with possibly varying selection parameters along the tree; as well as non-Gaussian models such as phylogenetic logistic regression, phylogenetic Poisson regression, and phylogenetic generalized linear mixed models. Outside of phylogenetic regression, our algorithm also applies to phylogenetic principal component analysis, phylogenetic discriminant analysis or phylogenetic prediction. The computational gain opens up new avenues for complex models or extensive resampling procedures on very large trees. We identify the class of models that our algorithm can handle as all models whose covariance matrix has a 3-point structure. We further show that this structure uniquely identifies a rooted tree whose branch lengths parametrize the trait covariance matrix, which acts as a similarity matrix. The new algorithm is implemented in the R package phylolm, including functions for phylogenetic linear regression and phylogenetic logistic regression.

  20. The computational structural mechanics testbed procedures manual

    NASA Technical Reports Server (NTRS)

    Stewart, Caroline B. (Compiler)

    1991-01-01

    The purpose of this manual is to document the standard high-level command language procedures of the Computational Structural Mechanics (CSM) Testbed software system. A description of each procedure, including its function, commands, data interface, and use, is presented. This manual is designed to assist users in defining and using command procedures to perform structural analysis; it is intended to be used together with the CSM Testbed User's Manual and the CSM Testbed Data Library Description.

  1. Apollo experience report: Systems and flight procedures development

    NASA Technical Reports Server (NTRS)

    Kramer, P. C.

    1973-01-01

    This report describes the process of crew procedures development used in the Apollo Program. The two major categories, Systems Procedures and Flight Procedures, are defined, as are the forms of documentation required. A description is provided of the operation of the procedures change control process, which includes the roles of man-in-the-loop simulations and the Crew Procedures Change Board. Brief discussions of significant aspects of the attitude control, computer, electrical power, environmental control, and propulsion subsystems procedures development are presented. Flight procedures are subdivided by mission phase: launch and translunar injection, rendezvous, lunar descent and ascent, and entry. Procedures used for each mission phase are summarized.

  2. Fission Reaction Event Yield Algorithm

    SciTech Connect

    Hagmann, Christian; Verbeke, Jerome; Vogt, Ramona; Randrup, Jorgen

    2016-05-31

    FREYA (Fission Reaction Event Yield Algorithm) is a code that simulates the decay of a fissionable nucleus at a specified excitation energy. In its present form, FREYA models spontaneous fission and neutron-induced fission up to 20 MeV. It includes the possibility of neutron emission from the nucleus prior to its fission (nth-chance fission).

  3. Candidate CDTI procedures study

    NASA Technical Reports Server (NTRS)

    Ace, R. E.

    1981-01-01

    A concept with potential for increasing airspace capacity by involving the pilot in the separation control loop is discussed. Some candidate options are presented. Both enroute and terminal area procedures are considered and, in many cases, a technologically advanced Air Traffic Control structure is assumed. Minimum display characteristics recommended for each of the described procedures are given, along with a recommended sequence for the operational testing of the candidate procedures.

  4. Algorithms, modelling and VO₂ kinetics.

    PubMed

    Capelli, Carlo; Cautero, Michela; Pogliaghi, Silvia

    2011-03-01

    This article summarises the pros and cons of different algorithms developed for estimating breath-by-breath (B-by-B) alveolar O2 transfer (VO2A) in humans. VO2A is the difference between O2 uptake at the mouth and the change in alveolar O2 stores (ΔVO2s), which, for any given breath i, equals the alveolar volume change at constant alveolar O2 fraction (ΔVAi · FAiO2) plus the alveolar O2 fraction change at constant volume (VAi-1 · [FAiO2 − FAi-1O2]), where VAi-1 is the alveolar volume at the beginning of the breath. Therefore, VO2A can be determined B-by-B provided that VAi-1 is: (a) set equal to the subject's functional residual capacity (algorithm of Auchincloss, A) or to zero; (b) measured (optoelectronic plethysmography, OEP); (c) selected according to a procedure that minimises B-by-B variability (algorithm of Busso and Robbins, BR). Alternatively, the respiratory cycle can be redefined as the time between equal FO2 values in two subsequent breaths (algorithm of Grønlund, G), making any assumption about VAi-1 unnecessary. All the above methods allow an unbiased estimate of VO2 at steady state, albeit with different precision. Yet the algorithms per se affect the parameters describing the B-by-B kinetics during exercise transitions. Among these approaches, BR and G, by increasing the signal-to-noise ratio of the measurements, reduce the number of exercise repetitions necessary to study VO2 kinetics compared to the A approach. OEP and G (though technically challenging and conceptually still debated), thanks to their ability to track ΔVO2s changes during the early phase of exercise transitions, appear rather promising for investigating B-by-B gas exchange.
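
    A minimal sketch of the alveolar store-change bookkeeping in the Auchincloss-type algorithms described above; the function name and the example breath values are ours:

```python
# Change in alveolar O2 stores for one breath: volume change at constant
# fraction plus fraction change at constant volume (see the text above).
def delta_vo2_stores(va_prev, fa_o2_prev, fa_o2_curr, dva):
    return dva * fa_o2_curr + va_prev * (fa_o2_curr - fa_o2_prev)

# VA_{i-1} = 3.0 L, e.g. set to an assumed functional residual capacity
# (the 'A' algorithm); all numbers are invented for illustration.
print(delta_vo2_stores(va_prev=3.0, fa_o2_prev=0.145,
                       fa_o2_curr=0.147, dva=0.5))   # litres of O2
```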

  5. Modified arthroscopic Brostrom procedure.

    PubMed

    Lui, Tun Hing

    2015-09-01

    The open modified Brostrom anatomic repair technique is widely accepted as the reference standard for lateral ankle stabilization. However, there is a high incidence of intra-articular pathologies associated with chronic lateral ankle instability that may not be addressed by an isolated open Brostrom procedure. An arthroscopic Brostrom procedure with suture anchors has been described for anatomic repair of chronic lateral ankle instability and management of intra-articular lesions. However, its complication rates appear to be higher than those of the open Brostrom procedure. Modification of the arthroscopic Brostrom procedure with the use of a bone tunnel may reduce the risk of certain complications.

  6. Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz

    2009-01-01

    This slide presentation reviews the approach to the development of the Global Precipitation Measurement (GPM) Microwave Imager algorithm. It includes information about the responsibilities for the development of the algorithm and about calibration, as well as the orbit and the sun angle. The algorithm code will be tested with synthetic data generated by the Precipitation Processing System (PPS).

  7. Algorithm Improvement Program Nuclide Identification Algorithm Scoring Criteria And Scoring Application - DNDO.

    SciTech Connect

    Enghauser, Michael

    2015-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  8. 21 CFR 211.100 - Written procedures; deviations.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 4 2014-04-01 2014-04-01 false Written procedures; deviations. 211.100 Section... Process Controls § 211.100 Written procedures; deviations. (a) There shall be written procedures for... in this subpart. These written procedures, including any changes, shall be drafted, reviewed,...

  9. Mandibular reconstructions using computer-aided design/computer-aided manufacturing: A systematic review of a defect-based reconstructive algorithm.

    PubMed

    Tarsitano, Achille; Del Corso, Giacomo; Ciocca, Leonardo; Scotti, Roberto; Marchetti, Claudio

    2015-11-01

    Modern planning techniques, including computer-aided design/computer-aided manufacturing (CAD-CAM), can be used to plan reconstructive surgery, optimising aesthetic outcomes and functional rehabilitation. However, although many such applications are available, no systematic protocol yet describes the entire reconstructive procedure, which must include virtual planning, custom manufacture, and a reconstructive algorithm. We reviewed current practices in this novel field, analysed case series described in the literature, and developed a new, defect-based reconstructive algorithm. We also evaluated methods of mandibular reconstruction featuring virtual planning, the use of surgical guides, and laser printing of custom titanium bony plates to support composite free flaps, and evaluated their utility.

  10. Genetic algorithms for the vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Volna, Eva

    2016-06-01

    The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. The problem consists of designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization; they have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The VRP is known to be NP-hard, so many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions that can be found fast enough and are sufficiently accurate for the purpose. In this paper we present an experimental study that indicates the suitability of genetic algorithms for the vehicle routing problem.
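
    As a concrete illustration of the approach studied here, the following is a toy genetic algorithm for a capacitated VRP. The encoding (a customer permutation decoded into capacity-limited routes), the operators, and all parameters are our own choices, not the paper's exact setup:

```python
import random, math

random.seed(1)
DEPOT = (0.0, 0.0)
CUSTOMERS = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(20)]
CAPACITY = 5            # customers per vehicle (unit demands, an assumption)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(perm):
    """Decode a permutation into routes by filling vehicles to capacity."""
    total, load, prev = 0.0, 0, DEPOT
    for c in perm:
        if load == CAPACITY:             # vehicle full: return to the depot
            total += dist(prev, DEPOT)
            prev, load = DEPOT, 0
        total += dist(prev, CUSTOMERS[c])
        prev, load = CUSTOMERS[c], load + 1
    return total + dist(prev, DEPOT)

def order_crossover(p1, p2):
    """OX: copy a slice from p1, fill the remaining slots in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    rest = [c for c in p2 if c not in hole]
    return rest[:a] + p1[a:b] + rest[a:]

def mutate(perm, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]

pop = [random.sample(range(len(CUSTOMERS)), len(CUSTOMERS)) for _ in range(60)]
for gen in range(200):
    pop.sort(key=route_length)
    elite = pop[:10]                     # keep the best solutions unchanged
    children = []
    while len(children) < len(pop) - len(elite):
        p1, p2 = random.sample(elite, 2)
        child = order_crossover(p1, p2)
        mutate(child)
        children.append(child)
    pop = elite + children
print("best total distance:", round(route_length(pop[0]), 2))
```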

  11. The transfer of analytical procedures.

    PubMed

    Ermer, J; Limberger, M; Lis, K; Wätzig, H

    2013-11-01

    Analytical method transfers are certainly among the most discussed topics in the GMP regulated sector. However, they are surprisingly little regulated in detail. General information is provided by USP, WHO, and ISPE in particular. Most recently, the EU emphasized the importance of analytical transfer by including it in their draft of the revised GMP Guideline. In this article, an overview and comparison of these guidelines is provided. The key to success for method transfers is excellent communication between the sending and receiving units. In order to facilitate this communication, procedures, flow charts, and checklists for responsibilities, success factors, transfer categories, the transfer plan and report, strategies in case of failed transfers, and tables with acceptance limits are provided here, together with a comprehensive glossary. Potential pitfalls are described such that they can be avoided. In order to assure an efficient and sustainable transfer of analytical procedures, a practically relevant and scientifically sound evaluation with corresponding acceptance criteria is crucial. Various strategies and statistical tools such as significance tests, absolute acceptance criteria, and equivalence tests are thoroughly described and compared in detail, with examples. Significance tests should be avoided: the success criterion is not statistical significance, but rather analytical relevance. Depending on a risk assessment of the analytical procedure in question, statistical equivalence tests are recommended, because they include both a practically relevant acceptance limit and a direct control of the statistical risks. However, for lower risk procedures, a simple comparison of the transfer performance parameters to absolute limits is also regarded as sufficient.
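
    As an illustration of the equivalence-testing strategy recommended above, here is a minimal sketch of a two one-sided tests (TOST) comparison between a sending and a receiving laboratory. The data and the ±2% acceptance limit are invented for the example:

```python
import numpy as np
from scipy import stats

# Invented assay results (% of label claim) from the two laboratories.
sending   = np.array([99.8, 100.1, 99.6, 100.3, 99.9, 100.0])
receiving = np.array([100.4, 100.6, 100.1, 100.8, 100.2, 100.5])
theta = 2.0   # equivalence margin, an assumed acceptance limit

n1, n2 = len(sending), len(receiving)
diff = receiving.mean() - sending.mean()
sp = np.sqrt(((n1 - 1) * sending.var(ddof=1) +
              (n2 - 1) * receiving.var(ddof=1)) / (n1 + n2 - 2))
se = sp * np.sqrt(1 / n1 + 1 / n2)
df = n1 + n2 - 2

t_lower = (diff + theta) / se          # H0: true difference <= -theta
t_upper = (diff - theta) / se          # H0: true difference >= +theta
p_tost = max(1 - stats.t.cdf(t_lower, df), stats.t.cdf(t_upper, df))
print(f"difference {diff:.2f}, TOST p = {p_tost:.4f} ->",
      "equivalent" if p_tost < 0.05 else "not shown equivalent")
```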

  12. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.

  13. Software For Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steve E.

    1992-01-01

    SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.

  14. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll a concentration (Chl a) and gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.

  15. An algorithm for haplotype analysis

    SciTech Connect

    Lin, Shili; Speed, T.P.

    1997-12-01

    This paper proposes an algorithm for haplotype analysis based on a Monte Carlo method. Haplotype configurations are generated according to the distribution of joint haplotypes of individuals in a pedigree given their phenotype data, via a Markov chain Monte Carlo algorithm. The haplotype configuration which maximizes this conditional probability distribution can thus be estimated. In addition, the set of haplotype configurations with relatively high probabilities can also be estimated as possible alternatives to the most probable one. This flexibility enables geneticists to choose the haplotype configurations which are most reasonable to them, allowing them to include their knowledge of the data under analysis. 18 refs., 2 figs., 1 tab.

  16. Algorithm for genome contig assembly. Final report

    SciTech Connect

    1995-09-01

    An algorithm was developed for genome contig assembly which extended the range of data types that could be included in assembly and which ran on the order of a hundred times faster than the algorithm it replaced. Maps of all existing cosmid clone and YAC data at the Human Genome Information Resource were assembled using ICA. The resulting maps are summarized.

  17. Excursion-Set-Mediated Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Noever, David; Baskaran, Subbiah

    1995-01-01

    Excursion-set-mediated genetic algorithm (ESMGA) is embodiment of method of searching for and optimizing computerized mathematical models. Incorporates powerful search and optimization techniques based on concepts analogous to natural selection and laws of genetics. In comparison with other genetic algorithms, this one achieves stronger condition for implicit parallelism. Includes three stages of operations in each cycle, analogous to biological generation.

  18. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  19. A stabilization algorithm for linear discrete constant systems

    NASA Technical Reports Server (NTRS)

    Armstrong, E. S.; Rublein, G. T.

    1976-01-01

    A procedure is derived for stabilizing linear constant discrete systems which is a discrete analog to the extended Bass algorithm for stabilizing linear constant continuous systems. The procedure offers a method for constructing a stabilizing feedback without the computational difficulty of raising the unstable open-loop response matrix to powers, thus making the method attractive for high-order or poorly conditioned systems.

  20. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.

  1. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  2. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  3. Efficient computer algebra algorithms for polynomial matrices in control design

    NASA Technical Reports Server (NTRS)

    Baras, J. S.; Macenany, D. C.; Munach, R.

    1989-01-01

    The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. For matrices with entries from a field, Gaussian elimination plays a fundamental role in understanding the triangularization process. Polynomial matrices, however, have entries from a ring, for which Gaussian elimination is not defined; triangularization is instead accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not well understood. New algorithms are presented which circumvent such numerical issues entirely through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data, the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
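
    A toy illustration of Euclidean elimination with exact symbolic arithmetic (our own example, not the paper's algorithms): the entry below the pivot of a 2x2 polynomial matrix is driven to zero by repeated polynomial division with remainder:

```python
import sympy as sp

x = sp.symbols('x')
M = sp.Matrix([[x**2 + 1, x],
               [x**3 - x, 1]])

# Euclidean step: while the entry below the pivot is nonzero, divide with
# remainder, eliminate, and swap rows so degrees keep decreasing.
a, b = M[0, 0], M[1, 0]
while b != 0:
    q, r = sp.div(a, b, x)            # a = q*b + r with deg(r) < deg(b)
    M = sp.Matrix([[M[1, 0], M[1, 1]],
                   [sp.expand(M[0, 0] - q * M[1, 0]),
                    sp.expand(M[0, 1] - q * M[1, 1])]])
    a, b = M[0, 0], M[1, 0]
print(M)   # the (2,1) entry is now 0: the matrix is upper triangular
```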

  4. 48 CFR 410.002 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Procedures. 410.002 Section 410.002 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE COMPETITION AND ACQUISITION PLANNING MARKET RESEARCH 410.002 Procedures. Market research must include obtaining information...

  5. Irrigation customer survey procedures and results

    SciTech Connect

    Harrer, B.J.; Johnston, J.W.; Dase, J.E.; Hattrup, M.P.; Reed, G.

    1987-03-01

    This report describes the statistical procedures, administrative procedures, and results of a telephone survey designed to collect primary data from individuals in the Pacific Northwest region who use electricity in irrigating agricultural crops. The project was intended to collect data useful for a variety of purposes, including conservation planning, load forecasting, and rate design.

  6. Spanish Basic Course: Radio Communications Procedures, USAF.

    ERIC Educational Resources Information Center

    Defense Language Inst., Washington, DC.

    This guide to radio communication procedures is offered in Spanish and English as a means of securing a closer working relationship among United States Air Force personnel and Latin American aviators and technicians. Eight dialogues concerning routine flight procedures and aerospace technology are included. It is suggested that two rated students…

  7. W-087 Acceptance test procedure. Revision 1

    SciTech Connect

    Joshi, A.W.

    1997-06-10

    This Acceptance Test Procedure/Operational Test Procedure (ATP/OTP) has been prepared to demonstrate that the Electrical/Instrumentation and Mechanical systems function as required by project criteria and to verify proper operation of the integrated system including the interlocks.

  8. 34 CFR 674.45 - Collection procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 3 2011-07-01 2011-07-01 false Collection procedures. 674.45 Section 674.45 Education..., DEPARTMENT OF EDUCATION FEDERAL PERKINS LOAN PROGRAM Due Diligence § 674.45 Collection procedures. (a) The..., including litigation as described in § 674.46, to recover amounts owed from defaulted borrowers who do...

  9. 34 CFR 674.45 - Collection procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Collection procedures. 674.45 Section 674.45 Education..., DEPARTMENT OF EDUCATION FEDERAL PERKINS LOAN PROGRAM Due Diligence § 674.45 Collection procedures. (a) The..., including litigation as described in § 674.46, to recover amounts owed from defaulted borrowers who do...

  10. Parliamentary Procedure for the FFA Member.

    ERIC Educational Resources Information Center

    Joestgen, John G.

    Information and examples concerning parliamentary procedures are presented in this instructional manual written for Wisconsin Future Farmers of America (FFA) members and FFA parliamentary procedure teams. Topics include the following: secretary minutes (bylaws, officers, quorum, order of business, meeting and session, introducing business,…

  11. 38 CFR 18.436 - Procedural safeguards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... or are believed to need special instruction or related services. The system shall include: (1) Notice... Adult Education § 18.436 Procedural safeguards. (a) A recipient that operates a public elementary or secondary education program shall implement a system of procedural safeguards with respect to...

  12. 14 CFR 183.53 - Procedures manual.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Procedures manual. 183.53 Section 183.53... manual. No ODA Letter of Designation may be issued before the Administrator approves an applicant's procedures manual. The approved manual must: (a) Be available to each member of the ODA Unit; (b) Include...

  13. 10 CFR 1706.7 - Procedures.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Procedures. 1706.7 Section 1706.7 Energy DEFENSE NUCLEAR FACILITIES SAFETY BOARD ORGANIZATIONAL AND CONSULTANT CONFLICTS OF INTERESTS § 1706.7 Procedures. (a) Pre... the same defense nuclear facility that is the subject of the proposed new work (including...

  14. 10 CFR 1706.7 - Procedures.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Procedures. 1706.7 Section 1706.7 Energy DEFENSE NUCLEAR FACILITIES SAFETY BOARD ORGANIZATIONAL AND CONSULTANT CONFLICTS OF INTERESTS § 1706.7 Procedures. (a) Pre... the same defense nuclear facility that is the subject of the proposed new work (including...

  15. 10 CFR 1706.7 - Procedures.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Procedures. 1706.7 Section 1706.7 Energy DEFENSE NUCLEAR FACILITIES SAFETY BOARD ORGANIZATIONAL AND CONSULTANT CONFLICTS OF INTERESTS § 1706.7 Procedures. (a) Pre... the same defense nuclear facility that is the subject of the proposed new work (including...

  16. Procedure to Generate the MPACT Multigroup Library

    SciTech Connect

    Kim, Kang Seog

    2015-12-17

    The CASL neutronics simulator MPACT is under development for coupled neutronics and thermal-hydraulics (T-H) simulation of light water reactors. This document reviews the current procedure used to generate the MPACT multigroup library. Detailed methodologies and procedures are included for further discussion aimed at improving the MPACT multigroup library.

  17. Study on the Algorithm of Local Atomic Time

    NASA Astrophysics Data System (ADS)

    Li, B.; Qu, L. L.; Gao, Y. P.; Hu, Y. H.

    2010-10-01

    Developing and maintaining a stable, accurate, and reliable time scale is an enduring goal of every time and frequency laboratory. ALGOS, a comparatively mature algorithm oriented toward the long-term stability of the time scale, is widely used by the majority of time laboratories. In ALGOS, the weights are assigned on the basis of the clock frequencies over the preceding 12 months, and the current monthly interval is included in the computation. Because this procedure uses clock measurements covering 12 months, annual frequency variations and long-term drifts can lead to de-weighting, which decreases the seasonal variation of the time scale and improves its long-term stability. However, a local atomic time scale is primarily concerned with stability over intervals of no more than 60 days. So when the local time scale is computed with ALGOS in time laboratories, it is necessary to modify ALGOS according to the performance of the contributing clocks, the stability requirements of the local time scale, and so on. There are 22 high-performance atomic clocks at the National Time Service Center, Chinese Academy of Sciences (NTSC): 18 cesium standards and 4 hydrogen masers. Because the hydrogen masers perform poorly, only the ensemble of 18 cesium clocks is considered in our improved algorithm. The performances of these clocks are very similar, and their number is less than 20. By analyzing and studying the noise models of atomic clocks, this paper presents a complete improved algorithm for TA(NTSC). The improved algorithm addresses three aspects: the selection of the maximum weight, the selection of clocks taking part in the TA(NTSC) computation, and the estimation of the weights of contributing clocks. We validate the new algorithm with the 2008 comparison data of the NTSC atomic clocks that take part in TAI computation. The results show that both the long-term and short-term stabilities of TA(NTSC) are improved. This conclusion is based on the clock
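
    The weighting logic under discussion can be sketched in a few lines. The simulation, the weight cap, and all parameters below are our own illustration, not NTSC's implementation:

```python
import numpy as np

# Weighted-average time scale: weights inversely proportional to each
# clock's frequency variance, capped so no single clock dominates (the
# "maximum weight" issue discussed above). All data are simulated.
rng = np.random.default_rng(0)
n_clocks, n_epochs = 18, 61
noise = rng.uniform(0.5, 2.0, n_clocks)              # per-clock noise level (ns)
offsets = np.cumsum(rng.normal(0, noise, (n_epochs, n_clocks)), axis=0)

freq = np.diff(offsets, axis=0)                      # interval frequency estimates
w = 1.0 / freq.var(axis=0)                           # weight ~ 1 / variance
w = np.minimum(w, 2.0 * w.mean())                    # cap (a policy choice)
w /= w.sum()

ta = offsets @ w                                     # ensemble vs. ideal reference
print(f"max weight: {w.max():.3f}")
print(f"ensemble std: {ta.std():.2f} ns | "
      f"best single clock std: {offsets[:, noise.argmin()].std():.2f} ns")
```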

  18. Caudal and dorsal septal reconstruction: an algorithm for graft choices.

    PubMed

    Sherris, D A

    1997-01-01

    The objective of this study is to present an algorithm for choosing graft materials for the reconstruction of severe caudal and/or dorsal septal cartilage abnormalities and to examine the long-term results obtained with these techniques. Retrospective review of 21 consecutive cases of caudal and/or dorsal septal reconstruction via the external approach using the algorithms is presented. The techniques used are carefully described. Patient survey at least 1 year after the initial procedure, rhinologic examination before and after the procedure, and photographic analysis of preoperative and postoperative views are presented. The graft choice algorithm presented helps the surgeon to consider appropriate graft choice alternatives before surgery.

  19. Clustering algorithm studies

    NASA Astrophysics Data System (ADS)

    Graf, Norman A.

    2001-07-01

    An object-oriented framework for undertaking clustering algorithm studies has been developed. We present here the definitions for the abstract Cells and Clusters as well as the interface for the algorithm. We intend to use this framework to investigate the interplay between various clustering algorithms and the resulting jet reconstruction efficiency and energy resolutions to assist in the design of the calorimeter detector.

  20. Procedural Learning and Dyslexia

    ERIC Educational Resources Information Center

    Nicolson, R. I.; Fawcett, A. J.; Brookes, R. L.; Needle, J.

    2010-01-01

    Three major "neural systems", specialized for different types of information processing, are the sensory, declarative, and procedural systems. It has been proposed ("Trends Neurosci.",30(4), 135-141) that dyslexia may be attributable to impaired function in the procedural system together with intact declarative function. We provide a brief…

  1. Enucleation Procedure Manual.

    ERIC Educational Resources Information Center

    Davis, Kevin; Poston, George

    This manual provides information on the enucleation procedure (removal of the eyes for organ banks). An introductory section focuses on the anatomy of the eye and defines each of the parts. Diagrams of the eye are provided. A list of enucleation materials follows. Other sections present outlines of (1) a sterile procedure; (2) preparation for eye…

  2. Connectionist Learning Procedures.

    ERIC Educational Resources Information Center

    Hinton, Geoffrey E.

    A major goal of research on networks of neuron-like processing units is to discover efficient learning procedures that allow these networks to construct complex internal representations of their environment. The learning procedures must be capable of modifying the connection strengths in such a way that internal units which are not part of the…

  3. Hyperspectral data analysis procedures with reduced sensitivity to noise

    NASA Technical Reports Server (NTRS)

    Landgrebe, David A.

    1993-01-01

    Multispectral sensor systems have steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is taking a large leap forward, promising delivery of much more detailed information. However, this direction of development has drawn even more attention to the matter of noise and other deleterious effects in the data, because reducing the fundamental limitations that spectral detail places on information collection raises the limitations presented by noise to even greater importance. Much current effort in remote sensing research is thus devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures which have reduced sensitivity to such effects. We discuss some of the fundamental principles which give analysis algorithms such reduced sensitivity. One such analysis procedure, including an example analysis of a data set, is described to illustrate the effect.

  4. Visualizing output for a data learning algorithm

    NASA Astrophysics Data System (ADS)

    Carson, Daniel; Graham, James; Ternovskiy, Igor

    2016-05-01

    This paper details the process we went through to visualize the output of our data learning algorithm. We have been developing a hierarchical self-structuring learning algorithm based around the general principles of the LaRue model. One proposed application of this algorithm is traffic analysis, chosen because it is conceptually easy to follow and there is a significant amount of existing data and related research material with which to work. While we chose the tracking of vehicles for our initial approach, it is by no means the only target of our algorithm. Flexibility is the end goal; however, we still needed somewhere to start. To that end, this paper details our creation of the visualization GUI for our algorithm, the features we included, and the initial results we obtained from running the algorithm on a few of the traffic-based scenarios we designed.

  5. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object-based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
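
    Of the techniques listed above, full replication is the simplest to illustrate: each thread updates a private copy of the reduction object, and the copies are merged once at the end, so there are no locks on the hot path. A toy sketch (our own, not the paper's runtime system):

```python
import threading

DATA = list(range(1_000_000))
N_THREADS = 4

def worker(chunk, local_counts):
    for x in chunk:
        local_counts[x % 10] += 1    # toy "mining" kernel: histogram update

# one private replica of the reduction object (a histogram) per thread
replicas = [[0] * 10 for _ in range(N_THREADS)]
size = len(DATA) // N_THREADS
threads = [threading.Thread(target=worker,
                            args=(DATA[i * size:(i + 1) * size], replicas[i]))
           for i in range(N_THREADS)]
for t in threads: t.start()
for t in threads: t.join()

# single merge step at the end replaces per-update synchronization
merged = [sum(rep[k] for rep in replicas) for k in range(10)]
print(merged)
```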

  6. A boundary finding algorithm and its applications

    NASA Technical Reports Server (NTRS)

    Gupta, J. N.; Wintz, P. A.

    1975-01-01

    An algorithm for locating gray level and/or texture edges in digitized pictures is presented. The algorithm is based on the concept of hypothesis testing. The digitized picture is first subdivided into subsets of picture elements, e.g., 2 x 2 arrays. The algorithm then compares the first- and second-order statistics of adjacent subsets; adjacent subsets having similar first- and/or second-order statistics are merged into blobs. By continuing this process, the entire picture is segmented into blobs such that the picture elements within each blob have similar characteristics. The borders between the blobs constitute the detected boundaries. The algorithm always generates closed boundaries. The algorithm was developed for multispectral imagery of the earth's surface. Applications of this algorithm to various image processing techniques such as efficient coding, information extraction (terrain classification), and pattern recognition (feature selection) are included.

  7. An algorithmic approach to crustal deformation analysis

    NASA Technical Reports Server (NTRS)

    Iz, Huseyin Baki

    1987-01-01

    In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.

  8. GRISOTTO: A greedy approach to improve combinatorial algorithms for motif discovery with prior knowledge

    PubMed Central

    2011-01-01

    Background Position-specific priors (PSP) have been used with success to boost EM and Gibbs sampler-based motif discovery algorithms. PSP information has been computed from different sources, including orthologous conservation, DNA duplex stability, and nucleosome positioning. Prior information has not yet been used in the context of combinatorial algorithms. Moreover, priors have been used only independently, and the gain of combining priors from different sources has not yet been studied. Results We extend RISOTTO, a combinatorial algorithm for motif discovery, by post-processing its output with a greedy procedure that uses prior information. PSPs from different sources are combined into a scoring criterion that guides the greedy search procedure. The resulting method, called GRISOTTO, was evaluated over 156 yeast TF ChIP-chip sequence-sets commonly used to benchmark prior-based motif discovery algorithms. Results show that GRISOTTO is at least as accurate as twelve other state-of-the-art approaches for the same task, even without combining priors. Furthermore, by considering combined priors, GRISOTTO is considerably more accurate than the state-of-the-art approaches for the same task. We also show that PSPs improve GRISOTTO's ability to retrieve motifs from mouse ChIP-seq data, indicating that the proposed algorithm can be applied to data from a different technology and for a higher eukaryote. Conclusions The conclusions of this work are twofold. First, post-processing the output of combinatorial algorithms by incorporating prior information leads to a very efficient and effective motif discovery method. Second, combining priors from different sources is even more beneficial than considering them separately. PMID:21513505

  9. Fourier Lucas-Kanade algorithm.

    PubMed

    Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha

    2013-06-01

    In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).
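
    The second advantage listed above, folding the filter bank into a diagonal weighting matrix, rests on Parseval's theorem. A quick 1-D numerical check of that identity, with our own toy filter (this is an illustration of the underlying identity, not the FLK algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
src, tmpl = rng.normal(size=n), rng.normal(size=n)
h = np.zeros(n); h[:2] = [1.0, -1.0]       # toy edge filter, zero-padded

# spatial domain: SSD of the circularly filtered signals
f_src = np.real(np.fft.ifft(np.fft.fft(src) * np.fft.fft(h)))
f_tmp = np.real(np.fft.ifft(np.fft.fft(tmpl) * np.fft.fft(h)))
ssd_spatial = np.sum((f_src - f_tmp) ** 2)

# Fourier domain: weighted SSD with diagonal weights |H(w)|^2 (Parseval)
H = np.fft.fft(h)
diff = np.fft.fft(src) - np.fft.fft(tmpl)
ssd_fourier = np.sum(np.abs(H) ** 2 * np.abs(diff) ** 2) / n

print(np.allclose(ssd_spatial, ssd_fourier))   # True
```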

  10. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and the SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed. They include a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range; a linear adaptive filter algorithm, which uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and, finally, an algorithm that uses neural networks to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
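
    A hedged sketch of the "linear straight line estimator" idea described above: fit SDR input power as an affine function of the digital AGC reading and temperature by least squares. All numbers below are synthetic, not flight data:

```python
import numpy as np

rng = np.random.default_rng(1)
agc  = rng.uniform(0.2, 0.9, 200)           # digital AGC reading (normalized)
temp = rng.uniform(10, 40, 200)             # temperature (deg C)
p_in = -80 + 30 * agc - 0.1 * temp + rng.normal(0, 0.2, 200)  # dBm, synthetic

# least-squares fit: p_in ~ a*agc + b*temp + c
A = np.column_stack([agc, temp, np.ones_like(agc)])
coef, *_ = np.linalg.lstsq(A, p_in, rcond=None)
estimate = A @ coef
print("fit coefficients:", np.round(coef, 3))
print("RMS error (dB):", round(float(np.sqrt(np.mean((estimate - p_in)**2))), 3))
```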

  11. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.

  12. Molecular classification of pesticides including persistent organic pollutants, phenylurea and sulphonylurea herbicides.

    PubMed

    Torrens, Francisco; Castellano, Gloria

    2014-06-05

    Pesticide residues in wine were analyzed by liquid chromatography-tandem mass spectrometry. Retentions are modelled by structure-property relationships. Bioplastic evolution is an evolutionary perspective conjugating the effect of acquired characters and the evolutionary indeterminacy-morphological determination-natural selection principles; its application to design a co-ordination index barely improves correlations. Fractal dimensions and partition coefficient differentiate pesticides. Classification algorithms are based on information entropy and its production. Pesticides allow a structural classification by nonplanarity, and by the number of O, S, N and Cl atoms and cycles; different behaviours depend on the number of cycles. The novelty of the approach is that the structural parameters are related to retentions. When the procedures are applied to moderate-sized sets, an excessive number of results appears, consistent with the data undergoing a combinatorial explosion. However, the equipartition conjecture selects the criterion resulting from classification between hierarchical trees. Information entropy permits classifying compounds in agreement with principal component analyses. Periodic classification shows that pesticides in the same group present similar properties; those also in the same period show maximum resemblance. The advantage of the classification is to predict the retentions for molecules not included in the categorization. The classification extends to phenyl/sulphonylureas, and the application will be to predict their retentions.

  13. Cloud Screening and Quality Control Algorithm for Star Photometer Data: Assessment with Lidar Measurements and with All-sky Images

    NASA Technical Reports Server (NTRS)

    Ramirez, Daniel Perez; Lyamani, H.; Olmo, F. J.; Whiteman, D. N.; Navas-Guzman, F.; Alados-Arboledas, L.

    2012-01-01

    This paper presents the development and setup of a cloud screening and data quality control algorithm for a star photometer based on a CCD camera as detector. Such algorithms are necessary for passive remote sensing techniques to retrieve the columnar aerosol optical depth, δAe(λ), and precipitable water vapor content, W, at nighttime. The cloud screening procedure consists of calculating moving averages of δAe(λ) and W under different time windows, combined with a procedure for detecting outliers. Additionally, to avoid undesirable δAe(λ) and W fluctuations caused by atmospheric turbulence, the data are averaged over 30 min. The algorithm is applied to the star photometer deployed in the city of Granada (37.16 N, 3.60 W, 680 m a.s.l.; southeastern Spain) for the measurements acquired between March 2007 and September 2009. The algorithm is evaluated against correlative measurements registered by a lidar system and against all-sky images obtained at the sunset and sunrise of the previous and following days. Promising results are obtained in detecting cloud-affected data. Additionally, the cloud screening algorithm has been evaluated under different aerosol conditions, including Saharan dust intrusion, biomass burning, and pollution events.
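
    A toy version of the moving-average-plus-outlier screening described above (our own simplification, with invented thresholds, not the paper's exact criteria):

```python
import numpy as np

def cloud_screen(aod, window=9, k=3.0):
    """Flag points deviating from a centered moving average of the optical
    depth series by more than k robust (MAD-based) standard deviations."""
    aod = np.asarray(aod, float)
    pad = window // 2
    padded = np.pad(aod, pad, mode='edge')
    moving = np.convolve(padded, np.ones(window) / window, mode='valid')
    resid = aod - moving
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    return np.abs(resid) > k * sigma          # True = suspected cloud/outlier

# synthetic series with a cloud-like spike at index 50
series = np.concatenate([np.full(50, 0.12), [0.55], np.full(49, 0.13)])
series += np.random.default_rng(0).normal(0, 0.005, series.size)
print(np.where(cloud_screen(series))[0])      # flags the spike (and possibly
                                              # its window neighbours)
```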

  14. The EXIT procedure: principles, pitfalls, and progress.

    PubMed

    Marwan, Ahmad; Crombleholme, Timothy M

    2006-05-01

    Although performing procedures on a fetus before severing the umbilical cord has previously been reported, the principles of the ex utero intrapartum treatment (EXIT) procedure were first fully developed for reversing tracheal occlusion in fetuses with severe congenital diaphragmatic hernia. The EXIT procedure offers the advantage of ensuring uteroplacental gas exchange while on placental support. The lessons learned in the development of the principles that underlie the EXIT procedure have improved outcomes when applied in other conditions, most notably in cases of airway obstruction. The range of indications for the EXIT procedure has expanded and currently includes giant fetal neck masses, lung or mediastinal tumors, congenital high airway obstruction syndrome, and EXIT to ECMO (extracorporeal membrane oxygenation), among others. This review summarizes the underlying principles of the EXIT procedure, the expanding indications for its use, the pitfalls of management, and the progress that has been made in its successful application.

  15. Algorithm for backrub motions in protein design

    PubMed Central

    Georgiev, Ivelin; Keedy, Daniel; Richardson, Jane S.; Richardson, David C.; Donald, Bruce R.

    2008-01-01

    Motivation: The Backrub is a small but kinematically efficient side-chain-coupled local backbone motion frequently observed in atomic-resolution crystal structures of proteins. A backrub shifts the Cα–Cβ orientation of a given side-chain by rigid-body dipeptide rotation plus smaller individual rotations of the two peptides, with virtually no change in the rest of the protein. Backrubs can therefore provide a biophysically realistic model of local backbone flexibility for structure-based protein design. Previously, however, backrub motions were applied via manual interactive model-building, so their incorporation into a protein design algorithm (a simultaneous search over mutation and backbone/side-chain conformation space) was infeasible. Results: We present a combinatorial search algorithm for protein design that incorporates an automated procedure for local backbone flexibility via backrub motions. We further derive a dead-end elimination (DEE)-based criterion for pruning candidate rotamers that, in contrast to previous DEE algorithms, is provably accurate with backrub motions. Our backrub-based algorithm successfully predicts alternate side-chain conformations from ≤0.9 Å resolution structures, confirming the suitability of the automated backrub procedure. Finally, the application of our algorithm to redesign two different proteins is shown to identify a large number of lower-energy conformations and mutation sequences that would have been ignored by a rigid-backbone model. Availability: Contact authors for source code. Contact: brd+ismb08@cs.duke.edu PMID:18586714

  16. Basic Fourier properties for generalized phase shifting and some interesting detuning insensitive algorithms

    NASA Astrophysics Data System (ADS)

    Téllez-Quiñones, Alejandro; Malacara-Doblado, Daniel; García-Márquez, Jorge

    2011-07-01

    In this manuscript, some interesting properties for generalized or nonuniform phase-shifting algorithms are shown in the Fourier frequency space. A procedure to find algorithms with equal amplitudes for their sampling function transforms is described. We also consider in this procedure the finding of algorithms that are orthogonal for all possible values in the frequency space. This last kind of algorithms should closely satisfy the first order detuning insensitive condition. The procedure consists of the minimization of functionals associated with the desired insensitivity conditions.

  17. Basic Fourier properties for generalized phase shifting and some interesting detuning insensitive algorithms.

    PubMed

    Téllez-Quiñones, Alejandro; Malacara-Doblado, Daniel; García-Márquez, Jorge

    2011-07-20

    In this manuscript, some interesting properties for generalized or nonuniform phase-shifting algorithms are shown in the Fourier frequency space. A procedure to find algorithms with equal amplitudes for their sampling function transforms is described. We also consider in this procedure the finding of algorithms that are orthogonal for all possible values in the frequency space. This last kind of algorithms should closely satisfy the first order detuning insensitive condition. The procedure consists of the minimization of functionals associated with the desired insensitivity conditions.

  18. 17 CFR 10.92 - Shortened procedure.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... section, the term “statement” includes (1) Statements of fact signed and sworn to by persons having... shortened procedure must be sworn to by persons having knowledge thereof and, except under...

  19. 17 CFR 10.92 - Shortened procedure.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... section, the term “statement” includes (1) Statements of fact signed and sworn to by persons having... shortened procedure must be sworn to by persons having knowledge thereof and, except under...

  20. 10 CFR 452.5 - Bidding procedures.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... for the reverse auction. (e) Bid evaluation and incentive awards selection procedures include the... feedstock suppliers. (4) In the event more than one lowest tied bid equally meets the standards in...

  1. 14 CFR 1259.202 - Application procedures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) The application procedures and evaluation guidelines for awards under this section will be included in the announcements of such programs. (c) The applications will be reviewed by a peer review...

  2. 22 CFR 518.44 - Procurement procedures.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Requirements Procurement Standards § 518.44 Procurement procedures. (a) All recipients shall establish written... required, including the range of acceptable characteristics or minimum acceptable standards. (iv) The...'s business enterprises. (4) Encourage contracting with consortiums of small businesses,...

  3. 29 CFR 95.44 - Procurement procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Procurement Standards § 95.44 Procurement procedures. (a) All recipients shall establish written procurement... required, including the range of acceptable characteristics or minimum acceptable standards. (iv) The...'s business enterprises. (4) Encourage contracting with consortiums of small businesses,...

  4. 29 CFR 95.44 - Procurement procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Procurement Standards § 95.44 Procurement procedures. (a) All recipients shall establish written procurement... required, including the range of acceptable characteristics or minimum acceptable standards. (iv) The...'s business enterprises. (4) Encourage contracting with consortiums of small businesses,...

  5. 29 CFR 95.44 - Procurement procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Procurement Standards § 95.44 Procurement procedures. (a) All recipients shall establish written procurement... required, including the range of acceptable characteristics or minimum acceptable standards. (iv) The...'s business enterprises. (4) Encourage contracting with consortiums of small businesses,...

  6. 22 CFR 518.44 - Procurement procedures.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Requirements Procurement Standards § 518.44 Procurement procedures. (a) All recipients shall establish written... required, including the range of acceptable characteristics or minimum acceptable standards. (iv) The...'s business enterprises. (4) Encourage contracting with consortiums of small businesses,...

  7. 22 CFR 201.22 - Procurement under public sector procedures.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... option of the importer. Formal competitive bidding procedures include advertising the availability of an... competitive negotiation procedure may be used. Competitive negotiation procedures include advertising the... advertising is not required. The request for quotations may be prepared as a new document or may...

  8. 22 CFR 201.22 - Procurement under public sector procedures.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... option of the importer. Formal competitive bidding procedures include advertising the availability of an... competitive negotiation procedure may be used. Competitive negotiation procedures include advertising the... advertising is not required. The request for quotations may be prepared as a new document or may...

  9. An Improved Algorithm for Linear Inequalities in Pattern Recognition and Switching Theory.

    ERIC Educational Resources Information Center

    Geary, Leo C.

    This thesis presents a new iterative algorithm for finding an n by 1 solution vector w, if one exists, to a set of linear inequalities, Aw > 0, which arises in pattern recognition and switching theory. The algorithm is an extension of the Ho-Kashyap algorithm, utilizing the gradient descent procedure to minimize a criterion function…
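
    For reference, the classical Ho-Kashyap iteration that the thesis extends can be sketched as follows (this is the baseline algorithm, not the thesis's improvement; the example data are our own):

```python
import numpy as np

def ho_kashyap(A, rho=0.9, iters=1000):
    """Seek w with A w > 0 by descending on ||A w - b||^2 while keeping
    the margin vector b positive (classical Ho-Kashyap iteration)."""
    A = np.asarray(A, float)
    b = np.ones(A.shape[0])
    A_pinv = np.linalg.pinv(A)
    w = A_pinv @ b
    for _ in range(iters):
        if np.all(A @ w > 0):
            return w                       # every inequality satisfied
        e = A @ w - b
        b += rho * (e + np.abs(e))         # raise margins only where e > 0
        w = A_pinv @ b
    return None                            # not found (data may be inseparable)

# two toy classes mapped to the A w > 0 form (class-2 rows are negated)
pts1 = np.array([[2.0, 1.0], [3.0, 2.0]])
pts2 = np.array([[-1.0, -1.0], [0.0, -2.0]])
A = np.vstack([np.hstack([pts1, np.ones((2, 1))]),
               -np.hstack([pts2, np.ones((2, 1))])])
w = ho_kashyap(A)
print("w =", w, "| A @ w =", A @ w)        # all entries positive
```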

  10. Unified development of multiplicative algorithms for linear and quadratic nonnegative matrix factorization.

    PubMed

    Yang, Zhirong; Oja, Erkki

    2011-12-01

    Multiplicative updates have been widely used in approximative nonnegative matrix factorization (NMF) optimization because they are convenient to deploy. Their convergence proof is usually based on the minimization of an auxiliary upper-bounding function, the construction of which however remains specific and only available for limited types of dissimilarity measures. Here we make significant progress in developing convergent multiplicative algorithms for NMF. First, we propose a general approach to derive the auxiliary function for a wide variety of NMF problems, as long as the approximation objective can be expressed as a finite sum of monomials with real exponents. Multiplicative algorithms with theoretical guarantee of monotonically decreasing objective function sequence can thus be obtained. The solutions of NMF based on most commonly used dissimilarity measures such as α- and β-divergence as well as many other more comprehensive divergences can be derived by the new unified principle. Second, our method is extended to a nonseparable case that includes e.g., γ-divergence and Rényi divergence. Third, we develop multiplicative algorithms for NMF using second-order approximative factorizations, in which each factorizing matrix may appear twice. Preliminary numerical experiments demonstrate that the multiplicative algorithms developed using the proposed procedure can achieve satisfactory Karush-Kuhn-Tucker optimality. We also demonstrate NMF problems where algorithms by the conventional method fail to guarantee descent at each iteration but those by our principle are immune to such violation.
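
    As a concrete point of reference for the family of updates generalized here, the classical Lee-Seung multiplicative rules for the Euclidean (Frobenius) NMF objective can be sketched in a few lines. This is a minimal illustration, not the authors' unified derivation; the rank, iteration count, and the small epsilon guard are arbitrary choices.

        import numpy as np

        def nmf_multiplicative(V, rank, iters=200, eps=1e-9, seed=0):
            # Classical Lee-Seung updates for min ||V - W H||_F^2 with W, H >= 0.
            rng = np.random.default_rng(seed)
            n, m = V.shape
            W = rng.random((n, rank)) + eps
            H = rng.random((rank, m)) + eps
            for _ in range(iters):
                H *= (W.T @ V) / (W.T @ W @ H + eps)   # H <- H * (W^T V)/(W^T W H)
                W *= (V @ H.T) / (W @ H @ H.T + eps)   # W <- W * (V H^T)/(W H H^T)
            return W, H

        # Toy usage on a random nonnegative matrix.
        V = np.abs(np.random.default_rng(1).random((20, 15)))
        W, H = nmf_multiplicative(V, rank=4)
        print(np.linalg.norm(V - W @ H))

    Because every factor is updated by elementwise multiplication with a nonnegative ratio, nonnegativity is preserved automatically, which is the convenience the abstract refers to.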

  11. High-resolution algorithms for the Navier-Stokes equations for generalized discretizations

    NASA Astrophysics Data System (ADS)

    Mitchell, Curtis Randall

    Accurate finite volume solution algorithms for the two dimensional Navier Stokes equations and the three dimensional Euler equations for both structured and unstructured grid topologies are presented. Results for two dimensional quadrilateral and triangular elements and three dimensional tetrahedral elements will be provided. Fundamental to the solution algorithm is a technique for generating multidimensional polynomials which model the spatial variation of the flow variables. Cell averaged data is used to reconstruct pointwise distributions of the dependent variables. The reconstruction errors are evaluated on triangular meshes. The implementation of the algorithm is unique in that three reconstructions are performed for each cell face in the domain. Two of the reconstructions are used to evaluate the inviscid fluxes and correspond to the right and left interface states needed for the solution of a Riemann problem. The third reconstruction is used to evaluate the viscous fluxes. The gradient terms that appear in the viscous fluxes are formed by simply differentiating the polynomial. By selecting the appropriate cell control volumes, centered, upwind and upwind-biased stencils are possible. Numerical calculations in two dimensions include solutions to elliptic boundary value problems, Ringleb's flow, an inviscid shock reflection, a flat plate boundary layer, and a shock induced separation over a flat plate. Three dimensional results include the ONERA M6 wing. All of the unstructured grids were generated using an advancing front mesh generation procedure. Modifications to the three dimensional grid generator were necessary to discretize the surface grids for bodies with high curvature. In addition, mesh refinement algorithms were implemented to improve the surface grid integrity. Examples include a Glasair fuselage, High Speed Civil Transport, and the ONERA M6 wing. The role of reconstruction as applied to adaptive remeshing is discussed and a new first order error

  12. Canalith Repositioning Procedure

    MedlinePlus

    ... repositioning procedure can help relieve benign paroxysmal positional vertigo (BPPV), a condition in which you have brief, but intense, episodes of dizziness that occur when you move your head. Vertigo ...

  13. Lithotripsy procedure (image)

    MedlinePlus

    Extracorporeal shock wave lithotripsy (ESWL) is a procedure used to shatter simple stones in the kidney or upper urinary tract. Ultrasonic waves are passed through the body until they strike the dense stones. Pulses of ...

  14. Dynamic alarm response procedures

    SciTech Connect

    Martin, J.; Gordon, P.; Fitch, K.

    2006-07-01

    The Dynamic Alarm Response Procedure (DARP) system provides a robust, Web-based alternative to existing hard-copy alarm response procedures. This paperless system improves performance by eliminating the time wasted looking up paper procedures by number, looking up plant process values and equipment and component status on graphical displays or panels, and maintaining the procedures. Because it is a Web-based system, it is platform independent. DARPs can be served from any Web server that supports CGI scripting, such as Apache(R), IIS(R), TclHTTPD, and others. DARP pages can be viewed in any Web browser that supports Javascript and Scalable Vector Graphics (SVG), such as Netscape(R), Microsoft Internet Explorer(R), Mozilla Firefox(R), Opera(R), and others. (authors)

  15. Short Nuss bar procedure

    PubMed Central

    2016-01-01

    The Nuss procedure is now the preferred operation for surgical correction of pectus excavatum (PE). It is a minimally invasive technique, whereby one to three curved metal bars are inserted behind the sternum in order to push it into a normal position. The bars are left in situ for three years and then removed. This procedure significantly improves quality of life and, in most cases, also improves cardiac performance. Previously, the modified Ravitch procedure was used with resection of cartilage and the use of posterior support. This article details the new modified Nuss procedure, which requires the use of shorter bars than specified by the original technique. This technique facilitates the operation as the bar may be guided manually through the chest wall and no additional stabilizing sutures are necessary. PMID:27747185

  16. Safeguards management inspection procedures

    SciTech Connect

    Barth, M.J.; Dunn, D.R.

    1984-08-01

    The objective of this inspection module is to independently assess the contributions of licensee management to overall safeguards systems performance. The inspector accomplishes this objective by comparing the licensee's safeguards management to both the 10 CFR, parts 70 and 73, requirements and to generally accepted management practices. The vehicle by which this comparison is to be made consists of assessment questions and key issues which point the inspector to areas of primary concern to the NRC and which raise additional issues for the purpose of exposing management ineffectiveness. Further insight into management effectiveness is obtained through those assessment questions specifically directed toward the licensee's safeguards system performance. If the quality of the safeguards is poor, then the inspector should strongly suspect that management's role is ineffective and should attempt to determine management's influence (or lack thereof) on the underlying safeguards deficiencies. (The converse is not necessarily true, however.) The assessment questions in essence provide an opportunity for the inspector to identify, to single out, and to probe further, questionable management practices. Specific issues, circumstances, and concerns which point to questionable or inappropriate practices should be explicitly identified and referenced against the CFR and the assessment questions. The inspection report should also explain why the inspector feels certain management practices are poor, counter to the CFR, and/or point to ineffective management. Concurrent with documenting the inspection results, the inspector should provide recommendations for correcting observed management practices that are detrimental to effective safeguards. The recommendations could include: specific changes in the practices of the licensee, followup procedures on the part of NRC, and proposed license changes.

  17. Procedures for the computation of unsteady transonic flows including viscous effects

    NASA Technical Reports Server (NTRS)

    Rizzetta, D. P.

    1982-01-01

    Modifications of the code LTRAN2, developed by Ballhaus and Goorjian, which account for viscous effects in the computation of planar unsteady transonic flows are presented. Two models are considered, and their theoretical development and numerical implementation are discussed. Computational examples employing both models are compared with inviscid solutions and with experimental data. Use of the modified code is described.

  18. Experimental procedure for the evaluation of tooth stiffness in spline coupling including angular misalignment

    NASA Astrophysics Data System (ADS)

    Curà, Francesca; Mura, Andrea

    2013-11-01

    Tooth stiffness is a very important parameter in studying both the static and dynamic behaviour of spline couplings and gears. Many works concerning tooth stiffness calculation are available in the literature, but experimental results are very rare, especially for spline couplings. In this work, experimental values of spline coupling tooth stiffness have been obtained by means of a special hexapod measuring device. Experimental results have been compared with the corresponding theoretical and numerical ones. The effect of angular misalignment between hub and shaft has also been investigated in the experimental planning.

  19. NCAA Enforcement Procedures Including the Role of the Committee on Infractions.

    ERIC Educational Resources Information Center

    Remington, Frank J.

    1984-01-01

    The task of the Infractions Committee and the NCAA enforcement staff is to deal with inappropriate conduct in intercollegiate athletic programs by member institutions, their staffs, their student athletes, and other institutional representatives. The work of the Committee and the NCAA enforcement process is described. (Author/MLW)

  20. New data evaluation procedure including advanced background subtraction for radiography using the example of insect mandibles

    NASA Astrophysics Data System (ADS)

    Mangold, Stefan; van de Kamp, Thomas; Steininger, Ralph

    2016-05-01

    The usefulness of full-field transmission spectroscopy is shown using the example of the mandible of the stick insect Peruphasma schultei. An advanced data evaluation tool chain with an energy drift correction and highly reproducible automatic background correction is presented. The results show a significant difference between the top and the bottom of the mandible of an adult stick insect.

  1. 47 CFR 36.142 - Categories and apportionment procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... JURISDICTIONAL SEPARATIONS PROCEDURES; STANDARD PROCEDURES FOR SEPARATING TELECOMMUNICATIONS PROPERTY COSTS...) Other Information Origination/Termination Equipment—Category 1. This category includes the cost of other information origination/termination equipment not assigned to Category 2. The costs of other...

  2. 48 CFR 6.102 - Use of competitive procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... procedure (see subpart 36.6 for procedures). (2) Competitive selection of basic and applied research and... nature identifying areas of research interest, including criteria for selecting proposals, and...

  3. Overview of an Algorithm Plugin Package (APP)

    NASA Astrophysics Data System (ADS)

    Linda, M.; Tilmes, C.; Fleig, A. J.

    2004-12-01

    Science software that runs operationally is fundamentally different than software that runs on a scientist's desktop. There are complexities in hosting software for automated production that are necessary and significant. Identifying common aspects of these complexities can simplify algorithm integration. We use NASA's MODIS and OMI data production systems as examples. An Algorithm Plugin Package (APP) is science software that is combined with algorithm-unique elements that permit the algorithm to interface with, and function within, the framework of a data processing system. The framework runs algorithms operationally against large quantities of data. The extra algorithm-unique items are constrained by the design of the data processing system. APPs often include infrastructure that is vastly similar. When the common elements in APPs are identified and abstracted, the cost of APP development, testing, and maintenance will be reduced. This paper is an overview of the extra algorithm-unique pieces that are shared between MODAPS and OMIDAPS APPs. Our exploration of APP structure will help builders of other production systems identify their common elements and reduce algorithm integration costs. Our goal is to complete the development of a library of functions and a menu of implementation choices that reflect common needs of APPs. The library and menu will reduce the time and energy required for science developers to integrate algorithms into production systems.

  4. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD without ID: A Multi-site Study

    PubMed Central

    Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L; Yerys, Benjamin E; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth

    2015-01-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised algorithm demonstrated increased sensitivity, but lower specificity in the overall sample. Estimates were highest for females, individuals with a verbal IQ below 85 or above 115, and ages 16 and older. Best practice diagnostic procedures should include the Module 4 in conjunction with other assessment tools. Balancing needs for sensitivity and specificity depending on the purpose of assessment (e.g., clinical vs. research) and demographic characteristics mentioned above will enhance its utility. PMID:26385796

  5. [The EXIT procedure].

    PubMed

    Lehmann, S; Blödow, A; Flügel, W; Renner-Lützkendorf, H; Isbruch, A; Siegling, F; Untch, M; Strauß, J; Bloching, M B

    2013-08-01

    The ex utero intrapartum treatment (EXIT) procedure is used for unborn fetuses in cases of predictable complications of postpartum airway obstruction. Indications for the EXIT procedure are fetal neck tumors, obstruction of the trachea, hiatus hernia of the diaphragm and congenital high airway obstruction syndrome (CHAOS). Large cervical tumors prevent normal delivery of a fetus due to reclination of the head with airway obstruction. Therefore, a primary caesarean section or the EXIT procedure has to be considered. The EXIT procedure has time limitations, as the blood supply from the placenta only lasts for 30-60 min. Airway protection has to be ensured during parturition. This article reports the case of an unborn fetus with a large cervical teratoma in which an obstruction of the cervical airway was detected and monitored by ultrasound and magnetic resonance imaging (MRI) during pregnancy. The EXIT procedure was therefore used and successfully accomplished. The interdisciplinary aspects of the EXIT procedure are described, with the special contributions of each medical discipline.

  6. An innovative algorithm to accurately solve the Euler equations for rotary wing flow

    NASA Astrophysics Data System (ADS)

    Wagner, S.; Kraemer, E.

    Because Euler methods can treat rotational, non-isentropic flows and correctly transport the rotation embedded in the flow field, they can accurately represent the inflow conditions on the blade in the stationary hovering flight of a helicopter, which are significantly influenced by the tip vortices (blade-vortex interaction) of all blades. It is shown that the very complex starting procedure of a helicopter rotor can also be described well by a simple Euler method, that is, without a wake model. The algorithm belongs to the category of upwind schemes, in which the difference stencil is oriented to the actual local flow state, that is, to the characteristic directions in which disturbances propagate. Hence, the artificial dissipation required for numerical stability is included in a natural way, adapted to the real flow state through the truncation error of the difference equations, and does not have to be added externally. This makes the procedure robust. An implicit solution algorithm is used, in which the inversion of the coefficient matrix is carried out by means of point Gauss-Seidel relaxation.

  7. [Algorithm for percutaneous origin of irreversible icterus ].

    PubMed

    Marković, Z; Milićević, M; Masulović, D; Saranović, Dj; Stojanović, V; Marković, B; Kovacević, S

    2007-01-01

    This is a retrospective analysis of all percutaneous biliary drainage types used in 600 patients with obstructive icterus over the last 10 years. The procedure technique is analysed; it had a positive therapeutic result in about 75% of cases. The most frequent complications are shown, and the most appropriate percutaneous derivation algorithm is discussed. As the initial method, the use of external-internal derivation is suggested, which, depending on the procedure, is continued by internal derivation with a catheter endoprosthesis or a metallic stent. The use of covered metallic stents is suggested as the method of choice when metallic endoprostheses are applied.

  8. Continuation of advanced crew procedures development techniques

    NASA Technical Reports Server (NTRS)

    Arbet, J. D.; Benbow, R. L.; Evans, M. E.; Mangiaracina, A. A.; Mcgavern, J. L.; Spangler, M. C.; Tatum, I. C.

    1976-01-01

    An operational computer program, the Procedures and Performance Program (PPP), which operates in conjunction with the Phase I Shuttle Procedures Simulator to provide procedures recording and crew/vehicle performance monitoring, was developed. A technical synopsis of each task resulting in the development of the Procedures and Performance Program is provided. Conclusions and recommendations for action leading to improvements in crew procedures development and crew training support are included. The PPP provides real-time CRT displays and post-run hardcopy output of procedures, difference procedures, performance data, parametric analysis data, and training script/training status data. During post-run, the program is designed to support evaluation through the reconstruction of displays to any point in time. A permanent record of the simulation exercise can be obtained via hardcopy output of the display data and via transfer to the Generalized Documentation Processor (GDP). Reference procedures data may be transferred from the GDP to the PPP. An interface is provided with the all-digital trajectory program, the Space Vehicle Dynamics Simulator (SVDS), to support initial procedures timeline development.

  9. A theoretical comparison of evolutionary algorithms and simulated annealing

    SciTech Connect

    Hart, W.E.

    1995-08-28

    This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.

  10. An improved conscan algorithm based on a Kalman filter

    NASA Technical Reports Server (NTRS)

    Eldred, D. B.

    1994-01-01

    Conscan is commonly used by DSN antennas to allow adaptive tracking of a target whose position is not precisely known. This article describes an algorithm that is based on a Kalman filter and is proposed to replace the existing fast Fourier transform based (FFT-based) algorithm for conscan. Advantages of this algorithm include better pointing accuracy, continuous update information, and accommodation of missing data. Additionally, a strategy for adaptive selection of the conscan radius is proposed. The performance of the algorithm is illustrated through computer simulations and compared to the FFT algorithm. The results show that the Kalman filter algorithm is consistently superior.
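
    A minimal sketch of how such a filter might look, assuming a two-component pointing-error state modeled as a random walk and received-power samples linearized into scalar measurements along the instantaneous scan direction. The gain k and the noise levels q and r are hypothetical placeholders; this is not the DSN implementation.

        import numpy as np

        def conscan_kalman(powers, phases, k=1.0, q=1e-6, r=1e-2):
            # State: pointing error x = [cross-elevation, elevation], random walk.
            x = np.zeros(2)
            P = np.eye(2)                       # state covariance
            Q = q * np.eye(2)                   # process noise (slow drift)
            for y_t, th in zip(powers, phases):
                P = P + Q                       # predict step (identity dynamics)
                H = k * np.array([np.cos(th), np.sin(th)])  # linearized 1x2 row
                S = H @ P @ H + r               # innovation variance
                K = P @ H / S                   # Kalman gain (2-vector)
                x = x + K * (y_t - H @ x)       # measurement update
                P = P - np.outer(K, H @ P)
            return x

        # Synthetic demo: true offset (0.3, -0.2), one noisy sample per phase.
        rng = np.random.default_rng(0)
        phases = np.linspace(0.0, 6 * np.pi, 300)
        powers = 0.3 * np.cos(phases) - 0.2 * np.sin(phases) \
                 + 0.1 * rng.standard_normal(300)
        print(conscan_kalman(powers, phases))   # approaches [0.3, -0.2]

    Unlike a batch FFT over a full scan revolution, the estimate above is refreshed on every sample, which is the "continuous update" advantage the abstract cites; missing samples are simply skipped.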

  11. Sequential unconstrained minimization algorithms for constrained optimization

    NASA Astrophysics Data System (ADS)

    Byrne, Charles

    2008-02-01

    The problem of minimizing a function $f(x):\mathbb{R}^J \to \mathbb{R}$, subject to constraints on the vector variable $x$, occurs frequently in inverse problems. Even without constraints, finding a minimizer of $f(x)$ may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the $k$th step we minimize the function $G_k(x) = f(x) + g_k(x)$ to obtain $x^k$. The auxiliary functions $g_k(x): D \subseteq \mathbb{R}^J \to \mathbb{R}_+$ are nonnegative on the set $D$, each $x^k$ is assumed to lie within $D$, and the objective is to minimize the continuous function $f:\mathbb{R}^J \to \mathbb{R}$ over $x$ in the set $C = \overline{D}$, the closure of $D$. We assume that such minimizers exist, and denote one such by $\hat{x}$. We assume that the functions $g_k(x)$ satisfy the inequalities $0 \le g_k(x) \le G_{k-1}(x) - G_{k-1}(x^{k-1})$ for $k = 2, 3, \ldots$. Using this assumption, we show that the sequence $\{f(x^k)\}$ is decreasing and converges to $f(\hat{x})$. If the restriction of $f(x)$ to $D$ has bounded level sets, which happens if $\hat{x}$ is unique and $f(x)$ is closed, proper and convex, then the sequence $\{x^k\}$ is bounded, and $f(x^*) = f(\hat{x})$ for any cluster point $x^*$. Therefore, if $\hat{x}$ is unique, $x^* = \hat{x}$ and $\{x^k\} \to \hat{x}$. When $\hat{x}$ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton-Raphson method. The proof techniques used for SUMMA can be extended to obtain related results for the induced proximal
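
    A tiny numeric illustration of the SUMMA template, using the log-barrier method (which the abstract names as a particular case). The 1-D problem, barrier choice, and step schedule are illustrative only: minimize $f(x) = x$ over $C = [1, +\infty)$, taking $G_k(x) = k f(x) + b(x)$ with $b(x) = -\log(x - 1)$.

        # Step k minimizes G_k(x) = k*x - log(x - 1); setting the derivative
        # k - 1/(x - 1) to zero gives the closed-form minimizer x^k = 1 + 1/k.
        def subproblem_minimizer(k):
            return 1.0 + 1.0 / k

        for k in (1, 2, 5, 10, 100, 1000):
            x_k = subproblem_minimizer(k)
            print(f"k={k:5d}  x^k={x_k:.6f}")   # x^k -> x-hat = 1, f(x^k) decreasing

    Each unconstrained subproblem stays strictly inside D = (1, +inf), and the iterates approach the constrained minimizer on the boundary, which is exactly the limiting behaviour described above.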

  12. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient based optimizers is their need for gradient information. Therefore, design problems which include discrete variables can not be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from those gradient based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
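
    A toy GA skeleton for a mixed-discrete design problem in the spirit described above. The design variables (an engine count and a material index) and the fitness function are stand-ins, not a real launch-vehicle model; selection, crossover, and mutation operators are deliberately minimal.

        import random

        N_ENGINES = range(1, 9)          # discrete design variable 1
        MATERIALS = range(4)             # discrete design variable 2 (index)

        def fitness(ind):
            n, m = ind
            # Hypothetical objective: prefer 5 engines and material index 2.
            return -((n - 5) ** 2 + (m - 2) ** 2)

        def crossover(a, b):
            # Swap one of the two genes between parents.
            return (a[0], b[1]) if random.random() < 0.5 else (b[0], a[1])

        def mutate(ind, rate=0.1):
            n, m = ind
            if random.random() < rate:
                n = random.choice(N_ENGINES)
            if random.random() < rate:
                m = random.choice(MATERIALS)
            return (n, m)

        def ga(pop_size=20, generations=50):
            pop = [(random.choice(N_ENGINES), random.choice(MATERIALS))
                   for _ in range(pop_size)]
            for _ in range(generations):
                def select():                      # size-2 tournament selection
                    a, b = random.sample(pop, 2)
                    return a if fitness(a) >= fitness(b) else b
                pop = [mutate(crossover(select(), select()))
                       for _ in range(pop_size)]
            return max(pop, key=fitness)

        print(ga())   # converges toward (5, 2)

    Note that the loop only ever evaluates fitness values; no gradients appear anywhere, which is why integer-valued variables pose no difficulty.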

  13. Incorporating Spatial Models in Visual Field Test Procedures

    PubMed Central

    Rubinstein, Nikki J.; McKendrick, Allison M.; Turpin, Andrew

    2016-01-01

    Purpose To introduce a perimetric algorithm (Spatially Weighted Likelihoods in Zippy Estimation by Sequential Testing [ZEST] [SWeLZ]) that uses spatial information on every presentation to alter visual field (VF) estimates, to reduce test times without affecting output precision and accuracy. Methods SWeLZ is a maximum likelihood Bayesian procedure, which updates probability mass functions at VF locations using a spatial model. Spatial models were created from empirical data, computational models, nearest neighbor, random relationships, and interconnecting all locations. SWeLZ was compared to an implementation of the ZEST algorithm for perimetry using computer simulations on 163 glaucomatous and 233 normal VFs (Humphrey Field Analyzer 24-2). Output measures included number of presentations and visual sensitivity estimates. Results There was no significant difference in accuracy or precision of SWeLZ for the different spatial models relative to ZEST, either when collated across whole fields or when split by input sensitivity. Inspection of VF maps showed that SWeLZ was able to detect localized VF loss. SWeLZ was faster than ZEST for normal VFs: median number of presentations reduced by 20% to 38%. The number of presentations was equivalent for SWeLZ and ZEST when simulated on glaucomatous VFs. Conclusions SWeLZ has the potential to reduce VF test times in people with normal VFs, without detriment to output precision and accuracy in glaucomatous VFs. Translational Relevance SWeLZ is a novel perimetric algorithm. Simulations show that SWeLZ can reduce the number of test presentations for people with normal VFs. Since many patients have normal fields, this has the potential for significant time savings in clinical settings. PMID:26981329
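
    The core Bayesian update that ZEST-type procedures share can be sketched as follows, assuming a logistic psychometric function and a stimulus scale on which larger values are more detectable; the slope, domain, and trial sequence are illustrative, and SWeLZ's spatial weighting across neighbouring locations is not reproduced here.

        import numpy as np

        def p_seen(threshold, stimulus, slope=1.0):
            # Logistic psychometric function: detection probability rises as
            # the stimulus exceeds the true threshold (illustrative convention).
            return 1.0 / (1.0 + np.exp(-slope * (stimulus - threshold)))

        def zest_update(pmf, domain, stimulus, seen):
            # Bayes rule: posterior over thresholds is prior times likelihood.
            like = p_seen(domain, stimulus) if seen else 1.0 - p_seen(domain, stimulus)
            post = pmf * like
            return post / post.sum()

        domain = np.arange(0.0, 41.0)                    # candidate thresholds
        pmf = np.full(domain.size, 1.0 / domain.size)    # flat prior
        for stimulus, seen in [(25.0, True), (30.0, False), (27.0, True)]:
            pmf = zest_update(pmf, domain, stimulus, seen)
        print((domain * pmf).sum())   # pmf mean: next stimulus / final estimate

    SWeLZ's extension, as described above, is to let a response at one location also reshape the probability mass functions of connected locations according to the spatial model.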

  14. Generic Survey Procedures.

    ERIC Educational Resources Information Center

    Matross, Ron; Roesler, Jon

    Hints on conducting surveys appropriate for university use are outlined, and sample checklists and forms are provided. The following research elements concerning generic surveys are covered: sequences of events for surveys conducted by mail (15 weeks) and telephone (11 weeks); algorithms for estimating materials costs and quantities; a catalog of…

  15. New algorithms for the symmetric tridiagonal eigenvalue computation

    SciTech Connect

    Pan, V. |

    1994-12-31

    The author presents new algorithms that accelerate the bisection method for the symmetric eigenvalue problem. The algorithms rely on some new techniques, which include acceleration of Newton's iteration and can also be further applied to acceleration of some other iterative processes, in particular, of iterative algorithms for approximating polynomial zeros.
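
    The baseline bisection method being accelerated rests on the Sturm-sequence count: for a symmetric tridiagonal matrix, the number of negative pivots in the LDL^T factorization of T - xI equals the number of eigenvalues below x. A compact version of that baseline (the acceleration techniques themselves are not reproduced) might look like:

        def neg_count(a, b, x, tiny=1e-300):
            # a: diagonal (length n), b: off-diagonal (length n-1).
            # Returns the number of eigenvalues of T strictly less than x.
            count, d = 0, 1.0
            for i in range(len(a)):
                d = a[i] - x - (b[i - 1] ** 2 / d if i > 0 else 0.0)
                if abs(d) < tiny:
                    d = -tiny            # avoid division by zero
                if d < 0:
                    count += 1
            return count

        def kth_eigenvalue(a, b, k, lo, hi, tol=1e-12):
            # Invariant: neg_count(lo) <= k < neg_count(hi)   (0-based k).
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if neg_count(a, b, mid) > k:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)

        # diag(2,2,2) with off-diagonals (1,1) has eigenvalues 2-sqrt(2), 2, 2+sqrt(2).
        print(kth_eigenvalue([2, 2, 2], [1, 1], k=0, lo=0.0, hi=4.0))

    Plain bisection gains one bit of accuracy per count, which is why replacing the final bisection stage with a superlinearly convergent iteration (the paper's subject) pays off.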

  16. An algorithm for the automatic synchronization of Omega receivers

    NASA Technical Reports Server (NTRS)

    Stonestreet, W. M.; Marzetta, T. L.

    1977-01-01

    The Omega navigation system and the requirement for receiver synchronization are discussed. A description of the synchronization algorithm is provided. The numerical simulation and its associated assumptions are examined, and results of the simulation are presented. The suggested form of the synchronization algorithm and the suggested receiver design values are surveyed. A Fortran listing of the synchronization algorithm used in the simulation is also included.

  17. License plate detection algorithm

    NASA Astrophysics Data System (ADS)

    Broitman, Michael; Klopovsky, Yuri; Silinskis, Normunds

    2013-12-01

    A novel algorithm for vehicle license plate localization is proposed. The algorithm is based on pixel intensity transition gradient analysis. Nearly 2500 natural-scene gray-level vehicle images with different backgrounds and ambient illumination were tested. The best set of the algorithm's parameters produces a detection rate of up to 0.94. Taking into account the abnormal camera location during our tests, and therefore geometrical distortion and interference from trees, this result can be considered passable. Correlations between source data, such as license plate dimensions and texture, camera location, and others, and the parameters of the algorithm were also defined.
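
    As a rough illustration of the intensity-transition idea (not the authors' algorithm or parameter set): rows crossing a plate show many strong horizontal dark/light gradient transitions from the character strokes, so counting and thresholding them yields candidate plate bands. Both thresholds below are arbitrary.

        import numpy as np

        def candidate_rows(gray, grad_thresh=30, min_transitions=20):
            # Count strong horizontal intensity transitions in each row.
            dx = np.abs(np.diff(gray.astype(np.int32), axis=1))
            transitions = (dx > grad_thresh).sum(axis=1)
            return np.where(transitions >= min_transitions)[0]

        # Synthetic demo: a stripe of alternating dark/light blocks scores high.
        img = np.full((100, 200), 120, dtype=np.uint8)
        img[40:52, 50:150] = np.tile(
            np.repeat(np.array([20, 230], dtype=np.uint8), 5), 10)
        print(candidate_rows(img))   # rows 40..51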

  18. Distributed Minimum Hop Algorithms

    DTIC Science & Technology

    1982-01-01

    … (acknowledgement), node d starts iteration i+1, and otherwise the algorithm terminates. A detailed description of the algorithm is given in pidgin algol… The precise behavior of the algorithm under these circumstances is described by the pidgin algol program in the appendix, which is executed by each node.… Algorithm D1 in Pidgin Algol…

  19. Advances in Procedural Techniques - Antegrade

    PubMed Central

    Wilson, William; Spratt, James C.

    2014-01-01

    There have been many technological advances in antegrade CTO PCI, but perhaps the most important has been the evolution of the "hybrid" approach, where ideally there exists a seamless interplay of antegrade wiring, antegrade dissection re-entry and retrograde approaches as dictated by procedural factors. Antegrade wire escalation with intimal tracking remains the preferred initial strategy in short CTOs without proximal cap ambiguity. More complex CTOs, however, usually require either a retrograde or an antegrade dissection re-entry approach, or both. Antegrade dissection re-entry is well suited to long occlusions where there is a healthy distal vessel and limited "interventional" collaterals. Early use of a dissection re-entry strategy will increase success rates, reduce complications, and minimise radiation exposure and contrast use, as well as procedural times. Antegrade dissection can be achieved with a knuckle wire technique or the CrossBoss catheter, whilst re-entry will be achieved in the most reproducible and reliable fashion by the Stingray balloon/wire. It should be avoided where there is potential for loss of large side branches. It remains to be seen whether use of newer dissection re-entry strategies will be associated with lower restenosis rates compared with the more uncontrolled subintimal tracking strategies such as STAR, and whether stent insertion in the subintimal space is associated with higher rates of late stent malapposition and stent thrombosis. It is to be hoped that the algorithms which have been developed to guide CTO operators allow for a better transfer of knowledge and skills to increase uptake and acceptance of CTO PCI as a whole. PMID:24694104

  20. The development of flux-split algorithms for flows with non-equilibrium thermodynamics and chemical reactions

    NASA Technical Reports Server (NTRS)

    Grossman, B.; Cinella, P.

    1988-01-01

    A finite-volume method for the numerical computation of flows with nonequilibrium thermodynamics and chemistry is presented. A thermodynamic model is described which simplifies the coupling between the chemistry and thermodynamics and also results in the retention of the homogeneity property of the Euler equations (including all the species continuity and vibrational energy conservation equations). Flux-splitting procedures are developed for the fully coupled equations involving fluid dynamics, chemical production and thermodynamic relaxation processes. New forms of flux-vector split and flux-difference split algorithms are embodied in a fully coupled, implicit, large-block structure, including all the species conservation and energy production equations. Several numerical examples are presented, including high-temperature shock tube and nozzle flows. The methodology is compared to other existing techniques, including spectral and central-differenced procedures, and favorable comparisons are shown regarding accuracy, shock-capturing and convergence rates.

  1. Ice surface temperature retrieval from AVHRR, ATSR, and passive microwave satellite data: Algorithm development and application

    NASA Technical Reports Server (NTRS)

    Key, Jeff; Maslanik, James; Steffen, Konrad

    1995-01-01

    During the second phase project year we have made progress in the development and refinement of surface temperature retrieval algorithms and in product generation. More specifically, we have accomplished the following: (1) acquired a new advanced very high resolution radiometer (AVHRR) data set for the Beaufort Sea area spanning an entire year; (2) acquired additional along-track scanning radiometer (ATSR) data for the Arctic and Antarctic, now totalling over eight months; (3) refined our AVHRR Arctic and Antarctic ice surface temperature (IST) retrieval algorithm, including work specific to Greenland; (4) developed ATSR retrieval algorithms for the Arctic and Antarctic, including work specific to Greenland; (5) developed cloud masking procedures for both AVHRR and ATSR; (6) generated a two-week bi-polar global area coverage (GAC) set of composite images from which IST is being estimated; (7) investigated the effects of clouds and the atmosphere on passive microwave 'surface' temperature retrieval algorithms; and (8) generated surface temperatures for the Beaufort Sea data set, both from AVHRR and special sensor microwave imager (SSM/I).

  2. Procedural learning and dyslexia.

    PubMed

    Nicolson, R I; Fawcett, A J; Brookes, R L; Needle, J

    2010-08-01

    Three major 'neural systems', specialized for different types of information processing, are the sensory, declarative, and procedural systems. It has been proposed (Trends Neurosci., 30(4), 135-141) that dyslexia may be attributable to impaired function in the procedural system together with intact declarative function. We provide a brief overview of the increasing evidence relating to the hypothesis, noting that the framework involves two main claims: first, that 'neural systems' provides a productive level of description, avoiding the underspecificity of cognitive descriptions and the overspecificity of brain structural accounts; and second, that a distinctive feature of procedural learning is its extended time course, covering from minutes to months. In this article, we focus on the second claim. Three studies (speeded single word reading, long-term response learning, and overnight skill consolidation) are reviewed, which together provide clear evidence of difficulties in procedural learning for individuals with dyslexia, even when the tasks are outside the literacy domain. The educational implications of the results are then discussed, in particular the potential difficulties that impaired overnight procedural consolidation would entail. It is proposed that response to intervention could be better predicted if diagnostic tests on the different forms of learning were first undertaken.

  3. Procedural sedation analgesia

    PubMed Central

    Sheta, Saad A

    2010-01-01

    The number of noninvasive and minimally invasive procedures performed outside of the operating room has grown exponentially over the last several decades. Sedation, analgesia, or both may be needed for many of these interventional or diagnostic procedures. Individualized care is important when determining if a patient requires procedural sedation analgesia (PSA). The patient might need an anti-anxiety drug, pain medicine, immobilization, simple reassurance, or a combination of these interventions. The goals of PSA in four different multidisciplinary practices, namely emergency medicine, dentistry, radiology and gastrointestinal endoscopy, are discussed in this review article. Some procedures are painful, others painless. Therefore, goals of PSA vary widely. Sedation management can range from minimal sedation to the extent of minimal anesthesia. Procedural sedation in the emergency department (ED) usually requires combinations of multiple agents to reach the desired effects of analgesia plus anxiolysis. However, in dental practice, moderate sedation analgesia (known to dentists as conscious sedation) is usually what is required. It is usually most effective with the combined use of local anesthesia. The mainstay of success for painless imaging is absolute immobility. Immobility can be achieved by deep sedation or minimal anesthesia. On the other hand, moderate sedation, deep sedation, minimal anesthesia and conventional general anesthesia can all be utilized for management of gastrointestinal endoscopy. PMID:20668560

  4. Mobile Energy Laboratory Procedures

    SciTech Connect

    Armstrong, P.R.; Batishko, C.R.; Dittmer, A.L.; Hadley, D.L.; Stoops, J.L.

    1993-09-01

    Pacific Northwest Laboratory (PNL) has been tasked to plan and implement a framework for measuring and analyzing the efficiency of on-site energy conversion, distribution, and end-use application on federal facilities as part of its overall technical support to the US Department of Energy (DOE) Federal Energy Management Program (FEMP). The Mobile Energy Laboratory (MEL) Procedures establish guidelines for specific activities performed by PNL staff. PNL provided sophisticated energy monitoring, auditing, and analysis equipment for on-site evaluation of energy use efficiency. Specially trained engineers and technicians were provided to conduct tests in a safe and efficient manner with the assistance of host facility staff and contractors. Reports were produced to describe test procedures, results, and suggested courses of action. These reports may be used to justify changes in operating procedures, maintenance efforts, system designs, or energy-using equipment. The MEL capabilities can subsequently be used to assess the results of energy conservation projects. These procedures recognize the need for centralized MEL administration, test procedure development, operator training, and technical oversight. This need is evidenced by increasing requests for MEL use and the economies available by having trained, full-time MEL operators and near-continuous MEL operation. DOE will assign new equipment and upgrade existing equipment as new capabilities are developed. The equipment and trained technicians will be made available to federal agencies that provide funding for the direct costs associated with MEL use.

  5. Comparison of rotation algorithms for digital images

    NASA Astrophysics Data System (ADS)

    Starovoitov, Valery V.; Samal, Dmitry

    1999-09-01

    The paper presents a comparative study of several algorithms developed for digital image rotation. Without loss of generality, we studied gray-scale images. We tested methods that preserve the gray values of the original images, methods performing some interpolation, and two procedures implemented in the Corel Photo-Paint and Adobe Photoshop software packages. Methods for rotating color images may be evaluated in a similar way.

  6. Variable neighbourhood simulated annealing algorithm for capacitated vehicle routing problems

    NASA Astrophysics Data System (ADS)

    Xiao, Yiyong; Zhao, Qiuhong; Kaku, Ikou; Mladenovic, Nenad

    2014-04-01

    This article presents the variable neighbourhood simulated annealing (VNSA) algorithm, a variant of the variable neighbourhood search (VNS) combined with simulated annealing (SA), for efficiently solving capacitated vehicle routing problems (CVRPs). In the new algorithm, the deterministic 'Move or not' criterion of the original VNS algorithm regarding the incumbent replacement is replaced by an SA probability, and the neighbourhood shifting of the original VNS (from near to far by k← k+1) is replaced by a neighbourhood shaking procedure following a specified rule. The geographical neighbourhood structure is introduced in constructing the neighbourhood structures for the CVRP of the string model. The proposed algorithm is tested against 39 well-known benchmark CVRP instances of different scales (small/middle, large, very large). The results show that the VNSA algorithm outperforms most existing algorithms in terms of computational effectiveness and efficiency, showing good performance in solving large and very large CVRPs.
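
    The control flow described, with an SA acceptance probability replacing the deterministic VNS move criterion and a rule-based neighbourhood shift, can be sketched generically. The solution encoding, neighbourhood operators, cost function, and cooling parameters below are stubs, not the paper's CVRP-specific choices; one plausible reading of the shaking rule ("return to the first neighbourhood after an accepted move, otherwise advance cyclically") is what the sketch implements.

        import math, random

        def vnsa(initial, neighborhoods, cost, t0=100.0, cooling=0.95, iters=1000):
            x, best, t, k = initial, initial, t0, 0
            for _ in range(iters):
                y = neighborhoods[k](x)                  # shake in neighbourhood k
                delta = cost(y) - cost(x)
                # SA criterion replaces the deterministic "move or not" rule:
                if delta < 0 or random.random() < math.exp(-delta / t):
                    x, k = y, 0                          # accept; restart neighbourhoods
                else:
                    k = (k + 1) % len(neighborhoods)     # shift to next neighbourhood
                if cost(x) < cost(best):
                    best = x
                t *= cooling                             # geometric cooling schedule
            return best

        # Toy demo: minimize (x - 7)^2 over the integers with two move sizes.
        nbhds = [lambda x: x + random.choice((-1, 1)),
                 lambda x: x + random.choice((-5, 5))]
        print(vnsa(0, nbhds, lambda x: (x - 7) ** 2))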

  7. Algorithm refinement for the stochastic Burgers' equation

    SciTech Connect

    Bell, John B.; Foo, Jasmine; Garcia, Alejandro L. . E-mail: algarcia@algarcia.org

    2007-04-10

    In this paper, we develop an algorithm refinement (AR) scheme for an excluded random walk model whose mean field behavior is given by the viscous Burgers' equation. AR hybrids use the adaptive mesh refinement framework to model a system using a molecular algorithm where desired while allowing a computationally faster continuum representation to be used in the remainder of the domain. The focus in this paper is the role of fluctuations on the dynamics. In particular, we demonstrate that it is necessary to include a stochastic forcing term in Burgers' equation to accurately capture the correct behavior of the system. The conclusion we draw from this study is that the fidelity of multiscale methods that couple disparate algorithms depends on the consistent modeling of fluctuations in each algorithm and on a coupling, such as algorithm refinement, that preserves this consistency.

  8. A region labeling algorithm based on block

    NASA Astrophysics Data System (ADS)

    Wang, Jing

    2009-10-01

    The time performance of region labeling algorithms is important for image processing. However, common region labeling algorithms cannot meet the requirements of real-time image processing. In this paper, a technique using blocks to record the connected areas is proposed. With this technique, connectivity closure and information related to the target can be computed during a single image scan. It records each edge pixel's coordinates, including outer and inner side edges, as well as the label, and can then calculate each connected area's shape center, area, and gray level. Compared to others, this block-based region labeling algorithm is more efficient and can meet the time requirements of real-time processing. Experimental results validate the correctness and efficiency of the algorithm: it can detect any connected areas in binary images, including those containing various complex patterns. The block labeling algorithm is now used in a real-time image processing program.
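
    For comparison, a conventional (non-block) connected-component labeler producing the same per-region outputs the abstract lists (label, area, centroid) can be written with an iterative flood fill; the paper's block-based single-scan method is more elaborate than this baseline.

        from collections import deque

        def label_regions(img):
            # img: binary image as a list of lists; 4-connectivity flood fill.
            h, w = len(img), len(img[0])
            labels = [[0] * w for _ in range(h)]
            stats = {}                            # label -> (area, centroid)
            next_label = 0
            for y0 in range(h):
                for x0 in range(w):
                    if img[y0][x0] and not labels[y0][x0]:
                        next_label += 1
                        area = sx = sy = 0
                        q = deque([(y0, x0)])
                        labels[y0][x0] = next_label
                        while q:
                            y, x = q.popleft()
                            area += 1; sx += x; sy += y
                            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                                ny, nx = y + dy, x + dx
                                if 0 <= ny < h and 0 <= nx < w \
                                   and img[ny][nx] and not labels[ny][nx]:
                                    labels[ny][nx] = next_label
                                    q.append((ny, nx))
                        stats[next_label] = (area, (sx / area, sy / area))
            return labels, stats

        img = [[1, 1, 0, 0],
               [0, 1, 0, 1],
               [0, 0, 0, 1]]
        print(label_regions(img)[1])   # two regions: areas 3 and 2, with centroids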

  9. Algorithms for improved performance in cryptographic protocols.

    SciTech Connect

    Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn

    2003-11-01

    Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures impose high bandwidth requirements. Accordingly, there are many environments (e.g., wireless, ad-hoc, and remote sensing networks) where public-key requirements are prohibitive and public-key cryptography cannot be used. The use of elliptic curves in public-key computations has provided a means by which computations and bandwidth can be somewhat reduced. We report here on the research conducted in an LDRD aimed at finding even more efficient algorithms and making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent has been applied for. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.

  10. Algorithms for radio networks with dynamic topology

    NASA Astrophysics Data System (ADS)

    Shacham, Nachum; Ogier, Richard; Rutenburg, Vladislav V.; Garcia-Luna-Aceves, Jose

    1991-08-01

    The objective of this project was the development of advanced algorithms and protocols that efficiently use network resources to provide optimal or nearly optimal performance in future communication networks with highly dynamic topologies and subject to frequent link failures. As reflected by this report, we have achieved our objective and have significantly advanced the state-of-the-art in this area. The research topics of the papers summarized include the following: efficient distributed algorithms for computing shortest pairs of disjoint paths; minimum-expected-delay alternate routing algorithms for highly dynamic unreliable networks; algorithms for loop-free routing; multipoint communication by hierarchically encoded data; efficient algorithms for extracting the maximum information from event-driven topology updates; methods for the neural network solution of link scheduling and other difficult problems arising in communication networks; and methods for robust routing in networks subject to sophisticated attacks.

  11. Universal charge algorithm for telecommunication batteries

    SciTech Connect

    Tsenter, B.; Schwartzmiller, F.

    1997-12-01

    Three chemistries are used extensively in today's portable telecommunication devices: nickel-cadmium, nickel-metal hydride, and lithium-ion. Nickel-cadmium and nickel-metal hydride batteries (also referred to as nickel-based batteries) are well known, while lithium-ion batteries are less known. A universal charging algorithm should satisfactorily charge all chemistries while providing recognition among them. Total Battery Management, Inc. (TBM) has developed individual charging algorithms for nickel-based and lithium-ion batteries and a procedure for recognition, if necessary, to incorporate into a universal algorithm. TBM's charging philosophy is first to understand the battery from the chemical point of view and then provide an electronic solution.

  12. Recurrent neural networks training with stable bounding ellipsoid algorithm.

    PubMed

    Yu, Wen; de Jesús Rubio, José

    2009-06-01

    Bounding ellipsoid (BE) algorithms offer an attractive alternative to traditional training algorithms for neural networks, for example, backpropagation and least squares methods. The benefits include high computational efficiency and fast convergence speed. In this paper, we propose an ellipsoid propagation algorithm to train the weights of recurrent neural networks for nonlinear systems identification. Both hidden layers and output layers can be updated. The stability of the BE algorithm is proven.

  13. Clause Elimination Procedures for CNF Formulas

    NASA Astrophysics Data System (ADS)

    Heule, Marijn; Järvisalo, Matti; Biere, Armin

    We develop and analyze clause elimination procedures, a specific family of simplification techniques for conjunctive normal form (CNF) formulas. Extending known procedures such as tautology, subsumption, and blocked clause elimination, we introduce novel elimination procedures based on hidden and asymmetric variants of these techniques. We analyze the resulting nine (including five new) clause elimination procedures from various perspectives: size reduction, BCP-preservance, confluence, and logical equivalence. For the variants not preserving logical equivalence, we show how to reconstruct solutions to original CNFs from satisfying assignments to simplified CNFs. We also identify a clause elimination procedure that does a transitive reduction of the binary implication graph underlying any CNF formula purely on the CNF level.
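
    Two of the known baseline procedures extended here, tautology and subsumption elimination, are easy to state on clauses encoded as sets of signed integers (negative meaning a negated literal); the hidden and asymmetric variants introduced in the paper are not reproduced in this sketch.

        def is_tautology(clause):
            # A clause containing both x and -x is always satisfied.
            return any(-lit in clause for lit in clause)

        def eliminate(cnf):
            # Drop tautologies, then drop every clause subsumed by (i.e. a
            # superset of) some other kept clause; shorter clauses are kept first.
            cnf = [frozenset(c) for c in cnf if not is_tautology(frozenset(c))]
            kept = []
            for c in sorted(cnf, key=len):
                if not any(s <= c for s in kept):
                    kept.append(c)
            return kept

        # (x1 v -x1) is a tautology; (x1) subsumes (x1 v x2).
        print(eliminate([{1, -1}, {1}, {1, 2}, {2, 3}]))   # keeps {1} and {2, 3}

    Both of these steps preserve logical equivalence; the point of the paper's analysis is that some of the stronger variants do not, which is why it also shows how to reconstruct solutions to the original formula.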

  14. Computerized operating procedures

    SciTech Connect

    Ness, E.; Teigen, J.

    1994-12-31

    A number of observed and potential problems in the nuclear industry are related to the quality of operating procedures. Many of the problems identified in operating procedure preparation, implementation, and maintenance have a technical nature, which can be directly addressed by developing computerized procedure handling tools. The Halden Reactor Project (HRP) of the Organization for Economic Cooperation and Development has since 1985 performed research work within this field. A product of this effort is the development of a second version of the computerized operation manuals (COPMA) system. This paper summarizes the most important characteristics of the COPMA-II system and discusses some of the experiences in using a system like COPMA-II.

  15. Reasoning about procedural knowledge

    NASA Technical Reports Server (NTRS)

    Georgeff, M. P.

    1985-01-01

    A crucial aspect of automated reasoning about space operations is that knowledge of the problem domain is often procedural in nature - that is, the knowledge is often in the form of sequences of actions or procedures for achieving given goals or reacting to certain situations. In this paper a system is described that explicitly represents and reasons about procedural knowledge. The knowledge representation used is sufficiently rich to describe the effects of arbitrary sequences of tests and actions, and the inference mechanism provides a means for directly using this knowledge to reach desired operational goals. Furthermore, the representation has a declarative semantics that provides for incremental changes to the system, rich explanatory capabilities, and verifiability. The approach also provides a mechanism for reasoning about the use of this knowledge, thus enabling the system to choose effectively between alternative courses of action.

  16. Algorithm Updates for the Fourth SeaWiFS Data Reprocessing

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford, B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; Barnes, Robert A.; Eplee, Robert E., Jr.; Franz, Bryan A.; Robinson, Wayne D.; Feldman, Gene Carl; Bailey, Sean W.

    2003-01-01

    The efforts to improve the data quality for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data products have continued, following the third reprocessing of the global data set in May 2000. Analyses have been ongoing to address all aspects of the processing algorithms, particularly the calibration methodologies, atmospheric correction, and data flagging and masking. All proposed changes were subjected to rigorous testing, evaluation and validation. The results of these activities culminated in the fourth reprocessing, which was completed in July 2002. The algorithm changes, which were implemented for this reprocessing, are described in the chapters of this volume. Chapter 1 presents an overview of the activities leading up to the fourth reprocessing, and summarizes the effects of the changes. Chapter 2 describes the modifications to the on-orbit calibration, specifically the focal plane temperature correction and the temporal dependence. Chapter 3 describes the changes to the vicarious calibration, including the stray light correction to the Marine Optical Buoy (MOBY) data and improved data screening procedures. Chapter 4 describes improvements to the near-infrared (NIR) band correction algorithm. Chapter 5 describes changes to the atmospheric correction and the oceanic property retrieval algorithms, including out-of-band corrections, NIR noise reduction, and handling of unusual conditions. Chapter 6 describes various changes to the flags and masks, to increase the number of valid retrievals, improve the detection of the flag conditions, and add new flags. Chapter 7 describes modifications to the level-1a and level-3 algorithms, to improve the navigation accuracy, correct certain types of spacecraft time anomalies, and correct a binning logic error. Chapter 8 describes the algorithm used to generate the SeaWiFS photosynthetically available radiation (PAR) product. Chapter 9 describes a coupled ocean-atmosphere model, which is used in one of the changes

  17. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Trellis-based research on block codes, by contrast, remained dormant for many years. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes, and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and

  18. Procedure and Program Examples

    NASA Astrophysics Data System (ADS)

    Britz, Dieter

    Here some modules, procedures and whole programs are described that may be useful to the reader, as they have been to the author. They are all in Fortran 90/95 and start with a generally useful module, which will be used in most procedures and programs in the examples, and another module useful for programs using a Rosenbrock variant. The source texts (except for the two modules) are not reproduced here, but can be downloaded from the web site www.springerlink.com/openurl.asp?genre=issue&issn=1616-6361&volume=666.

  19. Clutter discrimination algorithm simulation in pulse laser radar imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Su, Xuan; Zhu, Fule

    2015-10-01

    Pulse laser radar imaging performance is greatly influenced by different kinds of clutter, and various algorithms have been developed to mitigate it; however, estimating the performance of a new algorithm is difficult. Here, a simulation model for evaluating clutter discrimination algorithms is presented. This model consists of laser pulse emission, clutter jamming, laser pulse reception and target image production. Additionally, a hardware platform was set up to gather clutter data reflected from the ground and trees; the logged data serve as the clutter jamming input to the simulation model. The hardware platform includes a laser diode, a laser detector and a high-sample-rate data logging circuit. The laser diode transmits short laser pulses (40 ns FWHM) at a 12.5 kHz pulse rate and at 905 nm wavelength. An analog-to-digital converter chip integrated in the sampling circuit works at 250 megasamples per second. The simulation model and the hardware platform together constitute a clutter discrimination algorithm simulation system. Using this system, after analyzing the logged clutter data, a new compound pulse detection algorithm was developed. This new algorithm combines a matched filter algorithm and a constant fraction discrimination (CFD) algorithm: the laser echo pulse signal is first processed by the matched filter, and CFD is then applied to the result. Finally, clutter jamming from the ground and trees is discriminated and the target image is produced. Laser radar images were simulated using the CFD algorithm, the matched filter algorithm and the new algorithm, respectively. Simulation results demonstrate that the new algorithm achieves the best target imaging performance in mitigating clutter reflected from the ground and trees.
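
    A schematic of the compound detection chain described (matched filtering followed by constant fraction discrimination) is sketched below; the Gaussian pulse template, fraction, delay, and threshold are illustrative values, not the parameters of the reported system.

        import numpy as np

        def matched_filter(signal, template):
            # Correlation with the transmitted pulse = convolution with its reverse.
            return np.convolve(signal, template[::-1], mode="same")

        def cfd_time(y, fraction=0.5, delay=4):
            # CFD: z(t) = y(t - delay) - fraction * y(t) crosses zero at a fixed
            # fraction of the pulse height, independent of pulse amplitude.
            z = np.zeros_like(y)
            z[delay:] = y[:-delay] - fraction * y[delay:]
            thresh = 0.5 * y.max()              # gate crossings to the main pulse
            for i in range(1, len(z)):
                if z[i - 1] < 0.0 <= z[i] and y[i] > thresh:
                    return i
            return None

        rng = np.random.default_rng(0)
        template = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)
        signal = np.zeros(200)
        signal[90:111] += template                    # echo centered near sample 100
        signal += 0.05 * rng.standard_normal(200)     # clutter/noise stand-in
        print(cfd_time(matched_filter(signal, template)))   # index near 100

    The matched filter maximizes signal-to-noise ratio against the known pulse shape, while the CFD stage makes the timing pick insensitive to echo amplitude, which is the complementarity the compound algorithm exploits.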

  20. Management of Tissue Ischemia in Mastectomy Skin Flaps: Algorithm Integrating SPY Angiography and Topical Nitroglycerin

    PubMed Central

    Sanniec, Kyle; Teotia, Sumeet

    2016-01-01

    Summary: Tissue ischemia can be managed in several different ways depending on the cause of the perfusion defect, including topical nitroglycerin or surgical intervention. However, there are times when tissue perfusion is questionable and clinical examination cannot definitively determine whether ischemic tissue will survive. In this technique article, we describe our comprehensive algorithm for the management of tissue ischemia in mastectomy skin flaps, integrating SPY angiography and topical nitroglycerin; the algorithm can be applied to other plastic surgery procedures as well. PMID:27826472

  1. Operational Control Procedures for the Activated Sludge Process, Part III-A: Calculation Procedures.

    ERIC Educational Resources Information Center

    West, Alfred W.

    This is the second in a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. This document deals exclusively with the calculation procedures, including simplified mixing formulas, aeration tank…

  2. Toddler test or procedure preparation

    MedlinePlus

    Preparing toddler for test/procedure; Test/procedure preparation - toddler; Preparing for a medical test or procedure - toddler ... Before the test, know that your child will probably cry. Even if you prepare, your child may feel some discomfort or ...

  3. Preschooler test or procedure preparation

    MedlinePlus

    Preparing preschoolers for test/procedure; Test/procedure preparation - preschooler ... Preparing children for medical tests can reduce their anxiety. It can also make them less likely to cry and resist the procedure. Research shows that ...

  4. Transitional Division Algorithms.

    ERIC Educational Resources Information Center

    Laing, Robert A.; Meyer, Ruth Ann

    1982-01-01

    A survey of general mathematics students whose teachers were taking an inservice workshop revealed that they had not yet mastered division. More direct introduction of the standard division algorithm is favored in elementary grades, with instruction of transitional processes curtailed. Weaknesses in transitional algorithms appear to outweigh…

  5. Ultrametric Hierarchical Clustering Algorithms.

    ERIC Educational Resources Information Center

    Milligan, Glenn W.

    1979-01-01

    Johnson has shown that the single linkage and complete linkage hierarchical clustering algorithms induce a metric on the data known as the ultrametric. Johnson's proof is extended to four other common clustering algorithms. Two additional methods also produce hierarchical structures which can violate the ultrametric inequality. (Author/CTM)

  6. Testing Intelligently Includes Double-Checking Wechsler IQ Scores

    ERIC Educational Resources Information Center

    Kuentzel, Jeffrey G.; Hetterscheidt, Lesley A.; Barnett, Douglas

    2011-01-01

    The rigors of standardized testing make for numerous opportunities for examiner error, including simple computational mistakes in scoring. Although experts recommend that test scoring be double-checked, the extent to which independent double-checking would reduce scoring errors is not known. A double-checking procedure was established at a…

  7. Radiation injuries after fluoroscopic procedures.

    PubMed

    Mettler, Fred A; Koenig, Titus R; Wagner, Louis K; Kelsey, Charles A

    2002-10-01

    Fluoroscopically guided diagnostic and interventional procedures have become much more commonplace over the last decade. Current fluoroscopes are easily capable of producing dose rates in the range of 0.2 Gy (20 rads) per minute, and the dose rate often changes dramatically with patient positioning and size. Most machines currently in use have no way to display approximate patient dose other than the rough surrogate of total fluoroscopy time; this does not include the patient dose incurred during fluorography (serial imaging or cine runs), which can be considerably greater than the dose during fluoroscopy. There have been over 100 documented cases of radiation injury to the skin and underlying tissue, a large portion of which resulted in dermal necrosis; the true number of injuries is undoubtedly much higher. The highest-dose procedures are complex interventions such as percutaneous angioplasties, stent placements, embolizations, and TIPS. In some cases skin doses have exceeded 60 Gy (6000 rads). In many instances the procedures were performed by physicians with little training in radiation effects, little appreciation of the radiation injuries that are possible, and little knowledge of the strategies that could have been used to reduce both patient and staff doses. Almost all of the severe injuries that have occurred were avoidable.

  8. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are the Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, the Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, referred to here as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  9. Operational Implementation of Space Debris Mitigation Procedures

    NASA Astrophysics Data System (ADS)

    Gicquel, Anne-Helene; Bonaventure, Francois

    2013-08-01

    During a spacecraft's lifetime, Astrium supports its customers in managing collision risk alerts from the Joint Space Operations Center (JSpOC). This was previously done with hot-line support and a manual operational procedure; today it is automated and integrated in QUARTZ, the Astrium flight dynamics operational tool. The algorithms and process details for this new 5-step functionality are provided in this paper. To improve this functionality, R&D activities are ongoing, such as studies of the dilution phenomenon and of low-relative-velocity encounters. Regarding end-of-life disposal, recent operational experience as well as study results are presented.

  10. Monte Carlo procedure for protein design

    NASA Astrophysics Data System (ADS)

    Irbäck, Anders; Peterson, Carsten; Potthast, Frank; Sandelin, Erik

    1998-11-01

    A method for sequence optimization in protein models is presented. The approach, which has inherited its basic philosophy from recent work by Deutsch and Kurosky [Phys. Rev. Lett. 76, 323 (1996)] in that it maximizes conditional probabilities rather than minimizing energy functions, is based upon a different and very efficient multisequence Monte Carlo scheme. By construction, the method ensures that the designed sequences represent good folders thermodynamically. A bootstrap procedure for the sequence-space search is devised, making very large chains feasible. The algorithm is successfully explored on the two-dimensional HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] with chain lengths N=16, 18, and 32.
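
    For context, the 2D HP model scores a fold by counting hydrophobic (H-H) lattice contacts between residues that are not chain neighbours; the minimal sketch below evaluates that energy, the quantity such design schemes must assess for candidate sequences. The sequence and coordinates are illustrative.

      def hp_energy(sequence, coords):
          """sequence: string over {'H','P'}; coords: list of (x, y) lattice sites."""
          site = {c: i for i, c in enumerate(coords)}
          energy = 0
          for i, (x, y) in enumerate(coords):
              if sequence[i] != 'H':
                  continue
              for nb in ((x + 1, y), (x, y + 1)):          # each contact counted once
                  j = site.get(nb)
                  if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                      energy -= 1                          # one H-H topological contact
          return energy

      # Example: a 4-monomer H-chain folded into a square has one non-bonded H-H contact.
      print(hp_energy("HHHH", [(0, 0), (1, 0), (1, 1), (0, 1)]))   # prints -1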

  11. Instrument Calibration and Certification Procedure

    SciTech Connect

    Davis, R. Wesley

    2016-05-31

    The Amptec 640SL-2 is a 4-wire Kelvin failsafe resistance meter, designed to reliably use very low test currents for its resistance measurements. The 640SL-1 is a 2-wire version, designed to support customers using the Reynolds Industries type 311 connector. For both versions, a passive (analog) dual-function DC milliammeter/voltmeter allows the user to verify the actual 640SL output current level and the open-circuit voltage on the test leads. This procedure includes tests of essential performance parameters. Any malfunction noticed during calibration, whether specifically tested for or not, shall be corrected before calibration continues or is completed.

  12. Spacecraft crew procedures from paper to computers

    NASA Technical Reports Server (NTRS)

    Oneal, Michael; Manahan, Meera

    1991-01-01

    Described here is a research project that uses human factors and computer systems knowledge to explore and help guide the design of an effective Human-Computer Interface (HCI) for spacecraft crew procedures. By placing a computer system behind the user interface, it is possible to provide increased procedure automation, related system monitoring, and personalized annotation and help facilities. The research project includes the development of computer-based procedure system HCI prototypes and a testbed for experiments that measure the effectiveness of HCI alternatives in order to make design recommendations. The testbed will include a system for procedure authoring, editing, training, and execution. Progress on developing HCI prototypes for a middeck experiment performed on Space Shuttle Mission STS-34 and for upcoming medical experiments is discussed, as is the status of the experimental testbed.

  13. Evaluation Perspectives and Procedures.

    ERIC Educational Resources Information Center

    Scriven, Michael

    This article on evaluation perspectives and procedures is divided into six sections. The first section briefly discusses qualitative and quantitative research and evaluation. In the second section there is an exploration of the utility and validity of a checklist that can be used to evaluate products, as an instrument for evaluating producers, for…

  14. Educational Accounting Procedures.

    ERIC Educational Resources Information Center

    Tidwell, Sam B.

    This chapter of "Principles of School Business Management" reviews the functions, procedures, and reports with which school business officials must be familiar in order to interpret and make decisions regarding the school district's financial position. Among the accounting functions discussed are financial management, internal auditing,…

  15. Student Loan Collection Procedures.

    ERIC Educational Resources Information Center

    National Association of College and University Business Officers, Washington, DC.

    This manual on the collection of student loans is intended for the use of business officers and loan collection personnel of colleges and universities of all sizes. The introductory chapter is an overview of sound collection practices and procedures. It discusses the making of a loan, in-school servicing of the accounts, the exit interview, the…

  16. Write Procedures That Work.

    ERIC Educational Resources Information Center

    Cubberley, Carol W.

    1991-01-01

    Discusses written procedures that explain library tasks and describes methods for writing them clearly and coherently. The use of appropriate terminology and vocabulary is discussed; the value of illustrations, typography, and format to enhance the visual effect is explained; the intended audience is considered; and examples are given. (seven…

  17. Simulating Laboratory Procedures.

    ERIC Educational Resources Information Center

    Baker, J. E.; And Others

    1986-01-01

    Describes the use of computer assisted instruction in a medical microbiology course. Presents examples of how computer assisted instruction can present case histories in which the laboratory procedures are simulated. Discusses an authoring system used to prepare computer simulations and provides one example of a case history dealing with fractured…

  18. Pediatric Procedural Pain

    ERIC Educational Resources Information Center

    Blount, Ronald L.; Piira, Tiina; Cohen, Lindsey L.; Cheng, Patricia S.

    2006-01-01

    This article reviews the various settings in which infants, children, and adolescents experience pain during acute medical procedures and issues related to referral of children to pain management teams. In addition, self-report, reports by others, physiological monitoring, and direct observation methods of assessment of pain and related constructs…

  19. Visual Screening: A Procedure.

    ERIC Educational Resources Information Center

    Williams, Robert T.

    Vision is a complex process involving three phases: physical (acuity), physiological (integrative), and psychological (perceptual). Although these phases cannot be considered discrete, they provide the basis for the visual screening procedure used by the Reading Services of Colorado State University and described in this document. Ten tests are…

  20. Special Education: Procedural Guide.

    ERIC Educational Resources Information Center

    Dependents Schools (DOD), Washington, DC.

    The guide is intended to provide information to administrators and regional and local case study committees on special education procedures within Department of Defense Dependents Schools (DoDDS). The manual takes a step-by-step approach from referral to the implementation of individualized education programs (IEP). The following topics are…

  1. Inverse wing design in transonic flow including viscous interaction

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.; Ratcliff, Robert R.; Gally, Thomas A.; Campbell, Richard L.

    1989-01-01

    Several inverse methods were compared, and initial results indicate that differences in results are primarily due to coordinate systems and fuselage representations rather than to the design procedures themselves. Further, results are presented from a direct-inverse method that includes 3-D wing boundary layer effects, wake curvature, and wake displacement. These results show that boundary layer displacements must be included in the design process for accurate results.

  2. Simultaneous image compression, fusion and encryption algorithm based on compressive sensing and chaos

    NASA Astrophysics Data System (ADS)

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2016-05-01

    In this paper, a novel approach based on compressive sensing and chaos is proposed for simultaneously compressing, fusing, and encrypting multi-modal images. The sparsely represented source images are first measured with a key-controlled pseudo-random measurement matrix constructed using a logistic map, which reduces the data to be processed and realizes the initial encryption. The obtained measurements are then fused by the proposed adaptive weighted fusion rule. The fused measurement is further encrypted into the ciphertext through an iterative procedure that includes an improved random-pixel-exchange technique and the fractional Fourier transform. The fused image can be reconstructed by decrypting the ciphertext and applying a recovery algorithm. The proposed algorithm not only reduces the data volume but also simplifies the keys, improving the efficiency of transmitting data and distributing keys. Numerical results demonstrate the feasibility and security of the proposed scheme.
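
    A hedged sketch of one ingredient described above: a key-controlled measurement matrix built from the logistic map x_{n+1} = mu * x_n * (1 - x_n). The key values, transient discard length, and the +/-1 quantization are illustrative choices, not the paper's exact construction.

      import numpy as np

      def logistic_measurement_matrix(m, n, x0=0.7341, mu=3.99, discard=1000):
          x = x0
          for _ in range(discard):            # discard transients of the chaotic orbit
              x = mu * x * (1 - x)
          vals = np.empty(m * n)
          for i in range(m * n):
              x = mu * x * (1 - x)
              vals[i] = x
          # quantize the orbit to a +/-1 matrix and scale for compressive sensing
          return np.where(vals > 0.5, 1.0, -1.0).reshape(m, n) / np.sqrt(m)

      # 64 key-dependent compressive measurements of a length-256 signal
      y = logistic_measurement_matrix(64, 256) @ np.random.randn(256)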

  3. THE APPLICATION OF AN EVOLUTIONARY ALGORITHM TO THE OPTIMIZATION OF A MESOSCALE METEOROLOGICAL MODEL

    SciTech Connect

    Werth, D.; O'Steen, L.

    2008-02-11

    We show that a simple evolutionary algorithm can optimize a set of mesoscale atmospheric model parameters with respect to agreement between the mesoscale simulation and a limited set of synthetic observations. This is illustrated using the Regional Atmospheric Modeling System (RAMS). A set of 23 RAMS parameters is optimized by minimizing a cost function based on the root mean square (rms) error between the RAMS simulation and synthetic data (observations derived from a separate RAMS simulation). We find that the optimization can be efficient with relatively modest computer resources, so operational implementation is possible. The optimization efficiency, however, is found to depend strongly on the procedure used to perturb the 'child' parameters relative to their 'parents' within the evolutionary algorithm. In addition, the meteorological variables included in the rms error, and their weighting, are found to be an important factor in finding the global optimum.

  4. 17 CFR 38.3 - Procedures for designation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... description of the trading system, algorithm, security and access limitation procedures with a timeline for an... results and contingency or disaster recovery plans; (C) A copy of any documents describing the applicant's... for review, or the amendment or supplement that is inconsistent with § 38.3(a)(2)(iii)....

  5. Complications in common general pediatric surgery procedures.

    PubMed

    Linnaus, Maria E; Ostlie, Daniel J

    2016-12-01

    Complications related to general pediatric surgery procedures are a major concern for pediatric surgeons and their patients. Although infrequent, when complications occur they can lead to significant morbidity and psychosocial stress. The purpose of this article is to discuss the complications encountered during several common pediatric general surgery procedures, including inguinal hernia repair (open and laparoscopic), umbilical hernia repair, laparoscopic pyloromyotomy, and laparoscopic appendectomy.

  6. YF-16 flight flutter test procedures

    NASA Technical Reports Server (NTRS)

    Brignac, W. J.; Ness, H. B.; Johnson, M. K.; Smith, L. M.

    1976-01-01

    The Random Decrement technique (Randomdec) was incorporated into the procedures for flight testing of the YF-16 lightweight fighter prototype. The damping values obtained substantiate the adequacy of the flutter margin of safety. To confirm which structural modes were being excited, a spectral analysis of each channel was performed using the AFFTC time/data 1923/50 time series analyzer. The in-flight test procedure included careful monitoring of strip charts, three-axis pulses, rolls, and pullups.

  7. Nonequilibrium chemistry boundary layer integral matrix procedure

    NASA Technical Reports Server (NTRS)

    Tong, H.; Buckingham, A. C.; Morse, H. L.

    1973-01-01

    The development of an analytic procedure for the calculation of nonequilibrium boundary layer flows over surfaces of arbitrary catalycities is described. An existing equilibrium boundary layer integral matrix code was extended to include nonequilibrium chemistry while retaining all of the general boundary condition features built into the original code. For particular application to the pitch-plane of shuttle type vehicles, an approximate procedure was developed to estimate the nonequilibrium and nonisentropic state at the edge of the boundary layer.

  8. An exact accelerated stochastic simulation algorithm.

    PubMed

    Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros

    2009-04-14

    An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present "ER-leap" algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems, including a chaotic reaction network. At the same time, ER-leap offers a substantial speedup over SSA, with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton-Watson process.
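
    For reference, the baseline SSA (Gillespie direct method) that ER-leap accelerates can be sketched in a few lines; the dimerization example and all names are illustrative.

      import numpy as np

      def ssa(x, stoich, rates, propensity, t_end, rng=np.random.default_rng(0)):
          """x: state vector; stoich: (n_reactions, n_species) update matrix."""
          t = 0.0
          while t < t_end:
              a = propensity(x, rates)          # reaction propensities a_j(x)
              a0 = a.sum()
              if a0 == 0:
                  break                         # no reaction can fire any more
              t += rng.exponential(1.0 / a0)    # waiting time to the next reaction
              j = rng.choice(len(a), p=a / a0)  # which reaction fires
              x = x + stoich[j]
          return x

      # Example: dimerization 2A -> B with rate c; propensity c * A * (A - 1) / 2
      stoich = np.array([[-2, 1]])
      prop = lambda x, c: np.array([c[0] * x[0] * (x[0] - 1) / 2.0])
      print(ssa(np.array([100, 0]), stoich, [0.01], prop, t_end=1.0))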

  9. Efficient Algorithms for Langevin and DPD Dynamics.

    PubMed

    Goga, N; Rzepiela, A J; de Vries, A H; Marrink, S J; Berendsen, H J C

    2012-10-09

    In this article, we present several algorithms for stochastic dynamics, including Langevin dynamics and different variants of Dissipative Particle Dynamics (DPD), applicable to systems with or without constraints. The algorithms are based on the impulsive application of friction and noise, thus avoiding the computational complexity of algorithms that apply continuous friction and noise. Simulation results on thermostat strength and diffusion properties for ideal gas, coarse-grained (MARTINI) water, and constrained atomic (SPC/E) water systems are discussed. We show that the measured thermal relaxation rates agree well with theoretical predictions. The influence of various parameters on the diffusion coefficient is discussed.
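
    A minimal sketch of the impulsive idea described above: friction and noise are applied as a discrete Ornstein-Uhlenbeck velocity kick once per step instead of as continuous forces. The placement of the kick and all parameter names are illustrative assumptions, not the authors' published integrators.

      import numpy as np

      def langevin_impulse_step(x, v, force, dt, gamma, kT, mass, rng):
          # velocity-Verlet half steps for the conservative force
          v = v + 0.5 * dt * force(x) / mass
          x = x + dt * v
          # impulsive friction-plus-noise kick (exact OU velocity update over dt)
          f = np.exp(-gamma * dt)
          v = f * v + np.sqrt((1.0 - f * f) * kT / mass) * rng.standard_normal(v.shape)
          v = v + 0.5 * dt * force(x) / mass
          return x, v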

  10. Recursive algorithms for vector extrapolation methods

    NASA Technical Reports Server (NTRS)

    Ford, William F.; Sidi, Avram

    1988-01-01

    Three classes of recursion relations are devised for implementing some extrapolation methods for vector sequences. One class of recursion relations can be used to implement methods like the modified minimal polynomial extrapolation and the topological epsilon algorithm; another allows implementation of methods like minimal polynomial and reduced rank extrapolation; the remaining class can be employed in the implementation of the vector E-algorithm. Operation counts and storage requirements for these methods are discussed, and some related techniques for special applications are presented, including methods for the rapid evaluation of the vector E-algorithm.

  11. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

    In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data now exceeds 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. This research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: (1) a universal fixed-to-variable codebook and (2) a row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm achieves a compression ratio of nearly 87%, which exceeds published results, including JBIG2.

  12. Genetic-based EM algorithm for learning Gaussian mixture models.

    PubMed

    Pernkopf, Franz; Bouchaffra, Djamel

    2005-08-01

    We propose a genetic-based expectation-maximization (GA-EM) algorithm for learning Gaussian mixture models from multivariate data. This algorithm is capable of selecting the number of components of the model using the minimum description length (MDL) criterion. Our approach benefits from the properties of genetic algorithms (GAs) and the EM algorithm by combining both into a single procedure. The population-based stochastic search of the GA explores the search space more thoroughly than the EM method alone; our algorithm is therefore able to escape local optima, since it is less sensitive to its initialization. The GA-EM algorithm is elitist, which maintains the monotonic convergence property of the EM algorithm. Experiments on simulated and real data show that GA-EM outperforms the EM method: (1) it obtains a better MDL score under exactly the same termination condition, and (2) it identifies the number of components used to generate the underlying data more often than the EM algorithm.
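
    For context, the EM core that GA-EM wraps can be sketched as a single iteration for a one-dimensional Gaussian mixture; in GA-EM, updates of this kind are interleaved with selection, recombination, and mutation over candidate component sets. The sketch is illustrative, not the authors' implementation.

      import numpy as np

      def em_step(x, w, mu, var):
          """One EM iteration. x: (N,) data; w, mu, var: (K,) mixture parameters."""
          # E-step: responsibilities r[i, k] proportional to w_k * N(x_i | mu_k, var_k)
          d = x[:, None] - mu[None, :]
          pdf = np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
          r = w * pdf
          r /= r.sum(axis=1, keepdims=True)
          # M-step: re-estimate weights, means, and variances from the responsibilities
          nk = r.sum(axis=0)
          w = nk / len(x)
          mu = (r * x[:, None]).sum(axis=0) / nk
          var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk
          return w, mu, var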

  13. Vectorized Rebinning Algorithm for Fast Data Down-Sampling

    NASA Technical Reports Server (NTRS)

    Dean, Bruce; Aronstein, David; Smith, Jeffrey

    2013-01-01

    A vectorized rebinning (down-sampling) algorithm, applicable to N-dimensional data sets, has been developed that offers a significant reduction in computer run time when compared to conventional rebinning algorithms. For clarity, a two-dimensional version of the algorithm is discussed to illustrate some specific details of the algorithm content, and using the language of image processing, 2D data will be referred to as "images," and each value in an image as a "pixel." The new approach is fully vectorized, i.e., the down-sampling procedure is done as a single step over all image rows, and then as a single step over all image columns. Data rebinning (or down-sampling) is a procedure that uses a discretely sampled N-dimensional data set to create a representation of the same data, but with fewer discrete samples. Such data down-sampling is fundamental to digital signal processing, e.g., for data compression applications.
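
    A hedged sketch of the vectorized idea for the 2-D case: one reshape exposes the bin axes, and a single summation per dimension performs the down-sampling with no per-pixel loops. Names and the sum (rather than mean) convention are illustrative.

      import numpy as np

      def rebin2d(image, fy, fx):
          ny, nx = image.shape
          assert ny % fy == 0 and nx % fx == 0, "bin factors must divide the image shape"
          # one vectorized step over all rows, then one over all columns
          return image.reshape(ny // fy, fy, nx // fx, fx).sum(axis=(1, 3))

      img = np.arange(16.0).reshape(4, 4)
      print(rebin2d(img, 2, 2))   # 2x2 image of 2x2-block sums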

  14. A Stepwise Canonical Procedure and the Shrinkage of Canonical Correlations.

    ERIC Educational Resources Information Center

    Rim, Eui-Do

    A stepwise canonical procedure, including two selection indices for variable deletion and a rule for stopping the iterative procedure, was derived as a method of selecting core variables from predictors and criteria. The procedure was applied to simulated data varying in the degree of built in structures in population correlation matrices, number…

  15. 34 CFR 303.512 - Minimum State complaint procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... TODDLERS WITH DISABILITIES State Administration Lead Agency Procedures for Resolving Complaints § 303.512 Minimum State complaint procedures. (a) Time limit, minimum procedures. Each lead agency shall include in...) to— (1) Carry out an independent on-site investigation, if the lead agency determines that such...

  16. 34 CFR 303.512 - Minimum State complaint procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... TODDLERS WITH DISABILITIES State Administration Lead Agency Procedures for Resolving Complaints § 303.512 Minimum State complaint procedures. (a) Time limit, minimum procedures. Each lead agency shall include in...) to— (1) Carry out an independent on-site investigation, if the lead agency determines that such...

  17. 40 CFR 86.135-00 - Dynamometer procedure.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Dynamometer procedure. 86.135-00... Heavy-Duty Vehicles; Test Procedures § 86.135-00 Dynamometer procedure. Section 86.135-00 includes text... accelerator pedal perturbations are to be avoided. When using two-roll dynamometers a truer speed-time...

  18. 42 CFR 431.708 - Procedures for applying standards.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Procedures for applying standards. 431.708 Section... Programs for Licensing Nursing Home Administrators § 431.708 Procedures for applying standards. The agency or board must develop and apply appropriate procedures and techniques, including examinations...

  19. Evaluating a President: Criteria and Procedures

    ERIC Educational Resources Information Center

    Hays, Garry D.

    1976-01-01

    Part II of the Minnesota Plan is presented. Criteria include problem solving and decision-making, personnel, academic planning and administration, fiscal management, student affairs, external relations, and relationship to the board. Procedures include the evaluation team, presidential self-assessment, institutional visit, exit interviews, and…

  20. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, based on the concept of selective update, in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking, via the mean square error (MSE), whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter using a state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors; once the filter reaches the steady state, the update is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity, and low update complexity for colored input signals.
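
    For context, the basic affine projection update that the selective-update scheme builds on is sketched below, with X holding the K selected input vectors; the selection and state-decision logic described above is omitted, and mu, delta, and K are illustrative.

      import numpy as np

      def apa_update(w, X, d, mu=0.5, delta=1e-4):
          """X: (L, K) matrix of the K selected input vectors; d: (K,) desired samples."""
          e = d - X.T @ w                        # a priori errors on the selected vectors
          K = X.shape[1]
          # regularized affine projection step: w += mu * X (X^T X + delta I)^{-1} e
          w = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(K), e)
          return w, e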

  1. Algorithm refinement for fluctuating hydrodynamics

    SciTech Connect

    Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.

    2007-07-03

    This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.

  2. A fast meteor detection algorithm

    NASA Astrophysics Data System (ADS)

    Gural, P.

    2016-01-01

    A low-latency meteor detection algorithm for use with fast steering mirrors was previously developed to track and telescopically follow meteors in real time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing and provides a rich product set of parameterized line detection metrics. Discussion includes the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade-offs for maximum processing throughput, details of the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
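
    The MTP compression mentioned above reduces a frame stack to a single image of per-pixel temporal maxima, which can then be thresholded cheaply; the sketch below is an illustrative reading of that idea, not the author's module, and the threshold rule is an assumption.

      import numpy as np

      def mtp_compress(frames):
          """frames: (n_frames, h, w) video block; returns the per-pixel temporal maximum."""
          return frames.max(axis=0)

      frames = np.random.poisson(10, size=(64, 480, 640)).astype(np.float32)
      mtp = mtp_compress(frames)
      candidates = mtp > mtp.mean() + 5 * mtp.std()   # simple global threshold seeds detection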

  3. Treatment for cartilage injuries of the knee with a new treatment algorithm

    PubMed Central

    Özmeriç, Ahmet; Alemdaroğlu, Kadir Bahadır; Aydoğan, Nevres Hürriyet

    2014-01-01

    Treatment of articular cartilage injuries to the knee remains a considerable challenge today. Current procedures succeed in providing relief of symptoms; however, the damaged articular tissue is not replaced with new tissue of the same biomechanical properties and long-term durability as normal hyaline cartilage. Although many arthroscopic procedures often manage to achieve these goals, results are far from perfect, and there is no agreement on which of these procedures is appropriate, particularly when full-thickness chondral defects are considered. Therefore, the search for a biological solution offering long-term functional healing and improved quality of the injured cartilage has continued. To achieve this goal, and to allow application to wide defects, scaffolds have been developed. The rationale of using a scaffold is to create an environment with biodegradable polymers for the in vitro growth of living cells and their subsequent implantation into the lesion area. A few surgical treatment algorithms have previously been described in reports; however, none of them included one-step or two-step scaffolds. The ultimate aim of this article is to review various arthroscopic treatment options for different-stage lesions and to develop a new treatment algorithm that includes the scaffolds. PMID:25405097

  4. Numerical linear algebra algorithms and software

    NASA Astrophysics Data System (ADS)

    Dongarra, Jack J.; Eijkhout, Victor

    2000-11-01

    The increasing availability of advanced-architecture computers has a significant effect on all spheres of scientific computation, including algorithm research and software development in numerical linear algebra. Linear algebra - in particular, the solution of linear systems of equations - lies at the heart of most calculations in scientific computing. This paper discusses some of the recent developments in linear algebra designed to exploit these advanced-architecture computers. We discuss two broad classes of algorithms: those for dense, and those for sparse matrices.

  5. Evaluating Algorithm Performance Metrics Tailored for Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of a system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtime. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess the key performance aspects expected of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can evaluate various algorithms more effectively than conventional metrics. Specifically, four algorithms are compared: Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR). These algorithms vary in complexity and in their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms differently; depending on the requirements and constraints, suitable metrics may be chosen. Beyond these results, the metrics offer ideas about how metrics suitable for prognostics may be designed so that the evaluation procedure can be standardized.

  6. A Cuckoo Search Algorithm for Multimodal Optimization

    PubMed Central

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. The cuckoo search (CS) algorithm, however, is a simple and effective global optimization algorithm that cannot be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and their distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms on a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy provides better and more consistent performance than existing well-known multimodal algorithms for the majority of test problems while avoiding any serious computational deterioration. PMID:25147850
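
    For context, the global search in cuckoo search is driven by Levy-flight steps, commonly generated with Mantegna's algorithm as sketched below; MCS adds the memory, selection, and depuration mechanisms on top of this. The beta value and step scaling are illustrative.

      import numpy as np
      from math import gamma, sin, pi

      def levy_step(dim, beta=1.5, rng=np.random.default_rng(0)):
          # Mantegna's algorithm for symmetric Levy-stable step lengths
          sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0.0, sigma_u, dim)
          v = rng.normal(0.0, 1.0, dim)
          return u / np.abs(v) ** (1 / beta)    # heavy-tailed steps enable long jumps

      # typical use: new candidate egg x_new = x + alpha * levy_step(dim) * (x - x_best)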

  7. A cuckoo search algorithm for multimodal optimization.

    PubMed

    Cuevas, Erik; Reyna-Orta, Adolfo

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. The cuckoo search (CS) algorithm, however, is a simple and effective global optimization algorithm that cannot be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and their distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms on a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy provides better and more consistent performance than existing well-known multimodal algorithms for the majority of test problems while avoiding any serious computational deterioration.

  8. Ouroboros: A Tool for Building Generic, Hybrid, Divide & Conquer Algorithms

    SciTech Connect

    Johnson, J R; Foster, I

    2003-05-01

    A hybrid divide and conquer algorithm is one that switches from a divide-and-conquer to an iterative strategy at a specified problem size. Such algorithms can provide significant performance improvements relative to alternatives that use a single strategy. However, identifying the optimal problem size at which to switch for a particular algorithm and platform can be challenging. We describe an automated approach to this problem that first conducts experiments to explore the performance space on a particular platform and then uses the resulting performance data to construct an optimal hybrid algorithm on that platform. We implement this technique in a tool, "Ouroboros", that automatically constructs a high-performance hybrid algorithm from a set of registered algorithms. We present results obtained with this tool for several classical divide and conquer algorithms, including matrix multiply and sorting, and report speedups of up to six times over non-hybrid algorithms.
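
    A hedged sketch of the hybrid pattern the tool automates, using sorting as the example: recurse while the problem is large and switch to an iterative strategy below a crossover size, which Ouroboros would choose empirically per platform (the value 32 here is illustrative).

      def hybrid_mergesort(a, crossover=32):
          if len(a) <= crossover:
              # iterative strategy below the crossover: insertion sort
              for i in range(1, len(a)):
                  key, j = a[i], i - 1
                  while j >= 0 and a[j] > key:
                      a[j + 1] = a[j]
                      j -= 1
                  a[j + 1] = key
              return a
          mid = len(a) // 2
          left = hybrid_mergesort(a[:mid], crossover)
          right = hybrid_mergesort(a[mid:], crossover)
          merged, i, j = [], 0, 0                  # standard merge of the two halves
          while i < len(left) and j < len(right):
              if left[i] <= right[j]:
                  merged.append(left[i]); i += 1
              else:
                  merged.append(right[j]); j += 1
          return merged + left[i:] + right[j:]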

  9. An innovative thinking-based intelligent information fusion algorithm.

    PubMed

    Lu, Huimin; Hu, Liang; Liu, Gang; Zhou, Jin

    2013-01-01

    This study proposes an intelligent algorithm that realizes information fusion by drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. Its five key parts (information sensing and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system) are simulated and modeled. The algorithm fully exploits innovative thinking about knowledge in information fusion and is an attempt to translate the abstract concepts of brain cognitive science into specific, operable research routes and strategies. Furthermore, the influence of each parameter of the algorithm on its performance is analyzed and compared, through tests, with that of classical intelligent algorithms. The test results suggest that the proposed algorithm can obtain the optimal problem solution with fewer objective evaluations, improve optimization effectiveness, and achieve effective fusion of information.

  10. Proactive Alleviation Procedure to Handle Black Hole Attack and Its Version.

    PubMed

    Babu, M Rajesh; Dian, S Moses; Chelladurai, Siva; Palaniappan, Mathiyalagan

    2015-01-01

    The world is moving towards a new realm of computing known as the Internet of Things. The Internet of Things envisions connecting almost all objects in the world to the Internet by recognizing them as smart objects. In doing so, the existing networks, which include wired, wireless, and ad hoc networks, should be utilized. Among these, ad hoc networks pose particular security challenges. For instance, the MANET (mobile ad hoc network) is susceptible to various attacks, of which the black hole attack and its variants do serious damage to the entire MANET infrastructure. The severity of this attack increases when compromised MANET nodes work in cooperation with each other to mount a cooperative black hole attack. This paper therefore proposes an alleviation procedure consisting of a timely mandate procedure, a hole detection algorithm, and a sensitive guard procedure to detect maliciously behaving nodes. It has been observed that the proposed procedure is cost-effective and ensures the QoS guarantee by assuring resource availability, thus making the MANET appropriate for the Internet of Things.

  11. Proactive Alleviation Procedure to Handle Black Hole Attack and Its Version

    PubMed Central

    Babu, M. Rajesh; Dian, S. Moses; Chelladurai, Siva; Palaniappan, Mathiyalagan

    2015-01-01

    The world is moving towards a new realm of computing known as the Internet of Things. The Internet of Things envisions connecting almost all objects in the world to the Internet by recognizing them as smart objects. In doing so, the existing networks, which include wired, wireless, and ad hoc networks, should be utilized. Among these, ad hoc networks pose particular security challenges. For instance, the MANET (mobile ad hoc network) is susceptible to various attacks, of which the black hole attack and its variants do serious damage to the entire MANET infrastructure. The severity of this attack increases when compromised MANET nodes work in cooperation with each other to mount a cooperative black hole attack. This paper therefore proposes an alleviation procedure consisting of a timely mandate procedure, a hole detection algorithm, and a sensitive guard procedure to detect maliciously behaving nodes. It has been observed that the proposed procedure is cost-effective and ensures the QoS guarantee by assuring resource availability, thus making the MANET appropriate for the Internet of Things. PMID:26495430

  12. Auto-adaptive statistical procedure for tracking structural health monitoring data

    NASA Astrophysics Data System (ADS)

    Smith, R. Lowell; Jannarone, Robert J.

    2004-07-01

    Whatever specific methods come to be preferred in the field of structural health/integrity monitoring, the associated raw data will eventually have to provide inputs for appropriate damage accumulation models and decision-making protocols. The status of the hardware under investigation will eventually be inferred from the evolution in time of the characteristics of this kind of functional figure of merit. Irrespective of the specific character of the raw and processed data, it is desirable to develop simple, practical procedures to support damage accumulation modeling, status discrimination, and operational decision making in real time. This paper addresses these concerns and presents an auto-adaptive procedure developed to process data output from an array of many dozens of correlated sensors, representing a full complement of information channels associated with typical structural health monitoring applications. The algorithm learns, in statistical terms, the normal behavior patterns of the system and, against that backdrop, recognizes and flags departures from expected behavior. This is accomplished using standard statistical methods, with certain proprietary enhancements employed to address issues of ill conditioning that may arise. Examples have been selected to illustrate how the procedure performs in practice, drawn from the fields of nondestructive testing, infrastructure management, and underwater acoustics. The demonstrations presented include the evaluation of historical electric power utilization data for a major facility and a quantitative assessment of the performance benefits of net-centric, auto-adaptive computational procedures as a function of scale.
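
    A minimal sketch of this kind of auto-adaptive screening: running estimates of each channel's mean and spread are learned from the stream, and samples that depart too far from the learned pattern are flagged. The learning rate and threshold are illustrative, and the proprietary conditioning enhancements are not reproduced.

      import numpy as np

      class AdaptiveMonitor:
          def __init__(self, n_channels, alpha=0.01, threshold=4.0):
              self.mean = np.zeros(n_channels)
              self.var = np.ones(n_channels)
              self.alpha, self.threshold = alpha, threshold

          def update(self, x):
              z = (x - self.mean) / np.sqrt(self.var)       # standardized departure
              flag = np.abs(z).max() > self.threshold       # any channel out of pattern?
              # exponentially weighted updates keep adapting to normal drift
              self.mean += self.alpha * (x - self.mean)
              self.var += self.alpha * ((x - self.mean) ** 2 - self.var)
              return flag, z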

  13. 28 CFR 65.84 - Procedures for the Attorney General when seeking State or local assistance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., immigration law enforcement fundamentals and procedures, civil rights law, and sensitivity and cultural..., including applicable immigration law enforcement standards and procedures, civil rights law, and sensitivity... (CONTINUED) EMERGENCY FEDERAL LAW ENFORCEMENT ASSISTANCE Immigration Emergency Fund § 65.84 Procedures...

  14. 28 CFR 65.84 - Procedures for the Attorney General when seeking State or local assistance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., immigration law enforcement fundamentals and procedures, civil rights law, and sensitivity and cultural..., including applicable immigration law enforcement standards and procedures, civil rights law, and sensitivity... (CONTINUED) EMERGENCY FEDERAL LAW ENFORCEMENT ASSISTANCE Immigration Emergency Fund § 65.84 Procedures...

  15. 28 CFR 65.84 - Procedures for the Attorney General when seeking State or local assistance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., immigration law enforcement fundamentals and procedures, civil rights law, and sensitivity and cultural..., including applicable immigration law enforcement standards and procedures, civil rights law, and sensitivity... (CONTINUED) EMERGENCY FEDERAL LAW ENFORCEMENT ASSISTANCE Immigration Emergency Fund § 65.84 Procedures...

  16. Algorithm development for Maxwell's equations for computational electromagnetism

    NASA Technical Reports Server (NTRS)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.

  17. Genetic-Algorithm Tool For Search And Optimization

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven

    1995-01-01

    SPLICER is a computer program used to solve search and optimization problems. Genetic algorithms are adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural selection and Darwinian "survival of the fittest." The algorithms apply genetically inspired operators to populations of potential solutions in an iterative fashion, creating new populations while searching for an optimal or nearly optimal solution to the problem at hand. Written in Think C.

  18. Solar Position Algorithm for Solar Radiation Applications (Revised)

    SciTech Connect

    Reda, I.; Andreas, A.

    2008-01-01

    This report is a step-by-step procedure for implementing an algorithm to calculate the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of ±0.0003°. It is written in a step-by-step format to simplify otherwise complicated steps, with a focus on the sun rather than on the planets and stars in general. The algorithm is written in such a way as to accommodate solar radiation applications.

  19. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  20. 17 CFR 38.3 - Procedures for designation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... order from input through settlement, and a copy of any system test procedures, tests conducted, test... legal status and governance structure, including governance fitness information; (D) An executed...

  1. Subsea HIPPS design procedure

    SciTech Connect

    Aaroe, R.; Lund, B.F.; Onshus, T.

    1995-12-31

    The paper is based on a feasibility study investigating the possibilities of using a HIPPS (High Integrity Pressure Protection System) to protect a subsea pipeline that is not rated for full wellhead shut-in pressure. The study, called the Subsea OPPS Feasibility Study, was performed by SINTEF, Norway; here, OPPS is an acronym for Overpressure Pipeline Protection System. A design procedure for a subsea HIPPS is described, based on the experience and knowledge gained through the Subsea OPPS Feasibility Study. Before a subsea HIPPS can be applied, its technical feasibility, reliability, and profitability must be demonstrated. The subsea HIPPS design procedure will help to organize and plan the design activities, both with respect to development and verification of a subsea HIPPS. The paper also gives examples of how some of the discussed design steps were performed in the Subsea OPPS Feasibility Study. Finally, further work required to apply a subsea HIPPS is discussed.

  2. Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2008-01-01

    Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. The new algorithms include features such as nearest-neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time-counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting of the simulated molecules, since molecules near the axis represent fewer real molecules than those farther from the axis, owing to the difference in cell volume. In the present methodology, these radial weighting factors are continuous, linear functions of the radial position of each simulated molecule. It is shown that the definition of the number of tentative collisions greatly influences the mean collision time near the axis. The treatment of the grid for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method of tracing the molecules through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface, and a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and the axially symmetric nature of the problem.

  3. An enhanced nonparametric streamflow disaggregation model with genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lee, T.; Salas, J. D.; Prairie, J.

    2010-08-01

    Stochastic streamflow generation is widely utilized for the planning and management of water resources systems. For this purpose, a number of parametric and nonparametric models have been suggested in the literature. Among them, temporal and spatial disaggregation approaches play an important role, particularly in ensuring that historical variance-covariance properties are preserved at various temporal and spatial scales. In this paper, we review the underlying features of existing nonparametric disaggregation methods, identify some of their pros and cons, and propose a disaggregation algorithm capable of surmounting some of the shortcomings of current models. The proposed model hinges on k-nearest-neighbor resampling, an accurate adjusting procedure, and a genetic algorithm. The model has been tested and compared to an existing nonparametric disaggregation approach using data from the Colorado River system. It is shown to be capable of (1) reproducing the season-to-season correlations, including the correlation between the last season of the previous year and the first season of the current year, (2) minimizing or avoiding the generation of flow patterns across the year that are literally the same as those of the historical records, and (3) minimizing or avoiding the generation of negative flows. In addition, it is applicable to intermittent river regimes.
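
    For context, the k-nearest-neighbor resampling step at the core of such nonparametric disaggregation can be sketched as below: a historical year with similar annual flow is sampled with a 1/rank kernel, and its seasonal pattern is rescaled to match the generated annual value. The adjustment and genetic-algorithm refinements of the proposed model are omitted, and all names are illustrative.

      import numpy as np

      def knn_disaggregate(annual, hist_annual, hist_seasonal, k=5,
                           rng=np.random.default_rng(0)):
          """hist_annual: (n,) annual totals; hist_seasonal: (n, 12) seasonal flows."""
          order = np.argsort(np.abs(hist_annual - annual))[:k]   # k nearest years
          w = 1.0 / np.arange(1, k + 1)                          # 1/rank weighting kernel
          pick = rng.choice(order, p=w / w.sum())
          pattern = hist_seasonal[pick] / hist_seasonal[pick].sum()
          return annual * pattern    # seasonal flows consistent with the annual total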

  4. An algorithm to build mock galaxy catalogues using MICE simulations

    NASA Astrophysics Data System (ADS)

    Carretero, J.; Castander, F. J.; Gaztañaga, E.; Crocce, M.; Fosalba, P.

    2015-02-01

    We present a method to build mock galaxy catalogues starting from a halo catalogue that uses halo occupation distribution (HOD) recipes as well as the subhalo abundance matching (SHAM) technique. Combining both prescriptions we are able to push the absolute magnitude of the resulting catalogue to fainter luminosities than using just the SHAM technique and can interpret our results in terms of the HOD modelling. We optimize the method by populating with galaxies friends-of-friends dark matter haloes extracted from the Marenostrum Institut de Ciències de l'Espai dark matter simulations and comparing them to observational constraints. Our resulting mock galaxy catalogues manage to reproduce the observed local galaxy luminosity function and the colour-magnitude distribution as observed by the Sloan Digital Sky Survey. They also reproduce the observed galaxy clustering properties as a function of luminosity and colour. In order to achieve that, the algorithm also includes scatter in the halo mass-galaxy luminosity relation derived from direct SHAM and a modified Navarro-Frenk-White mass density profile to place satellite galaxies in their host dark matter haloes. Improving on general usage of the HOD that fits the clustering for given magnitude limited samples, our catalogues are constructed to fit observations at all luminosities considered and therefore for any luminosity subsample. Overall, our algorithm is an economic procedure of obtaining galaxy mock catalogues down to faint magnitudes that are necessary to understand and interpret galaxy surveys.

  5. Gerchberg-Papoulis algorithm and the finite Zak transform

    NASA Astrophysics Data System (ADS)

    Brodzik, Andrzej K.; Tolimieri, Richard

    2000-12-01

    We propose a new, time-frequency formulation of the Gerchberg-Papoulis algorithm for extrapolation of band-limited signals. The new formulation is obtained by translating the constituent operations of the Gerchberg-Papoulis procedure, the truncation and the Fourier transform, into the language of the finite Zak transform, a time-frequency tool intimately related to the Fourier transform. We show that the use of the Zak transform results in a significant reduction of the computational complexity of the Gerchberg-Papoulis procedure and in increased flexibility of the algorithm.
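
    For reference, the classical Gerchberg-Papoulis iteration that the paper reformulates alternates two projections: band-limiting in the frequency domain and consistency with the known samples in the time domain. The sketch below shows that baseline; the Zak-transform version is not reproduced.

      import numpy as np

      def gerchberg_papoulis(known, mask, band, n_iter=200):
          """known: signal with zeros outside `mask`; band: boolean low-pass support."""
          x = known.copy()
          for _ in range(n_iter):
              X = np.fft.fft(x)
              X[~band] = 0.0                 # projection 1: enforce the band limit
              x = np.fft.ifft(X).real
              x[mask] = known[mask]          # projection 2: restore the observed samples
          return x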

  6. Group implicit concurrent algorithms in nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Ortiz, M.; Sotelino, E. D.

    1989-01-01

    During the 1970s and 1980s, considerable effort was devoted to developing efficient and reliable time stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problem are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of application are those which accurately integrate the low frequency content of the response without necessitating the resolution of the high frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithm development area in recent years has been the advent of parallel computers with multiprocessing capabilities. This work is mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time stepping procedures which lend themselves to an efficient implementation in concurrent machines. Some features of the new computer architectures are summarized. A brief survey of current efforts in the area is presented. A new class of concurrent procedures, or Group Implicit (GI) algorithms, is introduced and analyzed. The numerical simulation shows that GI algorithms hold considerable promise for application in coarse-grain as well as medium-grain parallel computers.

  7. Progress on automated data analysis algorithms for ultrasonic inspection of composites

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2015-03-01

    Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.

  8. A comparison of binary and continuous genetic algorithm in parameter estimation of a logistic growth model

    NASA Astrophysics Data System (ADS)

    Windarto, Indratno, S. W.; Nuraini, N.; Soewono, E.

    2014-02-01

    The genetic algorithm is an optimization method based on the principles of genetics and natural selection in living organisms. The algorithm begins by defining the optimization variables, defining the cost function (in a minimization problem) or the fitness function (in a maximization problem), and selecting genetic algorithm parameters. The main procedures in a genetic algorithm are generating an initial population, selecting some chromosomes (individuals) as parents, mating, and mutation. In this paper, binary and continuous genetic algorithms were implemented to estimate the growth rate and carrying capacity parameters of a logistic model from poultry data cited from the literature. For simplicity, all genetic algorithm parameters (selection rate and mutation rate) were held constant throughout the run of the algorithm. It was found that, by selecting a suitable mutation rate, both algorithms can estimate these parameters well. The suitable range for the mutation rate is wider in the continuous genetic algorithm than in the binary one.
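
    For concreteness, a minimal continuous genetic algorithm fitting the growth rate r and carrying capacity K of a logistic curve to data might look as follows; the population size, rates, and blend crossover are illustrative choices, with the selection and mutation rates held constant as in the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def logistic(t, r, K, p0):
          return K / (1.0 + (K / p0 - 1.0) * np.exp(-r * t))

      def fit_logistic_ga(t, y, p0, pop=50, gens=200, keep=0.5, mut=0.2):
          """Continuous GA with real-valued chromosomes (r, K)."""
          pop_rk = np.column_stack([rng.uniform(0.01, 2.0, pop),
                                    rng.uniform(y.max(), 10 * y.max(), pop)])
          for _ in range(gens):
              cost = [np.sum((logistic(t, r, K, p0) - y) ** 2) for r, K in pop_rk]
              pop_rk = pop_rk[np.argsort(cost)]          # rank by fitness
              n_keep = int(keep * pop)
              parents = pop_rk[:n_keep]
              # blend crossover between random parent pairs
              i, j = rng.integers(0, n_keep, (2, pop - n_keep))
              b = rng.random((pop - n_keep, 1))
              children = b * parents[i] + (1 - b) * parents[j]
              # constant-rate multiplicative Gaussian mutation
              m = rng.random(children.shape) < mut
              children[m] *= 1.0 + rng.normal(0.0, 0.05, children.shape)[m]
              pop_rk = np.vstack([parents, children])
          cost = [np.sum((logistic(t, r, K, p0) - y) ** 2) for r, K in pop_rk]
          return pop_rk[np.argmin(cost)]                  # best (r, K)

    A binary GA would instead encode r and K as fixed-length bit strings and mutate by flipping bits, which is consistent with its narrower workable mutation-rate range.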

  9. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows that pilot-induced oscillations on the straight-in approach were less prevalent with the nonlinear algorithm than with the optimal algorithm. On the offset approach, the nonlinear algorithm's augmented turbulence cues increased workload but were deemed more realistic by the pilots compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  10. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  11. Inclusive Flavour Tagging Algorithm

    NASA Astrophysics Data System (ADS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-10-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capabilities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment.

  12. An Eligibility Determination Algorithm for Part C Early Intervention Enrollment. TRACE Practice Guide, Volume 1, Number 1

    ERIC Educational Resources Information Center

    Dunst, Carl J.

    2006-01-01

    Procedures for using a decision algorithm for determining whether an infant or toddler is eligible for Part C early intervention are the focus of this eligibility determination practice guideline. An algorithm is a step-by-step problem-solving procedure or decision-making process that results in a solution or accurate decision in a finite number of…

  13. The calcaneo-stop procedure.

    PubMed

    Usuelli, F G; Montrasio, U Alfieri

    2012-06-01

    Flexible flatfoot is one of the most common deformities. Arthroereisis procedures are designed to correct this deformity. Among them, the calcaneo-stop is a procedure with both biomechanical and proprioceptive properties. It is designed for pediatric treatment. Results similar to those of the endorthesis procedure are reported. Theoretically, the procedure can be applied to adults if combined with other procedures to obtain a stable plantigrade foot, but medium-term follow-up studies are missing.

  14. Pollutant Assessments Group Procedures Manual

    SciTech Connect

    Chavarria, D.E.; Davidson, J.R.; Espegren, M.L.; Kearl, P.M.; Knott, R.R.; Pierce, G.A.; Retolaza, C.D.; Smuin, D.R.; Wilson, M.J.; Witt, D.A. ); Conklin, N.G.; Egidi, P.V.; Ertel, D.B.; Foster, D.S.; Krall, B.J.; Meredith, R.L.; Rice, J.A.; Roemer, E.K. )

    1991-02-01

    This procedures manual combines the existing procedures for radiological and chemical assessment of hazardous wastes used by the Pollutant Assessments Group at the time of manuscript completion (October 1, 1990). These procedures will be revised in an ongoing process to incorporate new developments in hazardous waste assessment technology and changes in administrative policy and support procedures. Format inconsistencies will be corrected in subsequent revisions of individual procedures.

  15. OpenEIS Algorithms

    SciTech Connect

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  16. Implementation of Parallel Algorithms

    DTIC Science & Technology

    1993-06-30

    their social relations or to achieve some goals. For example, we define a pair-wise force law of repulsion and attraction for a group of identical...quantization based compression schemes. Photo-refractive crystals, which provide high density recording in real time, are used as our holographic media. The...of Parallel Algorithms (J. Reif, ed.). Kluwer Academic Publishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in

  17. Parallel Wolff Cluster Algorithms

    NASA Astrophysics Data System (ADS)

    Bae, S.; Ko, S. H.; Coddington, P. D.

    The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.

  18. Painting with polygons: a procedural watercolor engine.

    PubMed

    DiVerdi, Stephen; Krishnaswamy, Aravind; Měch, Radomír; Ito, Daichi

    2013-05-01

    Existing natural media painting simulations have produced high-quality results, but have required powerful compute hardware and have been limited to screen resolutions. Digital artists would like to be able to use watercolor-like painting tools, but at print resolutions and on lower end hardware such as laptops or even slates. We present a procedural algorithm for generating watercolor-like dynamic paint behaviors in a lightweight manner. Our goal is not to exactly duplicate watercolor painting, but to create a range of dynamic behaviors that allow users to achieve a similar style of process and result, while at the same time having a unique character of its own. Our stroke representation is vector based, allowing for rendering at arbitrary resolutions, and our procedural pigment advection algorithm is fast enough to support painting on slate devices. We demonstrate our technique in a commercially available slate application used by professional artists. Finally, we present a detailed analysis of the different vector-rendering technologies available.

  19. Parallel iterative procedures for approximate solutions of wave propagation by finite element and finite difference methods

    SciTech Connect

    Kim, S.

    1994-12-31

    Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods, in a Lagrangian framework, an efficient way of choosing the algorithm parameter is indicated and the convergence of the algorithm is established. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.

  20. Novel biomedical tetrahedral mesh methods: algorithms and applications

    NASA Astrophysics Data System (ADS)

    Yu, Xiao; Jin, Yanfeng; Chen, Weitao; Huang, Pengfei; Gu, Lixu

    2007-12-01

    Tetrahedral mesh generation algorithms, a prerequisite of many soft-tissue simulation methods, are very important in virtual surgery programs because of the real-time requirement. Aiming to speed up the computation in the simulation, we propose a revised Delaunay algorithm which strikes a good balance among the quality of tetrahedra, boundary preservation, and time complexity, with many improved methods. Another mesh algorithm, named Space-Disassembling, is also presented in this paper, and a comparison of Space-Disassembling, the traditional Delaunay algorithm, and the revised Delaunay algorithm is carried out on clinical soft-tissue simulation projects, including craniofacial plastic surgery and breast reconstruction plastic surgery.

  1. Based on Multi-sensor Information Fusion Algorithm of TPMS Research

    NASA Astrophysics Data System (ADS)

    Yulan, Zhou; Yanhong, Zang; Yahong, Lin

    This paper presents algorithms for TPMS (Tire Pressure Monitoring System) based on multi-sensor information fusion. A unified mathematical model of information fusion is constructed, and three algorithms are applied: an algorithm based on Bayesian inference, an algorithm based on relative distance (an improved algorithm based on Bayesian evidence theory), and an algorithm based on weighted multi-sensor fusion. The calculated results show that the multi-sensor fusion algorithm based on Dempster-Shafer evidence theory performs better than the weighted information fusion method or the Bayesian method.
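
    The weighted-fusion variant can be illustrated by inverse-variance weighting, which gives the minimum-variance linear unbiased combination of simultaneous sensor readings; the pressure values and variances below are made-up examples, not the paper's data.

      import numpy as np

      def weighted_fusion(readings, variances):
          """Fuse redundant sensor readings with weights w_i proportional to
          1/sigma_i^2 (inverse-variance weighting)."""
          var = np.asarray(variances, dtype=float)
          w = (1.0 / var) / np.sum(1.0 / var)
          estimate = np.dot(w, readings)
          fused_variance = 1.0 / np.sum(1.0 / var)   # never exceeds min(variances)
          return estimate, fused_variance

      # example: three tire-pressure readings (kPa) with different noise levels
      p, v = weighted_fusion([219.0, 222.5, 220.8], [4.0, 1.0, 2.25])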

  2. Algorithm implementation on the Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Krist, Steven E.; Zang, Thomas A.

    1987-01-01

    The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.

  3. Procedural Learning and Individual Differences in Language

    PubMed Central

    Lee, Joanna C.; Tomblin, J. Bruce

    2014-01-01

    The aim of the current study was to examine different aspects of procedural memory in young adults who varied with regard to their language abilities. We selected a sample of procedural memory tasks, each of which represented a unique type of procedural learning, and has been linked, at least partially, to the functionality of the corticostriatal system. The findings showed that variance in language abilities is associated with performance on different domains of procedural memory, including the motor domain (as shown in the pursuit rotor task), the cognitive domain (as shown in the weather prediction task), and the linguistic domain (as shown in the nonword repetition priming task). These results implicate the corticostriatal system in individual differences in language. PMID:26190949

  4. Spacecraft crew procedures from paper to computers

    NASA Technical Reports Server (NTRS)

    Oneal, Michael; Manahan, Meera

    1993-01-01

    Large volumes of paper are launched with each Space Shuttle Mission that contain step-by-step instructions for various activities that are to be performed by the crew during the mission. These instructions include normal operational procedures and malfunction or contingency procedures and are collectively known as the Flight Data File (FDF). An example of nominal procedures would be those used in the deployment of a satellite from the Space Shuttle; a malfunction procedure would describe actions to be taken if a specific problem developed during the deployment. A new FDF and associated system is being created for Space Station Freedom. The system will be called the Space Station Flight Data File (SFDF). NASA has determined that the SFDF will be computer-based rather than paper-based. Various aspects of the SFDF are discussed.

  5. Development of automated test procedures and techniques for LSI circuits

    NASA Technical Reports Server (NTRS)

    Carroll, B. D.

    1975-01-01

    Testing of large scale integrated (LSI) logic circuits was considered from the point of view of automatic test pattern generation. A system for automatic test pattern generation is described. A test generation algorithm is presented that can be applied to both combinational and sequential logic circuits. Also included is a programmed implementation of the algorithm and sample results from the program.

  6. Algorithms for intravenous insulin delivery.

    PubMed

    Braithwaite, Susan S; Clement, Stephen

    2008-08-01

    This review aims to classify algorithms for intravenous insulin infusion according to design. Essential input data include the current blood glucose (BG(current)), the previous blood glucose (BG(previous)), the test time of BG(current) (test time(current)), the test time of BG(previous) (test time(previous)), and the previous insulin infusion rate (IR(previous)). Output data consist of the next insulin infusion rate (IR(next)) and next test time. The classification differentiates between "IR" and "MR" algorithm types, both defined as a rule for assigning an insulin infusion rate (IR), having a glycemic target. Both types are capable of assigning the IR for the next iteration of the algorithm (IR(next)) as an increasing function of BG(current), IR(previous), and rate-of-change of BG with respect to time, each treated as an independent variable. Algorithms of the IR type directly seek to define IR(next) as an incremental adjustment to IR(previous). At test time(current), under an IR algorithm the differences in values of IR(next) that might be assigned depending upon the value of BG(current) are not necessarily continuously dependent upon, proportionate to, or commensurate with either the IR(previous) or the rate-of-change of BG. Algorithms of the MR type create a family of IR functions of BG differing according to maintenance rate (MR), each being an iso-MR curve. The change of IR(next) with respect to BG(current) is a strictly increasing function of MR. At test time(current), algorithms of the MR type use IR(previous) and the rate-of-change of BG to define the MR, multiplier, or column assignment, which will be used for patient assignment to the right iso-MR curve and as precedent for IR(next). Bolus insulin therapy is especially effective when used in proportion to carbohydrate load to cover anticipated incremental transitory enteral or parenteral carbohydrate exposure. Specific distinguishing algorithm design features and choice of parameters may be important to
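
    As a purely illustrative sketch (not a clinical protocol), an IR-type rule can be written as an incremental adjustment to IR(previous) driven by the deviation of BG(current) from target and by the rate-of-change of BG; every constant and name below is hypothetical.

      def ir_next(bg_current, bg_previous, minutes_elapsed, ir_previous,
                  target=120.0, k_p=0.02, k_d=0.01):
          """Hypothetical IR-type increment rule; NOT for clinical use."""
          # rate-of-change of blood glucose, mg/dL per minute
          roc = (bg_current - bg_previous) / minutes_elapsed
          # increment combines the error from target and the BG trend
          adjustment = k_p * (bg_current - target) + k_d * 60.0 * roc
          return max(0.0, ir_previous + adjustment)   # units/h, floored at zero

    An MR-type algorithm would instead use IR(previous) and the rate-of-change to select an iso-MR curve, then read IR(next) off that curve at BG(current).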

  7. Scheduling with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.

    1994-01-01

    In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focussed for this research: how to allocate crews to jobs while satisfying job precedence requirements and personnel, and tooling and fixture (or, more generally, resource) requirements.

  8. Automated training for algorithms that learn from genomic data.

    PubMed

    Cilingir, Gokcen; Broschat, Shira L

    2015-01-01

    Supervised machine learning algorithms are used by life scientists for a variety of objectives. Expert-curated public gene and protein databases are major resources for gathering data to train these algorithms. While these data resources are continuously updated, generally, these updates are not incorporated into published machine learning algorithms which thereby can become outdated soon after their introduction. In this paper, we propose a new model of operation for supervised machine learning algorithms that learn from genomic data. By defining these algorithms in a pipeline in which the training data gathering procedure and the learning process are automated, one can create a system that generates a classifier or predictor using information available from public resources. The proposed model is explained using three case studies on SignalP, MemLoci, and ApicoAP in which existing machine learning models are utilized in pipelines. Given that the vast majority of the procedures described for gathering training data can easily be automated, it is possible to transform valuable machine learning algorithms into self-evolving learners that benefit from the ever-changing data available for gene products and to develop new machine learning algorithms that are similarly capable.

  9. Linear-scaling and parallelisable algorithms for stochastic quantum chemistry

    NASA Astrophysics Data System (ADS)

    Booth, George H.; Smart, Simon D.; Alavi, Ali

    2014-07-01

    For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimised paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelisation which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the methods introduce new concepts required for algorithmic efficiency. In this paper, we explore these concepts and detail an algorithm used for Full Configuration Interaction Quantum Monte Carlo (FCIQMC), which is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear-scaling with walker number. Many of the algorithms are also in use in, or can be transferred to, other stochastic quantum chemical methods and implementations. We apply these algorithms to the strongly correlated chromium dimer to demonstrate their efficiency and parallelism.

  10. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
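
    FLANN ships with OpenCV, so one common way to use the randomized k-d forest index from Python is through cv2.FlannBasedMatcher; the random descriptor arrays below are stand-ins for real feature descriptors.

      import cv2
      import numpy as np

      FLANN_INDEX_KDTREE = 1                       # randomized k-d forest
      index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
      search_params = dict(checks=50)              # leaves to visit: speed vs accuracy

      des1 = np.random.rand(500, 128).astype(np.float32)   # stand-in descriptors
      des2 = np.random.rand(500, 128).astype(np.float32)

      flann = cv2.FlannBasedMatcher(index_params, search_params)
      matches = flann.knnMatch(des1, des2, k=2)

      # Lowe's ratio test keeps only unambiguous matches
      good = [m for m, n in matches if m.distance < 0.7 * n.distance]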

  11. A Polynomial Time, Numerically Stable Integer Relation Algorithm

    NASA Technical Reports Server (NTRS)

    Ferguson, Helaman R. P.; Bailey, Daivd H.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Let x = (x_1, x_2, ..., x_n) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
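
    An implementation of PSLQ is available in the mpmath Python library, so the algorithm's behaviour is easy to try out, for example by recovering the minimal polynomial of the square root of 2 (the precision setting is an arbitrary choice):

      from mpmath import mp, mpf, sqrt, pslq

      mp.dps = 50                        # working precision, in decimal digits

      # find integers (a0, a1, a2), not all zero, with a0 + a1*r + a2*r^2 = 0
      r = sqrt(2)
      print(pslq([mpf(1), r, r ** 2]))   # -> [-2, 0, 1] (up to overall sign)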

  12. A new real-time tsunami detection algorithm

    NASA Astrophysics Data System (ADS)

    Chierici, Francesco; Embriaco, Davide; Pignagnoli, Luca

    2017-01-01

    Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection based on real-time tide removal and real-time band-pass filtering of seabed pressure recordings. The algorithm greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is designed to be used also in autonomous early warning systems, with a set of input parameters and procedures which can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. The algorithm's performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of occurrence of false alarms. Pressure data sets acquired by Bottom Pressure Recorders in different locations and environmental conditions have been used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami event which occurred at Haida Gwaii on 28 October 2012, using data recorded by the Bullseye underwater node of Ocean Networks Canada. The algorithm successfully ran for test purposes in year-long missions onboard abyssal observatories deployed in the Gulf of Cadiz and in the Western Ionian Sea.
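
    The filtering stage can be sketched as a causal band-pass filter applied to the de-meaned pressure record followed by a simple threshold test; the cutoff frequencies, filter order, sampling rate, and crude mean removal below are illustrative assumptions, not the paper's exact design.

      import numpy as np
      from scipy.signal import butter, sosfilt

      def tsunami_band_filter(pressure, fs=1.0):
          """Band-pass a seabed pressure record (sampled at fs Hz) to an
          assumed tsunami band of roughly 0.2-20 mHz. A causal IIR filter is
          used, so the scheme could run sample-by-sample in real time."""
          sos = butter(4, [0.0002, 0.02], btype="bandpass", fs=fs, output="sos")
          return sosfilt(sos, pressure - pressure.mean())

      def detect(filtered, threshold):
          """Indices where the filtered signal exceeds a fixed threshold."""
          return np.flatnonzero(np.abs(filtered) > threshold)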

  13. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, T.A.; Lane, W.L.; Stedinger, J.R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient "weighting" procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed-form method has been available for quantifying the uncertainty of EMA-based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood-quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25- to 100-year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.
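
    For orientation, the Bulletin 17B building block that EMA retains, method-of-moments fitting of the Log Pearson Type 3 distribution, can be sketched as follows; this sketch uses only the systematic record and omits EMA's handling of historical and paleoflood information.

      import numpy as np
      from scipy import stats

      def lp3_quantile(peak_flows, aep):
          """Flood quantile for annual exceedance probability `aep` from a
          Log Pearson Type 3 fit by ordinary method of moments on log10
          flows (e.g. aep=0.01 gives the "100-year" flood)."""
          logq = np.log10(peak_flows)
          skew = stats.skew(logq, bias=False)
          q = stats.pearson3.ppf(1.0 - aep, skew,
                                 loc=logq.mean(), scale=logq.std(ddof=1))
          return 10.0 ** q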

  14. CHASTE: incorporating a novel multi-scale spatial and temporal algorithm into a large-scale open source library.

    PubMed

    Bernabeu, Miguel O; Bordas, Rafel; Pathmanathan, Pras; Pitt-Francis, Joe; Cooper, Jonathan; Garny, Alan; Gavaghan, David J; Rodriguez, Blanca; Southern, James A; Whiteley, Jonathan P

    2009-05-28

    Recent work has described the software engineering and computational infrastructure that has been set up as part of the Cancer, Heart and Soft Tissue Environment (CHASTE) project. CHASTE is an open source software package that currently has heart and cancer modelling functionality. This software has been written using a programming paradigm imported from the commercial sector and has resulted in a code that has been subject to a far more rigorous testing procedure than that is usual in this field. In this paper, we explain how new functionality may be incorporated into CHASTE. Whiteley has developed a numerical algorithm for solving the bidomain equations that uses the multi-scale (MS) nature of the physiology modelled to enhance computational efficiency. Using a simple geometry in two dimensions and a purpose-built code, this algorithm was reported to give an increase in computational efficiency of more than two orders of magnitude. In this paper, we begin by reviewing numerical methods currently in use for solving the bidomain equations, explaining how these methods may be developed to use the MS algorithm discussed above. We then demonstrate the use of this algorithm within the CHASTE framework for solving the monodomain and bidomain equations in a three-dimensional realistic heart geometry. Finally, we discuss how CHASTE may be developed to include new physiological functionality--such as modelling a beating heart and fluid flow in the heart--and how new algorithms aimed at increasing the efficiency of the code may be incorporated.

  15. Research on Knowledge Based Programming and Algorithm Design.

    DTIC Science & Technology

    1981-08-01

    34prime finding" (including the Sieve of Eratosthenes and linear time prime finding). This research is described in sections 6,7,8, and 9. 4 ii. Summary of...algorithm and several variants on prime finding including the Sieve of Eratosthenes and a more sophisticated linear-time algorithm. In these additional

  16. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1984-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lower memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  17. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1982-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lower memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  18. Cell list algorithms for nonequilibrium molecular dynamics

    NASA Astrophysics Data System (ADS)

    Dobson, Matthew; Fox, Ian; Saracino, Alexandra

    2016-06-01

    We present two modifications of the standard cell list algorithm that handle molecular dynamics simulations with deforming periodic geometry. Such geometry naturally arises in the simulation of homogeneous, linear nonequilibrium flow modeled with periodic boundary conditions, and recent progress has been made developing boundary conditions suitable for general 3D flows of this type. Previous works focused on the planar flows handled by Lees-Edwards or Kraynik-Reinelt boundary conditions, while the new versions of the cell list algorithm presented here are formulated to handle the general 3D deforming simulation geometry. As in the case of equilibrium, for short-ranged pairwise interactions, the cell list algorithm reduces the computational complexity of the force computation from O(N²) to O(N), where N is the total number of particles in the simulation box. We include a comparison of the complexity and efficiency of the two proposed modifications of the standard algorithm.
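
    The fixed-geometry cell list that both modifications build on can be sketched as follows; the paper's versions additionally remap the cell decomposition as the simulation box deforms. The sketch assumes particle coordinates in [0, box) and a cubic box at least three cells wide, so the 27 neighbour offsets are distinct.

      import numpy as np

      def neighbour_pairs(positions, box, rcut):
          """All pairs closer than rcut under periodic boundary conditions,
          found in O(N) by scanning only the 27 cells around each particle."""
          ncell = int(box // rcut)                # assumes box >= 3 * rcut
          size = box / ncell                      # cell side, >= rcut
          cells = {}
          for i, c in enumerate(map(tuple, (positions // size).astype(int) % ncell)):
              cells.setdefault(c, []).append(i)
          pairs = []
          for (cx, cy, cz), members in cells.items():
              for dx in (-1, 0, 1):
                  for dy in (-1, 0, 1):
                      for dz in (-1, 0, 1):
                          other = ((cx + dx) % ncell, (cy + dy) % ncell,
                                   (cz + dz) % ncell)
                          for i in members:
                              for j in cells.get(other, ()):
                                  if i < j:
                                      d = positions[i] - positions[j]
                                      d -= box * np.round(d / box)  # minimum image
                                      if np.dot(d, d) < rcut * rcut:
                                          pairs.append((i, j))
          return pairs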

  19. Five-dimensional Janis-Newman algorithm

    NASA Astrophysics Data System (ADS)

    Erbin, Harold; Heurtier, Lucien

    2015-08-01

    The Janis-Newman algorithm has been shown to be successful in finding new stationary solutions of four-dimensional gravity. Attempts for a generalization to higher dimensions have already been found for the restricted cases with only one angular momentum. In this paper we propose an extension of this algorithm to five-dimensions with two angular momenta—using the prescription of Giampieri—through two specific examples, that are the Myers-Perry and BMPV black holes. We also discuss possible enlargements of our prescriptions to other dimensions and maximal number of angular momenta, and show how dimensions higher than six appear to be much more challenging to treat within this framework. Nonetheless this general algorithm provides a unification of the formulation in d=3,4,5 of the Janis-Newman algorithm, from which several examples are exposed, including the BTZ black hole.

  20. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings including multi-objective and multi-discipline solutions that lead to the generation of pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  1. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings including multi-objective and multi-discipline solutions that lead to the generation of pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  2. Parallelization of a blind deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

    Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.

  3. Testing and Development of the Onsite Earthquake Early Warning Algorithm to Reduce Event Uncertainties

    NASA Astrophysics Data System (ADS)

    Andrews, J. R.; Cochran, E. S.; Hauksson, E.; Felizardo, C.; Liu, T.; Ross, Z.; Heaton, T. H.

    2015-12-01

    Primary metrics for measuring earthquake early warning (EEW) system and algorithm performance are the rate of false alarms and the uncertainty in earthquake parameters. The Onsite algorithm, currently one of three EEW algorithms implemented in ShakeAlert, uses the ground-motion period parameter (τc) and peak initial displacement parameter (Pd) to estimate the magnitude and expected ground shaking of an ongoing earthquake. It is the only algorithm originally designed to issue single-station alerts, necessitating that results from individual stations be as reliable and accurate as possible. The ShakeAlert system has been undergoing testing on continuous real-time data in California for several years, and the latest version of the Onsite algorithm for several months. This permits analysis of the response to a range of signals, from environmental noise to hardware testing and maintenance procedures to moderate or large earthquake signals at varying distances from the networks. We find that our existing discriminator, relying only on τc and Pd, while performing well to exclude large teleseismic events, is less effective for moderate regional events and can also incorrectly exclude data from local events. Motivated by these experiences, we use a collection of waveforms from potentially problematic 'noise' events and real earthquakes to explore methods to discriminate real and false events, using the ground motion and period parameters available in Onsite's processing methodology. Once an event is correctly identified, a magnitude and location estimate is critical to determining the expected ground shaking. Scatter in the measured parameters translates to higher than desired uncertainty in Onsite's current calculations. We present an overview of alternative methods, including incorporation of polarization information, to improve parameter determination for a test suite including both large (M4 to M7) events and three years of small to moderate events across California.
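
    For illustration, the two input parameters can be computed from an early P-wave displacement window with the standard definitions: τc from the ratio of velocity energy to displacement energy, and Pd as the peak displacement. Window length and the preprocessing details of the operational Onsite code are not reproduced here.

      import numpy as np

      def tau_c_and_pd(displacement, fs, window_s=3.0):
          """Period parameter tau_c (s) and peak displacement Pd from the
          first `window_s` seconds of P-wave displacement sampled at fs Hz."""
          u = displacement[:int(window_s * fs)]
          du = np.gradient(u, 1.0 / fs)          # velocity by finite differences
          r = np.sum(du ** 2) / np.sum(u ** 2)   # the sample interval cancels
          tau_c = 2.0 * np.pi / np.sqrt(r)
          pd = np.abs(u).max()
          return tau_c, pd

    A τc-Pd discriminator then applies thresholds to these two numbers, which is why scatter in them feeds directly into alert uncertainty.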

  4. Surface cleanliness measurement procedure

    DOEpatents

    Schroder, Mark Stewart; Woodmansee, Donald Ernest; Beadie, Douglas Frank

    2002-01-01

    A procedure and tools for quantifying surface cleanliness are described. Cleanliness of a target surface is quantified by wiping a prescribed area of the surface with a flexible, bright white cloth swatch, preferably mounted on a special tool. The cloth picks up a substantial amount of any particulate surface contamination. The amount of contamination is determined by measuring the reflectivity loss of the cloth before and after wiping on the contaminated system and comparing that loss to a previous calibration with similar contamination. In the alternative, a visual comparison of the contaminated cloth to a contamination key provides an indication of the surface cleanliness.

  5. Radiometric correction procedure study

    NASA Technical Reports Server (NTRS)

    Colby, C.; Sands, R.; Murphrey, S.

    1978-01-01

    A comparison of MSS radiometric processing techniques identified as the preferred radiometric processing technique a procedure that equalizes the means and standard deviations of detector-specific histograms of uncalibrated scene data. Evaluation of MSS calibration data demonstrated that the relationship between detector responses is essentially linear over the range of intensities typically observed in MSS data, and that the calibration wedge data possess a high degree of temporal stability. An analysis of the preferred radiometric processing technique showed that it could be incorporated into the MDP-MSS system without a major redesign of the system, and with minimal impact on system throughput.
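
    A minimal sketch of that equalization step, assuming a known mapping from image rows to detectors and scene-wide reference statistics; the function and variable names are illustrative.

      import numpy as np

      def equalize_detectors(image, detector_of_row, ref_mean, ref_std):
          """Destripe by forcing each detector's pixel histogram to a
          reference mean and standard deviation (MSS has 6 detectors per
          band, each contributing every 6th scan line)."""
          out = image.astype(float).copy()
          for d in np.unique(detector_of_row):
              rows = detector_of_row == d
              m, s = out[rows].mean(), out[rows].std()
              out[rows] = (out[rows] - m) / s * ref_std + ref_mean
          return out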

  6. Vascular Access Procedures

    MedlinePlus

    ... conditions, allergies and medications you’re taking, including herbal supplements and aspirin. You may be advised to stop ... doctor all medications that you are taking, including herbal supplements, and if you have any allergies, especially to ...

  7. Advanced crew procedures development techniques: Procedures and performance program description

    NASA Technical Reports Server (NTRS)

    Arbet, J. D.; Mangiaracina, A. A.

    1975-01-01

    The Procedures and Performance Program (PPP) for operation in conjunction with the Shuttle Procedures Simulator (SPS) is described. The PPP user interface, the SPS/PPP interface, and the PPP applications software are discussed.

  8. Regulations and Procedures Manual

    SciTech Connect

    Young, Lydia J.

    2011-07-25

    The purpose of the Regulations and Procedures Manual (RPM) is to provide LBNL personnel with a reference to University and Lawrence Berkeley National Laboratory (LBNL or Laboratory) policies and regulations by outlining normal practices and answering most policy questions that arise in the day-to-day operations of Laboratory organizations. Much of the information in this manual has been condensed from detail provided in LBNL procedure manuals, Department of Energy (DOE) directives, and Contract DE-AC02-05CH11231. This manual is not intended, however, to replace any of those documents. RPM sections on personnel apply only to employees who are not represented by unions. Personnel policies pertaining to employees represented by unions may be found in their labor agreements. Questions concerning policy interpretation should be directed to the LBNL organization responsible for the particular policy. A link to the Managers Responsible for RPM Sections is available on the RPM home page. If it is not clear which organization is responsible for a policy, please contact Requirements Manager Lydia Young or the RPM Editor.

  9. Designing Flight Deck Procedures

    NASA Technical Reports Server (NTRS)

    Degani, Asaf; Wiener, Earl

    2005-01-01

    Three reports address the design of flight-deck procedures and various aspects of human interaction with cockpit systems that have direct impact on flight safety. One report, On the Typography of Flight- Deck Documentation, discusses basic research about typography and the kind of information needed by designers of flight deck documentation. Flight crews reading poorly designed documentation may easily overlook a crucial item on the checklist. The report surveys and summarizes the available literature regarding the design and typographical aspects of printed material. It focuses on typographical factors such as proper typefaces, character height, use of lower- and upper-case characters, line length, and spacing. Graphical aspects such as layout, color coding, fonts, and character contrast are discussed; and several cockpit conditions such as lighting levels and glare are addressed, as well as usage factors such as angular alignment, paper quality, and colors. Most of the insights and recommendations discussed in this report are transferable to paperless cockpit systems of the future and computer-based procedure displays (e.g., "electronic flight bag") in aerospace systems and similar systems that are used in other industries such as medical, nuclear systems, maritime operations, and military systems.

  10. Regulations and Procedures Manual

    SciTech Connect

    Young, Lydia

    2010-09-30

    The purpose of the Regulations and Procedures Manual (RPM) is to provide Laboratory personnel with a reference to University and Lawrence Berkeley National Laboratory policies and regulations by outlining the normal practices and answering most policy questions that arise in the day-to-day operations of Laboratory departments. Much of the information in this manual has been condensed from detail provided in Laboratory procedure manuals, Department of Energy (DOE) directives, and Contract DE-AC02-05CH11231. This manual is not intended, however, to replace any of those documents. The sections on personnel apply only to employees who are not represented by unions. Personnel policies pertaining to employees represented by unions may be found in their labor agreements. Questions concerning policy interpretation should be directed to the department responsible for the particular policy. A link to the Managers Responsible for RPM Sections is available on the RPM home page. If it is not clear which department should be called, please contact the Associate Laboratory Director of Operations.

  11. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
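
    The step-size adaptation at the heart of such algorithms can be illustrated with a (1+1)-style scheme that expands the Gaussian mutation step after a successful step and contracts it after a failure; the constants below are in the classic 1/5-success-rule spirit and are not the paper's exact rule.

      import numpy as np

      def self_adapting_minimize(f, x0, sigma0=1.0, iters=2000, rng=None):
          """Minimize f by Gaussian mutation with a success-driven step size."""
          rng = rng or np.random.default_rng()
          x = np.asarray(x0, dtype=float)
          fx, sigma = f(x), sigma0
          for _ in range(iters):
              y = x + sigma * rng.standard_normal(x.shape)
              fy = f(y)
              if fy < fx:
                  x, fx = y, fy
                  sigma *= 1.5      # expand after success
              else:
                  sigma *= 0.9      # contract after failure
          return x, fx

      # example: a 5-dimensional sphere function
      x_best, f_best = self_adapting_minimize(lambda x: np.sum(x ** 2), np.ones(5))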

  12. Quantum Algorithms, Symmetry, and Fourier Analysis

    NASA Astrophysics Data System (ADS)

    Denney, Aaron

    I describe the role of symmetry in two quantum algorithms, with a focus on how that symmetry is made manifest by the Fourier transform. The Fourier transform can be considered in a wider context than the familiar one of functions on R^n or Z/nZ; instead it can be defined for an arbitrary group, where it is known as representation theory. The first quantum algorithm solves an instance of the hidden subgroup problem: distinguishing conjugates of the Borel subgroup from each other in groups related to PSL(2, q). I use the symmetry of the subgroups under consideration to reduce the problem to a mild extension of a previously solved problem. This generalizes a result of Moore, Rockmore, Russell and Schulman by switching to a more natural measurement that also applies to prime powers. In contrast to the first algorithm, the second quantum algorithm is an attempt to use naturally continuous spaces. Quantum walks have proved to be a useful tool for designing quantum algorithms. The natural equivalent to continuous-time quantum walks is evolution with the Schrödinger equation, under the kinetic energy Hamiltonian for a massive particle. I take advantage of quantum interference to find the center of spherical shells in high dimensions. Any implementation would be likely to take place on a discrete grid, using the ability of a digital quantum computer to simulate the evolution of a quantum system. In addition, I use ideas from the second algorithm on a different set of starting states, and find that quantum evolution can be used to sample from the evolute of a plane curve. The method of stationary phase is used to determine scaling exponents characterizing the precision and probability of success for this procedure.

  13. Algorithmization in Learning and Instruction.

    ERIC Educational Resources Information Center

    Landa, L. N.

    An introduction to the theory of algorithms reviews the theoretical issues of teaching algorithms, the logical and psychological problems of devising algorithms of identification, and the selection of efficient algorithms; and then relates all of these to the classroom teaching process. It also describes some major research on the effectiveness of…

  14. Cognition and procedure representational requirements for predictive human performance models

    NASA Technical Reports Server (NTRS)

    Corker, K.

    1992-01-01

    Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulative and executable representation of aircrew and procedures that is generally applicable to crew/procedure task-analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods

  15. Paradigms for Realizing Machine Learning Algorithms.

    PubMed

    Agneeswaran, Vijay Srinivas; Tonpay, Pranay; Tiwary, Jayati

    2013-12-01

    The article explains the three generations of machine learning algorithms, all of which aim to operate on big data. The first-generation tools are SAS, SPSS, etc., while second-generation realizations include Mahout and RapidMiner (which work over Hadoop), and the third-generation paradigms include Spark and GraphLab, among others. The essence of the article is that for a number of machine learning algorithms, it is important to look beyond Hadoop's Map-Reduce paradigm in order to make them work on big data. A number of promising contenders have emerged in the third generation that can be exploited to realize deep analytics on big data.

  16. Algorithms for the Computation of Debris Risks

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2017-01-01

    Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry, including the physical geometry of orbits and the geometry of non-spherical satellites. A number of tools have been developed in NASA's Orbital Debris Program Office to handle these calculations, many of which have never been published before. These include algorithms that are used in NASA's Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper will present an introduction to these algorithms and the assumptions upon which they are based.

  17. Attenuation correction effects on SPECT/CT procedures: phantoms studies.

    PubMed

    Oliveira, M L; Seren, M E G; Rocha, F C; Brunetto, S Q; Ramos, C D; Button, V L S N

    2013-01-01

    Attenuation correction is widely used in SPECT/CT (Single Photon Emission Computed Tomography) procedures, especially for imaging of the thorax region. Different compensation methods have been developed and introduced into clinical practice. Most of them use attenuation maps obtained using transmission scanning systems. However, this gives an extra dose of radiation to the patient. The purpose of this study was to identify when attenuation correction is really important during SPECT/CT procedures. For this purpose, we used a Jaszczak phantom and a phantom with three line sources, filled with technetium (99m-Tc), with scattering materials such as air, water and acrylic, in different detector configurations. Analytic and iterative reconstruction algorithms were applied to all acquired images, the latter with or without attenuation correction. We analyzed parameters such as eccentricity, contrast and spatial resolution in the images. The best reconstruction algorithm on average was the iterative one, for images with 128 × 128 and 64 × 64 matrices. The analytical algorithm was effective only for improving eccentricity in the 64 × 64 matrix and contrast in the 128 × 128 matrix with low counting statistics. Turning to routine clinical examinations, on average, for the 128 × 128 matrix and low counting statistics, the best algorithm was the iterative one without attenuation correction, improving the three analyzed parameters by 150%; for the same matrix size but with high counting statistics, the iterative algorithm with attenuation correction was 25% better than that without correction. We can conclude that using the iterative algorithm with attenuation correction in water, with the extra dose it entails, is not justified for procedures with low counting statistics, being relevant only if the intention is to prioritize contrast in acquisitions with high counting statistics.

  18. Avoiding the Enumeration of Infeasible Elementary Flux Modes by Including Transcriptional Regulatory Rules in the Enumeration Process Saves Computational Costs.

    PubMed

    Jungreuthmayer, Christian; Ruckerbauer, David E; Gerstl, Matthias P; Hanscho, Michael; Zanghellini, Jürgen

    2015-01-01

    Despite the significant progress made in recent years, the computation of the complete set of elementary flux modes of large or even genome-scale metabolic networks is still impossible. We introduce a novel approach to speed up the calculation of elementary flux modes by including transcriptional regulatory information in the analysis of metabolic networks. Taking gene regulation into account dramatically reduces the solution space and allows the presented algorithm to continually eliminate biologically infeasible modes at an early stage of the computation procedure. Thereby, computational costs such as runtime, memory usage, and disk space are drastically reduced. Moreover, we show that the application of transcriptional rules identifies non-trivial system-wide effects on metabolism. Using the presented algorithm pushes the size of metabolic networks that can be studied by elementary flux modes to new and much higher limits, without loss of predictive quality. This makes unbiased, system-wide predictions in large-scale metabolic networks possible without resorting to any optimization principle.
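
    The pruning idea can be illustrated with a toy filter: each candidate mode is checked against Boolean transcriptional rules as soon as it is generated, so modes that violate regulation never enter the much larger downstream enumeration. A minimal sketch follows; the reactions and the rule are hypothetical, and a real implementation applies such tests inside the enumeration itself rather than to a finished list.

      # Toy illustration of regulatory pruning; reactions and rule are hypothetical.
      def rule_r1_excludes_r2(mode):
          # Hypothetical rule: the gene for R2 is repressed whenever R1 carries
          # flux, so no feasible mode uses both reactions simultaneously.
          return not (abs(mode.get("R1", 0.0)) > 0 and abs(mode.get("R2", 0.0)) > 0)

      def prune(candidate_modes, rules):
          """Keep only flux modes that satisfy every transcriptional rule."""
          return [m for m in candidate_modes if all(rule(m) for rule in rules)]

      candidates = [
          {"R1": 1.0, "R3": 1.0},
          {"R1": 1.0, "R2": 1.0},   # violates the rule -> discarded early
          {"R2": 2.0, "R3": 1.0},
      ]
      print(prune(candidates, [rule_r1_excludes_r2]))  # first and third survive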

  20. Windprofiler optimization using digital deconvolution procedures

    NASA Astrophysics Data System (ADS)

    Hocking, W. K.; Hocking, A.; Hocking, D. G.; Garbanzo-Salas, M.

    2014-10-01

    Digital improvements to the data acquisition procedures used for windprofiler radars have the potential to improve height coverage at optimum resolution and to permit improved height resolution. A few newer systems already use this capability. Real-time deconvolution procedures offer even further optimization, yet this capability has not been effectively employed in recent years. In this paper we demonstrate the advantages of combining these features, with particular emphasis on real-time deconvolution. Using several multi-core CPUs, we have been able to achieve processing speeds of up to 40 GHz from a standard commercial motherboard, allowing data to be digitized and processed without the need for any additional hardware except a transmitter (and associated drivers), a receiver and a digitizer. No digital signal processor chips are needed, allowing great flexibility in the analysis algorithms. By using deconvolution procedures, we have been able not only to optimize height resolution but also to make advances in dealing with spectral contaminants such as ground echoes and other near-zero-Hz contamination. Our results also demonstrate the ability to produce fine-resolution measurements, revealing small-scale structures within the backscattered echoes that it was previously not possible to see. Resolutions of 30 m are possible for VHF radars. Furthermore, our deconvolution technique allows the removal of range-aliasing effects in real time, a major bonus in many instances. Results are shown using new radars in Canada and Costa Rica.
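
    To illustrate the core deconvolution step: the received range profile is, to first order, the convolution of the atmospheric reflectivity with the transmitted pulse shape, so dividing the two in the frequency domain sharpens range resolution. The sketch below uses simple Wiener-style regularization; it is a schematic of the general technique, not the authors' radar processing code, and the pulse shape and noise constant are illustrative.

      import numpy as np

      def wiener_deconvolve(received, pulse, noise=1e-2):
          """Frequency-domain deconvolution with Wiener-style regularization."""
          n = len(received)
          H = np.fft.fft(pulse, n)
          R = np.fft.fft(received, n)
          return np.real(np.fft.ifft(R * np.conj(H) / (np.abs(H) ** 2 + noise)))

      # Two point scatterers blurred by a 5-sample rectangular pulse:
      truth = np.zeros(64)
      truth[20], truth[24] = 1.0, 0.6
      pulse = np.ones(5) / 5.0
      received = np.convolve(truth, pulse)[:64]
      recovered = wiener_deconvolve(received, pulse)
      print("recovered peaks at:", sorted(map(int, np.argsort(recovered)[-2:])))  # [20, 24]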