In-Trail Procedure (ITP) Algorithm Design
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.; Siminiceanu, Radu I.
2007-01-01
The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.
Using an admittance algorithm for bone drilling procedures.
Accini, Fernando; Díaz, Iñaki; Gil, Jorge Juan
2016-01-01
Bone drilling is a common procedure in many types of surgeries, including orthopedic, neurological and otologic surgeries. Several technologies and control algorithms have been developed to help the surgeon automatically stop the drill before it goes through the boundary of the tissue being drilled. However, most of them rely on thrust force and cutting torque to detect bone layer transitions which has many drawbacks that affect the reliability of the process. This paper describes in detail a bone-drilling algorithm based only on the position control of the drill bit that overcomes such problems and presents additional advantages. The implication of each component of the algorithm in the drilling procedure is analyzed and the efficacy of the algorithm is experimentally validated with two types of bones. PMID:26516110
A dynamic programming algorithm for RNA structure prediction including pseudoknots.
Rivas, E; Eddy, S R
1999-02-01
We describe a dynamic programming algorithm for predicting optimal RNA secondary structure, including pseudoknots. The algorithm has a worst case complexity of O(N6) in time and O(N4) in storage. The description of the algorithm is complex, which led us to adopt a useful graphical representation (Feynman diagrams) borrowed from quantum field theory. We present an implementation of the algorithm that generates the optimal minimum energy structure for a single RNA sequence, using standard RNA folding thermodynamic parameters augmented by a few parameters describing the thermodynamic stability of pseudoknots. We demonstrate the properties of the algorithm by using it to predict structures for several small pseudoknotted and non-pseudoknotted RNAs. Although the time and memory demands of the algorithm are steep, we believe this is the first algorithm to be able to fold optimal (minimum energy) pseudoknotted RNAs with the accepted RNA thermodynamic model.
A computational procedure for multibody systems including flexible beam dynamics
NASA Technical Reports Server (NTRS)
Downer, J. D.; Park, K. C.; Chiou, J. C.
1990-01-01
A computational procedure suitable for the solution of equations of motions for flexible multibody systems has been developed. The flexible beams are modeled using a fully nonlinear theory which accounts for both finite rotations and large deformations. The present formulation incorporates physical measures of conjugate Cauchy stress and covariant strain increments. As a consequence, the beam model can easily be interfaced with real-time strain measurements and feedback control systems. A distinct feature of the present work is the computational preservation of total energy for undamped systems; this is obtained via an objective strain increment/stress update procedure combined with an energy-conserving time integration algorithm which contains an accurate update of angular orientations. The procedure is demonstrated via several example problems.
Simulation of Accident Sequences Including Emergency Operating Procedures
Queral, Cesar; Exposito, Antonio; Hortal, Javier
2004-07-01
Operator actions play an important role in accident sequences. However, design analysis (Safety Analysis Report, SAR) seldom includes consideration of operator actions, although they are required by compulsory Emergency Operating Procedures (EOP) to perform some checks and actions from the very beginning of the accident. The basic aim of the project is to develop a procedure validation system which consists of the combination of three elements: a plant transient simulation code TRETA (a C based modular program) developed by the CSN, a computerized procedure system COPMA-III (Java technology based program) developed by the OECD-Halden Reactor Project and adapted for simulation with the contribution of our group and a software interface that provides the communication between COPMA-III and TRETA. The new combined system is going to be applied in a pilot study in order to analyze sequences initiated by secondary side breaks in a Pressurized Water Reactors (PWR) plant. (authors)
Chemical Compatibility Testing Final Report Including Test Plans and Procedures
NIMITZ,JONATHAN S.; ALLRED,RONALD E.; GORDON,BRENT W.; NIGREY,PAUL J.; MCCONNELL,PAUL E.
2001-07-01
This report provides an independent assessment of information on mixed waste streams, chemical compatibility information on polymers, and standard test methods for polymer properties. It includes a technology review of mixed low-level waste (LLW) streams and material compatibilities, validation for the plan to test the compatibility of simulated mixed wastes with potential seal and liner materials, and the test plan itself. Potential packaging materials were reviewed and evaluated for compatibility with expected hazardous wastes. The chemical and physical property measurements required for testing container materials were determined. Test methodologies for evaluating compatibility were collected and reviewed for applicability. A test plan to meet US Department of Energy and Environmental Protection Agency requirements was developed. The expected wastes were compared with the chemical resistances of polymers, the top-ranking polymers were selected for testing, and the most applicable test methods for candidate seal and liner materials were determined. Five recommended solutions to simulate mixed LLW streams are described. The test plan includes descriptions of test materials, test procedures, data collection protocols, safety and environmental considerations, and quality assurance procedures. The recommended order of testing to be conducted is specified.
Advances in pleural disease management including updated procedural coding.
Haas, Andrew R; Sterman, Daniel H
2014-08-01
Over 1.5 million pleural effusions occur in the United States every year as a consequence of a variety of inflammatory, infectious, and malignant conditions. Although rarely fatal in isolation, pleural effusions are often a marker of a serious underlying medical condition and contribute to significant patient morbidity, quality-of-life reduction, and mortality. Pleural effusion management centers on pleural fluid drainage to relieve symptoms and to investigate pleural fluid accumulation etiology. Many recent studies have demonstrated important advances in pleural disease management approaches for a variety of pleural fluid etiologies, including malignant pleural effusion, complicated parapneumonic effusion and empyema, and chest tube size. The last decade has seen greater implementation of real-time imaging assistance for pleural effusion management and increasing use of smaller bore percutaneous chest tubes. This article will briefly review recent pleural effusion management literature and update the latest changes in common procedural terminology billing codes as reflected in the changing landscape of imaging use and percutaneous approaches to pleural disease management.
A computational procedure for multibody systems including flexible beam dynamics
NASA Technical Reports Server (NTRS)
Downer, J. D.; Park, K. C.; Chiou, J. C.
1990-01-01
A computational procedure suitable for the solution of equations of motions for flexible multibody systems has been developed. A fully nonlinear continuum approach capable of accounting for both finite rotations and large deformations has been used to model a flexible beam component. The beam kinematics are referred directly to an inertial reference frame such that the degrees of freedom embody both the rigid and flexible deformation motions. As such, the beam inertia expression is identical to that of rigid body dynamics. The nonlinear coupling between gross body motion and elastic deformation is contained in the internal force expression. Numerical solution procedures for the integration of spatial kinematic systems can be directily applied to the generalized coordinates of both the rigid and flexible components. An accurate computation of the internal force term which is invariant to rigid motions is incorporated into the general solution procedure.
Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms
Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas
2016-01-01
Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context specific reconstruction based on generic genome scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Beside other reasons, this might be due to problems arising from the limitation to only one metabolic target function or arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility to distinguish between the signal and the background of non-specific binding of probes in a microarray experiment, and whether distinct sets of input expressed genes corresponding to i.e., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed, can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640
Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms.
Pacheco, Maria P; Pfau, Thomas; Sauter, Thomas
2015-01-01
Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context specific reconstruction based on generic genome scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Beside other reasons, this might be due to problems arising from the limitation to only one metabolic target function or arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility to distinguish between the signal and the background of non-specific binding of probes in a microarray experiment, and whether distinct sets of input expressed genes corresponding to i.e., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed, can therefore serve as a benchmarking procedure for future algorithms.
78 FR 57639 - Request for Comments on Pediatric Planned Procedure Algorithm
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-19
... Procedure Algorithm AGENCY: Agency for Healthcare Research and Quality (AHRQ), HHS. ACTION: Notice of request for comments on pediatric planned procedure algorithm from the members of the public. SUMMARY... from the public on an algorithm for identifying pediatric planned procedures as part of the...
Dipole splitting algorithm: A practical algorithm to use the dipole subtraction procedure
NASA Astrophysics Data System (ADS)
Hasegawa, K.
2015-11-01
The Catani-Seymour dipole subtraction is a general and powerful procedure to calculate the QCD next-to-leading order corrections for collider observables. We clearly define a practical algorithm to use the dipole subtraction. The algorithm is called the dipole splitting algorithm (DSA). The DSA is applied to an arbitrary process by following well defined steps. The subtraction terms created by the DSA can be summarized in a compact form by tables. We present a template for the summary tables. One advantage of the DSA is to allow a straightforward algorithm to prove the consistency relation of all the subtraction terms. The proof algorithm is presented in the following paper [K. Hasegawa, arXiv:1409.4174]. We demonstrate the DSA in two collider processes, pp to μ -μ + and 2 jets. Further, as a confirmation of the DSA, it is shown that the analytical results obtained by the DSA in the Drell-Yan process exactly agree with the well known results obtained by the traditional method.
30 CFR 250.1933 - What procedures must be included for reporting unsafe working conditions?
Code of Federal Regulations, 2013 CFR
2013-07-01
... SHELF Safety and Environmental Management Systems (SEMS) § 250.1933 What procedures must be included for... 30 Mineral Resources 2 2013-07-01 2013-07-01 false What procedures must be included for reporting unsafe working conditions? 250.1933 Section 250.1933 Mineral Resources BUREAU OF SAFETY AND...
30 CFR 250.1933 - What procedures must be included for reporting unsafe working conditions?
Code of Federal Regulations, 2014 CFR
2014-07-01
... SHELF Safety and Environmental Management Systems (SEMS) § 250.1933 What procedures must be included for... 30 Mineral Resources 2 2014-07-01 2014-07-01 false What procedures must be included for reporting unsafe working conditions? 250.1933 Section 250.1933 Mineral Resources BUREAU OF SAFETY AND...
A Re-Usable Algorithm for Teaching Procedural Skills.
ERIC Educational Resources Information Center
Jones, Mark K.; And Others
The design of a re-usable instructional algorithm for computer based instruction (CBI) is described. The prototype is implemented on IBM PC compatibles running the Windows(TM) graphical environment, using the prototyping tool ToolBook(TM). The algorithm is designed to reduce development and life cycle costs for CBI by providing an authoring…
An algorithm for computing nucleic acid base-pairing probabilities including pseudoknots.
Dirks, Robert M; Pierce, Niles A
2004-07-30
Given a nucleic acid sequence, a recent algorithm allows the calculation of the partition function over secondary structure space including a class of physically relevant pseudoknots. Here, we present a method for computing base-pairing probabilities starting from the output of this partition function algorithm. The approach relies on the calculation of recursion probabilities that are computed by backtracking through the partition function algorithm, applying a particular transformation at each step. This transformation is applicable to any partition function algorithm that follows the same basic dynamic programming paradigm. Base-pairing probabilities are useful for analyzing the equilibrium ensemble properties of natural and engineered nucleic acids, as demonstrated for a human telomerase RNA and a synthetic DNA nanostructure. PMID:15139042
Should Title 24 Ventilation Requirements Be Amended to include an Indoor Air Quality Procedure?
Dutton, Spencer M.; Mendell, Mark J.; Chan, Wanyu R.
2013-05-13
Minimum outdoor air ventilation rates (VRs) for buildings are specified in standards, including California?s Title 24 standards. The ASHRAE ventilation standard includes two options for mechanically-ventilated buildings ? a prescriptive ventilation rate procedure (VRP) that specifies minimum VRs that vary among occupancy classes, and a performance-based indoor air quality procedure (IAQP) that may result in lower VRs than the VRP, with associated energy savings, if IAQ meeting specified criteria can be demonstrated. The California Energy Commission has been considering the addition of an IAQP to the Title 24 standards. This paper, based on a review of prior data and new analyses of the IAQP, evaluates four future options for Title 24: no IAQP; adding an alternate VRP, adding an equivalent indoor air quality procedure (EIAQP), and adding an improved ASHRAE-like IAQP. Criteria were established for selecting among options, and feedback was obtained in a workshop of stakeholders. Based on this review, the addition of an alternate VRP is recommended. This procedure would allow lower minimum VRs if a specified set of actions were taken to maintain acceptable IAQ. An alternate VRP could also be a valuable supplement to ASHRAE?s ventilation standard.
An Evaluation of a Flight Deck Interval Management Algorithm Including Delayed Target Trajectories
NASA Technical Reports Server (NTRS)
Swieringa, Kurt A.; Underwood, Matthew C.; Barmore, Bryan; Leonard, Robert D.
2014-01-01
NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature air traffic management technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise timebased scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise in-trail spacing. During high demand operations, TMA-TM may produce a schedule and corresponding aircraft trajectories that include delay to ensure that a particular aircraft will be properly spaced from other aircraft at each schedule waypoint. These delayed trajectories are not communicated to the automation onboard the aircraft, forcing the IM aircraft to use the published speeds to estimate the target aircraft's estimated time of arrival. As a result, the aircraft performing IM operations may follow an aircraft whose TMA-TM generated trajectories have substantial speed deviations from the speeds expected by the spacing algorithm. Previous spacing algorithms were not designed to handle this magnitude of uncertainty. A simulation was conducted to examine a modified spacing algorithm with the ability to follow aircraft flying delayed trajectories. The simulation investigated the use of the new spacing algorithm with various delayed speed profiles and wind conditions, as well as several other variables designed to simulate real-life variability. The results and conclusions of this study indicate that the new spacing algorithm generally exhibits good performance; however, some types of target aircraft speed profiles can cause the spacing algorithm to command less than optimal speed control behavior.
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Johnson, J. K.
1979-01-01
An efficient procedure which clusters data using a completely unsupervised clustering algorithm and then uses labeled pixels to label the resulting clusters or perform a stratified estimate using the clusters as strata is developed. Three clustering algorithms, CLASSY, AMOEBA, and ISOCLS, are compared for efficiency. Three stratified estimation schemes and three labeling schemes are also considered and compared.
Algorithms and Programs for Strong Gravitational Lensing In Kerr Space-time Including Polarization
NASA Astrophysics Data System (ADS)
Chen, Bin; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie; Maddumage, Prasad
2015-05-01
Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.
ALGORITHMS AND PROGRAMS FOR STRONG GRAVITATIONAL LENSING IN KERR SPACE-TIME INCLUDING POLARIZATION
Chen, Bin; Maddumage, Prasad; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie
2015-05-15
Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.
Timmins, S.
1991-01-01
Walker Branch Watershed is a forested, research watershed marked throughout by a 264 ft grid that was surveyed in 1967 using the Oak Ridge National Laboratory (X-10) coordinate system. The Tennessee Valley Authority (TVA) prepared a contour map of the watershed in 1987, and an ARC/INFO{trademark} version of the TVA topographic map with the X-10 grid superimposed has since been used as the primary geographic information system (GIS) data base for the watershed. However, because of inaccuracies observed in mapped locations of some grid markers and permanent research plots, portions of the watershed were resurveyed in 1989 and an extensive investigation of the coordinates used in creating both the TVA map and ARC/INFO data base and of coordinate transformation procedures currently in use on the Oak Ridge Reservation was conducted. They determined that the positional errors resulted from the field orientation of the blazed grid rather than problems in mapmaking. In resurveying the watershed, previously surveyed control points were located or noted as missing, and 25 new control points along the perimeter roads were surveyed. In addition, 67 of 156 grid line intersections (pegs) were physically located and their positions relative to mapped landmarks were recorded. As a result, coordinates for the Walker Branch Watershed grid lines and permanent research plots were revised, and a revised map of the watershed was produced. In conjunction with this work, existing procedures for converting between the local grid systems, Tennessee state plane, and the 1927 and 1983 North American Datums were updated and compiled along with illustrative examples and relevant historical information. Alternative algorithms were developed for several coordinate conversions commonly used on the Oak Ridge Reservation.
Best Estimate Radiation Flux Value-Added Procedure. Algorithm Operational Details and Explanations
Shi, Y.; Long, C. N.
2002-10-01
This document describes some specifics of the algorithm for best estimate evaluation of radiation fluxes at Southern Great Plains (SGP) Central Facility (CF). It uses the data available from the three co-located surface radiometer platforms at the SGP CF to automatically determine the best estimate of the irradiance measurements available. The Best Estimate Flux (BEFlux) value-added procedure (VAP) was previously named Best Estimate ShortWave (BESW) VAP, which included all of the broadband and spectral shortwave (SW) measurements for the SGP CF. In BESW, multiple measurements of the same quantities were handled simply by designating one as the primary measurement and using all others to merely fill in any gaps. Thus, this “BESW” is better termed “most continuous,” since no additional quality assessment was applied. We modified the algorithm in BESW to use the average of the closest two measurements as the best estimate when possible, if these measurements pass all quality assessment criteria. Furthermore, we included longwave (LW) fields in the best estimate evaluation to include all major components of the surface radiative energy budget, and renamed the VAP to Best Estimate Flux (BEFLUX1LONG).
Best Estimate Radiation Flux Value-Added Procedure: Algorithm Operational Details and Explanations
Shi, Y; Long, CN
2002-10-01
This document describes some specifics of the algorithm for best estimate evaluation of radiation fluxes at Southern Great Plains (SGP) Central Facility (CF). It uses the data available from the three co-located surface radiometer platforms at the SGP CF to automatically determine the best estimate of the irradiance measurements available. The Best Estimate Flux (BEFlux) value-added procedure (VAP) was previously named Best Estimate ShortWave (BESW) VAP, which included all of the broadband and spectral shortwave (SW) measurements for the SGP CF. In BESW, multiple measurements of the same quantities were handled simply by designating one as the primary measurement and using all others to merely fill in any gaps. Thus, this “BESW” is better termed “most continuous,” since no additional quality assessment was applied. We modified the algorithm in BESW to use the average of the closest two measurements as the best estimate when possible, if these measurements pass all quality assessment criteria. Furthermore, we included longwave (LW) fields in the best estimate evaluation to include all major components of the surface radiative energy budget, and renamed the VAP to Best Estimate Flux (BEFLUX1LONG).
34 CFR 299.11 - What items are included in the complaint procedures?
Code of Federal Regulations, 2010 CFR
2010-07-01
... violations of section 14503 (participation of private school children), the Secretary will follow the... complaint procedures to parents of students, and appropriate private school officials or...
Viscous microstructural dampers with aligned holes: design procedure including the edge correction.
Homentcovschi, Dorel; Miles, Ronald N
2007-09-01
The paper is a continuation of the works "Modelling of viscous damping of perforated planar micromechanical structures. Applications in acoustics" [Homentcovschi and Miles, J. Acoust. Soc. Am. 116, 2939-2947 (2004)] and "Viscous Damping of Perforated Planar Micromechanical Structures" [Homentcovschi and Miles, Sensors Actuators, A119, 544-552 (2005)] where design formulas for the case of an offset (staggered) system of holes was provided. The present work contains design formulas for perforated planar microstructures used in MEMS devices (such as proof-masses in accelerometers, backplates in microphones, micromechanical switches, resonators, tunable microoptical interferometers, etc.) in the case of aligned (nonstaggered) holes of circular and square section. The given formulas assure a minimum total damping coefficient (including the squeeze film damping and the direct and indirect resistance of the holes) for an assigned open area. The paper also gives a simple edge correction, making it possible to consider real (finite) perforated planar microstructures. The proposed edge correction is validated by comparison with the results obtained by FEM simulations: the relative error is found to be smaller than 0.04%. By putting together the design formulas with the edge correction a simple integrated design procedure for obtaining viscous perforated dampers with assigned properties is obtained. PMID:17927414
Viscous microstructural dampers with aligned holes: design procedure including the edge correction.
Homentcovschi, Dorel; Miles, Ronald N
2007-09-01
The paper is a continuation of the works "Modelling of viscous damping of perforated planar micromechanical structures. Applications in acoustics" [Homentcovschi and Miles, J. Acoust. Soc. Am. 116, 2939-2947 (2004)] and "Viscous Damping of Perforated Planar Micromechanical Structures" [Homentcovschi and Miles, Sensors Actuators, A119, 544-552 (2005)] where design formulas for the case of an offset (staggered) system of holes was provided. The present work contains design formulas for perforated planar microstructures used in MEMS devices (such as proof-masses in accelerometers, backplates in microphones, micromechanical switches, resonators, tunable microoptical interferometers, etc.) in the case of aligned (nonstaggered) holes of circular and square section. The given formulas assure a minimum total damping coefficient (including the squeeze film damping and the direct and indirect resistance of the holes) for an assigned open area. The paper also gives a simple edge correction, making it possible to consider real (finite) perforated planar microstructures. The proposed edge correction is validated by comparison with the results obtained by FEM simulations: the relative error is found to be smaller than 0.04%. By putting together the design formulas with the edge correction a simple integrated design procedure for obtaining viscous perforated dampers with assigned properties is obtained.
Code of Federal Regulations, 2012 CFR
2012-07-01
... IMPACT AID PROGRAMS Special Provisions for Local Educational Agencies That Claim Children Residing on... 34 Education 1 2012-07-01 2012-07-01 false What provisions must be included in a local educational... educational agency's Indian policies and procedures? (a) An LEA's Indian policies and procedures (IPPs)...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 34 Education 1 2011-07-01 2011-07-01 false What provisions must be included in a local educational... IMPACT AID PROGRAMS Special Provisions for Local Educational Agencies That Claim Children Residing on... educational agency's Indian policies and procedures? (a) An LEA's Indian policies and procedures (IPPs)...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 1 2010-07-01 2010-07-01 false What provisions must be included in a local educational... IMPACT AID PROGRAMS Special Provisions for Local Educational Agencies That Claim Children Residing on... educational agency's Indian policies and procedures? (a) An LEA's Indian policies and procedures (IPPs)...
Why McNemar's Procedure Needs to Be Included in the Business Statistics Curriculum
ERIC Educational Resources Information Center
Berenson, Mark L.; Koppel, Nicole B.
2005-01-01
In business research situations it is often of interest to examine the differences in the responses in repeated measurements of the same subjects or from among matched or paired subjects. A simple and useful procedure for comparing differences between proportions in two related samples was devised by McNemar (1947) nearly 60 years ago. Although…
Belwin Edward, J; Rajasekar, N; Sathiyasekar, K; Senthilnathan, N; Sarjila, R
2013-09-01
Obtaining optimal power flow solution is a strenuous task for any power system engineer. The inclusion of FACTS devices in the power system network adds to its complexity. The dual objective of OPF with fuel cost minimization along with FACTS device location for IEEE 30 bus is considered and solved using proposed Enhanced Bacterial Foraging algorithm (EBFA). The conventional Bacterial Foraging Algorithm (BFA) has the difficulty of optimal parameter selection. Hence, in this paper, BFA is enhanced by including Nelder-Mead (NM) algorithm for better performance. A MATLAB code for EBFA is developed and the problem of optimal power flow with inclusion of FACTS devices is solved. After several run with different initial values, it is found that the inclusion of FACTS devices such as SVC and TCSC in the network reduces the generation cost along with increased voltage stability limits. It is also observed that, the proposed algorithm requires lesser computational time compared to earlier proposed algorithms.
NASA Astrophysics Data System (ADS)
Schneider, Florian; Rascher, Rolf; Stamp, Richard; Smith, Gordon
2013-09-01
The modern optical industry requires objects with complex topographical structures. Free-form shaped objects are of large interest in many branches, especially for size reduced, modern lifestyle products like digital cameras. State of the art multi-axes-coordinate measurement machines (CMM), like the topographical measurement machine TII-3D, are by principle suitable to measure free-form shaped objects. The only limitation is the software package. This paper may illustrate a simple way to enhance coordinate measurement machines in order to add a free-form function. Next to a coordinate measurement machine, only a state of the art CAD† system and a simple piece of software are necessary. For this paper, the CAD software CREO‡ had been used. CREO enables the user to develop a 3D object in two different ways. With the first method, the user might design the shape by drawing one or more 2D sketches and put an envelope around. Using the second method, the user could define one or more formulas in the editor to describe the favoured surface. Both procedures lead to the required three-dimensional shape. However, further features of CREO enable the user to export the XYZ-coordinates of the created surface. A special designed software tool, developed with Matlab§, converts the XYZ-file into a measurement matrix which can be used as a reference file. Finally the result of the free-form measurement, carried out with a CMM, has to be loaded into the software tool and both files will be computed. The result is an error profile which provides the deviation between the measurement and the target-geometry.
ERIC Educational Resources Information Center
Ontario Dept. of Education, Toronto. School Planning and Building Research Section.
Regulations are presented pertaining to the planning and construction of colleges of applied arts and technology including basic principles involved in planning such facilities. Material from a wide variety of sources is condensed in outline form regarding the following topics--(1) college students, staff, programs, (2) the area and its needs, (3)…
A procedure for testing the quality of LANDSAT atmospheric correction algorithms
NASA Technical Reports Server (NTRS)
Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.
1982-01-01
There are two basic methods for testing the quality of an algorithm to minimize atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, initially the image contrast is examined for a series of parameter combinations. The contrast improves for better corrections. In addition the correlation coefficient between two subimages, taken at different times, of the same scene is used for parameter's selection. The regions to be correlated should not have changed considerably in time. A few examples using this proposed procedure are presented.
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; Johnston, Christopher O.
2011-01-01
Implementations of a model for equilibrium, steady-state ablation boundary conditions are tested for the purpose of providing strong coupling with a hypersonic flow solver. The objective is to remove correction factors or film cooling approximations that are usually applied in coupled implementations of the flow solver and the ablation response. Three test cases are considered - the IRV-2, the Galileo probe, and a notional slender, blunted cone launched at 10 km/s from the Earth's surface. A successive substitution is employed and the order of succession is varied as a function of surface temperature to obtain converged solutions. The implementation is tested on a specified trajectory for the IRV-2 to compute shape change under the approximation of steady-state ablation. Issues associated with stability of the shape change algorithm caused by explicit time step limits are also discussed.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 2 2010-10-01 2010-10-01 false What administrative and management procedures must a Tribe or Tribal organization include in a Tribal IV-D plan? 309.75 Section 309.75 Public Welfare...), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES TRIBAL CHILD...
Cassini VIMS observations of the Galilean satellites including the VIMS calibration procedure
McCord, T.B.; Coradini, A.; Hibbitts, C.A.; Capaccioni, F.; Hansen, G.B.; Filacchione, G.; Clark, R.N.; Cerroni, P.; Brown, R.H.; Baines, K.H.; Bellucci, G.; Bibring, J.-P.; Buratti, B.J.; Bussoletti, E.; Combes, M.; Cruikshank, D.P.; Drossart, P.; Formisano, V.; Jaumann, R.; Langevin, Y.; Matson, D.L.; Nelson, R.M.; Nicholson, P.D.; Sicardy, B.; Sotin, C.
2004-01-01
The Visual and Infrared Mapping Spectrometer (VIMS) observed the Galilean satellites during the Cassini spacecraft's 2000/2001 flyby of Jupiter, providing compositional and thermal information about their surfaces. The Cassini spacecraft approached the jovian system no closer than about 126 Jupiter radii, about 9 million kilometers, at a phase angle of < 90 ??, resulting in only sub-pixel observations by VIMS of the Galilean satellites. Nevertheless, most of the spectral features discovered by the Near Infrared Mapping Spectrometer (NIMS) aboard the Galileo spacecraft during more than four years of observations have been identified in the VIMS data analyzed so far, including a possible 13C absorption. In addition, VIMS made observations in the visible part of the spectrum and at several new phase angles for all the Galilean satellites and the calculated phase functions are presented. In the process of analyzing these data, the VIMS radiometric and spectral calibrations were better determined in preparation for entry into the Saturn system. Treatment of these data is presented as an example of the VIMS data reduction, calibration and analysis process and a detailed explanation is given of the calibration process applied to the Jupiter data. ?? 2004 Elsevier Inc. All rights reserved.
Kuppan, T.
1995-05-01
Design formulas and calculation procedure for the design of fixed tubesheets of shell and tube heat exchangers are included in Appendix AA--Nonmandatory of ASME Boiler and Pressure Vessel code, Section 8, Division 1. To minimize the number of calculations, charts are provided as part of the design procedure. This article provides alternate charts for certain parameters and the original version of the charts are extended for larger values of tubesheet design parameter. Numerical values are given in tabular form for certain functions used in plotting the design charts. This will help to do design calculations without referring to the charts.
NASA Astrophysics Data System (ADS)
Tufail, Mudassir; Cousin, Bernard
1997-10-01
Ensuring end-to-end bounded delay and fair allocation of bandwidth to a backlogged session are no more the only criterias for declaring a queue service scheme good. With the evolution of packet-switched networks, more and more distributed and multimedia applications are being developed. These applications demand that service offered to them should be homogeneously distributed at all instants contrarily to back-to-back packet's serving in WFQ scheme. There are two reasons for this demand of homogeneous service: (1) In feedback based congestion control algorithms, sources constantly sample the network state using the feedback from the receiver. The source modifies its emission rate in accordance to the feedback message. A reliable feedback message is only possible if the packet service is homogeneous. (2) In multicast applications, where packet replication is performed at switches, replicated packets are probable to be served at different rates if service to them, at different output ports, is not homogeneous. This is not desirable for such applications as the phenomena of packet replication to different multicast branches, at a switch, has to be carried out at a homogeneous speed for the following two important reasons: (1) heterogeneous service rates of replicated multicast packets result in different feedback informations, from different destinations (of same multicast session), and thus lead to unstable and less efficient network control. (2) in a switch architecture, the buffer requirement can be reduced if replication and serving of multicast packets are done at a homogeneous rate. Thus, there is a need of a service discipline which not only serve the applications at no less than their guaranteed rates but also assures a homogeneous service to packets. The homogeneous service to an application may precisely be translated in terms of maintaining a good inter-packets spacing. EWFQ scheme is identical to WFQ scheme expect that a packet is stamped with delayed
Gousheh, S.S.
1996-01-01
I have used the shooting method to find the eigenvalues (bound state energies) of a set of strongly coupled Schroedinger type equations. I have discussed the advantages of the shooting method when the potentials include {delta}-functions. I have also discussed some points which are universal in these kind of problems, whose use make the algorithm much more efficient. These points include mapping the domain of the ODE into a finite one, using the asymptotic form of the solutions, best use of the normalization freedom, and converting the {delta}-functions into boundary conditions.
NASA Technical Reports Server (NTRS)
Tappa, M. J.; Mills, R. D.; Ware, B.; Simon, J. I.
2014-01-01
The isotopic compositions of elements are often used to characterize nucelosynthetic contributions in early Solar System objects. Coordinated multiple middle-mass elements with differing volatilities may provide information regarding the location of condensation of early Solar System solids. Here we detail new procedures that we have developed to make high-precision multi-isotope measurements of chromium and calcium using thermal ionization mass spectrometry, and characterize a suite of chondritic and terrestrial material including two fragments of the Chelyabinsk LL-chondrite.
NASA Astrophysics Data System (ADS)
Batzias, F. A.; Sidiras, D. K.; Giannopoulos, Ch.; Spetsidis, I.
2009-08-01
This work deals with a methodological framework designed/developed under the form of a spatio-temporal algorithmic procedure for environmental policymaking at local level. The procedure includes 25 activity stages and 9 decision nodes, putting emphasis on (i) mapping on GIS layers water supply/demand and modeling of aquatic pollution coming from point and non-point sources, (ii) environmental monitoring by periodically measuring the main pollutants in situ and in the laboratory, (iii) design of environmental projects, decomposition of them into sub-projects and combination of the latter to form attainable alternatives, (iv) multicriteria ranking of alternatives, according to a modified Delphi method, by using as criteria the expected environmental benefit, the attitude of inhabitants, the priority within the programme of regional development, the capital required for the investment and the operating cost, and (v) knowledge Base (KB) operation/enrichment, functioning in combination with a data mining mechanism to extract knowledge/information/data from external Bases. An implementation is presented referring to the Municipality of Arkalochori in the Greek island of Crete.
A procedure for the reliability improvement of the oblique ionograms automatic scaling algorithm
NASA Astrophysics Data System (ADS)
Ippolito, Alessandro; Scotto, Carlo; Sabbagh, Dario; Sgrigna, Vittorio; Maher, Phillip
2016-05-01
A procedure made by the combined use of the Oblique Ionogram Automatic Scaling Algorithm (OIASA) and Autoscala program is presented. Using Martyn's equivalent path theorem, 384 oblique soundings from a high-quality data set have been converted into vertical ionograms and analyzed by Autoscala program. The ionograms pertain to the radio link between Curtin W.A. (CUR) and Alice Springs N.T. (MTE), Australia, geographical coordinates (17.60°S; 123.82°E) and (23.52°S; 133.68°E), respectively. The critical frequency foF2 values extracted from the converted vertical ionograms by Autoscala were then compared with the foF2 values derived from the maximum usable frequencies (MUFs) provided by OIASA. A quality factor Q for the MUF values autoscaled by OIASA has been identified. Q represents the difference between the foF2 value scaled by Autoscala from the converted vertical ionogram and the foF2 value obtained applying the secant law to the MUF provided by OIASA. Using the receiver operating characteristic curve, an appropriate threshold level Qt was chosen for Q to improve the performance of OIASA.
Antes, Iris
2010-04-01
Molecular docking programs play an important role in drug development and many well-established methods exist. However, there are two situations for which the performance of most approaches is still not satisfactory, namely inclusion of receptor flexibility and docking of large, flexible ligands like peptides. In this publication a new approach is presented for docking peptides into flexible receptors. For this purpose a two step procedure was developed: first, the protein-peptide conformational space is scanned and approximate ligand poses are identified and second, the identified ligand poses are refined by a new molecular dynamics-based method, optimized potential molecular dynamics (OPMD). The OPMD approach uses soft-core potentials for the protein-peptide interactions and applies a new optimization scheme to the soft-core potential. Comparison with refinement results obtained by conventional molecular dynamics and a soft-core scaling approach shows significant improvements in the sampling capability for the OPMD method. Thus, the number of starting poses needed for successful refinement is much lower than for the other methods. The algorithm was evaluated on 15 protein-peptide complexes with 2-16mer peptides. Docking poses with peptide RMSD values <2.10 A from the equilibrated experimental structures were obtained in all cases. For four systems docking into the unbound receptor structures was performed, leading to peptide RMSD values <2.12 A. Using a specifically fitted scoring function in 11 of 15 cases the best scoring poses featured a peptide RMSD < or = 2.10 A.
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.
1987-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
NASA Technical Reports Server (NTRS)
Kankam, M. David; Benjamin, Owen
1991-01-01
The development of computer software for performance prediction and analysis of voltage-fed, variable-frequency AC drives for space power applications is discussed. The AC drives discussed include the pulse width modulated inverter (PWMI), a six-step inverter and the pulse density modulated inverter (PDMI), each individually connected to a wound-rotor induction motor. Various d-q transformation models of the induction motor are incorporated for user-selection of the most applicable model for the intended purpose. Simulation results of selected AC drives correlate satisfactorily with published results. Future additions to the algorithm are indicated. These improvements should enhance the applicability of the computer program to the design and analysis of space power systems.
NASA Astrophysics Data System (ADS)
Liolios, K.; Tsihrintzis, V.; Angelidis, P.; Georgiev, K.; Georgiev, I.
2016-10-01
Current developments on modeling of groundwater flow and contaminant transport and removal in the porous media of Horizontal Subsurface Flow Constructed Wetlands (HSF CWs) are first reviewed in a short way. The two usual environmental engineering approaches, the black-box and the process-based one, are briefly presented. Next, recent research results obtained by using these two approaches are briefly discussed as application examples, where emphasis is given to the evaluation of the optimal design and operation parameters concerning HSF CWs. For the black-box approach, the use of Artificial Neural Networks is discussed for the formulation of models, which predict the removal performance of HSF CWs. A novel mathematical prove is presented, which concerns the dependence of the first-order removal coefficient on the Temperature and the Hydraulic Residence Time. For the process-based approach, an application example is first discussed which concerns procedures to evaluate the optimal range of values for the removal coefficient, dependent on either the Temperature or the Hydraulic Residence Time. This evaluation is based on simulating available experimental results of pilot-scale units operated in Democritus University of Thrace, Xanthi, Greece. Further, in a second example, a novel enlargement of the system of Partial Differential Equations is presented, in order to include geothermal effects. Finally, in a third example, the case of parameters uncertainty concerning biodegradation procedures is considered and the use of upper and a novel approach is presented, which concerns the upper and the lower solution bound for the practical draft design of HSF CWs.
Breuer, Christian; Lucas, Martin; Schütze, Frank-Walter; Claus, Peter
2007-01-01
A multi-criteria optimisation procedure based on genetic algorithms is carried out in search of advanced heterogeneous catalysts for total oxidation. Simple but flexible software routines have been created to be applied within a search space of more then 150,000 individuals. The general catalyst design includes mono-, bi- and trimetallic compositions assembled out of 49 different metals and depleted on an Al2O3 support in up to nine amount levels. As an efficient tool for high-throughput screening and perfectly matched to the requirements of heterogeneous gas phase catalysis - especially for applications technically run in honeycomb structures - the multi-channel monolith reactor is implemented to evaluate the catalyst performances. Out of a multi-component feed-gas, the conversion rates of carbon monoxide (CO) and a model hydrocarbon (HC) are monitored in parallel. In combination with further restrictions to preparation and pre-treatment a primary screening can be conducted, promising to provide results close to technically applied catalysts. Presented are the resulting performances of the optimisation process for the first catalyst generations and the prospect of its auto-adaptation to specified optimisation goals. PMID:17266517
NASA Astrophysics Data System (ADS)
Tokunaga, Yoshitaka
This paper presents estimation techniques of machine parameters for power transformer using design procedure of transformer and genetic algorithm with real coding. Especially, it is very difficult to obtain machine parameters for transformers in customers' facilities. Using estimation techniques, machine parameters could be calculated from the only nameplate data of these transformers. Subsequently, EMTP-ATP simulation of the inrush current was carried out using machine parameters estimated by techniques developed in this study and simulation results were reproduced measured waveforms.
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz
2004-01-01
TRMM has been an imminently successful mission from an engineering standpoint but even more from a science standpoint. An important part of this science success has been the careful quality control of the TRMM standard products. This paper will present the quality monitoring efforts that the TRMM Science Data and Information System (TSDIS) conducts on a routine basis. The paper will detail parameter trending, geolocation quality control and the procedures to support the preparation of next versions of the algorithm used for reprocessing.
NASA Astrophysics Data System (ADS)
Lee, Kangjun; Jeon, Gwanggil; Jeong, Jechang
2009-05-01
The H.264/AVC baseline profile is used in many applications, including digital multimedia broadcasting, Internet protocol television, and storage devices, while the MPEG-2 main profile is widely used in applications, such as high-definition television and digital versatile disks. The MPEG-2 main profile supports B pictures for bidirectional motion prediction. Therefore, transcoding the MPEG-2 main profile to the H.264/AVC baseline is necessary for universal multimedia access. In the cascaded pixel domain transcoder architecture, the calculation of the rate distortion cost as part of the mode decision process in the H.264/AVC encoder requires extremely complex computations. To reduce the complexity inherent in the implementation of a real-time transcoder, we propose a fast mode decision algorithm based on complexity information from the reference region that is used for motion compensation. In this study, an adaptive mode decision process was used based on the modes assigned to the reference regions. Simulation results indicated that a significant reduction in complexity was achieved without significant degradation of video quality.
Retinoids: Literature Review and Suggested Algorithm for Use Prior to Facial Resurfacing Procedures
Buchanan, Patrick J; Gilman, Robert H
2016-01-01
Vitamin A-containing products have been used topically since the early 1940s to treat various skin conditions. To date, there are four generations of retinoids, a family of Vitamin A-containing compounds. Tretinoin, all-trans-retinoic acid, is a first-generation, naturally occurring, retinoid. It is available, commercially, as a gel or cream. The authors conducted a complete review of all studies, clinical- and basic science-based studies, within the literature involving tretinoin treatment recommendations for impending facial procedures. The literature currently lacks definitive recommendations for the use of tretinoin-containing products prior to undergoing facial procedures. Tretinoin pretreatment regimens vary greatly in terms of the strength of retinoid used, the length of the pre-procedure treatment, and the ideal time to stop treatment before the procedure. Based on the current literature and personal experience, the authors set forth a set of guidelines for the use of tretinoin prior to various facial procedures. PMID:27761082
Code of Federal Regulations, 2012 CFR
2012-07-01
... 32 National Defense 1 2012-07-01 2012-07-01 false Procedures for Special Educational Programs... Appendix B to Part 80 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE PERSONNEL..., not merely to provide a single general intelligence quotient. 3. The evaluation shall be conducted...
Ticehurst, John R; Aird, Deborah Z; Dam, Lisa M; Borek, Anita P; Hargrove, John T; Carroll, Karen C
2006-03-01
We evaluated a two-step algorithm for detecting toxigenic Clostridium difficile: an enzyme immunoassay for glutamate dehydrogenase antigen (Ag-EIA) and then, for antigen-positive specimens, a concurrent cell culture cytotoxicity neutralization assay (CCNA). Antigen-negative results were > or = 99% predictive of CCNA negativity. Because the Ag-EIA reduced cell culture workload by approximately 75 to 80% and two-step testing was complete in < or = 3 days, we decided that this algorithm would be effective. Over 6 months, our laboratories' expenses were US dollar 143,000 less than if CCNA alone had been performed on all 5,887 specimens.
Development of AN Algorithmic Procedure for the Detection of Conjugate Fragments
NASA Astrophysics Data System (ADS)
Filippas, D.; Georgopoulo, A.
2013-07-01
The rapid development of Computer Vision has contributed to the widening of the techniques and methods utilized by archaeologists for the digitization and reconstruction of historic objects by automating the matching of fragments, small or large. This paper proposes a novel method for the detection of conjugate fragments, based mainly on their geometry. Subsequently the application of the Fragmatch algorithm is presented, with an extensive analysis of both of its parts; the global and the partial matching of surfaces. The method proposed is based on the comparison of vectors and surfaces, performed linearly, for simplicity and speed. A series of simulations have been performed in order to test the limits of the algorithm for the noise and the accuracy of scanning, for the number of scan points, as well as for the wear of the surfaces and the diversity of shapes. Problems that have been encountered during the application of these examples are interpreted and ways of dealing with them are being proposed. In addition a practical application is presented to test the algorithm in real conditions. Finally, the key points of this work are being mentioned, followed by an analysis of the advantages and disadvantages of the proposed Fragmatch algorithm along with proposals for future work.
NASA Astrophysics Data System (ADS)
Zemcov, Michael; Crill, Brendan; Ryan, Matthew; Staniszewski, Zak
2016-06-01
Mega-pixel charge-integrating detectors are common in near-IR imaging applications. Optimal signal-to-noise ratio estimates of the photocurrents, which are particularly important in the low-signal regime, are produced by fitting linear models to sequential reads of the charge on the detector. Algorithms that solve this problem have a long history, but can be computationally intensive. Furthermore, the cosmic ray background is appreciable for these detectors in Earth orbit, particularly above the Earth’s magnetic poles and the South Atlantic Anomaly, and on-board reduction routines must be capable of flagging affected pixels. In this paper, we present an algorithm that generates optimal photocurrent estimates and flags random transient charge generation from cosmic rays, and is specifically designed to fit on a computationally restricted platform. We take as a case study the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx), a NASA Small Explorer astrophysics experiment concept, and show that the algorithm can easily fit in the resource-constrained environment of such a restricted platform. Detailed simulations of the input astrophysical signals and detector array performance are used to characterize the fitting routines in the presence of complex noise properties and charge transients. We use both Hubble Space Telescope Wide Field Camera-3 and Wide-field Infrared Survey Explorer to develop an empirical understanding of the susceptibility of near-IR detectors in low earth orbit and build a model for realistic cosmic ray energy spectra and rates. We show that our algorithm generates an unbiased estimate of the true photocurrent that is identical to that from a standard line fitting package, and characterize the rate, energy, and timing of both detected and undetected transient events. This algorithm has significant potential for imaging with charge-integrating detectors in astrophysics, earth science, and remote
Hipp, Jason D.; Cheng, Jerome Y.; Toner, Mehmet; Tompkins, Ronald G.; Balis, Ulysses J.
2011-01-01
Introduction: Historically, effective clinical utilization of image analysis and pattern recognition algorithms in pathology has been hampered by two critical limitations: 1) the availability of digital whole slide imagery data sets and 2) a relative domain knowledge deficit in terms of application of such algorithms, on the part of practicing pathologists. With the advent of the recent and rapid adoption of whole slide imaging solutions, the former limitation has been largely resolved. However, with the expectation that it is unlikely for the general cohort of contemporary pathologists to gain advanced image analysis skills in the short term, the latter problem remains, thus underscoring the need for a class of algorithm that has the concurrent properties of image domain (or organ system) independence and extreme ease of use, without the need for specialized training or expertise. Results: In this report, we present a novel, general case pattern recognition algorithm, Spatially Invariant Vector Quantization (SIVQ), that overcomes the aforementioned knowledge deficit. Fundamentally based on conventional Vector Quantization (VQ) pattern recognition approaches, SIVQ gains its superior performance and essentially zero-training workflow model from its use of ring vectors, which exhibit continuous symmetry, as opposed to square or rectangular vectors, which do not. By use of the stochastic matching properties inherent in continuous symmetry, a single ring vector can exhibit as much as a millionfold improvement in matching possibilities, as opposed to conventional VQ vectors. SIVQ was utilized to demonstrate rapid and highly precise pattern recognition capability in a broad range of gross and microscopic use-case settings. Conclusion: With the performance of SIVQ observed thus far, we find evidence that indeed there exist classes of image analysis/pattern recognition algorithms suitable for deployment in settings where pathologists alone can effectively incorporate their
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆
López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874
NASA Astrophysics Data System (ADS)
Khatibinia, M.; Salajegheh, E.; Salajegheh, J.; Fadaee, M. J.
2013-10-01
A new discrete gravitational search algorithm (DGSA) and a metamodelling framework are introduced for reliability-based design optimization (RBDO) of reinforced concrete structures. The RBDO of structures with soil-structure interaction (SSI) effects is investigated in accordance with performance-based design. The proposed DGSA is based on the standard gravitational search algorithm (GSA) to optimize the structural cost under deterministic and probabilistic constraints. The Monte-Carlo simulation (MCS) method is considered as the most reliable method for estimating the probabilities of reliability. In order to reduce the computational time of MCS, the proposed metamodelling framework is employed to predict the responses of the SSI system in the RBDO procedure. The metamodel consists of a weighted least squares support vector machine (WLS-SVM) and a wavelet kernel function, which is called WWLS-SVM. Numerical results demonstrate the efficiency and computational advantages of DGSA and the proposed metamodel for RBDO of reinforced concrete structures.
Dora, Carlos; Racioppi, Francesca
2003-01-01
From the mid-1990s, research began to highlight the importance of a wide range of health impacts of transport policy decisions. The Third Ministerial Conference on Environment and Health adopted a Charter on Transport, Environment and Health based on four main components: bringing awareness of the nature, magnitude and costs of the health impacts of transport into intergovernmental processes; strengthening the arguments for integration of health into transport policies by developing in-depth analysis of the evidence; developing national case studies; and engaging ministries of environment, health and transport as well as intergovernmental and nongovernmental organizations. Negotiation of the Charter was based on two converging processes: the political process involved the interaction of stakeholders in transport, health and environment in Europe, which helped to frame the issues and the approaches to respond to them; the scientific process involved an international group of experts who produced state-of- the-art reviews of the health impacts resulting from transportation activities, identifying gaps in existing knowledge and methodological tools, specifying the policy implications of their findings, and suggesting possible targets for health improvements. Health arguments were used to strengthen environmental ones, clarify costs and benefits, and raise issues of health equity. The European experience shows that HIA can fulfil the need for simple procedures to be systematically applied to decisions regarding transport strategies at national, regional and local levels. Gaps were identified concerning models for quantifying health impacts and capacity building on how to use such tools. PMID:12894322
Code of Federal Regulations, 2014 CFR
2014-07-01
... determined individually, based upon the preschool child's or child's performance, behavior, and needs when... (Including Related Services) for Preschool Children and Children With Disabilities (3-21 years Inclusive) B... DISABILITIES AND THEIR FAMILIES, AND SPECIAL EDUCATION CHILDREN WITH DISABILITIES WITHIN THE SECTION 6...
NASA Astrophysics Data System (ADS)
Biswas, Papun; Chakraborti, Debjani
2010-10-01
This paper describes how the genetic algorithms (GAs) can be efficiently used to fuzzy goal programming (FGP) formulation of optimal power flow problems having multiple objectives. In the proposed approach, the different constraints, various relationships of optimal power flow calculations are fuzzily described. In the model formulation of the problem, the membership functions of the defined fuzzy goals are characterized first for measuring the degree of achievement of the aspiration levels of the goals specified in the decision making context. Then, the achievement function for minimizing the regret for under-deviations from the highest membership value (unity) of the defined membership goals to the extent possible on the basis of priorities is constructed for optimal power flow problems. In the solution process, the GA method is employed to the FGP formulation of the problem for achievement of the highest membership value (unity) of the defined membership functions to the extent possible in the decision making environment. In the GA based solution search process, the conventional Roulette wheel selection scheme, arithmetic crossover and random mutation are taken into consideration to reach a satisfactory decision. The developed method has been tested on IEEE 6-generator 30-bus System. Numerical results show that this method is promising for handling uncertain constraints in practical power systems.
Cristofolini, Andrea; Latini, Chiara; Borghi, Carlo A.
2011-02-01
This paper presents a technique for improving the convergence rate of a generalized minimum residual (GMRES) algorithm applied for the solution of a algebraic system produced by the discretization of an electrodynamic problem with a tensorial electrical conductivity. The electrodynamic solver considered in this work is a part of a magnetohydrodynamic (MHD) code in the low magnetic Reynolds number approximation. The code has been developed for the analysis of MHD interaction during the re-entry phase of a space vehicle. This application is a promising technique intensively investigated for the shock mitigation and the vehicle control in the higher layers of a planetary atmosphere. The medium in the considered application is a low density plasma, characterized by a tensorial conductivity. This is a result of the behavior of the free electric charges, which tend to drift in a direction perpendicular both to the electric field and to the magnetic field. In the given approximation, the electrodynamics is described by an elliptical partial differential equation, which is solved by means of a finite element approach. The linear system obtained by discretizing the problem is solved by means of a GMRES iterative method with an incomplete LU factorization threshold preconditioning. The convergence of the solver appears to be strongly affected by the tensorial characteristic of the conductivity. In order to deal with this feature, the bandwidth reduction in the coefficient matrix is considered and a novel technique is proposed and discussed. First, the standard reverse Cuthill-McKee (RCM) procedure has been applied to the problem. Then a modification of the RCM procedure (the weighted RCM procedure, WRCM) has been developed. In the last approach, the reordering is performed taking into account the relation between the mesh geometry and the magnetic field direction. In order to investigate the effectiveness of the methods, two cases are considered. The RCM and WRCM procedures
NASA Astrophysics Data System (ADS)
Hsu, Chih-Ming
2014-12-01
Portfolio optimisation is an important issue in the field of investment/financial decision-making and has received considerable attention from both researchers and practitioners. However, besides portfolio optimisation, a complete investment procedure should also include the selection of profitable investment targets and determine the optimal timing for buying/selling the investment targets. In this study, an integrated procedure using data envelopment analysis (DEA), artificial bee colony (ABC) and genetic programming (GP) is proposed to resolve a portfolio optimisation problem. The proposed procedure is evaluated through a case study on investing in stocks in the semiconductor sub-section of the Taiwan stock market for 4 years. The potential average 6-month return on investment of 9.31% from 1 November 2007 to 31 October 2011 indicates that the proposed procedure can be considered a feasible and effective tool for making outstanding investment plans, and thus making profits in the Taiwan stock market. Moreover, it is a strategy that can help investors to make profits even when the overall stock market suffers a loss.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
ERIC Educational Resources Information Center
Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin
2007-01-01
Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…
NASA Astrophysics Data System (ADS)
Leblanc, T.; Haefele, A.; Sica, R. J.; van Gijsel, A.
2014-12-01
A new lidar data processing algorithm for the retrieval of ozone, temperature and water vapor has been developed for centralized use within the Network for the Detection of Atmospheric Composition Change (NDACC) and the GCOS Reference Upper Air Network (GRUAN). The program is written with the objective that raw data from a large number of lidar instruments can be analyzed consistently. The uncertainty budget includes 13 sources of uncertainty that are explicitly propagated taking into account vertical and inter-channel dependencies. Several standardized definitions of vertical resolution can be used, leading to a maximum flexibility, and to the production of tropospheric ozone, stratospheric ozone, middle atmospheric temperature and tropospheric water vapor profiles optimized for multiple user needs such as long-term monitoring, process studies and model and satellite validation. A review of the program's functionalities as well as the first retrieved products will be presented.
IJff, Marjoliek A; Huijbregts, Klaas ML; van Marwijk, Harm WJ; Beekman, Aartjan TF; Hakkaart-van Roijen, Leona; Rutten, Frans F; Unützer, Jürgen; van der Feltz-Cornelis, Christina M
2007-01-01
Background Depressive disorder is currently one of the most burdensome disorders worldwide. Evidence-based treatments for depressive disorder are already available, but these are used insufficiently, and with less positive results than possible. Earlier research in the USA has shown good results in the treatment of depressive disorder based on a collaborative care approach with Problem Solving Treatment and an antidepressant treatment algorithm, and research in the UK has also shown good results with Problem Solving Treatment. These treatment strategies may also work very well in the Netherlands too, even though health care systems differ between countries. Methods/design This study is a two-armed randomised clinical trial, with randomization on patient-level. The aim of the trial is to evaluate the treatment of depressive disorder in primary care in the Netherlands by means of an adapted collaborative care framework, including contracting and adherence-improving strategies, combined with Problem Solving Treatment and antidepressant medication according to a treatment algorithm. Forty general practices will be randomised to either the intervention group or the control group. Included will be patients who are diagnosed with moderate to severe depression, based on DSM-IV criteria, and stratified according to comorbid chronic physical illness. Patients in the intervention group will receive treatment based on the collaborative care approach, and patients in the control group will receive care as usual. Baseline measurements and follow up measures (3, 6, 9 and 12 months) are assessed using questionnaires and an interview. The primary outcome measure is severity of depressive symptoms, according to the PHQ9. Secondary outcome measures are remission as measured with the PHQ9 and the IDS-SR, and cost-effectiveness measured with the TiC-P, the EQ-5D and the SF-36. Discussion In this study, an American model to enhance care for patients with a depressive disorder, the
NASA Astrophysics Data System (ADS)
Turner, Gren; Rawlins, Barry; Wragg, Joanna; Lark, Murray
2014-05-01
Aggregate stability is an important physical indicator of soil quality and influences the potential for erosive losses from the landscape, so methods are required to measure it rapidly and cost-effectively. Previously we demonstrated a novel method for quantifying the stability of soil aggregates using a laser granulometer (Rawlins et al., 2012). We have developed our method further to mimic field conditions more closely by incorporating a procedure for pre-wetting aggregates (for 30 minutes on a filter paper) prior to applying the test. The first measurement of particle-size distribution is made on the water stable aggregates after these have been added to circulating water (aggregate size range 1000 to 2000 µm). The second measurement is made on the disaggregated material after the circulating aggregates have been disrupted with ultrasound (sonication). We then compute the difference between the mean weight diameters (MWD) of these two size distributions; we refer to this value as the disaggregation reduction (DR; µm). Soils with more stable aggregates, which are resistant to both slaking and mechanical breakdown by the hydrodynamic forces during circulation, have larger values of DR. We made repeated analyses of DR using an aggregate reference material (RM; a paleosol with well-characterised disaggregation properties) and used this throughout our analyses to demonstrate our approach was reproducible. We applied our modified technique - and also the previous technique in which dry aggregates were used - to a set of 60 topsoil samples (depth 0-15 cm) from cultivated land across a large region (10 000 km2) of eastern England. We wished to investigate: (i) any differences in aggregate stability (DR measurements) using dry or pre-wet aggregates, and (ii) the dominant controls on the stability of aggregates in water using wet aggregates, including variations in mineralogy and soil organic carbon (SOC) content, and any interaction between them. The sixty soil
Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F.; De, Suvranu
2014-01-01
Background High-frequency electricity is used in a majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. Methods We present a real-time and physically realistic simulation of electrosurgery, by modeling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide sub-finite-element graphical rendering of vaporized tissue, a dual mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. Results We have demonstrated our physics based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Conclusions Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. PMID:24357156
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2014-01-01
The AIRS Science Team Version-6 AIRS/AMSU retrieval algorithm is now operational at the Goddard DISC. AIRS Version-6 level-2 products are generated near real-time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. This paper describes some of the significant improvements in retrieval methodology contained in the Version-6 retrieval algorithm compared to that previously used in Version-5. In particular, the AIRS Science Team made major improvements with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the cloud clearing and retrieval procedures; and 3) derive error estimates and use them for Quality Control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, Version-6 also operates in an AIRS Only (AO) mode which produces results almost as good as those of the full AIRS/AMSU mode. This paper also demonstrates the improvements of some AIRS Version-6 and Version-6 AO products compared to those obtained using Version-5.
NASA Astrophysics Data System (ADS)
Susskind, Joel; Blaisdell, John M.; Iredell, Lena
2014-01-01
The atmospheric infrared sounder (AIRS) science team version-6 AIRS/advanced microwave sounding unit (AMSU) retrieval algorithm is now operational at the Goddard Data and Information Services Center (DISC). AIRS version-6 level-2 products are generated near real time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. Some of the significant improvements in retrieval methodology contained in the version-6 retrieval algorithm compared to that previously used in version-5 are described. In particular, the AIRS science team made major improvements with regard to the algorithms used to (1) derive surface skin temperature and surface spectral emissivity; (2) generate the initial state used to start the cloud clearing and retrieval procedures; and (3) derive error estimates and use them for quality control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, version-6 also operates in an AIRS only (AO) mode, which produces results almost as good as those of the full AIRS/AMSU mode. The improvements of some AIRS version-6 and version-6 AO products compared to those obtained using version-5 are also demonstrated.
Xia, S L; Zhang, X B; Zhou, J S; Gao, X
2015-08-01
The radial approach is widely used in the treatment of patients with coronary artery disease. We conducted a meta-analysis of published results on the efficacy and safety of the left and right radial approaches in patients undergoing percutaneous coronary procedures. A systematic search of reference databases was conducted, and data from 14 randomized controlled trials involving 6870 participants were analyzed. The left radial approach was associated with significant reductions in fluoroscopy time [standardized mean difference (SMD)=-0.14, 95% confidence interval (CI)=-0.19 to -0.09; P<0.00001] and contrast volume (SMD=-0.07, 95%CI=-0.12 to -0.02; P=0.009). There were no significant differences in rate of procedural failure of the left and the right radial approaches [risk ratios (RR)=0.98; 95%CI=0.77-1.25; P=0.88] or procedural time (SMD=-0.05, 95%CI=0.17-0.06; P=0.38). Tortuosity of the subclavian artery (RR=0.27, 95%CI=0.14-0.50; P<0.0001) was reported more frequently with the right radial approach. A greater number of catheters were used with the left than with the right radial approach (SMD=0.25, 95%CI=0.04-0.46; P=0.02). We conclude that the left radial approach is as safe as the right radial approach, and that the left radial approach should be recommended for use in percutaneous coronary procedures, especially in percutaneous coronary angiograms.
Reller, Megan E; Lema, Clara A; Perl, Trish M; Cai, Mian; Ross, Tracy L; Speck, Kathleen A; Carroll, Karen C
2007-11-01
We examined the incremental yield of stool culture (with toxin testing on isolates) versus our two-step algorithm for optimal detection of toxigenic Clostridium difficile. Per the two-step algorithm, stools were screened for C. difficile-associated glutamate dehydrogenase (GDH) antigen and, if positive, tested for toxin by a direct (stool) cell culture cytotoxicity neutralization assay (CCNA). In parallel, stools were cultured for C. difficile and tested for toxin by both indirect (isolate) CCNA and conventional PCR if the direct CCNA was negative. The "gold standard" for toxigenic C. difficile was detection of C. difficile by the GDH screen or by culture and toxin production by direct or indirect CCNA. We tested 439 specimens from 439 patients. GDH screening detected all culture-positive specimens. The sensitivity of the two-step algorithm was 77% (95% confidence interval [CI], 70 to 84%), and that of culture was 87% (95% CI, 80 to 92%). PCR results correlated completely with those of CCNA testing on isolates (29/29 positive and 32/32 negative, respectively). We conclude that GDH is an excellent screening test and that culture with isolate CCNA testing detects an additional 23% of toxigenic C. difficile missed by direct CCNA. Since culture is tedious and also detects nontoxigenic C. difficile, we conclude that culture is most useful (i) when the direct CCNA is negative but a high clinical suspicion of toxigenic C. difficile remains, (ii) in the evaluation of new diagnostic tests for toxigenic C. difficile (where the best reference standard is essential), and (iii) in epidemiologic studies (where the availability of an isolate allows for strain typing and antimicrobial susceptibility testing).
Romijn, C.A.; Luttik, R.; van de Meent, D.; Slooff, W.; Canton, J.H. , Bilthoven )
1993-08-01
Effect assessment on secondary poisoning can be an asset to effect assessments on direct poisoning in setting quality criteria for the environment. This study presents an algorithm for effect assessment on secondary poisoning. The water-fish-fish-eating bird or mammal pathway was analyzed as an example of a secondary poisoning pathway. Parameters used in this algorithm are the bioconcentration factor for fish (BCF) and the no-observed-effect concentration for the group of fish-eating birds and mammals (NOECfish-eater). For the derivation of reliable BCFs preference is given to the use of experimentally derived BCFs over QSAR estimates. NOECs for fish eaters are derived by extrapolating toxicity data on single species. Because data on fish-eating species are seldom available, toxicity data on all birds and mammalian species were used. The proposed algorithm (MAR = NOECfish-eater/BCF) was used to calculate MARS (maximum acceptable risk levels) for the compounds lindane, dieldrin, cadmium, mercury, PCB153, and PCB118. By subsequently, comparing these MARs to MARs derived by effect assessment for aquatic organisms, it was concluded that for methyl mercury and PCB153 secondary poisoning of fish-eating birds and mammals could be a critical pathway. For these compounds, effects on populations of fish-eating birds and mammals can occur at levels in surface water below the MAR calculated for aquatic ecosystems. Secondary poisoning of fish-eating birds and mammals is not likely to occur for cadmium at levels in water below the MAR calculated for aquatic ecosystems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Relating to Public Welfare OFFICE OF CHILD SUPPORT ENFORCEMENT (CHILD SUPPORT ENFORCEMENT PROGRAM), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES TRIBAL CHILD SUPPORT... Tribal IV-D agency and that are designed to protect the privacy rights of the parties, including:...
NASA Astrophysics Data System (ADS)
Rana, Vijay; Gill, Kamaljit; Rudin, Stephen; Bednarek, Daniel R.
2012-03-01
The current version of the real-time skin-dose-tracking system (DTS) we have developed assumes the exposure is contained within the collimated beam and is uniform except for inverse-square variation. This study investigates the significance of factors that contribute to beam non-uniformity such as the heel effect and backscatter from the patient to areas of the skin inside and outside the collimated beam. Dose-calibrated Gafchromic film (XR-RV3, ISP) was placed in the beam in the plane of the patient table at a position 15 cm tube-side of isocenter on a Toshiba Infinix C-Arm system. Separate exposures were made with the film in contact with a block of 20-cm solid water providing backscatter and with the film suspended in air without backscatter, both with and without the table in the beam. The film was scanned to obtain dose profiles and comparison of the profiles for the various conditions allowed a determination of field non-uniformity and backscatter contribution. With the solid-water phantom and with the collimator opened completely for the 20-cm mode, the dose profile decreased by about 40% on the anode side of the field. Backscatter falloff at the beam edge was about 10% from the center and extra-beam backscatter decreased slowly with distance from the field, being about 3% of the beam maximum at 6 cm from the edge. Determination of the magnitude of these factors will allow them to be included in the skin-dose-distribution calculation and should provide a more accurate determination of peak-skin dose for the DTS.
Rana, Vijay; Gill, Kamaljit; Rudin, Stephen; Bednarek, Daniel R
2012-02-23
The current version of the real-time skin-dose-tracking system (DTS) we have developed assumes the exposure is contained within the collimated beam and is uniform except for inverse-square variation. This study investigates the significance of factors that contribute to beam non-uniformity such as the heel effect and backscatter from the patient to areas of the skin inside and outside the collimated beam. Dose-calibrated Gafchromic film (XR-RV3, ISP) was placed in the beam in the plane of the patient table at a position 15 cm tube-side of isocenter on a Toshiba Infinix C-Arm system. Separate exposures were made with the film in contact with a block of 20-cm solid water providing backscatter and with the film suspended in air without backscatter, both with and without the table in the beam. The film was scanned to obtain dose profiles and comparison of the profiles for the various conditions allowed a determination of field non-uniformity and backscatter contribution. With the solid-water phantom and with the collimator opened completely for the 20-cm mode, the dose profile decreased by about 40% on the anode side of the field. Backscatter falloff at the beam edge was about 10% from the center and extra-beam backscatter decreased slowly with distance from the field, being about 3% of the beam maximum at 6 cm from the edge. Determination of the magnitude of these factors will allow them to be included in the skin-dose-distribution calculation and should provide a more accurate determination of peak-skin dose for the DTS.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a ``manageable`` number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of ``real-world`` multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R
2006-12-15
We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of the mass scans that are possibly correlated, then the correlation matrix is calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ language for the .NET2 environment in WINDOWS XP. In this work, we demonstrate the applications of ChromAlign to alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.
NASA Astrophysics Data System (ADS)
Bonifazi, Giuseppe; Serranti, Silvia
2014-03-01
In secondary raw materials and recycling sectors, the products quality represents, more and more, the key issue to pursuit in order to be competitive in a more and more demanding market, where quality standards and products certification play a preheminent role. These goals assume particular importance when recycling actions are applied. Recovered products, resulting from waste materials, and/or dismissed products processing, are, in fact, always seen with a certain suspect. An adequate response of the industry to the market can only be given through the utilization of equipment and procedures ensuring pure, high-quality production, and efficient work and cost. All these goals can be reached adopting not only more efficient equipment and layouts, but also introducing new processing logics able to realize a full control of the handled material flow streams fulfilling, at the same time, i) an easy management of the procedures, ii) an efficient use of the energy, iii) the definition and set up of reliable and robust procedures, iv) the possibility to implement network connectivity capabilities finalized to a remote monitoring and control of the processes and v) a full data storage, analysis and retrieving. Furthermore the ongoing legislation and regulation require the implementation of recycling infrastructure characterised by high resources efficiency and low environmental impacts, both aspects being strongly linked to the waste materials and/or dismissed products original characteristics. For these reasons an optimal recycling infrastructure design primarily requires a full knowledge of the characteristics of the input waste. What previously outlined requires the introduction of a new important concept to apply in solid waste recycling, the recycling-oriented characterization, that is the set of actions addressed to strategically determine selected attributes, in order to get goaloriented data on waste for the development, implementation or improvement of recycling
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2005-01-01
A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2004-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
Lof, Marie; Hannestad, Ulf; Forsum, Elisabet
2003-11-01
According to the report of the World Health Organization (1985), total energy expenditure (TEE) in human subjects can be calculated as BMR x physical activity level (PAL). However, other reports have pointed out limitations in the suggested procedure related to the % body fat of the subjects. The purpose of the present study was to evaluate the World Health Organization (1985) procedure in thirty-four healthy women with BMI 18-39 kg/m2. BMR and TEE were measured using indirect calorimetry (BMRmeas) and the doubly-labelled water method (TEEref) respectively. When assessed using the doubly-labelled water and skinfold-thickness methods, the women had 34 (SD 8) and 33 (SD 6) % body fat respectively. On the basis of guidelines provided by the World Health Organization (1985), 1.64 was selected to represent the average PAL of the women. Furthermore, PAL was also assessed by means of an accelerometer (PALacc), heart-rate recordings (PAL(HR)) and a questionnaire (PALq). These estimates were: PALacc 1.71 (SD 0.17), PAL(HR) 1.76 (SD 0.24), PALq 1.86 (SD 0.27). These values were lower than TEEref/BMRref, which was 1.98 (SD 0.21). BMR assessed using equations recommended by the World Health Organization (1985) (BMRpredicted) overestimated BMR by 594 (SD 431) kJ/24 h. However, when TEE was calculated as BMRpredicted x PALacc, BMRpredicted x PAL(HR) and BMRpredicted x PALq respectively, average results were in agreement with TEEref. Furthermore, TEE values based on BMRpredicted and PALacc, PAL(HR), PALq as well as on PAL = 1.64, minus TEEref, were significantly correlated with body fatness. When the same PAL value (1.64) was used for all subjects, this correlation was particularly strong. Thus, the World Health Organization (1985) procedure may give TEE results that are biased with respect to the body fatness of subjects.
Exact Algorithms for Coloring Graphs While Avoiding Monochromatic Cycles
NASA Astrophysics Data System (ADS)
Talla Nobibon, Fabrice; Hurkens, Cor; Leus, Roel; Spieksma, Frits C. R.
We consider the problem of deciding whether a given directed graph can be vertex partitioned into two acyclic subgraphs. Applications of this problem include testing rationality of collective consumption behavior, a subject in micro-economics. We identify classes of directed graphs for which the problem is easy and prove that the existence of a constant factor approximation algorithm is unlikely for an optimization version which maximizes the number of vertices that can be colored using two colors while avoiding monochromatic cycles. We present three exact algorithms, namely an integer-programming algorithm based on cycle identification, a backtracking algorithm, and a branch-and-check algorithm. We compare these three algorithms both on real-life instances and on randomly generated graphs. We find that for the latter set of graphs, every algorithm solves instances of considerable size within few seconds; however, the CPU time of the integer-programming algorithm increases with the number of vertices in the graph while that of the two other procedures does not. For every algorithm, we also study empirically the transition from a high to a low probability of YES answer as function of a parameter of the problem. For real-life instances, the integer-programming algorithm fails to solve the largest instance after one hour while the other two algorithms solve it in about ten minutes.
Algorithm to assemble pathways from processes
Mittenthal, J.E.
1996-12-31
To understand or to modify a biological pathway, the first step is to determine the patterns of coupling among its processes that are compatible with its input-output relation. Algorithms for this purpose have been devised for metabolic pathways, in which the reactions typically leave the enzymes unmodified. As shown here, one of these algorithms can also assemble molecular networks in which reactions modify proteins, if the proteins are included among the inputs to the reactions. Thus one procedure suffices to assemble pathways for metabolism, cytoplasmic signal transduction, and gene regulation. 9 refs., 3 figs.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Al-Massaedh, Ayat Allah; Pyell, Ute
2013-04-19
A new synthesis procedure for highly crosslinked macroporous amphiphilic N-adamantyl-functionalized mixed-mode acrylamide-based monolithic stationary phases for capillary electrochromatography (CEC) is investigated employing solubilization of the hydrophobic monomer by complexation with a cyclodextrin. N-(1-adamantyl)acrylamide is synthesized and characterized as a hydrophobic monomer forming a water soluble-inclusion complex with statistically methylated-β-cyclodextrin. The stoichiometry, the complex formation constant and the spatial arrangement of the formed complex are determined. Mixed-mode monolithic stationary phases are synthesized by in situ free radical copolymerization of cyclodextrin-solubilized N-adamantyl acrylamide, a water soluble crosslinker (piperazinediacrylamide), a hydrophilic monomer (methacrylamide), and a negatively charged monomer (vinylsulfonic acid) in aqueous medium in bind silane-pretreated fused silica capillaries. The synthesized monolithic stationary phases are amphiphilic and can be employed in the reversed- and in the normal-phase mode (depending on the composition of the mobile phase), which is demonstrated with polar and non-polar analytes. Observations made with polar analytes and polar mobile phase can only be explained by a mixed-mode retention mechanism. The influence of the total monomer concentration (%T) on the chromatographic properties, the electroosmotic mobility, and on the specific permeability is investigated. With a homologues series of alkylphenones it is confirmed that the hydrophobicity (methylene selectivity) of the stationary phase increases with increasing mass fraction of N-(1-adamantyl)acrylamide in the synthesis mixture. PMID:23489493
Public Sector Impasse Procedures.
ERIC Educational Resources Information Center
Vadakin, James C.
The subject of collective bargaining negotiation impasse procedures in the public sector, which includes public school systems, is a broad one. In this speech, the author introduces the various procedures, explains how they are used, and lists their advantages and disadvantages. Procedures discussed are mediation, fact-finding, arbitration,…
Algorithmic Procedure for Finding Semantically Related Journals.
ERIC Educational Resources Information Center
Pudovkin, Alexander I.; Garfield, Eugene
2002-01-01
Using citations, papers and references as parameters a relatedness factor (RF) is computed for a series of journals. Sorting these journals by the RF produces a list of journals most closely related to a specified starting journal. The method appears to select a set of journals that are semantically most similar to the target journal. The…
Algorithmic commonalities in the parallel environment
NASA Technical Reports Server (NTRS)
Mcanulty, Michael A.; Wainer, Michael S.
1987-01-01
The ultimate aim of this project was to analyze procedures from substantially different application areas to discover what is either common or peculiar in the process of conversion to the Massively Parallel Processor (MPP). Three areas were identified: molecular dynamic simulation, production systems (rule systems), and various graphics and vision algorithms. To date, only selected graphics procedures have been investigated. They are the most readily available, and produce the most visible results. These include simple polygon patch rendering, raycasting against a constructive solid geometric model, and stochastic or fractal based textured surface algorithms. Only the simplest of conversion strategies, mapping a major loop to the array, has been investigated so far. It is not entirely satisfactory.
The hierarchical algorithms--theory and applications
NASA Astrophysics Data System (ADS)
Su, Zheng-Yao
Monte Carlo simulations are one of the most important numerical techniques for investigating statistical physical systems. Among these systems, spin models are a typical example which also play an essential role in constructing the abstract mechanism for various complex systems. Unfortunately, traditional Monte Carlo algorithms are afflicted with "critical slowing down" near continuous phase transitions and the efficiency of the Monte Carlo simulation goes to zero as the size of the lattice is increased. To combat critical slowing down, a very different type of collective-mode algorithm, in contrast to the traditional single-spin-flipmode, was proposed by Swendsen and Wang in 1987 for Potts spin models. Since then, there has been an explosion of work attempting to understand, improve, or generalize it. In these so-called "cluster" algorithms, clusters of spin are regarded as one template and are updated at each step of the Monte Carlo procedure. In implementing these algorithms the cluster labeling is a major time-consuming bottleneck and is also isomorphic to the problem of computing connected components of an undirected graph seen in other application areas, such as pattern recognition.A number of cluster labeling algorithms for sequential computers have long existed. However, the dynamic irregular nature of clusters complicates the task of finding good parallel algorithms and this is particularly true on SIMD (single-instruction-multiple-data machines. Our design of the Hierarchical Cluster Labeling Algorithm aims at alleviating this problem by building a hierarchical structure on the problem domain and by incorporating local and nonlocal communication schemes. We present an estimate for the computational complexity of cluster labeling and prove the key features of this algorithm (such as lower computational complexity, data locality, and easy implementation) compared with the methods formerly known. In particular, this algorithm can be viewed as a generalized
Procedural Quantum Programming
NASA Astrophysics Data System (ADS)
Ömer, Bernhard
2002-09-01
While classical computing science has developed a variety of methods and programming languages around the concept of the universal computer, the typical description of quantum algorithms still uses a purely mathematical, non-constructive formalism which makes no difference between a hydrogen atom and a quantum computer. This paper investigates, how the concept of procedural programming languages, the most widely used classical formalism for describing and implementing algorithms, can be adopted to the field of quantum computing, and how non-classical features like the reversibility of unitary transformations, the non-observability of quantum states or the lack of copy and erase operations can be reflected semantically. It introduces the key concepts of procedural quantum programming (hybrid target architecture, operator hierarchy, quantum data types, memory management, etc.) and presents the experimental language QCL, which implements these principles.
Competing Sudakov veto algorithms
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Verheyen, Rob
2016-07-01
We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
Semioptimal practicable algorithmic cooling
NASA Astrophysics Data System (ADS)
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-04-01
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon’s entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
FOHI-D: An iterative Hirshfeld procedure including atomic dipoles
Geldof, D.; Blockhuys, F.; Van Alsenoy, C.; Krishtal, A.
2014-04-14
In this work, a new partitioning method based on the FOHI method (fractional occupation Hirshfeld-I method) will be discussed. The new FOHI-D method uses an iterative scheme in which both the atomic charge and atomic dipole are calculated self-consistently. In order to induce the dipole moment on the atom, an electric field is applied during the atomic SCF calculations. Based on two sets of molecules, the atomic charge and intrinsic atomic dipole moment of hydrogen and chlorine atoms are compared using the iterative Hirshfeld (HI) method, the iterative Stockholder atoms (ISA) method, the FOHI method, and the FOHI-D method. The results obtained are further analyzed as a function of the group electronegativity of Boyd et al. [J. Am. Chem. Soc. 110, 4182 (1988); Boyd et al., J. Am. Chem. Soc. 114, 1652 (1992)] and De Proft et al. [J. Phys. Chem. 97, 1826 (1993)]. The molecular electrostatic potential (ESP) based on the HI, ISA, FOHI, and FOHI-D charges is compared with the ab initio ESP. Finally, the effect of adding HI, ISA, FOHI, and FOHI-D atomic dipoles to the multipole expansion as a function of the precision of the ESP is analyzed.
FOHI-D: an iterative Hirshfeld procedure including atomic dipoles.
Geldof, D; Krishtal, A; Blockhuys, F; Van Alsenoy, C
2014-04-14
In this work, a new partitioning method based on the FOHI method (fractional occupation Hirshfeld-I method) will be discussed. The new FOHI-D method uses an iterative scheme in which both the atomic charge and atomic dipole are calculated self-consistently. In order to induce the dipole moment on the atom, an electric field is applied during the atomic SCF calculations. Based on two sets of molecules, the atomic charge and intrinsic atomic dipole moment of hydrogen and chlorine atoms are compared using the iterative Hirshfeld (HI) method, the iterative Stockholder atoms (ISA) method, the FOHI method, and the FOHI-D method. The results obtained are further analyzed as a function of the group electronegativity of Boyd et al. [J. Am. Chem. Soc. 110, 4182 (1988); Boyd et al., J. Am. Chem. Soc. 114, 1652 (1992)] and De Proft et al. [J. Phys. Chem. 97, 1826 (1993)]. The molecular electrostatic potential (ESP) based on the HI, ISA, FOHI, and FOHI-D charges is compared with the ab initio ESP. Finally, the effect of adding HI, ISA, FOHI, and FOHI-D atomic dipoles to the multipole expansion as a function of the precision of the ESP is analyzed.
34 CFR 303.15 - Include; including.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 2 2010-07-01 2010-07-01 false Include; including. 303.15 Section 303.15 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF SPECIAL EDUCATION AND REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION EARLY INTERVENTION PROGRAM FOR INFANTS AND TODDLERS...
Algorithms Could Automate Cancer Diagnosis
NASA Technical Reports Server (NTRS)
Baky, A. A.; Winkler, D. G.
1982-01-01
Five new algorithms are a complete statistical procedure for quantifying cell abnormalities from digitized images. Procedure could be basis for automated detection and diagnosis of cancer. Objective of procedure is to assign each cell an atypia status index (ASI), which quantifies level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.
New Results in Astrodynamics Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.
1998-01-01
Generic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.
Procedural pediatric dermatology.
Metz, Brandie J
2013-04-01
Due to many factors, including parental anxiety, a child's inability to understand the necessity of a procedure and a child's unwillingness to cooperate, it can be much more challenging to perform dermatologic procedures in children. This article reviews pre-procedural preparation of patients and parents, techniques for minimizing injection-related pain and optimal timing of surgical intervention. The risks and benefits of general anesthesia in the setting of pediatric dermatologic procedures are discussed. Additionally, the surgical approach to a few specific types of birthmarks is addressed.
Performance evaluation of image processing algorithms on the GPU.
Castaño-Díez, Daniel; Moser, Dominik; Schoenegger, Andreas; Pruggnaller, Sabine; Frangakis, Achilleas S
2008-10-01
The graphics processing unit (GPU), which originally was used exclusively for visualization purposes, has evolved into an extremely powerful co-processor. In the meanwhile, through the development of elaborate interfaces, the GPU can be used to process data and deal with computationally intensive applications. The speed-up factors attained compared to the central processing unit (CPU) are dependent on the particular application, as the GPU architecture gives the best performance for algorithms that exhibit high data parallelism and high arithmetic intensity. Here, we evaluate the performance of the GPU on a number of common algorithms used for three-dimensional image processing. The algorithms were developed on a new software platform called "CUDA", which allows a direct translation from C code to the GPU. The implemented algorithms include spatial transformations, real-space and Fourier operations, as well as pattern recognition procedures, reconstruction algorithms and classification procedures. In our implementation, the direct porting of C code in the GPU achieves typical acceleration values in the order of 10-20 times compared to a state-of-the-art conventional processor, but they vary depending on the type of the algorithm. The gained speed-up comes with no additional costs, since the software runs on the GPU of the graphics card of common workstations.
Temperature Corrected Bootstrap Algorithm
NASA Technical Reports Server (NTRS)
Comiso, Joey C.; Zwally, H. Jay
1997-01-01
A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation to the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in the area where ice temperature is expected to vary considerably such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
The E-MS Algorithm: Model Selection with Incomplete Data
Jiang, Jiming; Nguyen, Thuan; Rao, J. Sunil
2014-01-01
We propose a procedure associated with the idea of the E-M algorithm for model selection in the presence of missing data. The idea extends the concept of parameters to include both the model and the parameters under the model, and thus allows the model to be part of the E-M iterations. We develop the procedure, known as the E-MS algorithm, under the assumption that the class of candidate models is finite. Some special cases of the procedure are considered, including E-MS with the generalized information criteria (GIC), and E-MS with the adaptive fence (AF; Jiang et al. 2008). We prove numerical convergence of the E-MS algorithm as well as consistency in model selection of the limiting model of the E-MS convergence, for E-MS with GIC and E-MS with AF. We study the impact on model selection of different missing data mechanisms. Furthermore, we carry out extensive simulation studies on the finite-sample performance of the E-MS with comparisons to other procedures. The methodology is also illustrated on a real data analysis involving QTL mapping for an agricultural study on barley grains. PMID:26783375
An affine projection algorithm using grouping selection of input vectors
NASA Astrophysics Data System (ADS)
Shin, JaeWook; Kong, NamWoong; Park, PooGyeon
2011-10-01
This paper present an affine projection algorithm (APA) using grouping selection of input vectors. To improve the performance of conventional APA, the proposed algorithm adjusts the number of the input vectors using two procedures: grouping procedure and selection procedure. In grouping procedure, the some input vectors that have overlapping information for update is grouped using normalized inner product. Then, few input vectors that have enough information for for coefficient update is selected using steady-state mean square error (MSE) in selection procedure. Finally, the filter coefficients update using selected input vectors. The experimental results show that the proposed algorithm has small steady-state estimation errors comparing with the existing algorithms.
Ramponi, Denise R
2016-01-01
Dental problems are a common complaint in emergency departments in the United States. There are a wide variety of dental issues addressed in emergency department visits such as dental caries, loose teeth, dental trauma, gingival infections, and dry socket syndrome. Review of the most common dental blocks and dental procedures will allow the practitioner the opportunity to make the patient more comfortable and reduce the amount of analgesia the patient will need upon discharge. Familiarity with the dental equipment, tooth, and mouth anatomy will help prepare the practitioner for to perform these dental procedures. PMID:27482994
Johnson, Lynn M; Strawderman, Robert L
2012-09-20
This paper proposes an estimation procedure for the semiparametric accelerated failure time frailty model that combines smoothing with an Expectation and Maximization-like algorithm for estimating equations. The resulting algorithm permits simultaneous estimation of the regression parameter, the baseline cumulative hazard, and the parameter indexing a general frailty distribution. We develop novel moment-based estimators for the frailty parameter, including a generalized method of moments estimator. Standard error estimates for all parameters are easily obtained using a randomly weighted bootstrap procedure. For the commonly used gamma frailty distribution, the proposed algorithm is very easy to implement using widely available numerical methods. Simulation results demonstrate that the algorithm performs very well in this setting. We re-analyz several previously analyzed data sets for illustrative purposes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... INVESTIGATIONS LAW ENFORCEMENT REPORTING Victim and Witness Assistance Procedures § 635.35 Procedures. (a) As... to support victims of spouse abuse. Victim Advocacy services include crisis intervention,...
Code of Federal Regulations, 2014 CFR
2014-07-01
... INVESTIGATIONS LAW ENFORCEMENT REPORTING Victim and Witness Assistance Procedures § 635.35 Procedures. (a) As... to support victims of spouse abuse. Victim Advocacy services include crisis intervention,...
Code of Federal Regulations, 2010 CFR
2010-07-01
... INVESTIGATIONS LAW ENFORCEMENT REPORTING Victim and Witness Assistance Procedures § 635.35 Procedures. (a) As... to support victims of spouse abuse. Victim Advocacy services include crisis intervention,...
Code of Federal Regulations, 2013 CFR
2013-07-01
... INVESTIGATIONS LAW ENFORCEMENT REPORTING Victim and Witness Assistance Procedures § 635.35 Procedures. (a) As... to support victims of spouse abuse. Victim Advocacy services include crisis intervention,...
Code of Federal Regulations, 2011 CFR
2011-07-01
... INVESTIGATIONS LAW ENFORCEMENT REPORTING Victim and Witness Assistance Procedures § 635.35 Procedures. (a) As... to support victims of spouse abuse. Victim Advocacy services include crisis intervention,...
YAMPA: Yet Another Matching Pursuit Algorithm for compressive sensing
NASA Astrophysics Data System (ADS)
Lodhi, Muhammad A.; Voronin, Sergey; Bajwa, Waheed U.
2016-05-01
State-of-the-art sparse recovery methods often rely on the restricted isometry property for their theoretical guarantees. However, they cannot explicitly incorporate metrics such as restricted isometry constants within their recovery procedures due to the computational intractability of calculating such metrics. This paper formulates an iterative algorithm, termed yet another matching pursuit algorithm (YAMPA), for recovery of sparse signals from compressive measurements. YAMPA differs from other pursuit algorithms in that: (i) it adapts to the measurement matrix using a threshold that is explicitly dependent on two computable coherence metrics of the matrix, and (ii) it does not require knowledge of the signal sparsity. Performance comparisons of YAMPA against other matching pursuit and approximate message passing algorithms are made for several types of measurement matrices. These results show that while state-of-the-art approximate message passing algorithms outperform other algorithms (including YAMPA) in the case of well-conditioned random matrices, they completely break down in the case of ill-conditioned measurement matrices. On the other hand, YAMPA and comparable pursuit algorithms not only result in reasonable performance for well-conditioned matrices, but their performance also degrades gracefully for ill-conditioned matrices. The paper also shows that YAMPA uniformly outperforms other pursuit algorithms for the case of thresholding parameters chosen in a clairvoyant fashion. Further, when combined with a simple and fast technique for selecting thresholding parameters in the case of ill-conditioned matrices, YAMPA outperforms other pursuit algorithms in the regime of low undersampling, although some of these algorithms can outperform YAMPA in the regime of high undersampling in this setting.
Component evaluation testing and analysis algorithms.
Hart, Darren M.; Merchant, Bion John
2011-10-01
The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.
A parallel algorithm for the non-symmetric eigenvalue problem
Dongarra, J.; Sidani, M. |
1991-12-01
This paper describes a parallel algorithm for computing the eigenvalues and eigenvectors of a non-symmetric matrix. The algorithm is based on a divide-and-conquer procedure and uses an iterative refinement technique.
Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.
2005-01-01
A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and also descriptions and flowcharts for subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms have a nonlinear gain algorithm implemented that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree-of-freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
NASA Technical Reports Server (NTRS)
Groce, J. L.; Izumi, K. H.; Markham, C. H.; Schwab, R. W.; Thompson, J. L.
1986-01-01
The Local Flow Management/Profile Descent (LFM/PD) algorithm designed for the NASA Transport System Research Vehicle program is described. The algorithm provides fuel-efficient altitude and airspeed profiles consistent with ATC restrictions in a time-based metering environment over a fixed ground track. The model design constraints include accommodation of both published profile descent procedures and unpublished profile descents, incorporation of fuel efficiency as a flight profile criterion, operation within the performance capabilities of the Boeing 737-100 airplane with JT8D-7 engines, and conformity to standard air traffic navigation and control procedures. Holding and path stretching capabilities are included for long delay situations.
Multibody structural dynamics including translation between the bodies
NASA Astrophysics Data System (ADS)
Huston, R. L.; Passerello, C. E.
1980-11-01
New and recently developed concepts useful for obtaining and solving equations of motion of multibody mechanical systems with translation between the respective bodies of the system, is presented. The incorporation of translation effects make the analysis applicable to a much broader class of problems than was possible with previous analyses which are restricted to linked multibody systems. The concepts developed in the analysis include the use of Euler parameters, Lagrange's form of d'Alembert's principle, quasi-coordinates, relative coordinates, and body connection arrays. Procedures for the development of efficient computer algorithms for evaluating the coefficients of the governing equations of motion are outlined. The methods presented are directly applicable in the analysis of biodynamic and human models, finite segment cable models, mechanisms, manipulators and robots.
Pump apparatus including deconsolidator
Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew
2014-10-07
A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.
Baltayiannis, Nikolaos; Michail, Chandrinos; Lazaridis, George; Anagnostopoulos, Dimitrios; Baka, Sofia; Mpoukovinas, Ioannis; Karavasilis, Vasilis; Lampaki, Sofia; Papaiwannou, Antonis; Karavergou, Anastasia; Kioumis, Ioannis; Pitsiou, Georgia; Katsikogiannis, Nikolaos; Tsakiridis, Kosmas; Rapti, Aggeliki; Trakada, Georgia; Zissimopoulos, Athanasios; Zarogoulidis, Konstantinos
2015-01-01
Minimally invasive procedures, which include laparoscopic surgery, use state-of-the-art technology to reduce the damage to human tissue when performing surgery. Minimally invasive procedures require small “ports” from which the surgeon inserts thin tubes called trocars. Carbon dioxide gas may be used to inflate the area, creating a space between the internal organs and the skin. Then a miniature camera (usually a laparoscope or endoscope) is placed through one of the trocars so the surgical team can view the procedure as a magnified image on video monitors in the operating room. Specialized equipment is inserted through the trocars based on the type of surgery. There are some advanced minimally invasive surgical procedures that can be performed almost exclusively through a single point of entry—meaning only one small incision, like the “uniport” video-assisted thoracoscopic surgery (VATS). Not only do these procedures usually provide equivalent outcomes to traditional “open” surgery (which sometimes require a large incision), but minimally invasive procedures (using small incisions) may offer significant benefits as well: (I) faster recovery; (II) the patient remains for less days hospitalized; (III) less scarring and (IV) less pain. In our current mini review we will present the minimally invasive procedures for thoracic surgery. PMID:25861610
Object-oriented algorithmic laboratory for ordering sparse matrices
Kumfert, G K
2000-05-01
We focus on two known NP-hard problems that have applications in sparse matrix computations: the envelope/wavefront reduction problem and the fill reduction problem. Envelope/wavefront reducing orderings have a wide range of applications including profile and frontal solvers, incomplete factorization preconditioning, graph reordering for cache performance, gene sequencing, and spatial databases. Fill reducing orderings are generally limited to--but an inextricable part of--sparse matrix factorization. Our major contribution to this field is the design of new and improved heuristics for these NP-hard problems and their efficient implementation in a robust, cross-platform, object-oriented software package. In this body of research, we (1) examine current ordering algorithms, analyze their asymptotic complexity, and characterize their behavior in model problems, (2) introduce new and improved algorithms that address deficiencies found in previous heuristics, (3) implement an object-oriented library of these algorithms in a robust, modular fashion without significant loss of efficiency, and (4) extend our algorithms and software to address both generalized and constrained problems. We stress that the major contribution is the algorithms and the implementation; the whole being greater than the sum of its parts. The initial motivation for implementing our algorithms in object-oriented software was to manage the inherent complexity. During our research came the realization that the object-oriented implementation enabled new possibilities augmented algorithms that would not have been as natural to generalize from a procedural implementation. Some extensions are constructed from a family of related algorithmic components, thereby creating a poly-algorithm that can adapt its strategy to the properties of the specific problem instance dynamically. Other algorithms are tailored for special constraints by aggregating algorithmic components and having them collaboratively
Optical modulator including grapene
Liu, Ming; Yin, Xiaobo; Zhang, Xiang
2016-06-07
The present invention provides for a one or more layer graphene optical modulator. In a first exemplary embodiment the optical modulator includes an optical waveguide, a nanoscale oxide spacer adjacent to a working region of the waveguide, and a monolayer graphene sheet adjacent to the spacer. In a second exemplary embodiment, the optical modulator includes at least one pair of active media, where the pair includes an oxide spacer, a first monolayer graphene sheet adjacent to a first side of the spacer, and a second monolayer graphene sheet adjacent to a second side of the spacer, and at least one optical waveguide adjacent to the pair.
Surgical Procedures Needed to Eradicate Infection in Knee Septic Arthritis.
Dave, Omkar H; Patel, Karan A; Andersen, Clark R; Carmichael, Kelly D
2016-01-01
Septic arthritis of the knee is encountered on a regular basis by orthopedists and nonorthopedists. No established therapeutic algorithm exists for septic arthritis of the knee, and there is much variability in management. This study assessed the number of surgical procedures, arthroscopic or open, required to eradicate infection. The study was a retrospective analysis of 79 patients who were treated for septic knee arthritis from 1995 to 2011. Patients who were included in the study had native septic knee arthritis that had resolved with treatment consisting of irrigation and debridement, either open or arthroscopic. Logistic regression analysis was used to explore the relation between the interval between onset of symptoms and index surgery and the use of arthroscopy and the need for multiple procedures. Fifty-two patients met the inclusion criteria, and 53% were male, with average follow-up of 7.2 years (range, 1-16.2 years). Arthroscopic irrigation and debridement was performed in 70% of cases. On average, successful treatment required 1.3 procedures (SD, 0.6; range, 1-4 procedures). A significant relation (P=.012) was found between time from presentation to surgery and the need for multiple procedures. With arthroscopic irrigation and debridement, most patients with septic knee arthritis require only 1 surgical procedure to eradicate infection. The need for multiple procedures increases with time from onset of symptoms to surgery.
Problem solving with genetic algorithms and Splicer
NASA Technical Reports Server (NTRS)
Bayer, Steven E.; Wang, Lui
1991-01-01
Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Power spectral estimation algorithms
NASA Technical Reports Server (NTRS)
Bhatia, Manjit S.
1989-01-01
Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
Rapid Catalytic Template Searching as an Enzyme Function Prediction Procedure
Nilmeier, Jerome P.; Kirshner, Daniel A.; Wong, Sergio E.; Lightstone, Felice C.
2013-01-01
We present an enzyme protein function identification algorithm, Catalytic Site Identification (CatSId), based on identification of catalytic residues. The method is optimized for highly accurate template identification across a diverse template library and is also very efficient in regards to time and scalability of comparisons. The algorithm matches three-dimensional residue arrangements in a query protein to a library of manually annotated, catalytic residues – The Catalytic Site Atlas (CSA). Two main processes are involved. The first process is a rapid protein-to-template matching algorithm that scales quadratically with target protein size and linearly with template size. The second process incorporates a number of physical descriptors, including binding site predictions, in a logistic scoring procedure to re-score matches found in Process 1. This approach shows very good performance overall, with a Receiver-Operator-Characteristic Area Under Curve (AUC) of 0.971 for the training set evaluated. The procedure is able to process cofactors, ions, nonstandard residues, and point substitutions for residues and ions in a robust and integrated fashion. Sites with only two critical (catalytic) residues are challenging cases, resulting in AUCs of 0.9411 and 0.5413 for the training and test sets, respectively. The remaining sites show excellent performance with AUCs greater than 0.90 for both the training and test data on templates of size greater than two critical (catalytic) residues. The procedure has considerable promise for larger scale searches. PMID:23675414
Rapid catalytic template searching as an enzyme function prediction procedure.
Nilmeier, Jerome P; Kirshner, Daniel A; Wong, Sergio E; Lightstone, Felice C
2013-01-01
We present an enzyme protein function identification algorithm, Catalytic Site Identification (CatSId), based on identification of catalytic residues. The method is optimized for highly accurate template identification across a diverse template library and is also very efficient in regards to time and scalability of comparisons. The algorithm matches three-dimensional residue arrangements in a query protein to a library of manually annotated, catalytic residues--The Catalytic Site Atlas (CSA). Two main processes are involved. The first process is a rapid protein-to-template matching algorithm that scales quadratically with target protein size and linearly with template size. The second process incorporates a number of physical descriptors, including binding site predictions, in a logistic scoring procedure to re-score matches found in Process 1. This approach shows very good performance overall, with a Receiver-Operator-Characteristic Area Under Curve (AUC) of 0.971 for the training set evaluated. The procedure is able to process cofactors, ions, nonstandard residues, and point substitutions for residues and ions in a robust and integrated fashion. Sites with only two critical (catalytic) residues are challenging cases, resulting in AUCs of 0.9411 and 0.5413 for the training and test sets, respectively. The remaining sites show excellent performance with AUCs greater than 0.90 for both the training and test data on templates of size greater than two critical (catalytic) residues. The procedure has considerable promise for larger scale searches.
Algorithm for Identifying Erroneous Rain-Gauge Readings
NASA Technical Reports Server (NTRS)
Rickman, Doug
2005-01-01
An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed to be erroneous readings. Heretofore, analyses of this type have been performed in burdensome manual procedures that have involved subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.
NASA Astrophysics Data System (ADS)
Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo
2012-08-01
We study the behavior of an algorithm derived from the cavity method for the prize-collecting steiner tree (PCST) problem on graphs. The algorithm is based on the zero temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch and cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.
Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo
2012-08-01
We study the behavior of an algorithm derived from the cavity method for the prize-collecting steiner tree (PCST) problem on graphs. The algorithm is based on the zero temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch and cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.
ERIC Educational Resources Information Center
Veck, Wayne
2009-01-01
This paper attempts to make important connections between listening and inclusive education and the refusal to listen and exclusion. Two lines of argument are advanced. First, if educators and learners are to include each other within their educational institutions as unique individuals, then they will need to listen attentively to each other.…
Georgeff, M.P.; Lansky, A.L.
1986-10-01
Much of commonsense knowledge about the real world is in the form of procedures or sequences of actions for achieving particular goals. In this paper, a formalism is presented for representing such knowledge using the notion of process. A declarative semantics for the representation is given, which allows a user to state facts about the effects of doing things in the problem domain of interest. An operational semantics is also provided, which shows how this knowledge can be used to achieve particular goals or to form intentions regarding their achievement. Given both semantics, our formalism additionally serves as an executable specification language suitable for constructing complex systems. A system based on this formalism is described, and examples involving control of an autonomous robot and fault diagnosis for NASA's space shuttle are provided.
The Xmath Integration Algorithm
ERIC Educational Resources Information Center
Bringslid, Odd
2009-01-01
The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…
Quarantine document system indexing procedure
NASA Technical Reports Server (NTRS)
1972-01-01
The Quarantine Document System (QDS) is described including the indexing procedures and thesaurus of indexing terms. The QDS consists of these functional elements: acquisition, cataloging, indexing, storage, and retrieval. A complete listing of the collection, and the thesaurus are included.
Optimization of the double dosimetry algorithm for interventional cardiologists
NASA Astrophysics Data System (ADS)
Chumak, Vadim; Morgun, Artem; Bakhanova, Elena; Voloskiy, Vitalii; Borodynchik, Elena
2014-11-01
A double dosimetry method is recommended in interventional cardiology (IC) to assess occupational exposure; yet currently there is no common and universal algorithm for effective dose estimation. In this work, flexible and adaptive algorithm building methodology was developed and some specific algorithm applicable for typical irradiation conditions of IC procedures was obtained. It was shown that the obtained algorithm agrees well with experimental measurements and is less conservative compared to other known algorithms.
Bretland, P M
1988-01-01
The existing National Health Service financial system makes comprehensive costing of any service very difficult. A method of costing using modern commercial methods has been devised, classifying costs into variable, semi-variable and fixed and using the principle of overhead absorption for expenditure not readily allocated to individual procedures. It proved possible to establish a cost spectrum over the financial year 1984-85. The cheapest examinations were plain radiographs outside normal working hours, followed by plain radiographs, ultrasound, special procedures, fluoroscopy, nuclear medicine, angiography and angiographic interventional procedures in normal working hours. This differs from some published figures, particularly those in the Körner report. There was some overlap between fluoroscopic interventional and the cheaper nuclear medicine procedures, and between some of the more expensive nuclear medicine procedures and the cheaper angiographic ones. Only angiographic and the few more expensive nuclear medicine procedures exceed the cost of the inpatient day. The total cost of the imaging service to the district was about 4% of total hospital expenditure. It is shown that where more procedures are undertaken, the semi-variable and fixed (including capital) elements of the cost decrease (and vice versa) so that careful study is required to assess the value of proposed economies. The method is initially time-consuming and requires a computer system with 512 Kb of memory, but once the basic costing system is established in a department, detailed financial monitoring should become practicable. The necessity for a standard comprehensive costing procedure of this nature, based on sound cost accounting principles, appears inescapable, particularly in view of its potential application to management budgeting. PMID:3349241
Bretland, P M
1988-01-01
The existing National Health Service financial system makes comprehensive costing of any service very difficult. A method of costing using modern commercial methods has been devised, classifying costs into variable, semi-variable and fixed and using the principle of overhead absorption for expenditure not readily allocated to individual procedures. It proved possible to establish a cost spectrum over the financial year 1984-85. The cheapest examinations were plain radiographs outside normal working hours, followed by plain radiographs, ultrasound, special procedures, fluoroscopy, nuclear medicine, angiography and angiographic interventional procedures in normal working hours. This differs from some published figures, particularly those in the Körner report. There was some overlap between fluoroscopic interventional and the cheaper nuclear medicine procedures, and between some of the more expensive nuclear medicine procedures and the cheaper angiographic ones. Only angiographic and the few more expensive nuclear medicine procedures exceed the cost of the inpatient day. The total cost of the imaging service to the district was about 4% of total hospital expenditure. It is shown that where more procedures are undertaken, the semi-variable and fixed (including capital) elements of the cost decrease (and vice versa) so that careful study is required to assess the value of proposed economies. The method is initially time-consuming and requires a computer system with 512 Kb of memory, but once the basic costing system is established in a department, detailed financial monitoring should become practicable. The necessity for a standard comprehensive costing procedure of this nature, based on sound cost accounting principles, appears inescapable, particularly in view of its potential application to management budgeting.
Numerical Boundary Condition Procedures
NASA Technical Reports Server (NTRS)
1981-01-01
Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.
Parliamentary Procedure Made Easy.
ERIC Educational Resources Information Center
Hayden, Ellen T.
Based on the newly revised "Robert's Rules of Order," these self-contained learning activities will help students successfully and actively participate in school, social, civic, political, or professional organizations. There are 13 lessons. Topics studied include the what, why, and history of parliamentary procedure; characteristics of the ideal…
Procedures and Policies Manual
ERIC Educational Resources Information Center
Davis, Jane M.
2006-01-01
This document was developed by the Middle Tennessee State University James E. Walker Library Collection Management Department to provide policies and procedural guidelines for the cataloging and processing of bibliographic materials. This document includes policies for cataloging monographs, serials, government documents, machine-readable data…
Terrestrial photovoltaic measurement procedures
NASA Technical Reports Server (NTRS)
1977-01-01
Procedures for obtaining cell and array current-voltage measurements both outdoors in natural sunlight and indoors in simulated sunlight are presented. A description of the necessary apparatus and equipment is given for the calibration and use of reference solar cells. Some comments relating to concentration cell measurements, and a revised terrestrial solar spectrum for use in theoretical calculations, are included.
New correction procedures for the fast field program which extend its range
NASA Technical Reports Server (NTRS)
West, M.; Sack, R. A.
1990-01-01
A fast field program (FFP) algorithm was developed based on the method of Lee et al., for the prediction of sound pressure level from low frequency, high intensity sources. In order to permit accurate predictions at distances greater than 2 km, new correction procedures have had to be included in the algorithm. Certain functions, whose Hankel transforms can be determined analytically, are subtracted from the depth dependent Green's function. The distance response is then obtained as the sum of these transforms and the Fast Fourier Transformation (FFT) of the residual k dependent function. One procedure, which permits the elimination of most complex exponentials, has allowed significant changes in the structure of the FFP algorithm, which has resulted in a substantial reduction in computation time.
FORTRAN Algorithm for Image Processing
NASA Technical Reports Server (NTRS)
Roth, Don J.; Hull, David R.
1987-01-01
FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.
Smith, F A; Kroft, S H
1996-01-01
The idea of using patient samples as the basis for control procedures elicits a continuing fascination among laboratorians, particularly in the current environment of cost restriction. Average of normals (AON) procedures, although little used, have been carefully investigated at the theoretical level. The performance characteristics of Bull's algorithm have not been thoroughly delineated, however, despite its widespread use. The authors have generalized Bull's algorithm to use variably sized batches of patient samples and a range of exponential factors. For any given batch size, there is an optimal exponential factor to maximize the overall power of error detection. The optimized exponentially adjusted moving mean (EAMM) procedure, a variant of AON and Bull's algorithm, outperforms both parent procedures. As with any AON procedure, EAMM is most useful when the ratio of population variability to analytical variability (standard deviation ratio, SDR) is low.
A subzone reconstruction algorithm for efficient staggered compatible remapping
Starinshak, D.P. Owen, J.M.
2015-09-01
Staggered-grid Lagrangian hydrodynamics algorithms frequently make use of subzonal discretization of state variables for the purposes of improved numerical accuracy, generality to unstructured meshes, and exact conservation of mass, momentum, and energy. For Arbitrary Lagrangian–Eulerian (ALE) methods using a geometric overlay, it is difficult to remap subzonal variables in an accurate and efficient manner due to the number of subzone–subzone intersections that must be computed. This becomes prohibitive in the case of 3D, unstructured, polyhedral meshes. A new procedure is outlined in this paper to avoid direct subzonal remapping. The new algorithm reconstructs the spatial profile of a subzonal variable using remapped zonal and nodal representations of the data. The reconstruction procedure is cast as an under-constrained optimization problem. Enforcing conservation at each zone and node on the remapped mesh provides the set of equality constraints; the objective function corresponds to a quadratic variation per subzone between the values to be reconstructed and a set of target reference values. Numerical results for various pure-remapping and hydrodynamics tests are provided. Ideas for extending the algorithm to staggered-grid radiation-hydrodynamics are discussed as well as ideas for generalizing the algorithm to include inequality constraints.
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
Experimental validation of clock synchronization algorithms
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Graham, R. Lynn
1992-01-01
The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
New algorithms for reconstructing phylogenetic trees
Dress, A.
1994-12-31
Since the time of Linne, classification of living beings into subspecies, species, orders, families etc. has been an important task in biology. With the advent of molecular biology, many more data have become available which can be exploited for this purpose using comparative sequence analysis, while the sheer amount of these data stored presently in biomolecular data bases make automated classification procedures unavoidable. Consequently, many algorithms have been developed in the last 25 years to support this task. In the lecture, an amazingly successful polynomial algorithm for analysing all sorts of distance data derived from sequence analysis (or elsewhere) will be presented which simultaneously highlights phylogenetic similarity and similarity caused by convergent evolution. In addition to sketching the mathematics on which the algorithm is based and discussing its implementation (including some interesting computer graphics aspects), various proper biological examples will be presented which stretch from the analysis of data relating to the origin of life and the first bifurcations into the various {open_quote}kingdoms of life{close_quote} to the analysis of data relating to, say, the phylogenetic history of mammals or that of the AIDS or the Influenca virus family.
Quasi-static solution algorithms for kinematically/materially nonlinear thermomechanical problems
NASA Technical Reports Server (NTRS)
Padovan, J.; Pai, S. S.
1984-01-01
This paper develops an algorithmic solution strategy which allows the handling of positive/indefinite stiffness characteristics associated with the pre- and post-buckling of structures subject to complex thermomechanical loading fields. The flexibility of the procedure is such that it can be applied to both finite difference and element-type simulations. Due to the generality of the algorithmic approach developed, both kinematic and thermal/mechanical type material nonlinearity including inelastic effects can be treated. This includes the possibility of handling completely general thermomechanical boundary conditions. To demonstrate the scheme, the results of several benchmark problems is presented.
NASA Technical Reports Server (NTRS)
Braun, W. R.
1981-01-01
Pseudo noise (PN) spread spectrum systems require a very accurate alignment between the PN code epochs at the transmitter and receiver. This synchronism is typically established through a two-step algorithm, including a coarse synchronization procedure and a fine synchronization procedure. A standard approach for the coarse synchronization is a sequential search over all code phases. The measurement of the power in the filtered signal is used to either accept or reject the code phase under test as the phase of the received PN code. This acquisition strategy, called a single dwell-time system, has been analyzed by Holmes and Chen (1977). A synopsis of the field of sequential analysis as it applies to the PN acquisition problem is provided. From this, the implementation of the variable dwell time algorithm as a sequential probability ratio test is developed. The performance of this algorithm is compared to the optimum detection algorithm and to the fixed dwell-time system.
Environmental Test Screening Procedure
NASA Technical Reports Server (NTRS)
Zeidler, Janet
2000-01-01
This procedure describes the methods to be used for environmental stress screening (ESS) of the Lightning Mapper Sensor (LMS) lens assembly. Unless otherwise specified, the procedures shall be completed in the order listed, prior to performance of the Acceptance Test Procedure (ATP). The first unit, S/N 001, will be subjected to the Qualification Vibration Levels, while the remainder will be tested at the Operational Level. Prior to ESS, all units will undergo Pre-ESS Functional Testing that includes measuring the on-axis and plus or minus 0.95 full field Modulation Transfer Function and Back Focal Length. Next, all units will undergo ESS testing, and then Acceptance testing per PR 460.
Practical pearls for oral procedures.
Davari, Parastoo; Fazel, Nasim
2016-01-01
We provide an overview of clinically relevant principles of oral surgical procedures required in the workup and management of oral mucosal diseases. An understanding of the fundamental concepts of how to perform safely and effectively minor oral procedures is important to the practicing dermatologist and can minimize the need for patient referrals. This chapter reviews the principles of minor oral procedures, including incisional, excisional, and punch biopsies, as well as minor salivary gland excision. Pre- and postoperative patient care is also discussed.
NASA Technical Reports Server (NTRS)
Nobbs, Steven G.
1995-01-01
An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
14 CFR 21.441 - Procedure manual.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Procedure manual. 21.441 Section 21.441... of, a procedure manual containing— (1) The procedures for issuing STCs; and (2) The names, signatures... procedure manual; and (ii) Are to conduct inspections (including conformity and compliance inspections)...
Optimizing remediation of an unconfined aquifer using a hybrid algorithm.
Hsiao, Chin-Tsai; Chang, Liang-Cheng
2005-01-01
We present a novel hybrid algorithm, integrating a genetic algorithm (GA) and constrained differential dynamic programming (CDDP), to achieve remediation planning for an unconfined aquifer. The objective function includes both fixed and dynamic operation costs. GA determines the primary structure of the proposed algorithm, and a chromosome therein implemented by a series of binary digits represents a potential network design. The time-varying optimal operation cost associated with the network design is computed by the CDDP, in which is embedded a numerical transport model. Several computational approaches, including a chromosome bookkeeping procedure, are implemented to alleviate computational loading. Additionally, case studies that involve fixed and time-varying operating costs for confined and unconfined aquifers, respectively, are discussed to elucidate the effectiveness of the proposed algorithm. Simulation results indicate that the fixed costs markedly affect the optimal design, including the number and locations of the wells. Furthermore, the solution obtained using the confined approximation for an unconfined aquifer may be infeasible, as determined by an unconfined simulation.
A universal symmetry detection algorithm.
Maurer, Peter M
2015-01-01
Research on symmetry detection focuses on identifying and detecting new types of symmetry. The paper presents an algorithm that is capable of detecting any type of permutation-based symmetry, including many types for which there are no existing algorithms. General symmetry detection is library-based, but symmetries that can be parameterized, (i.e. total, partial, rotational, and dihedral symmetry), can be detected without using libraries. In many cases it is faster than existing techniques. Furthermore, it is simpler than most existing techniques, and can easily be incorporated into existing software. The algorithm can also be used with virtually any type of matrix-based symmetry, including conjugate symmetry.
Proposed first-generation WSQ bit allocation procedure
Bradley, J.N.; Brislawn, C.M.
1993-09-08
The Wavelet/Scalar Quantization (WSQ) gray-scale fingerprint image compression algorithm involves a symmetric wavelet transform (SWT) image decomposition followed by uniform scalar quantization of each subband. The algorithm is adaptive insofar as the bin widths for the scalar quantizers are image-specific and are included in the compressed image format. Since the decoder requires only the actual bin width values -- but not the method by which they were computed -- the standard allows for future refinements of the WSQ algorithm by improving the method used to select the scalar quantizer bin widths. This report proposes a bit allocation procedure for use with the first-generation WSQ encoder. In previous work a specific formula is provided for the relative sizes of the scalar quantizer bin widths in terms of the variances of the SWT subbands. An explicit specification for the constant of proportionality, q, that determines the absolute bin widths was not given. The actual compression ratio produced by the WSQ algorithm will generally vary from image to image depending on the amount of coding gain obtained by the run-length and Huffman coding, stages of the algorithm, but testing performed by the FBI established that WSQ compression produces archival quality images at compression ratios of around 20 to 1. The bit allocation procedure described in this report possesses a control parameter, r, that can be set by the user to achieve a predetermined amount of lossy compression, effectively giving the user control over the amount of distortion introduced by quantization noise. The variability observed in final compression ratios is thus due only to differences in lossless coding gain from image to image, chiefly a result of the varying amounts of blank background surrounding the print area in the images. Experimental results are presented that demonstrate the proposed method`s effectiveness.
HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN
While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...
Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis
NASA Technical Reports Server (NTRS)
Padovan, J.
1981-01-01
A multiphase selfadaptive predictor corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pro/post buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upperbound dependent iterate excursions during successive incremental Newton Ramphson (INR) type iterations; (20 uses an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality of convergence checks which enable various self adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switch to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.
Comparative evaluation of tandem MS search algorithms using a target-decoy search strategy.
Balgley, Brian M; Laudeman, Tom; Yang, Li; Song, Tao; Lee, Cheng S
2007-09-01
Peptide identification of tandem mass spectra by a variety of available search algorithms forms the foundation for much of modern day mass spectrometry-based proteomics. Despite the critical importance of proper evaluation and interpretation of the results generated by these algorithms there is still little consistency in their application or understanding of their similarities and differences. A survey was conducted of four tandem mass spectrometry peptide identification search algorithms, including Mascot, Open Mass Spectrometry Search Algorithm, Sequest, and X! Tandem. The same input data, search parameters, and sequence library were used for the searches. Comparisons were based on commonly used scoring methodologies for each algorithm and on the results of a target-decoy approach to sequence library searching. The results indicated that there is little difference in the output of the algorithms so long as consistent scoring procedures are applied. The results showed that some commonly used scoring procedures may lead to excessive false discovery rates. Finally an alternative method for the determination of an optimal cutoff threshold is proposed.
NOSS altimeter algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.
1982-01-01
A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.
NASA Technical Reports Server (NTRS)
Tielking, John T.
1989-01-01
Two algorithms for obtaining static contact solutions are described in this presentation. Although they were derived for contact problems involving specific structures (a tire and a solid rubber cylinder), they are sufficiently general to be applied to other shell-of-revolution and solid-body contact problems. The shell-of-revolution contact algorithm is a method of obtaining a point load influence coefficient matrix for the portion of shell surface that is expected to carry a contact load. If the shell is sufficiently linear with respect to contact loading, a single influence coefficient matrix can be used to obtain a good approximation of the contact pressure distribution. Otherwise, the matrix will be updated to reflect nonlinear load-deflection behavior. The solid-body contact algorithm utilizes a Lagrange multiplier to include the contact constraint in a potential energy functional. The solution is found by applying the principle of minimum potential energy. The Lagrange multiplier is identified as the contact load resultant for a specific deflection. At present, only frictionless contact solutions have been obtained with these algorithms. A sliding tread element has been developed to calculate friction shear force in the contact region of the rolling shell-of-revolution tire model.
47 CFR 1.9005 - Included services.
Code of Federal Regulations, 2011 CFR
2011-10-01
... to 47 CFR 90.187(b)(2)(v)); (z) The 218-219 MHz band (part 95 of this chapter); (aa) The Local... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Spectrum Leasing Scope and Authority § 1.9005 Included services. The spectrum leasing policies and rules of this subpart apply to...
Cubit Adaptive Meshing Algorithm Library
2004-09-01
CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMALs triangle meshing uses a 3D space advancing front method, the quadmore » meshing algorithm is based upon Sandias patented paving algorithm and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.« less
Development and Testing of Data Mining Algorithms for Earth Observation
NASA Technical Reports Server (NTRS)
Glymour, Clark
2005-01-01
The new algorithms developed under this project included a principled procedure for classification of objects, events or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables -called the Markov Blanket-- sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allow, so called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD style algorithms to the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: Short time series of geographically proximate climate variables predicting agricultural effects in California, and longer duration climate measurements of temperature teleconnections.
Revised Unfilling Procedure for Solid Lithium Lenses
Leveling, A.; /Fermilab
2003-06-03
A procedure for unfilling used lithium lenses to has been described in Pbar Note 664. To date, the procedure has been used to disassemble lenses 20, 21, 17, 18, and 16. As a result of this work, some parts of the original procedure were found to be time consuming and ineffective. Modifications to the original procedure have been made to streamline the process and are discussed in this note. The revised procedure is included in this note.
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self- referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This function gas'', or Turning gas'', is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Fast algorithms for combustion kinetics calculations: A comparison
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
1984-01-01
To identify the fastest algorithm currently available for the numerical integration of chemical kinetic rate equations, several algorithms were examined. Findings to date are summarized. The algorithms examined include two general-purpose codes EPISODE and LSODE and three special-purpose (for chemical kinetic calculations) codes CHEMEQ, CRK1D, and GCKP84. In addition, an explicit Runge-Kutta-Merson differential equation solver (IMSL Routine DASCRU) is used to illustrate the problems associated with integrating chemical kinetic rate equations by a classical method. Algorithms were applied to two test problems drawn from combustion kinetics. These problems included all three combustion regimes: induction, heat release and equilibration. Variations of the temperature and species mole fraction are given with time for test problems 1 and 2, respectively. Both test problems were integrated over a time interval of 1 ms in order to obtain near-equilibration of all species and temperature. Of the codes examined in this study, only CREK1D and GCDP84 were written explicitly for integrating exothermic, non-isothermal combustion rate equations. These therefore have built-in procedures for calculating the temperature.
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features including a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted pareto fronts and a degradation in efficiency for problems with convoluted pareto fronts. The most difficult problems --multi-mode search spaces with a large number of genes and convoluted pareto fronts-- require a large number of function evaluations for GA convergence, but always converge.
Benchmarking monthly homogenization algorithms
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
Investigations on antenna array calibration algorithms for direction-of-arrival estimation
NASA Astrophysics Data System (ADS)
Eberhardt, Michael; Eschlwech, Philipp; Biebl, Erwin
2016-09-01
Direction-of-arrival (DOA) estimation algorithms deliver very precise results based on good and extensive antenna array calibration. The better the array manifold including all disturbances is known, the better the DOA estimation result. A simplification or ideally an omission of the calibration procedure has been a long pursued goal in the history of array signal processing. This paper investigates the practicability of some well known calibration algorithms and gives a deeper insight into existing obstacles. Further analysis on the validity of the common used data model is presented. A new effect in modeling errors is revealed and simulation results substantiate this theory.
NASA Technical Reports Server (NTRS)
Colombo, Gianni; Settimo, Franco; Vernucci, Antonio
1988-01-01
After a short overview on the European tendencies about a Land Mobile Satellite Service, this paper describes an advanced system architecture, based on multiple spot-beams and on-board processing, capable of providing message and voice services over a wide European coverage, including some North-Africa and Middle-East countries. A remarkable problem associated with spot-beam configurations is the requirement for flexibility in the capacity offer to the various coverage areas. This means incorporating procedures for changing the on-board modulator-to-spot associations, respecting the constraints imposed by frequency reuse. After discussing the requirements of the rearrangement procedure, an on-purpose algorithm is presented. This paper is derived from work performed on contract to the European Space Agency (ESA).
Non-intrusive parameter identification procedure user's guide
NASA Technical Reports Server (NTRS)
Hanson, G. D.; Jewell, W. F.
1983-01-01
Written in standard FORTRAN, NAS is capable of identifying linear as well as nonlinear relations between input and output parameters; the only restriction is that the input/output relation be linear with respect to the unknown coefficients of the estimation equations. The output of the identification algorithm can be specified to be in either the time domain (i.e., the estimation equation coefficients) or in the frequency domain (i.e., a frequency response of the estimation equation). The frame length ("window") over which the identification procedure is to take place can be specified to be any portion of the input time history, thereby allowing the freedom to start and stop the identification procedure within a time history. There also is an option which allows a sliding window, which gives a moving average over the time history. The NAS software also includes the ability to identify several assumed solutions simultaneously for the same or different input data.
An Efficient Pattern Matching Algorithm
NASA Astrophysics Data System (ADS)
Sleit, Azzam; Almobaideen, Wesam; Baarah, Aladdin H.; Abusitta, Adel H.
In this study, we present an efficient algorithm for pattern matching based on the combination of hashing and search trees. The proposed solution is classified as an offline algorithm. Although, this study demonstrates the merits of the technique for text matching, it can be utilized for various forms of digital data including images, audio and video. The performance superiority of the proposed solution is validated analytically and experimentally.
NASA Astrophysics Data System (ADS)
Evertz, Hans Gerd
1998-03-01
Exciting new investigations have recently become possible for strongly correlated systems of spins, bosons, and fermions, through Quantum Monte Carlo simulations with the Loop Algorithm (H.G. Evertz, G. Lana, and M. Marcu, Phys. Rev. Lett. 70, 875 (1993).) (For a recent review see: H.G. Evertz, cond- mat/9707221.) and its generalizations. A review of this new method, its generalizations and its applications is given, including some new results. The Loop Algorithm is based on a formulation of physical models in an extended ensemble of worldlines and graphs, and is related to Swendsen-Wang cluster algorithms. It performs nonlocal changes of worldline configurations, determined by local stochastic decisions. It overcomes many of the difficulties of traditional worldline simulations. Computer time requirements are reduced by orders of magnitude, through a corresponding reduction in autocorrelations. The grand-canonical ensemble (e.g. varying winding numbers) is naturally simulated. The continuous time limit can be taken directly. Improved Estimators exist which further reduce the errors of measured quantities. The algorithm applies unchanged in any dimension and for varying bond-strengths. It becomes less efficient in the presence of strong site disorder or strong magnetic fields. It applies directly to locally XYZ-like spin, fermion, and hard-core boson models. It has been extended to the Hubbard and the tJ model and generalized to higher spin representations. There have already been several large scale applications, especially for Heisenberg-like models, including a high statistics continuous time calculation of quantum critical exponents on a regularly depleted two-dimensional lattice of up to 20000 spatial sites at temperatures down to T=0.01 J.
Optimal Design of Geodetic Network Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Vajedian, Sanaz; Bagheri, Hosein
2010-05-01
A geodetic network is a network which is measured exactly by techniques of terrestrial surveying based on measurement of angles and distances and can control stability of dams, towers and their around lands and can monitor deformation of surfaces. The main goals of an optimal geodetic network design process include finding proper location of control station (First order Design) as well as proper weight of observations (second order observation) in a way that satisfy all the criteria considered for quality of the network with itself is evaluated by the network's accuracy, reliability (internal and external), sensitivity and cost. The first-order design problem, can be dealt with as a numeric optimization problem. In this designing finding unknown coordinates of network stations is an important issue. For finding these unknown values, network geodetic observations that are angle and distance measurements must be entered in an adjustment method. In this regard, using inverse problem algorithms is needed. Inverse problem algorithms are methods to find optimal solutions for given problems and include classical and evolutionary computations. The classical approaches are analytical methods and are useful in finding the optimum solution of a continuous and differentiable function. Least squares (LS) method is one of the classical techniques that derive estimates for stochastic variables and their distribution parameters from observed samples. The evolutionary algorithms are adaptive procedures of optimization and search that find solutions to problems inspired by the mechanisms of natural evolution. These methods generate new points in the search space by applying operators to current points and statistically moving toward more optimal places in the search space. Genetic algorithm (GA) is an evolutionary algorithm considered in this paper. This algorithm starts with definition of initial population, and then the operators of selection, replication and variation are applied
Subspace scheduling and parallel implementation of non-systolic regular iterative algorithms
Roychowdhury, V.P.; Kailath, T.
1989-01-01
The study of Regular Iterative Algorithms (RIAs) was introduced in a seminal paper by Karp, Miller, and Winograd in 1967. In more recent years, the study of systolic architectures has led to a renewed interest in this class of algorithms, and the class of algorithms implementable on systolic arrays (as commonly understood) has been identified as a precise subclass of RIAs include matrix pivoting algorithms and certain forms of numerically stable two-dimensional filtering algorithms. It has been shown that the so-called hyperplanar scheduling for systolic algorithms can no longer be used to schedule and implement non-systolic RIAs. Based on the analysis of a so-called computability tree we generalize the concept of hyperplanar scheduling and determine linear subspaces in the index space of a given RIA such that all variables lying on the same subspace can be scheduled at the same time. This subspace scheduling technique is shown to be asymptotically optimal, and formal procedures are developed for designing processor arrays that will be compatible with our scheduling schemes. Explicit formulas for the schedule of a given variable are determined whenever possible; subspace scheduling is also applied to obtain lower dimensional processor arrays for systolic algorithms.
A fast optimization algorithm for multicriteria intensity modulated proton therapy planning
Chen Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.
2010-09-15
Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.
Solar Eclipse Monitoring for Solar Energy Applications Using the Solar and Moon Position Algorithms
Reda, I.
2010-03-01
This report includes a procedure for implementing an algorithm (described by Jean Meeus) to calculate the moon's zenith angle with uncertainty of +/-0.001 degrees and azimuth angle with uncertainty of +/-0.003 degrees. The step-by-step format presented here simplifies the complicated steps Meeus describes to calculate the Moon's position, and focuses on the Moon instead of the planets and stars. It also introduces some changes to accommodate for solar radiation applications.
Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"
Petzold, Linda R.
2012-10-25
Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fun- damental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) Theoretical and practical foundations for ac- celerated discrete stochastic simulation (tau-leaping); (2) Dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) Development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; (4) Development of high-performance SSA algorithms.
Improved algorithm for calculating the Chandrasekhar function
NASA Astrophysics Data System (ADS)
Jablonski, A.
2013-02-01
Theoretical models of electron transport in condensed matter require an effective source of the Chandrasekhar H(x,omega) function. A code providing the H(x,omega) function has to be both accurate and very fast. The current revision of the code published earlier [A. Jablonski, Comput. Phys. Commun. 183 (2012) 1773] decreased the running time, averaged over different pairs of arguments x and omega, by a factor of more than 20. The decrease of the running time in the range of small values of the argument x, less than 0.05, is even more pronounced, reaching a factor of 30. The accuracy of the current code is not affected, and is typically better than 12 decimal places. New version program summaryProgram title: CHANDRAS_v2 Catalogue identifier: AEMC_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMC_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 976 No. of bytes in distributed program, including test data, etc.: 11416 Distribution format: tar.gz Programming language: Fortran 90 Computer: Any computer with a Fortran 90 compiler Operating system: Windows 7, Windows XP, Unix/Linux RAM: 0.7 MB Classification: 2.4, 7.2 Catalogue identifier of previous version: AEMC_v1_0 Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 1773 Does the new version supersede the old program: Yes Nature of problem: An attempt has been made to develop a subroutine that calculates the Chandrasekhar function with high accuracy, of at least 10 decimal places. Simultaneously, this subroutine should be very fast. Both requirements stem from the theory of electron transport in condensed matter. Solution method: Two algorithms were developed, each based on a different integral representation of the Chandrasekhar function. The final algorithm is edited by mixing these two
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hard ware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/ modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Program Implements Variable-Sampling Procedures
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng
1995-01-01
MIL-STD-414 Variable Sampling Procedures (M414) computer program developed to automate calculations and acceptance/rejection procedures of MIL-STD-414, "Sampling Procedures and Tables for Inspection by Variables for Percent Defective." M414 automates entire calculation-and-decision process by use of computational algorithms determining threshold acceptability values for lots. Menu-driven and user-friendly. Reduces burden of manual operations, promoting variable-sampling practice in industry in lieu of "go/no-go" inspection. Written in BASIC.
Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.
1995-09-01
This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.
NWRA AVOSS Wake Vortex Prediction Algorithm. 3.1.1
NASA Technical Reports Server (NTRS)
Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)
2002-01-01
This report provides a detailed description of the wake vortex prediction algorithm used in the Demonstration Version of NASA's Aircraft Vortex Spacing System (AVOSS). The report includes all equations used in the algorithm, an explanation of how to run the algorithm, and a discussion of how the source code for the algorithm is organized. Several appendices contain important supplementary information, including suggestions for enhancing the algorithm and results from test cases.
Interventional radiology neck procedures.
Zabala Landa, R M; Korta Gómez, I; Del Cura Rodríguez, J L
2016-05-01
Ultrasonography has become extremely useful in the evaluation of masses in the head and neck. It enables us to determine the anatomic location of the masses as well as the characteristics of the tissues that compose them, thus making it possible to orient the differential diagnosis toward inflammatory, neoplastic, congenital, traumatic, or vascular lesions, although it is necessary to use computed tomography or magnetic resonance imaging to determine the complete extension of certain lesions. The growing range of interventional procedures, mostly guided by ultrasonography, now includes biopsies, drainages, infiltrations, sclerosing treatments, and tumor ablation. PMID:27138033
Interventional radiology neck procedures.
Zabala Landa, R M; Korta Gómez, I; Del Cura Rodríguez, J L
2016-05-01
Ultrasonography has become extremely useful in the evaluation of masses in the head and neck. It enables us to determine the anatomic location of the masses as well as the characteristics of the tissues that compose them, thus making it possible to orient the differential diagnosis toward inflammatory, neoplastic, congenital, traumatic, or vascular lesions, although it is necessary to use computed tomography or magnetic resonance imaging to determine the complete extension of certain lesions. The growing range of interventional procedures, mostly guided by ultrasonography, now includes biopsies, drainages, infiltrations, sclerosing treatments, and tumor ablation.
A Frequency-Domain Substructure System Identification Algorithm
NASA Technical Reports Server (NTRS)
Blades, Eric L.; Craig, Roy R., Jr.
1996-01-01
A new frequency-domain system identification algorithm is presented for system identification of substructures, such as payloads to be flown aboard the Space Shuttle. In the vibration test, all interface degrees of freedom where the substructure is connected to the carrier structure are either subjected to active excitation or are supported by a test stand with the reaction forces measured. The measured frequency-response data is used to obtain a linear, viscous-damped model with all interface-degree of freedom entries included. This model can then be used to validate analytical substructure models. This procedure makes it possible to obtain not only the fixed-interface modal data associated with a Craig-Bampton substructure model, but also the data associated with constraint modes. With this proposed algorithm, multiple-boundary-condition tests are not required, and test-stand dynamics is accounted for without requiring a separate modal test or finite element modeling of the test stand. Numerical simulations are used in examining the algorithm's ability to estimate valid reduced-order structural models. The algorithm's performance when frequency-response data covering narrow and broad frequency bandwidths is used as input is explored. Its performance when noise is added to the frequency-response data and the use of different least squares solution techniques are also examined. The identified reduced-order models are also compared for accuracy with other test-analysis models and a formulation for a Craig-Bampton test-analysis model is also presented.
Proper bibeta ROC model: algorithm, software, and performance evaluation
NASA Astrophysics Data System (ADS)
Chen, Weijie; Hu, Nan
2016-03-01
Semi-parametric models are often used to fit data collected in receiver operating characteristic (ROC) experiments to obtain a smooth ROC curve and ROC parameters for statistical inference purposes. The proper bibeta model as recently proposed by Mossman and Peng enjoys several theoretical properties. In addition to having explicit density functions for the latent decision variable and an explicit functional form of the ROC curve, the two parameter bibeta model also has simple closed-form expressions for true-positive fraction (TPF), false-positive fraction (FPF), and the area under the ROC curve (AUC). In this work, we developed a computational algorithm and R package implementing this model for ROC curve fitting. Our algorithm can deal with any ordinal data (categorical or continuous). To improve accuracy, efficiency, and reliability of our software, we adopted several strategies in our computational algorithm including: (1) the LABROC4 categorization to obtain the true maximum likelihood estimation of the ROC parameters; (2) a principled approach to initializing parameters; (3) analytical first-order and second-order derivatives of the likelihood function; (4) an efficient optimization procedure (the L-BFGS algorithm in the R package "nlopt"); and (5) an analytical delta method to estimate the variance of the AUC. We evaluated the performance of our software with intensive simulation studies and compared with the conventional binormal and the proper binormal-likelihood-ratio models developed at the University of Chicago. Our simulation results indicate that our software is highly accurate, efficient, and reliable.
49 CFR 237.103 - Bridge inspection procedures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... inspection procedures. (a) Each bridge management program shall specify the procedure to be used for... the railroad traffic moved over the bridge (including equipment weights, train frequency and...
49 CFR 237.103 - Bridge inspection procedures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... inspection procedures. (a) Each bridge management program shall specify the procedure to be used for... the railroad traffic moved over the bridge (including equipment weights, train frequency and...
ERIC Educational Resources Information Center
Gerlach, Vernon S.; And Others
An algorithm is defined here as an unambiguous procedure which will always produce the correct result when applied to any problem of a given class of problems. This paper gives an extended discussion of the definition of an algorithm. It also explores in detail the elements of an algorithm, the representation of algorithms in standard prose, flow…
Practical pearls for oral procedures.
Davari, Parastoo; Fazel, Nasim
2016-01-01
We provide an overview of clinically relevant principles of oral surgical procedures required in the workup and management of oral mucosal diseases. An understanding of the fundamental concepts of how to perform safely and effectively minor oral procedures is important to the practicing dermatologist and can minimize the need for patient referrals. This chapter reviews the principles of minor oral procedures, including incisional, excisional, and punch biopsies, as well as minor salivary gland excision. Pre- and postoperative patient care is also discussed. PMID:27343958
Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Lin, C. T.
1989-01-01
The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme on the mapping of these algorithms to a reconfigurable parallel architecture is presented. Based on the characteristics including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirement, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well-suited to be implemented on a single-instruction-stream multiple-data-stream (SIMD) computer with reconfigurable interconnection network. The model of a reconfigurable dual network SIMD machine with internal direct feedback is introduced. A systematic procedure internal direct feedback is introduced. A systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.
Parallel algorithms for matrix computations
Plemmons, R.J.
1990-01-01
The present conference on parallel algorithms for matrix computations encompasses both shared-memory systems and distributed-memory systems, as well as combinations of the two, to provide an overall perspective on parallel algorithms for both dense and sparse matrix computations in solving systems of linear equations, dense or structured problems related to least-squares computations, eigenvalue computations, singular-value computations, and rapid elliptic solvers. Specific issues addressed include the influence of parallel and vector architectures on algorithm design, computations for distributed-memory architectures such as hypercubes, solutions for sparse symmetric positive definite linear systems, symbolic and numeric factorizations, and triangular solutions. Also addressed are reference sources for parallel and vector numerical algorithms, sources for machine architectures, and sources for programming languages.
Mathematical algorithms for approximate reasoning
NASA Technical Reports Server (NTRS)
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contain a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. A group of mathematically rigorous algorithms for approximate reasoning are focused on that could form the basis of a next generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
Reference Policies and Procedures Manual.
ERIC Educational Resources Information Center
George Mason Univ., Fairfax, VA.
This guide to services of the reference department of Fenwick Library, George Mason University, is intended for use by staff in the department, as well as the general public. Areas covered include (1) reference desk services to users; (2) reference desk support procedures; (3) off desk services; (4) collection development, including staff…
NASA Astrophysics Data System (ADS)
Bolognesi, Tommaso
2011-07-01
In the context of quantum gravity theories, several researchers have proposed causal sets as appropriate discrete models of spacetime. We investigate families of causal sets obtained from two simple models of computation - 2D Turing machines and network mobile automata - that operate on 'high-dimensional' supports, namely 2D arrays of cells and planar graphs, respectively. We study a number of quantitative and qualitative emergent properties of these causal sets, including dimension, curvature and localized structures, or 'particles'. We show how the possibility to detect and separate particles from background space depends on the choice between a global or local view at the causal set. Finally, we spot very rare cases of pseudo-randomness, or deterministic chaos; these exhibit a spontaneous phenomenon of 'causal compartmentation' that appears as a prerequisite for the occurrence of anything of physical interest in the evolution of spacetime.
Medical Service Clinical Laboratory Procedures--Bacteriology.
ERIC Educational Resources Information Center
Department of the Army, Washington, DC.
This manual presents laboratory procedures for the differentiation and identification of disease agents from clinical materials. Included are procedures for the collection of specimens, preparation of culture media, pure culture methods, cultivation of the microorganisms in natural and simulated natural environments, and procedures in…
36 CFR 908.32 - Review procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Review procedures. 908.32... DEVELOPMENT AREA Review Procedure § 908.32 Review procedures. (a) Upon receipt of a request for review, the... applicable regulations; (2) Information submitted by the applicant including the request for review and...
Procedures for Peer Review of Grant Applications
ERIC Educational Resources Information Center
US Department of Education, 2006
2006-01-01
This guide presents information on the procedures for peer review of grant applications. It begins with an overview of the review process for grant application submission and review. The review process includes: (1) pre-submission procedures that enable the Institute to plan for specific review sessions; (2) application processing procedures; (3)…
36 CFR 908.32 - Review procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Review procedures. 908.32... DEVELOPMENT AREA Review Procedure § 908.32 Review procedures. (a) Upon receipt of a request for review, the... applicable regulations; (2) Information submitted by the applicant including the request for review and...
Improvements of HITS Algorithms for Spam Links
NASA Astrophysics Data System (ADS)
Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao
The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given high score by the algorithm were really related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and the variants including Bharat's improved HITS, abbreviated to BHITS, proposed by Bharat and Henzinger cannot be used to find related pages any more on today's Web, due to an increase of spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS using the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms take time and memory no more than those required by the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
A Short Survey of Document Structure Similarity Algorithms
Buttler, D
2004-02-27
This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of any of approximation algorithms, while also being slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
Library of Continuation Algorithms
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms. bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newtons method for their nonlinear solve.
Randomized approximate nearest neighbors algorithm.
Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir
2011-09-20
We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x(j)} in R(d), the algorithm attempts to find k nearest neighbors for each of x(j), where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·(log d) + k·(d + log k)·(log N)) + N·k(2)·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x(j)} for an arbitrary point x ∈ R(d). The cost of each such query is proportional to T·(d·(log d) + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x(j)} and illustrate its performance via several numerical examples.
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
Genetic algorithm (GA) inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is a best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightway through a GA without having/using the information/knowledge of the character of the system, we would do consciously a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem including NP-complete ones before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated- annealing, Monte-Carlo, or random- walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
Terra, Ricardo Mingarini; Waisberg, Daniel Reis; de Almeida, José Luiz Jesus; Devido, Marcela Santana; Pêgo-Fernandes, Paulo Manuel; Jatene, Fabio Biscegli
2012-01-01
OBJECTIVE: We aimed to evaluate whether the inclusion of videothoracoscopy in a pleural empyema treatment algorithm would change the clinical outcome of such patients. METHODS: This study performed quality-improvement research. We conducted a retrospective review of patients who underwent pleural decortication for pleural empyema at our institution from 2002 to 2008. With the old algorithm (January 2002 to September 2005), open decortication was the procedure of choice, and videothoracoscopy was only performed in certain sporadic mid-stage cases. With the new algorithm (October 2005 to December 2008), videothoracoscopy became the first-line treatment option, whereas open decortication was only performed in patients with a thick pleural peel (>2 cm) observed by chest scan. The patients were divided into an old algorithm (n = 93) and new algorithm (n = 113) group and compared. The main outcome variables assessed included treatment failure (pleural space reintervention or death up to 60 days after medical discharge) and the occurrence of complications. RESULTS: Videothoracoscopy and open decortication were performed in 13 and 80 patients from the old algorithm group and in 81 and 32 patients from the new algorithm group, respectively (p<0.01). The patients in the new algorithm group were older (41±1 vs. 46.3±16.7 years, p = 0.014) and had higher Charlson Comorbidity Index scores [0(0-3) vs. 2(0-4), p = 0.032]. The occurrence of treatment failure was similar in both groups (19.35% vs. 24.77%, p = 0.35), although the complication rate was lower in the new algorithm group (48.3% vs. 33.6%, p = 0.04). CONCLUSIONS: The wider use of videothoracoscopy in pleural empyema treatment was associated with fewer complications and unaltered rates of mortality and reoperation even though more severely ill patients were subjected to videothoracoscopic surgery. PMID:22760892
Pipe Cleaning Operating Procedures
Clark, D.; Wu, J.; /Fermilab
1991-01-24
This cleaning procedure outlines the steps involved in cleaning the high purity argon lines associated with the DO calorimeters. The procedure is broken down into 7 cycles: system setup, initial flush, wash, first rinse, second rinse, final rinse and drying. The system setup involves preparing the pump cart, line to be cleaned, distilled water, and interconnecting hoses and fittings. The initial flush is an off-line flush of the pump cart and its plumbing in order to preclude contaminating the line. The wash cycle circulates the detergent solution (Micro) at 180 degrees Fahrenheit through the line to be cleaned. The first rinse is then intended to rid the line of the majority of detergent and only needs to run for 30 minutes and at ambient temperature. The second rinse (if necessary) should eliminate the remaining soap residue. The final rinse is then intended to be a check that there is no remaining soap or other foreign particles in the line, particularly metal 'chips.' The final rinse should be run at 180 degrees Fahrenheit for at least 90 minutes. The filters should be changed after each cycle, paying particular attention to the wash cycle and the final rinse cycle return filters. These filters, which should be bagged and labeled, prove that the pipeline is clean. Only distilled water should be used for all cycles, especially rinsing. The level in the tank need not be excessive, merely enough to cover the heater float switch. The final rinse, however, may require a full 50 gallons. Note that most of the details of the procedure are included in the initial flush description. This section should be referred to if problems arise in the wash or rinse cycles.
A comparative study of algorithms for radar imaging from gapped data
NASA Astrophysics Data System (ADS)
Xu, Xiaojian; Luan, Ruixue; Jia, Li; Huang, Ying
2007-09-01
In ultra wideband (UWB) radar imagery, there are often cases where the radar's operating bandwidth is interrupted due to various reasons, either periodically or randomly. Such interruption produces phase history data gaps, which in turn result in artifacts in the image if conventional image reconstruction techniques are used. The higher level artifacts severely degrade the radar images. In this work, several novel techniques for artifacts suppression in gapped data imaging were discussed. These include: (1) A maximum entropy based gap filling technique using a modified Burg algorithm (MEBGFT); (2) An alternative iteration deconvolution based on minimum entropy (AIDME) and its modified version, a hybrid max-min entropy procedure; (3) A windowed coherent CLEAN algorithm; and (4) Two-dimensional (2-D) periodically-gapped Capon (PG-Capon) and APES (PG-APES) algorithms. Performance of various techniques is comparatively studied.
A High-Order Finite-Volume Algorithm for Fokker-Planck Collisions in Magnetized Plasmas
Xiong, Z; Cohen, R H; Rognlien, T D; Xu, X Q
2007-04-18
A high-order finite volume algorithm is developed for the Fokker-Planck Operator (FPO) describing Coulomb collisions in strongly magnetized plasmas. The algorithm is based on a general fourth-order reconstruction scheme for an unstructured grid in the velocity space spanned by parallel velocity and magnetic moment. The method provides density conservation and high-order-accurate evaluation of the FPO independent of the choice of the velocity coordinates. As an example, a linearized FPO in constant-of-motion coordinates, i.e. the total energy and the magnetic moment, is developed using the present algorithm combined with a cut-cell merging procedure. Numerical tests include the Spitzer thermalization problem and the return to isotropy for distributions initialized with velocity space loss cones. Utilization of the method for a nonlinear FPO is straightforward but requires evaluation of the Rosenbluth potentials.
Genetic algorithms and MCML program for recovery of optical properties of homogeneous turbid media
Morales Cruzado, Beatriz; y Montiel, Sergio Vázquez; Atencio, José Alberto Delgado
2013-01-01
In this paper, we present and validate a new method for optical properties recovery of turbid media with slab geometry. This method is an iterative method that compares diffuse reflectance and transmittance, measured using integrating spheres, with those obtained using the known algorithm MCML. The search procedure is based in the evolution of a population due to selection of the best individual, i.e., using a genetic algorithm. This new method includes several corrections such as non-linear effects in integrating spheres measurements and loss of light due to the finite size of the sample. As a potential application and proof-of-principle experiment of this new method, we use this new algorithm in the recovery of optical properties of blood samples at different degrees of coagulation. PMID:23504404
On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Hsieh, Shih-Fu
1990-01-01
In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect the changes in the system and take appropriate adjustments to achieve optimum performances. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD will be considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, were investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in details, which include Householder reflector, Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for the RLS filtering problems. In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of any of the new method depends
Wang, Ting; Ren, Zhao; Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L; Sweet, Robert A; Wang, Jieru; Chen, Wei
2016-02-01
Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interaction or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the current implemented algorithm for Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM".
Wang, Ting; Ren, Zhao; Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L; Sweet, Robert A; Wang, Jieru; Chen, Wei
2016-02-01
Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interaction or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the current implemented algorithm for Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM". PMID:26872036
SamACO: variable sampling ant colony optimization algorithm for continuous optimization.
Hu, Xiao-Min; Zhang, Jun; Chung, Henry Shu-Hung; Li, Yun; Liu, Ou
2010-12-01
An ant colony optimization (ACO) algorithm offers algorithmic techniques for optimization by simulating the foraging behavior of a group of ants to perform incremental solution constructions and to realize a pheromone laying-and-following mechanism. Although ACO is first designed for solving discrete (combinatorial) optimization problems, the ACO procedure is also applicable to continuous optimization. This paper presents a new way of extending ACO to solving continuous optimization problems by focusing on continuous variable sampling as a key to transforming ACO from discrete optimization to continuous optimization. The proposed SamACO algorithm consists of three major steps, i.e., the generation of candidate variable values for selection, the ants' solution construction, and the pheromone update process. The distinct characteristics of SamACO are the cooperation of a novel sampling method for discretizing the continuous search space and an efficient incremental solution construction method based on the sampled values. The performance of SamACO is tested using continuous numerical functions with unimodal and multimodal features. Compared with some state-of-the-art algorithms, including traditional ant-based algorithms and representative computational intelligence algorithms for continuous optimization, the performance of SamACO is seen competitive and promising.
Algorithms for skiascopy measurement automatization
NASA Astrophysics Data System (ADS)
Fomins, Sergejs; Trukša, Renārs; KrūmiĆa, Gunta
2014-10-01
Automatic dynamic infrared retinoscope was developed, which allows to run procedure at a much higher rate. Our system uses a USB image sensor with up to 180 Hz refresh rate equipped with a long focus objective and 850 nm infrared light emitting diode as light source. Two servo motors driven by microprocessor control the rotation of semitransparent mirror and motion of retinoscope chassis. Image of eye pupil reflex is captured via software and analyzed along the horizontal plane. Algorithm for automatic accommodative state analysis is developed based on the intensity changes of the fundus reflex.
Post-processing procedure for industrial quantum key distribution systems
NASA Astrophysics Data System (ADS)
Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey
2016-08-01
We present algorithmic solutions aimed on post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of classical public communication channel is also considered.
Efficient Algorithm for Optimizing Adaptive Quantum Metrology Processes
NASA Astrophysics Data System (ADS)
Hentschel, Alexander; Sanders, Barry C.
2011-12-01
Quantum-enhanced metrology infers an unknown quantity with accuracy beyond the standard quantum limit (SQL). Feedback-based metrological techniques are promising for beating the SQL but devising the feedback procedures is difficult and inefficient. Here we introduce an efficient self-learning swarm-intelligence algorithm for devising feedback-based quantum metrological procedures. Our algorithm can be trained with simulated or real-world trials and accommodates experimental imperfections, losses, and decoherence.
Efficient algorithm for optimizing adaptive quantum metrology processes.
Hentschel, Alexander; Sanders, Barry C
2011-12-01
Quantum-enhanced metrology infers an unknown quantity with accuracy beyond the standard quantum limit (SQL). Feedback-based metrological techniques are promising for beating the SQL but devising the feedback procedures is difficult and inefficient. Here we introduce an efficient self-learning swarm-intelligence algorithm for devising feedback-based quantum metrological procedures. Our algorithm can be trained with simulated or real-world trials and accommodates experimental imperfections, losses, and decoherence.
A Parallel Algorithm for the Vehicle Routing Problem
Groer, Christopher S; Golden, Bruce; Edward, Wasil
2011-01-01
The vehicle routing problem (VRP) is a dicult and well-studied combinatorial optimization problem. We develop a parallel algorithm for the VRP that combines a heuristic local search improvement procedure with integer programming. We run our parallel algorithm with as many as 129 processors and are able to quickly nd high-quality solutions to standard benchmark problems. We assess the impact of parallelism by analyzing our procedure's performance under a number of dierent scenarios.
Designing Flightdeck Procedures
NASA Technical Reports Server (NTRS)
Barshi, Immanuel; Mauro, Robert; Degani, Asaf; Loukopoulou, Loukia
2016-01-01
The primary goal of this document is to provide guidance on how to design, implement, and evaluate flight deck procedures. It provides a process for developing procedures that meet clear and specific requirements. This document provides a brief overview of: 1) the requirements for procedures, 2) a process for the design of procedures, and 3) a process for the design of checklists. The brief overview is followed by amplified procedures that follow the above steps and provide details for the proper design, implementation and evaluation of good flight deck procedures and checklists.
Computerized procedures system
Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.
2010-10-12
An online data driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges and revisions are version controlled. The procedures run on a server that is platform independent of the user workstations that the server interfaces with and the user interface supports diverse procedural views.
Astigmatism and diagnostic procedures.
Visnjić, Mirna Belovari; Zrinsćak, Ognjen; Barisić, Freja; Iveković, Renata; Laus, Katia Novak; Mandić, Zdravko
2012-06-01
Astigmatism represents an inability of the cornea and lens to provide a sharp image onto the retina. Correcting astigmatic errors, whether congenital, contact lens induced or surgically induced, is now an integral part of modern cataract and refractive procedures. Development of modern technology has enabled accurate diagnosis and perfect opportunities for correction; however, while cataract and keratorefractive surgery have come a long way in the last decade, the treatment and diagnosis of astigmatism continue to challenge ophthalmologists. There are several diagnostic procedures and tools available today, some standard and some contemporary that include keratometry, corneal topography, apparatus using wavefront or Scheimpflug analysis like Orbscan, Pentacam, Wavescan, etc. With the introduction of several new diagnostic tools, measurements of astigmatism have become less of an issue, but in some cases it is still difficult to obtain consistent results. What remains still unanswered is the question of the best diagnostic tool on the market. Further research is needed to evaluate both tools as well as their clinical application for optimal use. PMID:23115957
Noise-enhanced clustering and competitive learning algorithms.
Osoba, Osonde; Kosko, Bart
2013-01-01
Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning.
Current procedural terminology coding of nuclear medicine procedures.
McKusick, K A; Quaife, M A
1993-01-01
The future of nuclear medicine is dependent on payment for new procedures. Today, the basis of payment by the federal government is a relative value unit (RVU) system; the RVUS employed in this system are for medical services and procedures listed and described in Physicians' Current Procedural Terminology, fourth edition. Current procedural terminology (CPT) is maintained by the AMA; annual revisions include adding new codes or revised or deleted old codes. This process involves all national medical specialty societies. Starting in 1992 a new process, the Relative Updating Committee, which was initiated by the AMA, organized medicine to formalize a method for recommending relative values for physician procedures and services. In this rapidly changing scenario, all nuclear medicine procedure codes are under review by the coding and nomenclature committees of the medical societies interested in imaging. Significant CPT changes and additions were made in the cardiovascular nuclear medicine codes in 1992, reflecting the current imaging protocols and pharmacological agents for performing cardiac stress testing and new codes that recognize combinations of ventricular function measurements in patients undergoing myocardial perfusion imaging with technetium-99m agents.
Tassan, S. )
1993-06-01
An algorithm using AVHRR data has been set up for the detection of a white tide consisting of algae secretion ('mucilage'), an event occurring in the Adriatic Sea under particular meteorological conditions. The algorithm, which includes an ad hoc procedure for cloud masking, has been tested with reference to the mucilage map obtained from the analysis of contemporary Thematic Mapper data, as well as by comparing consecutive AVHRR scenes. The main features of the exceptional mucilage phenomenon that took place in the northern basin of the Adriatic Sea in summer 1989 are shown by a time series of maps.
An automatic and fast centerline extraction algorithm for virtual colonoscopy.
Jiang, Guangxiang; Gu, Lixu
2005-01-01
This paper introduces a new refined centerline extraction algorithm, which is based on and significantly improved from distance mapping algorithms. The new approach include three major parts: employing a colon segmentation method; designing and realizing a fast Euclidean Transform algorithm and inducting boundary voxels cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the whole performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm was more efficient and accurate comparing with existing algorithms. PMID:17281406
Algorithm for remote sensing of land surface temperature
NASA Astrophysics Data System (ADS)
AlSultan, Sultan; Lim, H. S.; MatJafri, M. Z.; Abdullah, K.
2008-10-01
This study employs the developed algorithm for retrieving land surface temperature (LST) from Landsat TM over Saudi Arabia. The algorithm is a mono window algorithm because the Landsat TM has only one thermal band between wavelengths of 10.44-12.42 μm. The proposed algorithm included three parameters, brightness temperature, surface emissivity and incoming solar radiation in the algorithm regression analysis. The LST estimated by the proposed developed algorithm and the LST values produced using ATCORT2_T in the PCI Geomatica 9.1 image processing software were compared. The mono window algorithm produced high accuracy LST values using Landsat TM data.
Research on numerical algorithms for large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1981-01-01
Numerical algorithms for analysis and design of large space structures are investigated. The sign algorithm and its application to decoupling of differential equations are presented. The generalized sign algorithm is given and its application to several problems discussed. The Laplace transforms of matrix functions and the diagonalization procedure for a finite element equation are discussed. The diagonalization of matrix polynomials is considered. The quadrature method and Laplace transforms is discussed and the identification of linear systems by the quadrature method investigated.
A Simulated Annealing Procedure for the Joint Inversion of Spectroscopic and Compositional Data.
NASA Astrophysics Data System (ADS)
Seelos, F. P.; Arvidson, R. E.
2001-12-01
A simulated annealing algorithm capable of inverting thermal emission spectra and compositional data acquired from a common geologic target has been developed. The inversion allows for the identification and proportion estimation of low concentration mineral endmembers. This method will be especially applicable to the 2007 Mars Mobile Geobiology Explorer equipped with an emission spectrometer for mineralogical analyses and a Laser Induced Breakdown Spectrometer for the remote acquisition of elemental information. The coupled inversion is cast as a multidimensional minimization problem where the hyperspace volume to be investigated is defined by the library endmembers at the disposal of the algorithm. This is a vector space in which exists all possible combinations of the library endmembers, with the endmember suite serving as an orthogonal set of basis vectors that span the hyperspace. The goal of the minimization is to locate the hyperspace coordinate that has the lowest associated model error value. This will correspond to the best possible model composition and mineralogy that can be generated by linearly mixing members of the endmember mineral suite. As opposed to standard unmixing routines, the simulated annealing algorithm is flexible enough to minimize any type of model error function. This allows the algorithm to interpret elemental analyses at any level of rigor, including elemental presence, relative abundances, abundance ratios, or exact mole percent. A synthetic data set was developed and systematically degraded with noise of various form and magnitude prior to being inverted with the simulated annealing algorithm as well as two purely spectral unmixing procedures. The simulated annealing procedure outperformed both of the alternate algorithms with an overall factor of two improvement in the mean sum of squares of deviations in the solution parameters. The detailed results from synthetic data inversions as well as the analysis of laboratory data will be
Analysis and Evaluation of GPM Pre-launch Algorithms
NASA Astrophysics Data System (ADS)
Chandrasekar, Venkatachalam; Le, Minda
2014-05-01
The Global Precipitation Measurement (GPM) mission is the next satellite mission to obtain global precipitation measurements following success of TRMM (Tropical Rainfall Measuring Mission). GPM will be launched on February 28, 2014. The GPM mission architecture consists of satellite instruments flying within a constellation to provide accurate precipitation measurements around the globe every 2 to 4 hours and the its orbits cover up to 65 degree latitude of the earth. The GPM core satellite will be equipped with a dual-frequency precipitation radar (DPR) operating at Ku- (13.6 GHz) and Ka- (35.5 GHz) band. DPR on aboard the GPM core satellite is expected to improve our knowledge of precipitation processes relative to the single-frequency (Ku- band) radar used in TRMM by providing greater dynamic range, more detailed information on microphysics, and better accuracies in rainfall and liquid water content retrievals. New Ka- band channel observation of DPR will help to improve the detection thresholds for light rain and snow relative to TRMM PR. The dual-frequency signals will allow us to distinguish regions of liquid, frozen, and mixed-phase precipitation. GPM-DPR level 2 pre-launch algorithms include seven modules. Classification module plays a critical function in the retrieval system of DPR. The outputs of the classification module determine the nature of microphysical models and algorithms to be used in the retrievals. Classification module involves two main aspects: 1) precipitation type classification, including classifying stratiform, convective, and other rain type; and 2) hydrometeor profile characterization or hydrometeor phase state detection. DPR offers dual-frequency observations along the vertical profile, which provides additional information for investigating the microphysical properties using the difference in measured radar reflectivities at the two frequencies, a quantity often called the measured dual-frequency ratio (DFRm). The vertical profile
Reasoning about systolic algorithms
Purushothaman, S.
1986-01-01
Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tesselating interconnection of identical processing elements. This dissertation investigates the problem of providing correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll alpha concentration, (Chl alpha) and gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.
NASA Technical Reports Server (NTRS)
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
Collected radiochemical and geochemical procedures
Kleinberg, J
1990-05-01
This revision of LA-1721, 4th Ed., Collected Radiochemical Procedures, reflects the activities of two groups in the Isotope and Nuclear Chemistry Division of the Los Alamos National Laboratory: INC-11, Nuclear and radiochemistry; and INC-7, Isotope Geochemistry. The procedures fall into five categories: I. Separation of Radionuclides from Uranium, Fission-Product Solutions, and Nuclear Debris; II. Separation of Products from Irradiated Targets; III. Preparation of Samples for Mass Spectrometric Analysis; IV. Dissolution Procedures; and V. Geochemical Procedures. With one exception, the first category of procedures is ordered by the positions of the elements in the Periodic Table, with separate parts on the Representative Elements (the A groups); the d-Transition Elements (the B groups and the Transition Triads); and the Lanthanides (Rare Earths) and Actinides (the 4f- and 5f-Transition Elements). The members of Group IIIB-- scandium, yttrium, and lanthanum--are included with the lanthanides, elements they resemble closely in chemistry and with which they occur in nature. The procedures dealing with the isolation of products from irradiated targets are arranged by target element.
Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz
2009-01-01
This slide presentation reviews the approach to the development of the Global Precipitation Measurement algorithm. This presentation includes information about the responsibilities for the development of the algorithm, and the calibration. Also included is information about the orbit, and the sun angle. The test of the algorithm code will be done with synthetic data generated from the Precipitation Processing System (PPS).
INSENS classification algorithm report
Hernandez, J.E.; Frerking, C.J.; Myers, D.W.
1993-07-28
This report describes a new algorithm developed for the Imigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
An Efficient Reachability Analysis Algorithm
NASA Technical Reports Server (NTRS)
Vatan, Farrokh; Fijany, Amir
2008-01-01
A document discusses a new algorithm for generating higher-order dependencies for diagnostic and sensor placement analysis when a system is described with a causal modeling framework. This innovation will be used in diagnostic and sensor optimization and analysis tools. Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in-situ platforms. This algorithm will serve as a power tool for technologies that satisfy a key requirement of autonomous spacecraft, including science instruments and in-situ missions.
Quality control algorithms for rainfall measurements
NASA Astrophysics Data System (ADS)
Golz, Claudia; Einfalt, Thomas; Gabella, Marco; Germann, Urs
2005-09-01
One of the basic requirements for a scientific use of rain data from raingauges, ground and space radars is data quality control. Rain data could be used more intensively in many fields of activity (meteorology, hydrology, etc.), if the achievable data quality could be improved. This depends on the available data quality delivered by the measuring devices and the data quality enhancement procedures. To get an overview of the existing algorithms a literature review and literature pool have been produced. The diverse algorithms have been evaluated to meet VOLTAIRE objectives and sorted in different groups. To test the chosen algorithms an algorithm pool has been established, where the software is collected. A large part of this work presented here is implemented in the scope of the EU-project VOLTAIRE ( Validati on of mu ltisensors precipit ation fields and numerical modeling in Mediter ran ean test sites).
Hesitant fuzzy agglomerative hierarchical clustering algorithms
NASA Astrophysics Data System (ADS)
Zhang, Xiaolu; Xu, Zeshui
2015-02-01
Recently, hesitant fuzzy sets (HFSs) have been studied by many researchers as a powerful tool to describe and deal with uncertain data, but relatively, very few studies focus on the clustering analysis of HFSs. In this paper, we propose a novel hesitant fuzzy agglomerative hierarchical clustering algorithm for HFSs. The algorithm considers each of the given HFSs as a unique cluster in the first stage, and then compares each pair of the HFSs by utilising the weighted Hamming distance or the weighted Euclidean distance. The two clusters with smaller distance are jointed. The procedure is then repeated time and again until the desirable number of clusters is achieved. Moreover, we extend the algorithm to cluster the interval-valued hesitant fuzzy sets, and finally illustrate the effectiveness of our clustering algorithms by experimental results.
Genetic algorithms for the vehicle routing problem
NASA Astrophysics Data System (ADS)
Volna, Eva
2016-06-01
The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. This problem consists in designing the optimal set of routes for fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization. These algorithms have been found to be very effective and robust in solving numerous problems from a wide range of application domains. This problem is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions, so they can be found fast enough and are sufficiently accurate for the purpose. In this paper we have performed an experimental study that indicates the suitable use of genetic algorithms for the vehicle routing problem.
24 CFR 266.648 - Items included in total loss.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Contract Rights and Obligations Claim Procedures § 266.648 Items included in total loss. In computing the... assessments, and water bills that are liens before the Mortgage; and (2) Fire and hazard insurance on...
An improved algorithm for wildfire detection
NASA Astrophysics Data System (ADS)
Nakau, K.
2010-12-01
consider the way to cancel sunlight reflection. In this study, author utilizes simple linear correction for estimation of infrared emission considering sunlight reflection. As well as bran new core part of wildfire algorithm, we need to eliminate bright reflectance matters, including cloud, desert and sun glint. Also, we need to eliminate the false alarms at coastal area for difference of surface temperature between land and ocean. An existing algorithm MOD14 has same procedure, however, some of these ancillary parts are newly introduced or improved. Snow mask is newly introduced to reduce a bright reflectance with snow and ice covered area. Also, the improved ancillary parts include candidate selection of fire pixel, cloud mask, water body mask. With these improvements, wildfire with dense smoke or wildfire under thin cloud could be detected by this algorithm. This wild fire product is not validated by ground observations yet. However, distribution is well corresponded with wildfire location in same periods. Unfortunately, this algorithm also detects false alarm in urban area same as existing one. This should be corrected adopting other bands. Current algorithm will be performed in JASMES website.
Some Forerunners of Cloze Procedure.
ERIC Educational Resources Information Center
Harris, David P.
1985-01-01
Looks at some of the methods of testing foreign language learning which are closely related to, yet pre-date, Wilson Taylor's cloze procedure. These include the Ebbinghaus Completion Method, which was first reported in 1897 and was used to test mental ability. Describes later modifications of the Ebbinghaus Method. (SED)
Excursion-Set-Mediated Genetic Algorithm
NASA Technical Reports Server (NTRS)
Noever, David; Baskaran, Subbiah
1995-01-01
Excursion-set-mediated genetic algorithm (ESMGA) is embodiment of method of searching for and optimizing computerized mathematical models. Incorporates powerful search and optimization techniques based on concepts analogous to natural selection and laws of genetics. In comparison with other genetic algorithms, this one achieves stronger condition for implicit parallelism. Includes three stages of operations in each cycle, analogous to biological generation.
Algorithm for genome contig assembly. Final report
1995-09-01
An algorithm was developed for genome contig assembly which extended the range of data types that could be included in assembly and which ran on the order of a hundred times faster than the algorithm it replaced. Maps of all existing cosmid clone and YAC data at the Human Genome Information Resource were assembled using ICA. The resulting maps are summarized.
IUS guidance algorithm gamma guide assessment
NASA Technical Reports Server (NTRS)
Bray, R. E.; Dauro, V. A.
1980-01-01
The Gamma Guidance Algorithm which controls the inertial upper stage is described. The results of an independent assessment of the algorithm's performance in satisfying the NASA missions' targeting objectives are presented. The results of a launch window analysis for a Galileo mission, and suggested improvements are included.
Streamwise Upwind, Moving-Grid Flow Algorithm
NASA Technical Reports Server (NTRS)
Goorjian, Peter M.; Guruswamy, Guru P.; Obayashi, Shigeru
1992-01-01
Extension to moving grids enables computation of transonic flows about moving bodies. Algorithm computes unsteady transonic flow on basis of nondimensionalized thin-layer Navier-Stokes equations in conservation-law form. Solves equations by use of computational grid based on curvilinear coordinates conforming to, and moving with, surface(s) of solid body or bodies in flow field. Simulates such complicated phenomena as transonic flow (including shock waves) about oscillating wing. Algorithm developed by extending prior streamwise upwind algorithm solving equations on fixed curvilinear grid described in "Streamwise Algorithm for Simulation of Flow" (ARC-12718).
A parallel algorithm for global routing
NASA Technical Reports Server (NTRS)
Brouwer, Randall J.; Banerjee, Prithviraj
1990-01-01
A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.
Pollutant Assessments Group Procedures Manual: Volume 1, Administrative and support procedures
Not Available
1992-03-01
This manual describes procedures currently in use by the Pollutant Assessments Group. The manual is divided into two volumes: Volume 1 includes administrative and support procedures, and Volume 2 includes technical procedures. These procedures are revised in an ongoing process to incorporate new developments in hazardous waste assessment technology and changes in administrative policy. Format inconsistencies will be corrected in subsequent revisions of individual procedures. The purpose of the Pollutant Assessments Groups Procedures Manual is to provide a standardized set of procedures documenting in an auditable manner the activities performed by the Pollutant Assessments Group (PAG) of the Health and Safety Research Division (HASRD) of the Environmental Measurements and Applications Section (EMAS) at Oak Ridge National Laboratory (ORNL). The Procedures Manual ensures that the organizational, administrative, and technical activities of PAG conform properly to protocol outlined by funding organizations. This manual also ensures that the techniques and procedures used by PAG and other contractor personnel meet the requirements of applicable governmental, scientific, and industrial standards. The Procedures Manual is sufficiently comprehensive for use by PAG and contractor personnel in the planning, performance, and reporting of project activities and measurements. The Procedures Manual provides procedures for conducting field measurements and includes program planning, equipment operation, and quality assurance elements. Successive revisions of this manual will be archived in the PAG Document Control Department to facilitate tracking of the development of specific procedures.
Developing policies and procedures.
Randolph, Susan A
2006-11-01
The development of policies and procedures is an integral part of the occupational health nurse's role. Policies and procedures serve as the foundation for the occupational health service and are based on its vision, mission, culture, and values. The design and layout selected for the policies and procedures should be simple, consistent, and easy to use. The same format should be used for all existing and new policies and procedures. Policies and procedures should be reviewed periodically based on a specified time frame (i.e., annually). However, some policies may require a more frequent review if they involve rapidly changing external standards, ethical issues, or emerging exposures. PMID:17124968
Fast training algorithms for multilayer neural nets.
Brent, R P
1991-01-01
An algorithm that is faster than back-propagation and for which it is not necessary to specify the number of hidden units in advance is described. The relationship with other fast pattern-recognition algorithms, such as algorithms based on k-d trees, is discussed. The algorithm has been implemented and tested on artificial problems, such as the parity problem, and on real problems arising in speech recognition. Experimental results, including training times and recognition accuracy, are given. Generally, the algorithm achieves accuracy as good as or better than nets trained using back-propagation. Accuracy is comparable to that for the nearest-neighbor algorithm, which is slower and requires more storage space.
Visualizing output for a data learning algorithm
NASA Astrophysics Data System (ADS)
Carson, Daniel; Graham, James; Ternovskiy, Igor
2016-05-01
This paper details the process we went through to visualize the output for our data learning algorithm. We have been developing a hierarchical self-structuring learning algorithm based around the general principles of the LaRue model. One example of a proposed application of this algorithm would be traffic analysis, chosen because it is conceptually easy to follow and there is a significant amount of already existing data and related research material with which to work with. While we choose the tracking of vehicles for our initial approach, it is by no means the only target of our algorithm. Flexibility is the end goal, however, we still need somewhere to start. To that end, this paper details our creation of the visualization GUI for our algorithm, the features we included and the initial results we obtained from our algorithm running a few of the traffic based scenarios we designed.
NASA Astrophysics Data System (ADS)
Neta, B.; Mansager, B.
1992-08-01
Audio information concerning targets generally includes direction, frequencies, and energy levels. One use of audio cueing is to use direction information to help determine where more sensitive visual direction and acquisition sensors should be directed. Generally, use of audio cueing will shorten times required for visual detection, although there could be circumstances where the audio information is misleading and degrades visual performance. Audio signatures can also be useful for helping classify the emanating platform, as well as to provide estimates of its velocity. The Janus combat simulation is the premier high resolution model used by the Army and other agencies to conduct research. This model has a visual detection model which essentially incorporates algorithms as described by Hartman(1985). The model in its current form does not have any sound cueing capability. This report is part of a research effort to investigate the utility of developing such a capability.
POSE Algorithms for Automated Docking
NASA Technical Reports Server (NTRS)
Heaton, Andrew F.; Howard, Richard T.
2011-01-01
POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
SDR Input Power Estimation Algorithms
NASA Technical Reports Server (NTRS)
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
SDR input power estimation algorithms
NASA Astrophysics Data System (ADS)
Briones, J. C.; Nappier, J. M.
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
An algorithmic approach to crustal deformation analysis
NASA Technical Reports Server (NTRS)
Iz, Huseyin Baki
1987-01-01
In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.
Efficient computer algebra algorithms for polynomial matrices in control design
NASA Technical Reports Server (NTRS)
Baras, J. S.; Macenany, D. C.; Munach, R.
1989-01-01
The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, cannonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field and Gaussian elimination play a fundamental role in understanding the triangularization process. In the case of polynomial matrices, matrices with entries from a ring for which Gaussian elimination is not defined and triangularization is accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent entirely such numerical issues through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data--the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expressions swell.
An algorithm for a generalization of the Richardson extrapolation process
NASA Technical Reports Server (NTRS)
Ford, William F.; Sidi, Avram
1987-01-01
The paper presents a recursive method, designated the W exp (m)-algorithm, for implementing a generalization of the Richardson extrapolation process. Compared to the direct solution of the linear sytems of equations defining the extrapolation procedure, this method requires a small number of arithmetic operations and very little storage. The technique is also applied to solve recursively the coefficient problem associated with the rational approximations obtained by applying a d-transformation to power series. In the course of development a new recursive algorithm for implementing a very general extrapolation procedure is introduced, for solving the same problem. A FORTRAN program for the W exp (m)-algorithm is also appended.
Landsat classification accuracy assessment procedures
Mead, R. R.; Szajgin, John
1982-01-01
A working conference was held in Sioux Falls, South Dakota, 12-14 November, 1980 dealing with Landsat classification Accuracy Assessment Procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.
Palliative Procedures in Lung Cancer
Masuda, Emi; Sista, Akhilesh K.; Pua, Bradley B.; Madoff, David C.
2013-01-01
Palliative care aims to optimize comfort and function when cure is not possible. Image-guided interventions for palliative treatment of lung cancer is aimed at local control of advanced disease in the affected lung, adjacent mediastinal structures, or distant metastatic sites. These procedures include endovascular therapy for superior vena cava syndrome, bronchial artery embolization for hemoptysis associated with lung cancer, and ablation of osseous metastasis. Pathophysiology, clinical presentation, indications of these palliative treatments, procedural techniques, complications, and possible future interventions are discussed in this article. PMID:24436537
Model Specification Searches Using Ant Colony Optimization Algorithms
ERIC Educational Resources Information Center
Marcoulides, George A.; Drezner, Zvi
2003-01-01
Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.
2011-01-01
Background Position-specific priors (PSP) have been used with success to boost EM and Gibbs sampler-based motif discovery algorithms. PSP information has been computed from different sources, including orthologous conservation, DNA duplex stability, and nucleosome positioning. The use of prior information has not yet been used in the context of combinatorial algorithms. Moreover, priors have been used only independently, and the gain of combining priors from different sources has not yet been studied. Results We extend RISOTTO, a combinatorial algorithm for motif discovery, by post-processing its output with a greedy procedure that uses prior information. PSP's from different sources are combined into a scoring criterion that guides the greedy search procedure. The resulting method, called GRISOTTO, was evaluated over 156 yeast TF ChIP-chip sequence-sets commonly used to benchmark prior-based motif discovery algorithms. Results show that GRISOTTO is at least as accurate as other twelve state-of-the-art approaches for the same task, even without combining priors. Furthermore, by considering combined priors, GRISOTTO is considerably more accurate than the state-of-the-art approaches for the same task. We also show that PSP's improve GRISOTTO ability to retrieve motifs from mouse ChiP-seq data, indicating that the proposed algorithm can be applied to data from a different technology and for a higher eukaryote. Conclusions The conclusions of this work are twofold. First, post-processing the output of combinatorial algorithms by incorporating prior information leads to a very efficient and effective motif discovery method. Second, combining priors from different sources is even more beneficial than considering them separately. PMID:21513505
Writer`s guide for technical procedures
1998-12-01
A primary objective of operations conducted in the US Department of Energy (DOE) complex is safety. Procedures are a critical element of maintaining a safety envelope to ensure safe facility operation. This DOE Writer`s Guide for Technical Procedures addresses the content, format, and style of technical procedures that prescribe production, operation of equipment and facilities, and maintenance activities. The DOE Writer`s Guide for Management Control Procedures and DOE Writer`s Guide for Emergency and Alarm Response Procedures are being developed to assist writers in developing nontechnical procedures. DOE is providing this guide to assist writers across the DOE complex in producing accurate, complete, and usable procedures that promote safe and efficient operations that comply with DOE orders, including DOE Order 5480.19, Conduct of Operations for DOE Facilities, and 5480.6, Safety of Department of Energy-Owned Nuclear Reactors.
Atmospheric Correction Algorithm for Hyperspectral Imagery
R. J. Pollina
1999-09-01
In December 1997, the US Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). These results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. Data will be shown to demonstrate how the reflectance retrieval in the shorter wavelengths of the blue-green region will be improved because of enhanced modeling of multiple scattering effects.
NASA Technical Reports Server (NTRS)
Ramirez, Daniel Perez; Lyamani, H.; Olmo, F. J.; Whiteman, D. N.; Navas-Guzman, F.; Alados-Arboledas, L.
2012-01-01
This paper presents the development and set up of a cloud screening and data quality control algorithm for a star photometer based on CCD camera as detector. These algorithms are necessary for passive remote sensing techniques to retrieve the columnar aerosol optical depth, delta Ae(lambda), and precipitable water vapor content, W, at nighttime. This cloud screening procedure consists of calculating moving averages of delta Ae() and W under different time-windows combined with a procedure for detecting outliers. Additionally, to avoid undesirable Ae(lambda) and W fluctuations caused by the atmospheric turbulence, the data are averaged on 30 min. The algorithm is applied to the star photometer deployed in the city of Granada (37.16 N, 3.60 W, 680 ma.s.l.; South-East of Spain) for the measurements acquired between March 2007 and September 2009. The algorithm is evaluated with correlative measurements registered by a lidar system and also with all-sky images obtained at the sunset and sunrise of the previous and following days. Promising results are obtained detecting cloud-affected data. Additionally, the cloud screening algorithm has been evaluated under different aerosol conditions including Saharan dust intrusion, biomass burning and pollution events.
Candidate CDTI procedures study
NASA Technical Reports Server (NTRS)
Ace, R. E.
1981-01-01
A concept with potential for increasing airspace capacity by involving the pilot in the separation control loop is discussed. Some candidate options are presented. Both enroute and terminal area procedures are considered and, in many cases, a technologically advanced Air Traffic Control structure is assumed. Minimum display characteristics recommended for each of the described procedures are presented. Recommended sequencing of the operational testing of each of the candidate procedures is presented.
The computational structural mechanics testbed procedures manual
NASA Technical Reports Server (NTRS)
Stewart, Caroline B. (Compiler)
1991-01-01
The purpose of this manual is to document the standard high level command language procedures of the Computational Structural Mechanics (CSM) Testbed software system. A description of each procedure including its function, commands, data interface, and use is presented. This manual is designed to assist users in defining and using command procedures to perform structural analysis in the CSM Testbed User's Manual and the CSM Testbed Data Library Description.
Apollo experience report: Systems and flight procedures development
NASA Technical Reports Server (NTRS)
Kramer, P. C.
1973-01-01
This report describes the process of crew procedures development used in the Apollo Program. The two major categories, Systems Procedures and Flight Procedures, are defined, as are the forms of documentation required. A description is provided of the operation of the procedures change control process, which includes the roles of man-in-the-loop simulations and the Crew Procedures Change Board. Brief discussions of significant aspects of the attitude control, computer, electrical power, environmental control, and propulsion subsystems procedures development are presented. Flight procedures are subdivided by mission phase: launch and translunar injection, rendezvous, lunar descent and ascent, and entry. Procedures used for each mission phase are summarized.
HEATR project: ATR algorithm parallelization
NASA Astrophysics Data System (ADS)
Deardorf, Catherine E.
1998-09-01
High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model- based and training-based (template-based) arena in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.
Including Conflict in Creative Writing.
ERIC Educational Resources Information Center
Litvin, Martin
Conflict is the basis of all stories and thus should appear in some form in the first sentence. There are three kinds of conflict: people vs. people; people vs. nature; and people vs. themselves. Conflict must be repeated in all the various elements of the story's structure, including the plot, which is the plan of action telling what happens to…
Family Living, Including Sex Education.
ERIC Educational Resources Information Center
Forlano, George
This volume describes and evaluates 21 selected New York City Board of Education Umbrella Programs for the 1974-1975 school year. The programs include: (1) the parent resource center, (2) the teacher self-help program, (3) the East Harlem pre-kindergarten center, (4) the Brooklyn College volunteer tutoring program, (5) the parent education for…
21 CFR 211.100 - Written procedures; deviations.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 4 2014-04-01 2014-04-01 false Written procedures; deviations. 211.100 Section... Process Controls § 211.100 Written procedures; deviations. (a) There shall be written procedures for... in this subpart. These written procedures, including any changes, shall be drafted, reviewed,...
21 CFR 211.100 - Written procedures; deviations.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 4 2010-04-01 2010-04-01 false Written procedures; deviations. 211.100 Section... Process Controls § 211.100 Written procedures; deviations. (a) There shall be written procedures for... in this subpart. These written procedures, including any changes, shall be drafted, reviewed,...
Reasoning about systolic algorithms
Purushothaman, S.; Subrahmanyam, P.A.
1988-12-01
The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms, using the Boyer-Moore theorem proven. A mechanical correctness proof of an example from the literature is also presented.
High-resolution algorithms for the Navier-Stokes equations for generalized discretizations
NASA Astrophysics Data System (ADS)
Mitchell, Curtis Randall
Accurate finite volume solution algorithms for the two dimensional Navier Stokes equations and the three dimensional Euler equations for both structured and unstructured grid topologies are presented. Results for two dimensional quadrilateral and triangular elements and three dimensional tetrahedral elements will be provided. Fundamental to the solution algorithm is a technique for generating multidimensional polynomials which model the spatial variation of the flow variables. Cell averaged data is used to reconstruct pointwise distributions of the dependent variables. The reconstruction errors are evaluated on triangular meshes. The implementation of the algorithm is unique in that three reconstructions are performed for each cell face in the domain. Two of the reconstructions are used to evaluate the inviscid fluxes and correspond to the right and left interface states needed for the solution of a Riemann problem. The third reconstruction is used to evaluate the viscous fluxes. The gradient terms that appear in the viscous fluxes are formed by simply differentiating the polynomial. By selecting the appropriate cell control volumes, centered, upwind and upwind-biased stencils are possible. Numerical calculations in two dimensions include solutions to elliptic boundary value problems, Ringlebs' flow, an inviscid shock reflection, a flat plate boundary layer, and a shock induced separation over a flat plate. Three dimensional results include the ONERA M6 wing. All of the unstructured grids were generated using an advancing front mesh generation procedure. Modifications to the three dimensional grid generator were necessary to discretize the surface grids for bodies with high curvature. In addition, mesh refinement algorithms were implemented to improve the surface grid integrity. Examples include a Glasair fuselage, High Speed Civil Transport, and the ONERA M6 wing. The role of reconstruction as applied to adaptive remeshing is discussed and a new first order error
Atmospheric channel for bistatic optical communication: simulation algorithms
NASA Astrophysics Data System (ADS)
Belov, V. V.; Tarasenkov, M. V.
2015-11-01
Three algorithms of statistical simulation of the impulse response (IR) for the atmospheric optical communication channel are considered, including algorithms of local estimate and double local estimate and the algorithm suggested by us. On the example of a homogeneous molecular atmosphere it is demonstrated that algorithms of double local estimate and the suggested algorithm are more efficient than the algorithm of local estimate. For small optical path length, the proposed algorithm is more efficient, and for large optical path length, the algorithm of double local estimate is more efficient. Using the proposed algorithm, the communication quality is estimated for a particular case of the atmospheric channel under conditions of intermediate turbidity. The communication quality is characterized by the maximum IR, time of maximum IR, integral IR, and bandwidth of the communication channel. Calculations of these criteria demonstrated that communication is most efficient when the point of intersection of the directions toward the source and the receiver is most close to the source point.
Inflight IFR procedures simulator
NASA Technical Reports Server (NTRS)
Parker, L. C. (Inventor)
1984-01-01
An inflight IFR procedures simulator for generating signals and commands to conventional instruments provided in an airplane is described. The simulator includes a signal synthesizer which generates predetermined simulated signals corresponding to signals normally received from remote sources upon being activated. A computer is connected to the signal synthesizer and causes the signal synthesizer to produce simulated signals responsive to programs fed into the computer. A switching network is connected to the signal synthesizer, the antenna of the aircraft, and navigational instruments and communication devices for selectively connecting instruments and devices to the synthesizer and disconnecting the antenna from the navigational instruments and communication device. Pressure transducers are connected to the altimeter and speed indicator for supplying electrical signals to the computer indicating the altitude and speed of the aircraft. A compass is connected for supply electrical signals for the computer indicating the heading of the airplane. The computer upon receiving signals from the pressure transducer and compass, computes the signals that are fed to the signal synthesizer which, in turn, generates simulated navigational signals.
NASA Technical Reports Server (NTRS)
Obrien, Maureen E.
1990-01-01
Telerobotic operations, whether under autonomous or teleoperated control, require a much more sophisticated safety system than that needed for most industrial applications. Industrial robots generally perform very repetitive tasks in a controlled, static environment. The safety system in that case can be as simple as shutting down the robot if a human enters the work area, or even simply building a cage around the work space. Telerobotic operations, however, will take place in a dynamic, sometimes unpredictable environment, and will involve complicated and perhaps unrehearsed manipulations. This creates a much greater potential for damage to the robot or objects in its vicinity. The Procedural Safety System (PSS) collects data from external sensors and the robot, then processes it through an expert system shell to determine whether an unsafe condition or potential unsafe condition exists. Unsafe conditions could include exceeding velocity, acceleration, torque, or joint limits, imminent collision, exceeding temperature limits, and robot or sensor component failure. If a threat to safety exists, the operator is warned. If the threat is serious enough, the robot is halted. The PSS, therefore, uses expert system technology to enhance safety thus reducing operator work load, allowing him/her to focus on performing the task at hand without the distraction of worrying about violating safety criteria.
49 CFR 383.131 - Test procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) Information on the requirements described in § 383.71, the implied consent to alcohol testing described in... refusal to comply with such alcohol testing, State procedures described in § 383.73, and other appropriate...; (4) Details of testing procedures, including the purpose of the tests, how to respond, any...
18 CFR 1308.32 - Prehearing procedures.
Code of Federal Regulations, 2010 CFR
2010-04-01
... as used in those Rules shall be deemed to mean “Hearing Officer”; the term plaintiff shall be deemed... in this part, prehearing procedures, including discovery, shall be conducted in accordance with Rules 6, 7(b), 16, 26, 28-37, and 56 of the Federal Rules of Civil Procedure, except that the...
Irrigation customer survey procedures and results
Harrer, B.J.; Johnston, J.W.; Dase, J.E.; Hattrup, M.P.; Reed, G.
1987-03-01
This report describes the statistical procedures, administrative procedures, and results of a telephone survey designed to collect primary data from individuals in the Pacific Northwest region who use electricity in irrigating agricultural crops. The project was intended to collect data useful for a variety of purposes, including conservation planning, load forecasting, and rate design.
14 CFR 23.1585 - Operating procedures.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., stalling); (4) Procedures for restarting any turbine engine in flight, including the effects of altitude... this section, for all single-engine airplanes, the procedures, speeds, and configuration(s) for a glide following engine failure, in accordance with § 23.71 and the subsequent forced landing, must be...
14 CFR 183.53 - Procedures manual.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Procedures manual. 183.53 Section 183.53... manual. No ODA Letter of Designation may be issued before the Administrator approves an applicant's procedures manual. The approved manual must: (a) Be available to each member of the ODA Unit; (b) Include...
Multiple Comparison Procedures when Population Variances Differ.
ERIC Educational Resources Information Center
Olejnik, Stephen; Lee, JaeShin
A review of the literature on multiple comparison procedures suggests several alternative approaches for comparing means when population variances differ. These include: (1) the approach of P. A. Games and J. F. Howell (1976); (2) C. W. Dunnett's C confidence interval (1980); and (3) Dunnett's T3 solution (1980). These procedures control the…
40 CFR 1507.3 - Agency procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Agency procedures. 1507.3 Section 1507.3 Protection of Environment COUNCIL ON ENVIRONMENTAL QUALITY AGENCY COMPLIANCE § 1507.3 Agency... environmental impact statements. (c) Agency procedures may include specific criteria for providing...
40 CFR 1507.3 - Agency procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Agency procedures. 1507.3 Section 1507.3 Protection of Environment COUNCIL ON ENVIRONMENTAL QUALITY AGENCY COMPLIANCE § 1507.3 Agency... environmental impact statements. (c) Agency procedures may include specific criteria for providing...
40 CFR 1507.3 - Agency procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Agency procedures. 1507.3 Section 1507.3 Protection of Environment COUNCIL ON ENVIRONMENTAL QUALITY AGENCY COMPLIANCE § 1507.3 Agency... environmental impact statements. (c) Agency procedures may include specific criteria for providing...
48 CFR 2842.1503 - Procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... CONTRACT ADMINISTRATION Contractor Performance Information 2842.1503 Procedures. Past performance evaluation procedures and systems shall include, to the greatest practicable extent, the evaluation and performance rating factors set forth in the Office of Federal Procurement Policy best practices guide for...
Procedure to Generate the MPACT Multigroup Library
Kim, Kang Seog
2015-12-17
The CASL neutronics simulator MPACT is under development for the neutronics and T-H coupled simulation for the light water reactor. The objective of this document is focused on reviewing the current procedure to generate the MPACT multigroup library. Detailed methodologies and procedures are included in this document for further discussion to improve the MPACT multigroup library.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Procedures. 410.002 Section 410.002 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE COMPETITION AND ACQUISITION PLANNING MARKET RESEARCH 410.002 Procedures. Market research must include obtaining information...
14 CFR 23.1585 - Operating procedures.
Code of Federal Regulations, 2014 CFR
2014-01-01
... airplane and loss of control (for example, stalling); (4) Procedures for restarting any turbine engine in... normal approach and landing, in accordance with §§ 23.73 and 23.75, and a transition to the balked... in § 23.149; and (4) Procedures for restarting any engine in flight including the effects of...
How to Write Effective Procedure Manuals.
ERIC Educational Resources Information Center
Wold, Geoffrey H.
1987-01-01
Describes six key guidelines for developing usable procedure manuals, including defining the audience; designing a standard format; preparing an outline; using a clear, concise writing style; testing the procedures; and "finalizing" the product with indices, glossaries, appendices, and section tabs. Well-written manuals can increase employee…
14 CFR 183.53 - Procedures manual.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Procedures manual. 183.53 Section 183.53... manual. No ODA Letter of Designation may be issued before the Administrator approves an applicant's procedures manual. The approved manual must: (a) Be available to each member of the ODA Unit; (b) Include...
Code of Federal Regulations, 2012 CFR
2012-04-01
... PROCEDURES Bridges on Federal Dams § 630.803 Procedures. A State's application to qualify a project under this subpart will include: (a) A certification that the bridge is economically desirable and needed as... funds to be used in construction of the roadway approaches. (c) A statement of any obligation on...
Torrens, Francisco; Castellano, Gloria
2014-01-01
Pesticide residues in wine were analyzed by liquid chromatography-tandem mass spectrometry. Retentions are modelled by structure-property relationships. Bioplastic evolution is an evolutionary perspective conjugating effect of acquired characters and evolutionary indeterminacy-morphological determination-natural selection principles; its application to design co-ordination index barely improves correlations. Fractal dimensions and partition coefficient differentiate pesticides. Classification algorithms are based on information entropy and its production. Pesticides allow a structural classification by nonplanarity, and number of O, S, N and Cl atoms and cycles; different behaviours depend on number of cycles. The novelty of the approach is that the structural parameters are related to retentions. Classification algorithms are based on information entropy. When applying procedures to moderate-sized sets, excessive results appear compatible with data suffering a combinatorial explosion. However, equipartition conjecture selects criterion resulting from classification between hierarchical trees. Information entropy permits classifying compounds agreeing with principal component analyses. Periodic classification shows that pesticides in the same group present similar properties; those also in equal period, maximum resemblance. The advantage of the classification is to predict the retentions for molecules not included in the categorization. Classification extends to phenyl/sulphonylureas and the application will be to predict their retentions. PMID:24905607
ERIC Educational Resources Information Center
Davis, Kevin; Poston, George
This manual provides information on the enucleation procedure (removal of the eyes for organ banks). An introductory section focuses on the anatomy of the eye and defines each of the parts. Diagrams of the eye are provided. A list of enucleation materials follows. Other sections present outlines of (1) a sterile procedure; (2) preparation for eye…
ERIC Educational Resources Information Center
Handy, Rollo; Harwood, E. C.
This book discusses and analyzes the many different procedures of inquiry, both old and new, which have been used in an attempt to solve the problems men encounter. Section A examines some outmoded procedures of inquiry, describes scientific inquiry, and presents the Dewey-Bentley view of scientific method. Sections B and C, which comprise the…
Sequential unconstrained minimization algorithms for constrained optimization
NASA Astrophysics Data System (ADS)
Byrne, Charles
2008-02-01
The problem of minimizing a function f(x):RJ → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x)=f(x)+g_k(x), to obtain xk. The auxiliary functions gk(x):D ⊆ RJ → R+ are nonnegative on the set D, each xk is assumed to lie within D, and the objective is to minimize the continuous function f:RJ → R over x in the set C=\\overline D , the closure of D. We assume that such minimizers exist, and denote one such by \\hat x . We assume that the functions gk(x) satisfy the inequalities 0\\leq g_k(x)\\leq G_{k-1}(x)-G_{k-1}(x^{k-1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(xk)} is decreasing and converges to f({\\hat x}) . If the restriction of f(x) to D has bounded level sets, which happens if \\hat x is unique and f(x) is closed, proper and convex, then the sequence {xk} is bounded, and f(x^*)=f({\\hat x}) , for any cluster point x*. Therefore, if \\hat x is unique, x^*={\\hat x} and \\{x^k\\}\\rightarrow {\\hat x} . When \\hat x is not unique, convergence can still be obtained, in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton-Raphson method. The proof techniques used for SUMMA can be extended to obtain related results for the induced proximal
Multidisciplinary design optimization using genetic algorithms
NASA Technical Reports Server (NTRS)
Unal, Resit
1994-01-01
Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient based optimizers is their need for gradient information. Therefore, design problems which include discrete variables can not be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from those gradient based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
Universal single level implicit algorithm for gasdynamics
NASA Technical Reports Server (NTRS)
Lombard, C. K.; Venkatapthy, E.
1984-01-01
A single level effectively explicit implicit algorithm for gasdynamics is presented. The method meets all the requirements for unconditionally stable global iteration over flows with mixed supersonic and supersonic zones including blunt body flow and boundary layer flows with strong interaction and streamwise separation. For hyperbolic (supersonic flow) regions the method is automatically equivalent to contemporary space marching methods. For elliptic (subsonic flow) regions, rapid convergence is facilitated by alternating direction solution sweeps which bring both sets of eigenvectors and the influence of both boundaries of a coordinate line equally into play. Point by point updating of the data with local iteration on the solution procedure at each spatial step as the sweeps progress not only renders the method single level in storage but, also, improves nonlinear accuracy to accelerate convergence by an order of magnitude over related two level linearized implicit methods. The method derives robust stability from the combination of an eigenvector split upwind difference method (CSCM) with diagonally dominant ADI(DDADI) approximate factorization and computed characteristic boundary approximations.
Efficient algorithms for Hirshfeld-I charges
Finzel, Kati; Martín Pendás, Ángel; Francisco, Evelio
2015-08-28
A new viewpoint on iterative Hirshfeld charges is presented, whereby the atomic populations obtained from such a scheme are interpreted as such populations which reproduce themselves. This viewpoint yields a self-consistent requirement for the Hirshfeld-I populations rather than being understood as the result of an iterative procedure. Based on this self-consistent requirement, much faster algorithms for Hirshfeld-I charges have been developed. In addition, new atomic reference densities for the Hirshfeld-I procedure are presented. The proposed reference densities are N-representable, display proper atomic shell structure and can be computed for any charged species.
New algorithms for the symmetric tridiagonal eigenvalue computation
Pan, V. |
1994-12-31
The author presents new algorithms that accelerate the bisection method for the symmetric eigenvalue problem. The algorithms rely on some new techniques, which include acceleration of Newton`s iteration and can also be further applied to acceleration of some other iterative processes, in particular, of iterative algorithms for approximating polynomial zeros.
An algorithm for the automatic synchronization of Omega receivers
NASA Technical Reports Server (NTRS)
Stonestreet, W. M.; Marzetta, T. L.
1977-01-01
The Omega navigation system and the requirement for receiver synchronization are discussed. A description of the synchronization algorithm is provided. The numerical simulation and its associated assumptions were examined and results of the simulation are presented. The suggested form of the synchronization algorithm and the suggested receiver design values were surveyed. A Fortran of the synchronization algorithm used in the simulation was also included.
Updated treatment algorithm of pulmonary arterial hypertension.
Galiè, Nazzareno; Corris, Paul A; Frost, Adaani; Girgis, Reda E; Granton, John; Jing, Zhi Cheng; Klepetko, Walter; McGoon, Michael D; McLaughlin, Vallerie V; Preston, Ioana R; Rubin, Lewis J; Sandoval, Julio; Seeger, Werner; Keogh, Anne
2013-12-24
The demands on a pulmonary arterial hypertension (PAH) treatment algorithm are multiple and in some ways conflicting. The treatment algorithm usually includes different types of recommendations with varying degrees of scientific evidence. In addition, the algorithm is required to be comprehensive but not too complex, informative yet simple and straightforward. The type of information in the treatment algorithm are heterogeneous including clinical, hemodynamic, medical, interventional, pharmacological and regulatory recommendations. Stakeholders (or users) including physicians from various specialties and with variable expertise in PAH, nurses, patients and patients' associations, healthcare providers, regulatory agencies and industry are often interested in the PAH treatment algorithm for different reasons. These are the considerable challenges faced when proposing appropriate updates to the current evidence-based treatment algorithm.The current treatment algorithm may be divided into 3 main areas: 1) general measures, supportive therapy, referral strategy, acute vasoreactivity testing and chronic treatment with calcium channel blockers; 2) initial therapy with approved PAH drugs; and 3) clinical response to the initial therapy, combination therapy, balloon atrial septostomy, and lung transplantation. All three sections will be revisited highlighting information newly available in the past 5 years and proposing updates where appropriate. The European Society of Cardiology grades of recommendation and levels of evidence will be adopted to rank the proposed treatments. PMID:24355643
Authentication Procedures - The Procedures and Integration Working Group
Kouzes, Richard T.; Bratcher, Leigh; Gosnell, Tom; Langner, Diana; MacArthur, D.; Mihalczo, John T.; Pura, Carolyn; Riedy, Alex; Rexroth, Paul; Scott, Mary; Springarn, Jay
2001-05-31
Authentication is how we establish trust in monitoring systems and measurements to verify compliance with, for example, the storage of nuclear weapons material. Authentication helps assure the monitoring party that accurate and reliable information is provided by any measurement system and that any irregularities are detected. The U.S. is developing its point of view on the procedures for authentication of monitoring systems now planned or contemplated for arms reduction and control applications. The authentication of a system utilizes a set of approaches, including: functional testing using trusted calibration sources, evaluation of documentation, evaluation of software, evaluation of hardware, random selection of hardware and software, tamper-indicating devices, and operational procedures. Authentication of measurement systems should occur throughout their lifecycles, starting with the elements of design, and moving to off-site authentication, on-siste authentication, and continuing with authentication following repair. The most important of these is the initial design of systems. Hardware and software design criteria and procurement decisions can make future authentication relatively straightforward or conversely very difficult. Facility decisions can likewise ease the procedures for authentication since reliable and effective monitoring systems and tampering indicating devices can help provide the assurance needed in the integrity of such items as measurement systems, spare equipment, and reference sources. This paper will summarize the results of the U.S. Authentication Task Force discussion on the role of procedures in authentication.
Algorithms for improved performance in cryptographic protocols.
Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn
2003-11-01
Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures possess high bandwidth. Accordingly, there are many environments (e.g. wireless, ad-hoc, remote sensing networks) where public-key requirements are prohibitive and cannot be used. The use of elliptic curves in public-key computations has provided a means by which computations and bandwidth can be somewhat reduced. We report here on the research conducted in an LDRD aimed to find even more efficient algorithms and to make public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent has been applied. Further we discovered some new problems and relations on which future cryptographic algorithms may be based.
A new algorithm for coding geological terminology
NASA Astrophysics Data System (ADS)
Apon, W.
The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quarternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Erroneous codes resulting from using this algorithm are less than 2%.
Basic firefly algorithm for document clustering
NASA Astrophysics Data System (ADS)
Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza
2015-12-01
The Document clustering plays significant role in Information Retrieval (IR) where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed and this includes the K-means and Particle Swarm Optimization. Even though these algorithms have been widely applied in many disciplines due to its simplicity, such an approach tends to be trapped in a local minimum during its search for an optimal solution. To address the shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments utilizing the proposed algorithm were conducted on the 20Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates a more robust and compact clusters than the ones produced by K-means and Particle Swarm Optimization (PSO).
Comparison of rotation algorithms for digital images
NASA Astrophysics Data System (ADS)
Starovoitov, Valery V.; Samal, Dmitry
1999-09-01
The paper presents a comparative study of several algorithms developed for digital image rotation. No losing generality we studied gray scale images. We have tested methods preserving gray values of the original images, performing some interpolation and two procedures implemented into the Corel Photo-paint and Adobe Photoshop soft packages. By the similar way methods for rotation of color images may be evaluated also.
Totally parallel multilevel algorithms
NASA Technical Reports Server (NTRS)
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which are referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Neoclassical Transport Including Collisional Nonlinearity
Candy, J.; Belli, E. A.
2011-06-10
In the standard {delta}f theory of neoclassical transport, the zeroth-order (Maxwellian) solution is obtained analytically via the solution of a nonlinear equation. The first-order correction {delta}f is subsequently computed as the solution of a linear, inhomogeneous equation that includes the linearized Fokker-Planck collision operator. This equation admits analytic solutions only in extreme asymptotic limits (banana, plateau, Pfirsch-Schlueter), and so must be solved numerically for realistic plasma parameters. Recently, numerical codes have appeared which attempt to compute the total distribution f more accurately than in the standard ordering by retaining some nonlinear terms related to finite-orbit width, while simultaneously reusing some form of the linearized collision operator. In this work we show that higher-order corrections to the distribution function may be unphysical if collisional nonlinearities are ignored.
Clutter discrimination algorithm simulation in pulse laser radar imaging
NASA Astrophysics Data System (ADS)
Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Su, Xuan; Zhu, Fule
2015-10-01
Pulse laser radar imaging performance is greatly influenced by different kinds of clutter. Various algorithms are developed to mitigate clutter. However, estimating performance of a new algorithm is difficult. Here, a simulation model for estimating clutter discrimination algorithms is presented. This model consists of laser pulse emission, clutter jamming, laser pulse reception and target image producing. Additionally, a hardware platform is set up gathering clutter data reflected by ground and trees. The data logging is as clutter jamming input in the simulation model. The hardware platform includes a laser diode, a laser detector and a high sample rate data logging circuit. The laser diode transmits short laser pulses (40ns FWHM) at 12.5 kilohertz pulse rate and at 905nm wavelength. An analog-to-digital converter chip integrated in the sample circuit works at 250 mega samples per second. The simulation model and the hardware platform contribute to a clutter discrimination algorithm simulation system. Using this system, after analyzing clutter data logging, a new compound pulse detection algorithm is developed. This new algorithm combines matched filter algorithm and constant fraction discrimination (CFD) algorithm. Firstly, laser echo pulse signal is processed by matched filter algorithm. After the first step, CFD algorithm comes next. Finally, clutter jamming from ground and trees is discriminated and target image is produced. Laser radar images are simulated using CFD algorithm, matched filter algorithm and the new algorithm respectively. Simulation result demonstrates that the new algorithm achieves the best target imaging effect of mitigating clutter reflected by ground and trees.
45 CFR 84.36 - Procedural safeguards.
Code of Federal Regulations, 2012 CFR
2012-10-01
... identification, evaluation, or educational placement of persons who, because of handicap, need or are believed to need special instruction or related services, a system of procedural safeguards that includes...
45 CFR 84.36 - Procedural safeguards.
Code of Federal Regulations, 2010 CFR
2010-10-01
... identification, evaluation, or educational placement of persons who, because of handicap, need or are believed to need special instruction or related services, a system of procedural safeguards that includes...
14 CFR 1259.202 - Application procedures.
Code of Federal Regulations, 2011 CFR
2011-01-01
.... (a) The opportunity to apply shall be announced by the Director, Educational Affairs Division. (b) The application procedures and evaluation guidelines for awards under this section will be included in... selection panel appointed by the Director, Educational Affairs Division....
14 CFR 1259.202 - Application procedures.
Code of Federal Regulations, 2010 CFR
2010-01-01
.... (a) The opportunity to apply shall be announced by the Director, Educational Affairs Division. (b) The application procedures and evaluation guidelines for awards under this section will be included in... selection panel appointed by the Director, Educational Affairs Division....
Current procedural terminology; a primer.
Hirsch, Joshua A; Leslie-Mazwi, Thabele M; Nicola, Gregory N; Barr, Robert M; Bello, Jacqueline A; Donovan, William D; Tu, Raymond; Alson, Mark D; Manchikanti, Laxmaiah
2015-04-01
In 1966, The American Medical Association (AMA) working with multiple major medical specialty societies developed an iterative coding system for describing medical procedures and services using uniform language, the Current Procedural Terminology (CPT) system. The current code set, CPT IV, forms the basis of reporting most of the services performed by healthcare providers, physicians and non-physicians as well as facilities allowing effective, reliable communication among physician and other providers, third parties and patients. This coding system and its maintenance has evolved significantly since its inception, and now goes well beyond its readily perceived role in reimbursement. Additional roles include administrative management, tracking new and investigational procedures, and evolving aspects of 'pay for performance'. The system also allows for local, regional and national utilization comparisons for medical education and research. Neurointerventional specialists use CPT category I codes regularly--for example, 36,215 for first-order cerebrovascular angiography, 36,216 for second-order vessels, and 37,184 for acute stroke treatment by mechanical means. Additionally, physicians add relevant modifiers to the CPT codes, such as '-26' to indicate 'professional charge only,' or '-59' to indicate a distinct procedural service performed on the same day. PMID:24589819
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and
34 CFR 303.420 - Due process procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... AND REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION EARLY INTERVENTION PROGRAM FOR INFANTS AND... Children § 303.420 Due process procedures. Each system must include written procedures including procedures for mediation as described in § 303.419, for the timely administrative resolution of individual...
Algorithm Updates for the Fourth SeaWiFS Data Reprocessing
NASA Technical Reports Server (NTRS)
Hooker, Stanford, B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; Barnes, Robert A.; Eplee, Robert E., Jr.; Franz, Bryan A.; Robinson, Wayne D.; Feldman, Gene Carl; Bailey, Sean W.
2003-01-01
The efforts to improve the data quality for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data products have continued, following the third reprocessing of the global data set in May 2000. Analyses have been ongoing to address all aspects of the processing algorithms, particularly the calibration methodologies, atmospheric correction, and data flagging and masking. All proposed changes were subjected to rigorous testing, evaluation and validation. The results of these activities culminated in the fourth reprocessing, which was completed in July 2002. The algorithm changes, which were implemented for this reprocessing, are described in the chapters of this volume. Chapter 1 presents an overview of the activities leading up to the fourth reprocessing, and summarizes the effects of the changes. Chapter 2 describes the modifications to the on-orbit calibration, specifically the focal plane temperature correction and the temporal dependence. Chapter 3 describes the changes to the vicarious calibration, including the stray light correction to the Marine Optical Buoy (MOBY) data and improved data screening procedures. Chapter 4 describes improvements to the near-infrared (NIR) band correction algorithm. Chapter 5 describes changes to the atmospheric correction and the oceanic property retrieval algorithms, including out-of-band corrections, NIR noise reduction, and handling of unusual conditions. Chapter 6 describes various changes to the flags and masks, to increase the number of valid retrievals, improve the detection of the flag conditions, and add new flags. Chapter 7 describes modifications to the level-la and level-3 algorithms, to improve the navigation accuracy, correct certain types of spacecraft time anomalies, and correct a binning logic error. Chapter 8 describes the algorithm used to generate the SeaWiFS photosynthetically available radiation (PAR) product. Chapter 9 describes a coupled ocean-atmosphere model, which is used in one of the changes
Hemispherectomy Procedure in Proteus Syndrome
GUNAWAN, PrastiyaIndra; LUSIANA, Lusiana; SAHARSO, Darto
2016-01-01
Objective Proteus syndrome is a rare overgrowth disorder including bone, soft tissue, and skin. Central nervous system manifestations were reported in about 40% of the patients including hemimegalencephaly and the resultant hemicranial hyperplasia, convulsions and mental deficiency. We report a 1-month-old male baby referred to Pediatric Neurology Clinic Soetomo Hospital, Surabaya, Indonesia in 2014 presented recurrent seizures since birth with asymmetric dysmorphic face with the right side larger than the left, subcutaneous mass and linear nevi. Craniocervical MRI revealed hemimegalencephaly right cerebral hemisphere. Triple antiepileptic drugs were already given as well as the ketogenic diet, but the seizures persisted. The seizure then was resolved after hemispherectomy procedure. PMID:27375761
Including Magnetostriction in Micromagnetic Models
NASA Astrophysics Data System (ADS)
Conbhuí, Pádraig Ó.; Williams, Wyn; Fabian, Karl; Nagy, Lesleis
2016-04-01
The magnetic anomalies that identify crustal spreading are predominantly recorded by basalts formed at the mid-ocean ridges, whose magnetic signals are dominated by iron-titanium-oxides (Fe3-xTixO4), so called "titanomagnetites", of which the Fe2.4Ti0.6O4 (TM60) phase is the most common. With sufficient quantities of titanium present, these minerals exhibit strong magnetostriction. To date, models of these grains in the pseudo-single domain (PSD) range have failed to accurately account for this effect. In particular, a popular analytic treatment provided by Kittel (1949) for describing the magnetostrictive energy as an effective increase of the anisotropy constant can produce unphysical strains for non-uniform magnetizations. I will present a rigorous approach based on work by Brown (1966) and by Kroner (1958) for including magnetostriction in micromagnetic codes which is suitable for modelling hysteresis loops and finding remanent states in the PSD regime. Preliminary results suggest the more rigorously defined micromagnetic models exhibit higher coercivities and extended single domain ranges when compared to more simplistic approaches.
NASA Technical Reports Server (NTRS)
Grossman, B.; Cinella, P.
1988-01-01
A finite-volume method for the numerical computation of flows with nonequilibrium thermodynamics and chemistry is presented. A thermodynamic model is described which simplifies the coupling between the chemistry and thermodynamics and also results in the retention of the homogeneity property of the Euler equations (including all the species continuity and vibrational energy conservation equations). Flux-splitting procedures are developed for the fully coupled equations involving fluid dynamics, chemical production and thermodynamic relaxation processes. New forms of flux-vector split and flux-difference split algorithms are embodied in a fully coupled, implicit, large-block structure, including all the species conservation and energy production equations. Several numerical examples are presented, including high-temperature shock tube and nozzle flows. The methodology is compared to other existing techniques, including spectral and central-differenced procedures, and favorable comparisons are shown regarding accuracy, shock-capturing and convergence rates.
2016-01-01
The Nuss procedure is now the preferred operation for surgical correction of pectus excavatum (PE). It is a minimally invasive technique, whereby one to three curved metal bars are inserted behind the sternum in order to push it into a normal position. The bars are left in situ for three years and then removed. This procedure significantly improves quality of life and, in most cases, also improves cardiac performance. Previously, the modified Ravitch procedure was used with resection of cartilage and the use of posterior support. This article details the new modified Nuss procedure, which requires the use of shorter bars than specified by the original technique. This technique facilitates the operation as the bar may be guided manually through the chest wall and no additional stabilizing sutures are necessary. PMID:27747185
Dynamic alarm response procedures
Martin, J.; Gordon, P.; Fitch, K.
2006-07-01
The Dynamic Alarm Response Procedure (DARP) system provides a robust, Web-based alternative to existing hard-copy alarm response procedures. This paperless system improves performance by eliminating time wasted looking up paper procedures by number, looking up plant process values and equipment and component status at graphical display or panels, and maintenance of the procedures. Because it is a Web-based system, it is platform independent. DARP's can be served from any Web server that supports CGI scripting, such as Apache{sup R}, IIS{sup R}, TclHTTPD, and others. DARP pages can be viewed in any Web browser that supports Javascript and Scalable Vector Graphics (SVG), such as Netscape{sup R}, Microsoft Internet Explorer{sup R}, Mozilla Firefox{sup R}, Opera{sup R}, and others. (authors)
... the heart. During the procedure, small wires called electrodes are placed inside your heart to measure your ... is in place, your doctor will place small electrodes in different areas of your heart. These electrodes ...
Common Interventional Radiology Procedures
... of common interventional techniques is below. Common Interventional Radiology Procedures Angiography An X-ray exam of the ... into the vertebra. Copyright © 2016 Society of Interventional Radiology. All rights reserved. 3975 Fair Ridge Drive • Suite ...
NASA Astrophysics Data System (ADS)
Mangold, Stefan; van de Kamp, Thomas; Steininger, Ralph
2016-05-01
The usefulness of full field transmission spectroscopy is shown using the example of mandible of the stick insect Peruphasma schultei. An advanced data evaluation tool chain with an energy drift correction and highly reproducible automatic background correction is presented. The results show significant difference between the top and the bottom of the mandible of an adult stick insect.
NASA Astrophysics Data System (ADS)
Curà, Francesca; Mura, Andrea
2013-11-01
Tooth stiffness is a very important parameter in studying both static and dynamic behaviour of spline couplings and gears. Many works concerning tooth stiffness calculation are available in the literature, but experimental results are very rare, above all considering spline couplings. In this work experimental values of spline coupling tooth stiffness have been obtained by means of a special hexapod measuring device. Experimental results have been compared with the corresponding theoretical and numerical ones. Also the effect of angular misalignments between hub and shaft has been investigated in the experimental planning.
Safety referral procedures clarified.
2014-12-01
Two types of referrals are available for the purpose of harmonising pharmacovigilance decisions across the EU: the urgent procedure and the "normal" procedure. In both cases, the Pharmacovigilance Risk Assessment Committee (PRAC) issues a recommendation that the marketing authorisation committees concerned must take into account when formulating their opinions. If Member States disagree in their decisions, a final referral is available, although it lacks transparency. The European Commission's final decision is binding on all Member States. PMID:25629154
Universal lossless compression algorithm for textual images
NASA Astrophysics Data System (ADS)
al Zahir, Saif
2012-03-01
In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms were introduced and employed for this purpose including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and Burrow-Wheeler Transform. My research presents novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. our row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm has a compression ratio of nearly 87%, which exceeds published results including JBIG2.
Incorporating Spatial Models in Visual Field Test Procedures
Rubinstein, Nikki J.; McKendrick, Allison M.; Turpin, Andrew
2016-01-01
Purpose To introduce a perimetric algorithm (Spatially Weighted Likelihoods in Zippy Estimation by Sequential Testing [ZEST] [SWeLZ]) that uses spatial information on every presentation to alter visual field (VF) estimates, to reduce test times without affecting output precision and accuracy. Methods SWeLZ is a maximum likelihood Bayesian procedure, which updates probability mass functions at VF locations using a spatial model. Spatial models were created from empirical data, computational models, nearest neighbor, random relationships, and interconnecting all locations. SWeLZ was compared to an implementation of the ZEST algorithm for perimetry using computer simulations on 163 glaucomatous and 233 normal VFs (Humphrey Field Analyzer 24-2). Output measures included number of presentations and visual sensitivity estimates. Results There was no significant difference in accuracy or precision of SWeLZ for the different spatial models relative to ZEST, either when collated across whole fields or when split by input sensitivity. Inspection of VF maps showed that SWeLZ was able to detect localized VF loss. SWeLZ was faster than ZEST for normal VFs: median number of presentations reduced by 20% to 38%. The number of presentations was equivalent for SWeLZ and ZEST when simulated on glaucomatous VFs. Conclusions SWeLZ has the potential to reduce VF test times in people with normal VFs, without detriment to output precision and accuracy in glaucomatous VFs. Translational Relevance SWeLZ is a novel perimetric algorithm. Simulations show that SWeLZ can reduce the number of test presentations for people with normal VFs. Since many patients have normal fields, this has the potential for significant time savings in clinical settings. PMID:26981329
Multiangle dynamic light scattering analysis using an improved recursion algorithm
NASA Astrophysics Data System (ADS)
Li, Lei; Li, Wei; Wang, Wanyan; Zeng, Xianjiang; Chen, Junyao; Du, Peng; Yang, Kecheng
2015-10-01
Multiangle dynamic light scattering (MDLS) compensates for the low information in a single-angle dynamic light scattering (DLS) measurement by combining the light intensity autocorrelation functions from a number of measurement angles. Reliable estimation of PSD from MDLS measurements requires accurate determination of the weighting coefficients and an appropriate inversion method. We propose the Recursion Nonnegative Phillips-Twomey (RNNPT) algorithm, which is insensitive to the noise of correlation function data, for PSD reconstruction from MDLS measurements. The procedure includes two main steps: 1) the calculation of the weighting coefficients by the recursion method, and 2) the PSD estimation through the RNNPT algorithm. And we obtained suitable regularization parameters for the algorithm by using MR-L-curve since the overall computational cost of this method is sensibly less than that of the L-curve for large problems. Furthermore, convergence behavior of the MR-L-curve method is in general superior to that of the L-curve method and the error of MR-L-curve method is monotone decreasing. First, the method was evaluated on simulated unimodal lognormal PSDs and multimodal lognormal PSDs. For comparison, reconstruction results got by a classical regularization method were included. Then, to further study the stability and sensitivity of the proposed method, all examples were analyzed using correlation function data with different levels of noise. The simulated results proved that RNNPT method yields more accurate results in the determination of PSDs from MDLS than those obtained with the classical regulation method for both unimodal and multimodal PSDs.
40 CFR 86.1235-96 - Dynamometer procedure.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Emission Test Procedures for New Gasoline-Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas-Fueled and Methanol-Fueled Heavy-Duty Vehicles § 86.1235-96 Dynamometer procedure. Section 86.1235-96 includes...
40 CFR 86.1235-96 - Dynamometer procedure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Emission Test Procedures for New Gasoline-Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas-Fueled and Methanol-Fueled Heavy-Duty Vehicles § 86.1235-96 Dynamometer procedure. Section 86.1235-96 includes...
42 CFR 493.1251 - Standard: Procedure manual.
Code of Federal Regulations, 2014 CFR
2014-10-01
.... (2) Microscopic examination, including the detection of inadequately prepared slides. (3) Step-by... must be provided by the laboratory. (d) Procedures and changes in procedures must be approved,...
A fast meteor detection algorithm
NASA Astrophysics Data System (ADS)
Gural, P.
2016-01-01
A low latency meteor detection algorithm for use with fast steering mirrors had been previously developed to track and telescopically follow meteors in real-time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets both the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing approaches and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
Algorithm refinement for fluctuating hydrodynamics
Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.
2007-07-03
This paper introduces an adaptive mesh and algorithmrefinement method for fluctuating hydrodynamics. This particle-continuumhybrid simulates the dynamics of a compressible fluid with thermalfluctuations. The particle algorithm is direct simulation Monte Carlo(DSMC), a molecular-level scheme based on the Boltzmann equation. Thecontinuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS)equations, which incorporate thermal fluctuations into macroscopichydrodynamics by using stochastic fluxes. It uses a recently-developedsolver for LLNS, based on third-order Runge-Kutta. We present numericaltests of systems in and out of equilibrium, including time-dependentsystems, and demonstrate dynamic adaptive refinement by the computationof a moving shock wave. Mean system behavior and second moment statisticsof our simulations match theoretical values and benchmarks well. We findthat particular attention should be paid to the spectrum of the flux atthe interface between the particle and continuum methods, specificallyfor the non-hydrodynamic (kinetic) time scales.
Rempp, Florian; Mahler, Guenter; Michel, Mathias
2007-09-15
We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, for an arbitrary number of times on the same set of qbits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qbit may repeatedly be cooled without adding additional qbits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
NASA Technical Reports Server (NTRS)
Chan, Hak-Wai; Yan, Tsun-Yee
1989-01-01
Algorithm developed for optimal routing of packets of data along links of multilink, multinode digital communication network. Algorithm iterative and converges to cost-optimal assignment independent of initial assignment. Each node connected to other nodes through links, each containing number of two-way channels. Algorithm assigns channels according to message traffic leaving and arriving at each node. Modified to take account of different priorities among packets belonging to different users by using different delay constraints or imposing additional penalties via cost function.
NASA Astrophysics Data System (ADS)
Ahmed, Yasser A.; Afifi, Hossam; Rubino, Gerardo
1999-05-01
This paper present a new algorithm for stereo matching. The main idea is to decompose the original problem into independent hierarchical and more elementary problems that can be solved faster without any complicated mathematics using BBD. To achieve that, we use a new image feature called 'continuity feature' instead of classical noise. This feature can be extracted from any kind of images by a simple process and without using a searching technique. A new matching technique is proposed to match the continuity feature. The new algorithm resolves the main disadvantages of feature based stereo matching algorithms.
Advances in the EDM-DEDM procedure.
Caliandro, Rocco; Carrozzini, Benedetta; Cascarano, Giovanni Luca; Giacovazzo, Carmelo; Mazzone, Anna Maria; Siliqi, Dritan
2009-03-01
The DEDM (difference electron-density modification) algorithm has been described in a recent paper [Caliandro et al. (2008), Acta Cryst. A64, 519-528]: it breaks down the collinearity between model structure phases and difference structure phase estimates. The new difference electron-density produced by DEDM, summed to the calculated Fourier maps, is expected to provide a representation of the full structure that is more accurate than that obtained by the observed Fourier synthesis. In the same paper, the DEDM algorithm was combined with the EDM (electron-density modification) approach to give the EDM-DEDM procedure which, when applied to practical molecular-replacement cases, was able to improve the model structures. In this paper, it is shown that EDM-DEDM suffers from some critical points that did not allow cyclical application of the procedure. These points are identified and modifications are made to allow iteration of the procedure. The applications indicate that EDM-DEDM may become a fundamental tool in protein crystallography.
Continuation of advanced crew procedures development techniques
NASA Technical Reports Server (NTRS)
Arbet, J. D.; Benbow, R. L.; Evans, M. E.; Mangiaracina, A. A.; Mcgavern, J. L.; Spangler, M. C.; Tatum, I. C.
1976-01-01
An operational computer program, the Procedures and Performance Program (PPP) which operates in conjunction with the Phase I Shuttle Procedures Simulator to provide a procedures recording and crew/vehicle performance monitoring capability was developed. A technical synopsis of each task resulting in the development of the Procedures and Performance Program is provided. Conclusions and recommendations for action leading to the improvements in production of crew procedures development and crew training support are included. The PPP provides real-time CRT displays and post-run hardcopy output of procedures, difference procedures, performance data, parametric analysis data, and training script/training status data. During post-run, the program is designed to support evaluation through the reconstruction of displays to any point in time. A permanent record of the simulation exercise can be obtained via hardcopy output of the display data and via transfer to the Generalized Documentation Processor (GDP). Reference procedures data may be transferred from the GDP to the PPP. Interface is provided with the all digital trajectory program, the Space Vehicle Dynamics Simulator (SVDS) to support initial procedures timeline development.
Vectorized Rebinning Algorithm for Fast Data Down-Sampling
NASA Technical Reports Server (NTRS)
Dean, Bruce; Aronstein, David; Smith, Jeffrey
2013-01-01
A vectorized rebinning (down-sampling) algorithm, applicable to N-dimensional data sets, has been developed that offers a significant reduction in computer run time when compared to conventional rebinning algorithms. For clarity, a two-dimensional version of the algorithm is discussed to illustrate some specific details of the algorithm content, and using the language of image processing, 2D data will be referred to as "images," and each value in an image as a "pixel." The new approach is fully vectorized, i.e., the down-sampling procedure is done as a single step over all image rows, and then as a single step over all image columns. Data rebinning (or down-sampling) is a procedure that uses a discretely sampled N-dimensional data set to create a representation of the same data, but with fewer discrete samples. Such data down-sampling is fundamental to digital signal processing, e.g., for data compression applications.
Bayesian Smoothing Algorithms in Partially Observed Markov Chains
NASA Astrophysics Data System (ADS)
Ait-el-Fquih, Boujemaa; Desbouvries, François
2006-11-01
Let x = {xn}n∈N be a hidden process, y = {yn}n∈N an observed process and r = {rn}n∈N some auxiliary process. We assume that t = {tn}n∈N with tn = (xn, rn, yn-1) is a (Triplet) Markov Chain (TMC). TMC are more general than Hidden Markov Chains (HMC) and yet enable the development of efficient restoration and parameter estimation algorithms. This paper is devoted to Bayesian smoothing algorithms for TMC. We first propose twelve algorithms for general TMC. In the Gaussian case, these smoothers reduce to a set of algorithms which include, among other solutions, extensions to TMC of classical Kalman-like smoothing algorithms (originally designed for HMC) such as the RTS algorithms, the Two-Filter algorithms or the Bryson and Frazier algorithm.
THE APPLICATION OF AN EVOLUTIONARY ALGORITHM TO THE OPTIMIZATION OF A MESOSCALE METEOROLOGICAL MODEL
Werth, D.; O'Steen, L.
2008-02-11
We show that a simple evolutionary algorithm can optimize a set of mesoscale atmospheric model parameters with respect to agreement between the mesoscale simulation and a limited set of synthetic observations. This is illustrated using the Regional Atmospheric Modeling System (RAMS). A set of 23 RAMS parameters is optimized by minimizing a cost function based on the root mean square (rms) error between the RAMS simulation and synthetic data (observations derived from a separate RAMS simulation). We find that the optimization can be efficient with relatively modest computer resources, thus operational implementation is possible. The optimization efficiency, however, is found to depend strongly on the procedure used to perturb the 'child' parameters relative to their 'parents' within the evolutionary algorithm. In addition, the meteorological variables included in the rms error and their weighting are found to be an important factor with respect to finding the global optimum.
Development of an anthropomorphic breast software phantom based on region growing algorithm
NASA Astrophysics Data System (ADS)
Zhang, Cuiping; Bakic, Predrag R.; Maidment, Andrew D. A.
2008-03-01
Software breast phantoms offer greater flexibility in generating synthetic breast images compared to physical phantoms. The realism of such generated synthetic images depends on the method for simulating the three-dimensional breast anatomical structures. We present here a novel algorithm for computer simulation of breast anatomy. The algorithm simulates the skin, regions of predominantly adipose tissue and fibro-glandular tissue, and the matrix of adipose tissue compartments and Cooper's ligaments. The simulation approach is based upon a region growing procedure; adipose compartments are grown from a selected set of seed points with different orientation and growth rate. The simulated adipose compartments vary in shape and size similarly to the anatomical breast variation, resulting in much improved phantom realism compared to our previous simulation based on geometric primitives. The proposed simulation also has an improved control over the breast size and glandularity. Our software breast phantom has been used in a number of applications, including breast tomosynthesis and texture analysis optimization.
NASA Astrophysics Data System (ADS)
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2016-05-01
In this paper, a novel approach based on compressive sensing and chaos is proposed for simultaneously compressing, fusing and encrypting multi-modal images. The sparsely represented source images are firstly measured with the key-controlled pseudo-random measurement matrix constructed using logistic map, which reduces the data to be processed and realizes the initial encryption. Then the obtained measurements are fused by the proposed adaptive weighted fusion rule. The fused measurement is further encrypted into the ciphertext through an iterative procedure including improved random pixel exchanging technique and fractional Fourier transform. The fused image can be reconstructed by decrypting the ciphertext and using a recovery algorithm. The proposed algorithm not only reduces data volume but also simplifies keys, which improves the efficiency of transmitting data and distributing keys. Numerical results demonstrate the feasibility and security of the proposed scheme.
NASA Astrophysics Data System (ADS)
Gupta, Atul; Bayraktar, Harun H.; Fox, Julia C.; Keaveny, Tony M.; Papadopoulos, Panayiotis
2007-06-01
Trabecular bone is a highly porous orthotropic cellular solid material present inside human bones such as the femur (hip bone) and vertebra (spine). In this study, an infinitesimal plasticity-like model with isotropic/kinematic hardening is developed to describe yielding of trabecular bone at the continuum level. One of the unique features of this formulation is the development of the plasticity-like model in strain space for a yield envelope expressed in terms of principal strains having asymmetric yield behavior. An implicit return-mapping approach is adopted to obtain a symmetric algorithmic tangent modulus and a step-by-step procedure of algorithmic implementation is derived. To investigate the performance of this approach in a full-scale finite element simulation, the model is implemented in a non-linear finite element analysis program and several test problems including the simulation of loading of the human femur structures are analyzed. The results show good agreement with the experimental data.
Adiabatic isometric mapping algorithm for embedding 2-surfaces in Euclidean 3-space
NASA Astrophysics Data System (ADS)
Ray, Shannon; Miller, Warner A.; Alsing, Paul M.; Yau, Shing-Tung
2015-12-01
Alexandrov proved that any simplicial complex homeomorphic to a sphere with strictly non-negative Gaussian curvature at each vertex can be isometrically embedded uniquely in {{{R}}}3 as a convex polyhedron. Due to the nonconstructive nature of his proof, there have yet to be any algorithms, that we know of, that realizes the Alexandrov embedding in polynomial time. Following his proof, we developed the adiabatic isometric mapping (AIM) algorithm. AIM uses a guided adiabatic pull-back procedure on a given polyhedral metric to produce an embedding that approximates the unique Alexandrov polyhedron. Tests of AIM applied to two different polyhedral metrics suggests that its run time is sub cubic with respect to the number of vertices. Although Alexandrov’s theorem specifically addresses the embedding of convex polyhedral metrics, we tested AIM on a broader class of polyhedral metrics that included regions of negative Gaussian curvature. One test was on a surface just outside the ergosphere of a Kerr black hole.
A cuckoo search algorithm for multimodal optimization.
Cuevas, Erik; Reyna-Orta, Adolfo
2014-01-01
Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which can not be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and the distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection process of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms considering a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and even a more consistent performance over existing well-known multimodal algorithms for the majority of test problems yet avoiding any serious computational deterioration. PMID:25147850
Evaluating Algorithm Performance Metrics Tailored for Prognostics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2009-01-01
Prognostics has taken a center stage in Condition Based Maintenance (CBM) where it is desired to estimate Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected out of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically four algorithms namely; Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR) are compared. These algorithms vary in complexity and their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms in different manner and depending on the requirements and constraints suitable metrics may be chosen. Beyond these results, these metrics offer ideas about how metrics suitable to prognostics may be designed so that the evaluation procedure can be standardized. 1
A cuckoo search algorithm for multimodal optimization.
Cuevas, Erik; Reyna-Orta, Adolfo
2014-01-01
Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which can not be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and the distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection process of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms considering a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and even a more consistent performance over existing well-known multimodal algorithms for the majority of test problems yet avoiding any serious computational deterioration.
Staggered solution procedures for multibody dynamics simulation
NASA Astrophysics Data System (ADS)
Park, K. C.; Chiou, J. C.; Downer, J. D.
1990-04-01
The numerical solution procedure for multibody dynamics (MBD) systems is termed a staggered MBD solution procedure that solves the generalized coordinates in a separate module from that for the constraint force. This requires a reformulation of the constraint conditions so that the constraint forces can also be integrated in time. A major advantage of such a partitioned solution procedure is that additional analysis capabilities such as active controller and design optimization modules can be easily interfaced without embedding them into a monolithic program. After introducing the basic equations of motion for MBD system in the second section, Section 3 briefly reviews some constraint handling techniques and introduces the staggered stabilized technique for the solution of the constraint forces as independent variables. The numerical direct time integration of the equations of motion is described in Section 4. As accurate damping treatment is important for the dynamics of space structures, we have employed the central difference method and the mid-point form of the trapezoidal rule since they engender no numerical damping. This is in contrast to the current practice in dynamic simulations of ground vehicles by employing a set of backward difference formulas. First, the equations of motion are partitioned according to the translational and the rotational coordinates. This sets the stage for an efficient treatment of the rotational motions via the singularity-free Euler parameters. The resulting partitioned equations of motion are then integrated via a two-stage explicit stabilized algorithm for updating both the translational coordinates and angular velocities. Once the angular velocities are obtained, the angular orientations are updated via the mid-point implicit formula employing the Euler parameters. When the two algorithms, namely, the two-stage explicit algorithm for the generalized coordinates and the implicit staggered procedure for the constraint Lagrange
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on functions optimization. PMID:24967425
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
Mobile Energy Laboratory Procedures
Armstrong, P.R.; Batishko, C.R.; Dittmer, A.L.; Hadley, D.L.; Stoops, J.L.
1993-09-01
Pacific Northwest Laboratory (PNL) has been tasked to plan and implement a framework for measuring and analyzing the efficiency of on-site energy conversion, distribution, and end-use application on federal facilities as part of its overall technical support to the US Department of Energy (DOE) Federal Energy Management Program (FEMP). The Mobile Energy Laboratory (MEL) Procedures establish guidelines for specific activities performed by PNL staff. PNL provided sophisticated energy monitoring, auditing, and analysis equipment for on-site evaluation of energy use efficiency. Specially trained engineers and technicians were provided to conduct tests in a safe and efficient manner with the assistance of host facility staff and contractors. Reports were produced to describe test procedures, results, and suggested courses of action. These reports may be used to justify changes in operating procedures, maintenance efforts, system designs, or energy-using equipment. The MEL capabilities can subsequently be used to assess the results of energy conservation projects. These procedures recognize the need for centralized NM administration, test procedure development, operator training, and technical oversight. This need is evidenced by increasing requests fbr MEL use and the economies available by having trained, full-time MEL operators and near continuous MEL operation. DOE will assign new equipment and upgrade existing equipment as new capabilities are developed. The equipment and trained technicians will be made available to federal agencies that provide funding for the direct costs associated with MEL use.
Genetic-Algorithm Tool For Search And Optimization
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven
1995-01-01
SPLICER computer program used to solve search and optimization problems. Genetic algorithms adaptive search procedures (i.e., problem-solving methods) based loosely on processes of natural selection and Darwinian "survival of fittest." Algorithms apply genetically inspired operators to populations of potential solutions in iterative fashion, creating new populations while searching for optimal or nearly optimal solution to problem at hand. Written in Think C.
Solar Position Algorithm for Solar Radiation Applications (Revised)
Reda, I.; Andreas, A.
2008-01-01
This report is a step-by-step procedure for implementing an algorithm to calculate the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of ?0.0003/. It is written in a step-by-step format to simplify otherwise complicated steps, with a focus on the sun instead of the planets and stars in general. The algorithm is written in such a way to accommodate solar radiation applications.
Algorithm development for Maxwell's equations for computational electromagnetism
NASA Technical Reports Server (NTRS)
Goorjian, Peter M.
1990-01-01
A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.
A selective-update affine projection algorithm with selective input vectors
NASA Astrophysics Data System (ADS)
Kong, NamWoong; Shin, JaeWook; Park, PooGyeon
2011-10-01
This paper proposes an affine projection algorithm (APA) with selective input vectors, which based on the concept of selective-update in order to reduce estimation errors and computations. The algorithm consists of two procedures: input- vector-selection and state-decision. The input-vector-selection procedure determines the number of input vectors by checking with mean square error (MSE) whether the input vectors have enough information for update. The state-decision procedure determines the current state of the adaptive filter by using the state-decision criterion. As the adaptive filter is in transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.
2005-01-01
The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
NASA Technical Reports Server (NTRS)
Bohse, J. R.; Bewtra, M.; Barnes, W. L.
1979-01-01
The rationale and procedures used in the radiometric calibration and correction of Heat Capacity Mapping Mission (HCMM) data are presented. Instrument-level testing and calibration of the Heat Capacity Mapping Radiometer (HCMR) were performed by the sensor contractor ITT Aerospace/Optical Division. The principal results are included. From the instrumental characteristics and calibration data obtained during ITT acceptance tests, an algorithm for post-launch processing was developed. Integrated spacecraft-level sensor calibration was performed at Goddard Space Flight Center (GSFC) approximately two months before launch. This calibration provided an opportunity to validate the data calibration algorithm. Instrumental parameters and results of the validation are presented and the performances of the instrument and the data system after launch are examined with respect to the radiometric results. Anomalies and their consequences are discussed. Flight data indicates a loss in sensor sensitivity with time. The loss was shown to be recoverable by an outgassing procedure performed approximately 65 days after the infrared channel was turned on. It is planned to repeat this procedure periodically.
Updated Evidence-Based Treatment Algorithm in Pulmonary Arterial Hypertension
Barst, Robyn J.; Gibbs, J. Simon; Ghofrani, Hossein A.; Hoeper, Marius M.; McLaughlin, Vallerie V.; Rubin, Lewis J.; Sitbon, Olivier; Tapson, Victor; Galiè, Nazzareno
2009-01-01
Uncontrolled and controlled clinical trials with different compounds and procedures are reviewed to define the risk-benefit profiles for therapeutic options in pulmonary arterial hypertension (PAH). A grading system for the level of evidence of treatments based on the controlled clinical trials performed with each compound is used to propose an evidence-based treatment algorithm. The algorithm includes drugs approved by regulatory agencies for the treatment of PAH and/or drugs available for other indications. The different treatments have been evaluated mainly in idiopathic PAH, heritable PAH, and in PAH associated with the scleroderma spectrum of diseases or with anorexigen use. Extrapolation of these recommendations to other PAH subgroups should be done with caution. Oral anticoagulation is proposed for most patients; diuretic treatment and supplemental oxygen are indicated in cases of fluid retention and hypoxemia, respectively. High doses of calcium channel blockers are indicated only in the minority of patients who respond to acute vasoreactivity testing. Nonresponders to acute vasoreactivity testing, or responders who remain in World Health Organization (WHO) functional class III, should be considered candidates for treatment with either an oral phosphodiesterase-5 inhibitor or an oral endothelin-receptor antagonist. Continuous intravenous administration of epoprostenol remains the treatment of choice in WHO functional class IV patients. Combination therapy is recommended for patients treated with PAH monotherapy who remain in New York Heart Association functional class III. Atrial septostomy and lung transplantation are indicated for refractory patients or where medical treatment is unavailable. PMID:19555861
Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2008-01-01
Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.
Deciphering and generalizing Demiański-Janis-Newman algorithm
NASA Astrophysics Data System (ADS)
Erbin, Harold
2016-05-01
In the case of vanishing cosmological constant, Demiański has shown that the Janis-Newman algorithm can be generalized in order to include a NUT charge and another parameter c, in addition to the angular momentum. Moreover it was proved that only a NUT charge can be added for non-vanishing cosmological constant. However despite the fact that the form of the coordinate transformations was obtained, it was not explained how to perform the complexification on the metric function, and the procedure does not follow directly from the usual Janis-Newman rules. The goal of our paper is threefold: explain the hidden assumptions of Demiański's analysis, generalize the computations to topological horizons (spherical and hyperbolic) and to charged solutions, and explain how to perform the complexification of the function. In particular we present a new solution which is an extension of the Demiański metric to hyperbolic horizons. These different results open the door to applications on (gauged) supergravity since they allow for a systematic application of the Demiański-Janis-Newman algorithm.
An algorithm to build mock galaxy catalogues using MICE simulations
NASA Astrophysics Data System (ADS)
Carretero, J.; Castander, F. J.; Gaztañaga, E.; Crocce, M.; Fosalba, P.
2015-02-01
We present a method to build mock galaxy catalogues starting from a halo catalogue that uses halo occupation distribution (HOD) recipes as well as the subhalo abundance matching (SHAM) technique. Combining both prescriptions we are able to push the absolute magnitude of the resulting catalogue to fainter luminosities than using just the SHAM technique and can interpret our results in terms of the HOD modelling. We optimize the method by populating with galaxies friends-of-friends dark matter haloes extracted from the Marenostrum Institut de Ciències de l'Espai dark matter simulations and comparing them to observational constraints. Our resulting mock galaxy catalogues manage to reproduce the observed local galaxy luminosity function and the colour-magnitude distribution as observed by the Sloan Digital Sky Survey. They also reproduce the observed galaxy clustering properties as a function of luminosity and colour. In order to achieve that, the algorithm also includes scatter in the halo mass-galaxy luminosity relation derived from direct SHAM and a modified Navarro-Frenk-White mass density profile to place satellite galaxies in their host dark matter haloes. Improving on general usage of the HOD that fits the clustering for given magnitude limited samples, our catalogues are constructed to fit observations at all luminosities considered and therefore for any luminosity subsample. Overall, our algorithm is an economic procedure of obtaining galaxy mock catalogues down to faint magnitudes that are necessary to understand and interpret galaxy surveys.
Clause Elimination Procedures for CNF Formulas
NASA Astrophysics Data System (ADS)
Heule, Marijn; Järvisalo, Matti; Biere, Armin
We develop and analyze clause elimination procedures, a specific family of simplification techniques for conjunctive normal form (CNF) formulas. Extending known procedures such as tautology, subsumption, and blocked clause elimination, we introduce novel elimination procedures based on hidden and asymmetric variants of these techniques. We analyze the resulting nine (including five new) clause elimination procedures from various perspectives: size reduction, BCP-preservance, confluence, and logical equivalence. For the variants not preserving logical equivalence, we show how to reconstruct solutions to original CNFs from satisfying assignments to simplified CNFs. We also identify a clause elimination procedure that does a transitive reduction of the binary implication graph underlying any CNF formula purely on the CNF level.
Group implicit concurrent algorithms in nonlinear structural dynamics
NASA Technical Reports Server (NTRS)
Ortiz, M.; Sotelino, E. D.
1989-01-01
During the 70's and 80's, considerable effort was devoted to developing efficient and reliable time stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problems are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of applications are those which accurately integrate the low frequency content of the response without necessitating the resolution of the high frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithms development area in recent years has been the advent of parallel computers with multiprocessing capabilities. So, this work is mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time stepping procedures which lend themselves to an efficient implementation in concurrent machines. Some features of the new computer architecture are summarized. A brief survey of current efforts in the area is presented. A new class of concurrent procedures, or Group Implicit algorithms is introduced and analyzed. The numerical simulation shows that GI algorithms hold considerable promise for application in coarse grain as well as medium grain parallel computers.
NASA Astrophysics Data System (ADS)
Windarto, Indratno, S. W.; Nuraini, N.; Soewono, E.
2014-02-01
Genetic algorithm is an optimization method based on the principles of genetics and natural selection in life organisms. The algorithm begins by defining the optimization variables, defining the cost function (in a minimization problem) or the fitness function (in a maximization problem) and selecting genetic algorithm parameters. The main procedures in genetic algorithm are generating initial population, selecting some chromosomes (individual) as parent's individual, mating, and mutation. In this paper, binary and continuous genetic algorithms were implemented to estimate growth rate and carrying capacity parameter from poultry data cited from literature. For simplicity, all genetic algorithm parameters (selection rate and mutation rate) are set to be constant along implementation of the algorithm. It was found that by selecting suitable mutation rate, both algorithms can estimate these parameters well. Suitable range for mutation rate in continuous genetic algorithm is wider than the binary one.
Goldfarb, Neil I; Pizzi, Laura T; Fuhr, Joseph P; Salvador, Christopher; Sikirica, Vanja; Kornbluth, Asher; Lewis, Blair
2004-01-01
The purpose of this study was to review economic considerations related to establishing a diagnosis of Crohn's disease, and to compare the costs of a diagnostic algorithm incorporating wireless capsule endoscopy (WCE) with the current algorithm for diagnosing Crohn's disease suspected in the small bowel. Published literature, clinical trial data on WCE in comparison to other diagnostic tools, and input from clinical experts were used as data sources for (1) identifying contributors to the costs of diagnosing Crohn's disease; (2) exploring where WCE should be placed within the diagnostic algorithm for Crohn's; and (3) constructing decision tree models with sensitivity analyses to explore costs (from a payor perspective) of diagnosing Crohn's disease using WCE compared to other diagnostic methods. Literature review confirms that Crohn's disease is a significant and growing public health concern from clinical, humanistic and economic perspectives, and results in a long-term burden for patients, their families, providers, insurers, and employers. Common diagnostic procedures include radiologic studies such as small bowel follow through (SBFT), enteroclysis, CT scans, ultrasounds, and MRIs, as well as serologic testing, and various forms of endoscopy. Diagnostic costs for Crohn's disease can be considerable, especially given the cycle of repeat testing due to the low diagnostic yield of certain procedures and the inability of current diagnostic procedures to image the entire small bowel. WCE has a higher average diagnostic yield than comparative procedures due to imaging clarity and the ability to visualize the entire small bowel. Literature review found the average diagnostic yield of SBFT and colonoscopy for work-up of Crohn's disease to be 53.87%, whereas WCE had a diagnostic yield of 69.59%. A simple decision tree model comparing two arms--colonoscopy and SBFT, or WCE--estimates that WCE produces a cost savings of 291dollars for each case presenting for diagnostic
Arianespace streamlines launch procedures
NASA Astrophysics Data System (ADS)
Lenorovitch, Jeffrey M.
1992-06-01
Ariane has entered a new operational phase in which launch procedures have been enhanced to reduce the length of launch campaigns, lower mission costs, and increase operational availability/flexibility of the three-stage vehicle. The V50 mission utilized the first vehicle from a 50-launcher production lot ordered by Arianespace, and was the initial flight with a stretched third stage that enhances Ariane's performance. New operational procedures were introduced gradually over more than a year, starting with the V42 launch in January 1991.
2015-01-01
An important goal in cardiovascular and thoracic surgery is reducing surgical trauma to achieve faster recovery for our patients. Mini-Bentall procedure encompasses aortic root and ascending aortic replacement with re-implantation of coronary buttons, performed via a mini-sternotomy. The skin incision extends from the angle of Louis to the third intercostal space, usually measuring 5-7 cm in length. Through this incision, it is possible to perform isolated aortic root surgery and/or hemi-arch replacement. The present illustrated article describes the technical details on how I perform a Mini-Bentall procedure with hemi-arch replacement. PMID:25870816
Monte Carlo procedure for protein design
NASA Astrophysics Data System (ADS)
Irbäck, Anders; Peterson, Carsten; Potthast, Frank; Sandelin, Erik
1998-11-01
A method for sequence optimization in protein models is presented. The approach, which has inherited its basic philosophy from recent work by Deutsch and Kurosky [Phys. Rev. Lett. 76, 323 (1996)] by maximizing conditional probabilities rather than minimizing energy functions, is based upon a different and very efficient multisequence Monte Carlo scheme. By construction, the method ensures that the designed sequences represent good folders thermodynamically. A bootstrap procedure for the sequence space search is devised making very large chains feasible. The algorithm is successfully explored on the two-dimensional HP model [K. F. Lau and K. A. Dill, Macromolecules 32, 3986 (1989)] with chain lengths N=16, 18, and 32.
Operational Implementation of Space Debris Mitigation Procedures
NASA Astrophysics Data System (ADS)
Gicquel, Anne-Helene; Bonaventure, Francois
2013-08-01
During the spacecraft lifetime, Astrium supports its customers to manage collision risks alerts from the Joint Space Operations Center (JSpOC). This was previously done with hot-line support and a manual operational procedure. Today, it is automated and integrated in QUARTZ, the Astrium Flight Dynamics operational tool. The algorithms and process details for this new 5- step functionality are provided in this paper. To improve this functionality, some R&D activities such as the study of dilution phenomenon and low relative velocity encounters are going on. Regarding end of life disposal, recent operational experiences as well as studies results are presented.
Scheduling with genetic algorithms
NASA Technical Reports Server (NTRS)
Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.
1994-01-01
In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabalistically recombined to produce better offspring (based upon some application specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focussed for this research: how to allocate crews to jobs while satisfying job precedence requirements and personnel, and tooling and fixture (or, more generally, resource) requirements.
The Dropout Learning Algorithm
Baldi, Pierre; Sadowski, Peter
2014-01-01
Dropout is a recently introduced algorithm for training neural network by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions with the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
Parallel projected variable metric algorithms for unconstrained optimization
NASA Technical Reports Server (NTRS)
Freeman, T. L.
1989-01-01
The parallel variable metric optimization algorithms of Straeter (1973) and van Laarhoven (1985) are reviewed, and the possible drawbacks of the algorithms are noted. By including Davidon (1975) projections in the variable metric updating, researchers can generalize Straeter's algorithm to a family of parallel projected variable metric algorithms which do not suffer the above drawbacks and which retain quadratic termination. Finally researchers consider the numerical performance of one member of the family on several standard example problems and illustrate how the choice of the displacement vectors affects the performance of the algorithm.
Novel biomedical tetrahedral mesh methods: algorithms and applications
NASA Astrophysics Data System (ADS)
Yu, Xiao; Jin, Yanfeng; Chen, Weitao; Huang, Pengfei; Gu, Lixu
2007-12-01
Tetrahedral mesh generation algorithm, as a prerequisite of many soft tissue simulation methods, becomes very important in the virtual surgery programs because of the real-time requirement. Aiming to speed up the computation in the simulation, we propose a revised Delaunay algorithm which makes a good balance of quality of tetrahedra, boundary preservation and time complexity, with many improved methods. Another mesh algorithm named Space-Disassembling is also presented in this paper, and a comparison of Space-Disassembling, traditional Delaunay algorithm and the revised Delaunay algorithm is processed based on clinical soft-tissue simulation projects, including craniofacial plastic surgery and breast reconstruction plastic surgery.
New hybrid genetic particle swarm optimization algorithm to design multi-zone binary filter.
Lin, Jie; Zhao, Hongyang; Ma, Yuan; Tan, Jiubin; Jin, Peng
2016-05-16
The binary phase filters have been used to achieve an optical needle with small lateral size. Designing a binary phase filter is still a scientific challenge in such fields. In this paper, a hybrid genetic particle swarm optimization (HGPSO) algorithm is proposed to design the binary phase filter. The HGPSO algorithm includes self-adaptive parameters, recombination and mutation operations that originated from the genetic algorithm. Based on the benchmark test, the HGPSO algorithm has achieved global optimization and fast convergence. In an easy-to-perform optimizing procedure, the iteration number of HGPSO is decreased to about a quarter of the original particle swarm optimization process. A multi-zone binary phase filter is designed by using the HGPSO. The long depth of focus and high resolution are achieved simultaneously, where the depth of focus and focal spot transverse size are 6.05λ and 0.41λ, respectively. Therefore, the proposed HGPSO can be applied to the optimization of filter with multiple parameters. PMID:27409895
Progress on automated data analysis algorithms for ultrasonic inspection of composites
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Forsyth, David S.; Welter, John T.
2015-03-01
Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.
Toddler test or procedure preparation
Preparing toddler for test/procedure; Test/procedure preparation - toddler; Preparing for a medical test or procedure - toddler ... Before the test, know that your child will probably cry. Even if you prepare, your child may feel some discomfort or ...
Preschooler test or procedure preparation
Preparing preschoolers for test/procedure; Test/procedure preparation - preschooler ... Preparing children for medical tests can reduce their distress. It can also make them less likely to cry and resist the procedure. Research shows that ...
FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks
Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L.; Sweet, Robert A.; Wang, Jieru; Chen, Wei
2016-01-01
Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interaction or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the current implemented algorithm for Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer’s disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named “FastGGM”. PMID:26872036
Full potential unsteady computations including aeroelastic effects
NASA Technical Reports Server (NTRS)
Shankar, Vijaya; Ide, Hiroshi
1989-01-01
A unified formulation is presented based on the full potential framework coupled with an appropriate structural model to compute steady and unsteady flows over rigid and flexible configurations across the Mach number range. The unsteady form of the full potential equation in conservation form is solved using an implicit scheme maintaining time accuracy through internal Newton iterations. A flux biasing procedure based on the unsteady sonic reference conditions is implemented to compute hyperbolic regions with moving sonic and shock surfaces. The wake behind a trailing edge is modeled using a mathematical cut across which the pressure is satisfied to be continuous by solving an appropriate vorticity convection equation. An aeroelastic model based on the generalized modal deflection approach interacts with the nonlinear aerodynamics and includes both static as well as dynamic structural analyses capability. Results are presented for rigid and flexible configurations at different Mach numbers ranging from subsonic to supersonic conditions. The dynamic response of a flexible wing below and above its flutter point is demonstrated.
Operational Control Procedures for the Activated Sludge Process, Part III-A: Calculation Procedures.
ERIC Educational Resources Information Center
West, Alfred W.
This is the second in a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. This document deals exclusively with the calculation procedures, including simplified mixing formulas, aeration tank…
The 'obsolescence' of assessment procedures.
Russell, Elbert W
2010-01-01
The concept that obsolescence or being "out of date" makes a test or procedure invalid ("inaccurate," "inappropriate," "not useful," "creating wrong interpretations," etc.) has been widely accepted in psychology and neuropsychology. Such obsolescence, produced by publishing a new version of a test, has produced an extensive nullification of research effort (probably 10,000 Wechsler studies). The arguments, attempting to justify obsolescence, include the Flynn Effect, the creation of a new version of a test or simply time. However, the Flynn Effect appears to have plateaued. In psychometric theory, validated tests do not lose their validity due to the creation of newer versions. Time does not invalidate tests due to the improvement of neurological methodology, such as magnetic resonance imaging. This assumption is unscientific, unproven, and if true, would discredit all older neuropsychological and neurological knowledge. In science, no method, theory, or information, once validated, loses that validation merely due to time or the creation of another test or procedure. Once validated, a procedure is only disproved or replaced by means of new research. PMID:20146123
Element-by-element Solution Procedures for Nonlinear Structural Analysis
NASA Technical Reports Server (NTRS)
Hughes, T. J. R.; Winget, J. M.; Levit, I.
1984-01-01
Element-by-element approximate factorization procedures are proposed for solving the large finite element equation systems which arise in nonlinear structural mechanics. Architectural and data base advantages of the present algorithms over traditional direct elimination schemes are noted. Results of calculations suggest considerable potential for the methods described.
40 CFR 51.357 - Test procedures and standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... forth in 40 CFR 86.094-17(e)(1): “Control of Air Pollution From New Motor Vehicles and New Motor Vehicle... invalid test condition, unsafe conditions, fast pass/fail algorithms, or, in the case of the on-board... with the procedures contained in appendix B to this subpart. (8) Emission control device...
A method of automatic control procedures cardiopulmonary resuscitation
NASA Astrophysics Data System (ADS)
Bureev, A. Sh.; Zhdanov, D. S.; Kiseleva, E. Yu.; Kutsov, M. S.; Trifonov, A. Yu.
2015-11-01
The study is to present the results of works on creation of methods of automatic control procedures of cardiopulmonary resuscitation (CPR). A method of automatic control procedure of CPR by evaluating the acoustic data of the dynamics of blood flow in the bifurcation of carotid arteries and the dynamics of air flow in a trachea according to the current guidelines for CPR is presented. Evaluation of the patient is carried out by analyzing the respiratory noise and blood flow in the interspaces between the chest compressions and artificial pulmonary ventilation. The device operation algorithm of automatic control procedures of CPR and its block diagram has been developed.
Testing Intelligently Includes Double-Checking Wechsler IQ Scores
ERIC Educational Resources Information Center
Kuentzel, Jeffrey G.; Hetterscheidt, Lesley A.; Barnett, Douglas
2011-01-01
The rigors of standardized testing make for numerous opportunities for examiner error, including simple computational mistakes in scoring. Although experts recommend that test scoring be double-checked, the extent to which independent double-checking would reduce scoring errors is not known. A double-checking procedure was established at a…
Algorithm implementation on the Navier-Stokes computer
NASA Technical Reports Server (NTRS)
Krist, Steven E.; Zang, Thomas A.
1987-01-01
The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.
A generalized TRL algorithm for s-parameter de-embedding
Colestock, P.; Foley, M.
1993-04-01
At FNAL bench measurements of the longitudinal impedance of various beamline components have been performed using stretched wire methods. The basic approach is to use a network analyzer (NWA) to measure the transmission and reflection characteristics (s-parameters) of the beam line component. It is then possible to recover the effective longitudinal impedance from the s-parameters. Several NWA calibration procedures have been implemented in an effort to improve the accuracy of these measurements. These procedures are mathematical techniques for extracting the s-parameters of a test device from external NWA measurements which include the effect of measurement fixtures. The TRL algorithm has proven to be the most effective of these techniques. This method has the advantage of properly accounting for the nonideal calibration standards used in the NWA measurements.
CFAR detection algorithm for acoustic-seismic landmine detection
NASA Astrophysics Data System (ADS)
Matalkah, Ghaith M.; Matalgah, Mustafa M.; Sabatier, James M.
2007-04-01
Automating the detection process in acoustic-seismic landmine detection speeds up the detection process and eliminates the need for a human operator in the minefield. Previous automatic detection algorithms for acoustic landmine detection showed excellent results for detecting landmines in various environments. However, these algorithms use environment-specific noise-removal procedures that rely on training sets acquired over mine-free areas. In this work, we derive a new detection algorithm that adapts to varying conditions and employs environment-independent techniques. The algorithm is based on the generalized likelihood ratio (GLR) test and asymptotically achieves a constant false alarm rate (CFAR). The algorithm processes the magnitude and phase of the vibrational velocity and shows satisfying results of detecting landmines in gravel and dirt lanes.
Markov random-field-based anomaly screening algorithm
NASA Astrophysics Data System (ADS)
Bello, Martin G.
1995-06-01
A novel anomaly screening algorithm is described which makes use of a regression diagnostic associated with the fitting of Markov Random Field (MRF) models. This regression diagnostic quantifies the extent to which a given neighborhood of pixels is atypical, relative to local background characteristics. The screening algorithm consists first in the calculation of an MRF-based anomoly statistic values. Next, 'blob' features, such as pixel count and maximal pixel intensity are calculated, and ranked over the image, in order to 'filter' the blobs to some final subset of most likely candidates. Receiver operating characteristics obtained from applying the above described screening algorithm to the detection of minelike targets in high- and low-frequency side-scan sonar imagery are presented together with results obtained from other screening algorithms for comparison, demonstrating performance comparable to trained human operators. In addition, real-time implementation considerations associated with each algorithmic component of the described procedure are identified.
Simulating Laboratory Procedures.
ERIC Educational Resources Information Center
Baker, J. E.; And Others
1986-01-01
Describes the use of computer assisted instruction in a medical microbiology course. Presents examples of how computer assisted instruction can present case histories in which the laboratory procedures are simulated. Discusses an authoring system used to prepare computer simulations and provides one example of a case history dealing with fractured…
ERIC Educational Resources Information Center
Hester, Yvette
Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least square…
Advanced intrarenal ureteroscopic procedures.
Monga, Manoj; Beeman, William W
2004-02-01
The role of flexible ureteroscopy in the management of intrarenal pathology has undergone a dramatic evolution, powered by improvements in flexible ureteroscope design; deflection and image quality; diversification of small, disposable instrumentation; and the use of holmium laser lithotripsy. This article reviews the application of flexible ureteroscopy for advanced intrarenal procedures.
Visual Screening: A Procedure.
ERIC Educational Resources Information Center
Williams, Robert T.
Vision is a complex process involving three phases: physical (acuity), physiological (integrative), and psychological (perceptual). Although these phases cannot be considered discrete, they provide the basis for the visual screening procedure used by the Reading Services of Colorado State University and described in this document. Ten tests are…
Student Loan Collection Procedures.
ERIC Educational Resources Information Center
National Association of College and University Business Officers, Washington, DC.
This manual on the collection of student loans is intended for the use of business officers and loan collection personnel of colleges and universities of all sizes. The introductory chapter is an overview of sound collection practices and procedures. It discusses the making of a loan, in-school servicing of the accounts, the exit interview, the…
PLATO Courseware Development Procedures.
ERIC Educational Resources Information Center
Mahler, William A.; And Others
This is an exploratory study of methods for the preparation of computer curriculum materials. It deals with courseware development procedures for the PLATO IV computer-based education system, and draws on interviews with over 100 persons engaged in courseware production. The report presents a five stage model of development: (1) planning, (2)…
ERIC Educational Resources Information Center
Green, Gary J.
This paper presents two actual problems involving grievance procedures. Both problems involve pending litigation and one of them involves pending arbitration. The first problem occurred in a wealthy Minnesota school district and involved a seniority list. Because of changes in the financial basis for supporting public schools, it became necessary…
Educational Accounting Procedures.
ERIC Educational Resources Information Center
Tidwell, Sam B.
This chapter of "Principles of School Business Management" reviews the functions, procedures, and reports with which school business officials must be familiar in order to interpret and make decisions regarding the school district's financial position. Among the accounting functions discussed are financial management, internal auditing, annual…
ERIC Educational Resources Information Center
Dunst, Carl J.
2006-01-01
Procedures for using a decision algorithm for determining whether an infant or toddler is eligible for Part C early intervention is the focus of this eligibility determination practice guideline. An algorithm is a step-by-step problem-solving procedure or decision-making process that results in a solution or accurate decision in a finite number of…
28 CFR 65.84 - Procedures for the Attorney General when seeking State or local assistance.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., immigration law enforcement fundamentals and procedures, civil rights law, and sensitivity and cultural..., including applicable immigration law enforcement standards and procedures, civil rights law, and...
Five-dimensional Janis-Newman algorithm
NASA Astrophysics Data System (ADS)
Erbin, Harold; Heurtier, Lucien
2015-08-01
The Janis-Newman algorithm has been shown to be successful in finding new stationary solutions of four-dimensional gravity. Attempts for a generalization to higher dimensions have already been found for the restricted cases with only one angular momentum. In this paper we propose an extension of this algorithm to five-dimensions with two angular momenta—using the prescription of Giampieri—through two specific examples, that are the Myers-Perry and BMPV black holes. We also discuss possible enlargements of our prescriptions to other dimensions and maximal number of angular momenta, and show how dimensions higher than six appear to be much more challenging to treat within this framework. Nonetheless this general algorithm provides a unification of the formulation in d=3,4,5 of the Janis-Newman algorithm, from which several examples are exposed, including the BTZ black hole.
Aerodynamic Shape Optimization using an Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
Hoist, Terry L.; Pulliam, Thomas H.
2003-01-01
A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to access the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem-both single and two-objective variations is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings including multi-objective and multi-discipline solutions that lead to the generation of pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.
Aerodynamic Shape Optimization using an Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2003-01-01
A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to access the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings including multi-objective and multi-discipline solutions that lead to the generation of pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.
31 CFR 205.9 - What is included in a Treasury-State agreement?
Code of Federal Regulations, 2010 CFR
2010-07-01
... (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY FINANCIAL MANAGEMENT SERVICE RULES AND PROCEDURES FOR EFFICIENT FEDERAL-STATE FUNDS TRANSFERS Rules Applicable to Federal Assistance Programs Included in...
Licensing failure in the European decentralised procedure.
Langedijk, Joris; Ebbers, Hans C; Mantel-Teeuwisse, Aukje K; Kruger-Peters, Alexandra G; Leufkens, Hubert G M
2016-05-25
The majority of the licensing applications in the European Union are submitted via the decentralised procedure. Little is known about licensing failure (i.e. refusal or withdrawal of a marketing authorisation application) in the EU decentralised procedure compared to the EU centralised procedure and the approval procedure in the United States. The study aim was to determine the frequency of and determinants for licensing failure of marketing authorisation applications submitted via this procedure. We assessed procedures that failed between 2008 and 2012 with The Netherlands as leading authority and assessed the remaining major objections. In total 492 procedures were completed, of which 48 (9.8%) failed: 8 refused, 40 withdrawn. A wide variety of major objections was identified and included both quality (48 major objections) and clinical (45 major objections) issues. The low failure rate may be related to the regular interaction between competent authorities and applicants during the procedure. Some degree of licensing failure may be inevitable, as it may also be affected by the financial feasibility or willingness to resolve major objections, as well as other reasons to withdraw an application besides the raised major objections.
A Polynomial Time, Numerically Stable Integer Relation Algorithm
NASA Technical Reports Server (NTRS)
Ferguson, Helaman R. P.; Bailey, Daivd H.; Kutler, Paul (Technical Monitor)
1998-01-01
Let x = (x1, x2...,xn be a vector of real numbers. X is said to possess an integer relation if there exist integers a(sub i) not all zero such that a1x1 + a2x2 + ... a(sub n)Xn = 0. Beginning in 1977 several algorithms (with proofs) have been discovered to recover the a(sub i) given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in it. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties, that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times oil average than other algorithms currently in Use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
Scalable Nearest Neighbor Algorithms for High Dimensional Data.
Muja, Marius; Lowe, David G
2014-11-01
For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. PMID:26353063
Scalable Nearest Neighbor Algorithms for High Dimensional Data.
Muja, Marius; Lowe, David G
2014-11-01
For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
Operational algorithm development and refinement approaches
NASA Astrophysics Data System (ADS)
Ardanuy, Philip E.
2003-11-01
Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved spatial resolution multi-spectral and hyperspectral imagers and sounders. To achieve specified performances (e.g., measurement accuracy, precision, uncertainty, and stability), and best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data [e.g., the Earth Observing System (EOS)] Moderate-resolution Imaging Spectroradiometer (MODIS) and Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retreival Experiment (LORE). In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today"s generation of operational platforms provide (well-characterized, known, sensors and algorithms) with the greatly improved quality, opportunities, and risks, that the next generation of operational sensors and algorithms offer. By using the best practices and lessons learned from heritage/groundbreaking activities, it is possible to implement an agile process that enables change, while managing change. This approach combines a "known-risk" frozen baseline with preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance. This paper describes an objective, adaptive implementation roadmap that
Minimally invasive procedures for neuropathic pain.
Sdrulla, Andrei; Chen, Grace
2016-04-01
Neuropathic pain is "pain arising as a direct consequence of a lesion or disease affecting the somatosensory system". The prevalence of neuropathic pain ranges from 7 to 11% of the population and minimally invasive procedures have been used to both diagnose and treat neuropathic pain. Diagnostic procedures consist of nerve blocks aimed to isolate the peripheral nerve implicated, whereas therapeutic interventions either modify or destroy nerve function. Procedures that modify how nerves function include epidural steroid injections, peripheral nerve blocks and sympathetic nerve blocks. Neuroablative procedures include radiofrequency ablation, cryoanalgesia and neurectomies. Currently, neuromodulation with peripheral nerve stimulators and spinal cord stimulators are the most evidence-based treatments of neuropathic pain. PMID:26988024
Automated training for algorithms that learn from genomic data.
Cilingir, Gokcen; Broschat, Shira L
2015-01-01
Supervised machine learning algorithms are used by life scientists for a variety of objectives. Expert-curated public gene and protein databases are major resources for gathering data to train these algorithms. While these data resources are continuously updated, generally, these updates are not incorporated into published machine learning algorithms which thereby can become outdated soon after their introduction. In this paper, we propose a new model of operation for supervised machine learning algorithms that learn from genomic data. By defining these algorithms in a pipeline in which the training data gathering procedure and the learning process are automated, one can create a system that generates a classifier or predictor using information available from public resources. The proposed model is explained using three case studies on SignalP, MemLoci, and ApicoAP in which existing machine learning models are utilized in pipelines. Given that the vast majority of the procedures described for gathering training data can easily be automated, it is possible to transform valuable machine learning algorithms into self-evolving learners that benefit from the ever-changing data available for gene products and to develop new machine learning algorithms that are similarly capable.
On selecting a body surface mapping procedure.
Hoekema, R; Uijen, G J; van Oosterom, A
1999-04-01
Throughout the world, various procedures related to body surface mapping have evolved. The large differences in these procedures make multicenter studies difficult. This paper discusses the problems involved in selecting the number of leads, lead placement, and map format. Methods are highlighted that have been developed for pooling of the data as obtained by different centers. Recommendations are included to newcomers in the field. (The work stems from an international study, the Noninvasive Evaluation of the Myocardium, a study group sponsored by the European Commission, which has as one of its objectives the standardization of body surface mapping procedures.)
An improved algorithm for evaluating trellis phase codes
NASA Technical Reports Server (NTRS)
Mulligan, M. G.; Wilson, S. G.
1984-01-01
A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.
An improved algorithm for evaluating trellis phase codes
NASA Technical Reports Server (NTRS)
Mulligan, M. G.; Wilson, S. G.
1982-01-01
A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.
Evolutionary pattern search algorithms
Hart, W.E.
1995-09-19
This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolutionary strategie and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
Inverse wing design in transonic flow including viscous interaction
NASA Technical Reports Server (NTRS)
Carlson, Leland A.; Ratcliff, Robert R.; Gally, Thomas A.; Campbell, Richard L.
1989-01-01
Several inverse methods were compared and initial results indicate that differences in results are primarily due to coordinate systems and fuselage representations and not to design procedures. Further, results from a direct-inverse method that includes 3-D wing boundary layer effects, wake curvature, and wake displacement are represented. These results show that boundary layer displacements must be included in the design process for accurate results.
Nonequilibrium chemistry boundary layer integral matrix procedure
NASA Technical Reports Server (NTRS)
Tong, H.; Buckingham, A. C.; Morse, H. L.
1973-01-01
The development of an analytic procedure for the calculation of nonequilibrium boundary layer flows over surfaces of arbitrary catalycities is described. An existing equilibrium boundary layer integral matrix code was extended to include nonequilibrium chemistry while retaining all of the general boundary condition features built into the original code. For particular application to the pitch-plane of shuttle type vehicles, an approximate procedure was developed to estimate the nonequilibrium and nonisentropic state at the edge of the boundary layer.
Diagnostic Algorithm for Residual Pain After Total Knee Arthroplasty.
Park, Caroline N; White, Peter B; Meftah, Morteza; Ranawat, Amar S; Ranawat, Chitranjan S
2016-01-01
Although total knee arthroplasty is a successful and cost-effective procedure, patient dissatisfaction remains as high as 50%. Postoperative residual knee pain after total knee arthroplasty, with or without crepitation, is a major factor that contributes to patient dissatisfaction. The most common location for residual pain after total knee arthroplasty is anteriorly. Because residual pain has been associated with an un-resurfaced patella, this review includes only registry data and total knee arthroplasty with patella replacement. Some suggest that the pathogenesis of residual knee pain may be related to mechanical stimuli that activate free nerve endings around the patellofemoral joint. Various etiologies have been implicated in residual pain, including (1) low-grade infection, (2) midflexion instability, and (3) component malalignment with patellar maltracking. Less common causes include (4) crepitation and patellar clunk syndrome; (5) patellofemoral symptoms, including overstuffing and avascular necrosis of the patella; (6) early aseptic loosening; (7) hypersensitivity to metal or cement; (8) complex regional pain syndrome; and (9) pseudoaneurysm. Because all of these conditions can lead to residual pain, identifying the etiology can be a difficult diagnostic challenge. Often, patients with persistent pain and normal findings on radiographs and laboratory workup may benefit from a diagnostic injection or further imaging. However, up to 10% to 15% of patients with residual pain may have unexplained pain. This literature review summarizes the findings on the causes of residual pain and presents a diagnostic algorithm to facilitate an accurate diagnosis for residual pain after total knee arthroplasty. PMID:26811953
Confidence intervals for expected moments algorithm flood quantile estimates
Cohn, T.A.; Lane, W.L.; Stedinger, J.R.
2001-01-01
Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient "weighting" procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed-form method has been available for quantifying the uncertainty of EMA-based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood-quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25- to 100-year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.
25 CFR 168.17 - Concurrence procedures.
Code of Federal Regulations, 2014 CFR
2014-04-01
... activities is a program consisting of a series of range management acts, including but not limited to... condition of the range land to deteriorate. (5) Conservation practice is a program consisting of a series of... consisting of a series of range management acts, including but not limited to procedures by which...
NASA Astrophysics Data System (ADS)
Melchiorre, C.; Tryggvason, A.
2015-12-01
We refine and test an algorithm for landslide susceptibility assessment in areas with sensitive clays. The algorithm uses soil data and digital elevation models to identify areas which may be prone to landslides and has been applied in Sweden for several years. The algorithm is very computationally efficient and includes an intelligent filtering procedure for identifying and removing small-scale artifacts in the hazard maps produced. Where information on bedrock depth is available, this can be included in the analysis, as can information on several soil-type-based cross-sectional angle thresholds for slip. We evaluate how processing choices such as of filtering parameters, local cross-sectional angle thresholds, and inclusion of bedrock depth information affect model performance. The specific cross-sectional angle thresholds used were derived by analyzing the relationship between landslide scarps and the quick-clay susceptibility index (QCSI). We tested the algorithm in the Göta River valley. Several different verification measures were used to compare results with observed landslides and thereby identify the optimal algorithm parameters. Our results show that even though a relationship between the cross-sectional angle threshold and the QCSI could be established, no significant improvement of the overall modeling performance could be achieved by using these geographically specific, soil-based thresholds. Our results indicate that lowering the cross-sectional angle threshold from 1 : 10 (the general value used in Sweden) to 1 : 13 improves results slightly. We also show that an application of the automatic filtering procedure that removes areas initially classified as prone to landslides not only removes artifacts and makes the maps visually more appealing, but it also improves the model performance.
Optical rate sensor algorithms
NASA Astrophysics Data System (ADS)
Uhde-Lacovara, Jo A.
1989-12-01
Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was with 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.
Optical rate sensor algorithms
NASA Technical Reports Server (NTRS)
Uhde-Lacovara, Jo A.
1989-01-01
Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was with 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.
Proactive Alleviation Procedure to Handle Black Hole Attack and Its Version
Babu, M. Rajesh; Dian, S. Moses; Chelladurai, Siva; Palaniappan, Mathiyalagan
2015-01-01
The world is moving towards a new realm of computing such as Internet of Things. The Internet of Things, however, envisions connecting almost all objects within the world to the Internet by recognizing them as smart objects. In doing so, the existing networks which include wired, wireless, and ad hoc networks should be utilized. Moreover, apart from other networks, the ad hoc network is full of security challenges. For instance, the MANET (mobile ad hoc network) is susceptible to various attacks in which the black hole attacks and its versions do serious damage to the entire MANET infrastructure. The severity of this attack increases, when the compromised MANET nodes work in cooperation with each other to make a cooperative black hole attack. Therefore this paper proposes an alleviation procedure which consists of timely mandate procedure, hole detection algorithm, and sensitive guard procedure to detect the maliciously behaving nodes. It has been observed that the proposed procedure is cost-effective and ensures QoS guarantee by assuring resource availability thus making the MANET appropriate for Internet of Things. PMID:26495430
Proactive Alleviation Procedure to Handle Black Hole Attack and Its Version.
Babu, M Rajesh; Dian, S Moses; Chelladurai, Siva; Palaniappan, Mathiyalagan
2015-01-01
The world is moving towards a new realm of computing such as Internet of Things. The Internet of Things, however, envisions connecting almost all objects within the world to the Internet by recognizing them as smart objects. In doing so, the existing networks which include wired, wireless, and ad hoc networks should be utilized. Moreover, apart from other networks, the ad hoc network is full of security challenges. For instance, the MANET (mobile ad hoc network) is susceptible to various attacks in which the black hole attacks and its versions do serious damage to the entire MANET infrastructure. The severity of this attack increases, when the compromised MANET nodes work in cooperation with each other to make a cooperative black hole attack. Therefore this paper proposes an alleviation procedure which consists of timely mandate procedure, hole detection algorithm, and sensitive guard procedure to detect the maliciously behaving nodes. It has been observed that the proposed procedure is cost-effective and ensures QoS guarantee by assuring resource availability thus making the MANET appropriate for Internet of Things. PMID:26495430
Proactive Alleviation Procedure to Handle Black Hole Attack and Its Version.
Babu, M Rajesh; Dian, S Moses; Chelladurai, Siva; Palaniappan, Mathiyalagan
2015-01-01
The world is moving towards a new realm of computing such as Internet of Things. The Internet of Things, however, envisions connecting almost all objects within the world to the Internet by recognizing them as smart objects. In doing so, the existing networks which include wired, wireless, and ad hoc networks should be utilized. Moreover, apart from other networks, the ad hoc network is full of security challenges. For instance, the MANET (mobile ad hoc network) is susceptible to various attacks in which the black hole attacks and its versions do serious damage to the entire MANET infrastructure. The severity of this attack increases, when the compromised MANET nodes work in cooperation with each other to make a cooperative black hole attack. Therefore this paper proposes an alleviation procedure which consists of timely mandate procedure, hole detection algorithm, and sensitive guard procedure to detect the maliciously behaving nodes. It has been observed that the proposed procedure is cost-effective and ensures QoS guarantee by assuring resource availability thus making the MANET appropriate for Internet of Things.
Factory Acceptance Test Procedure Westinghouse 100 ton Hydraulic Trailer
Aftanas, B.L.
1994-11-16
This Factory Acceptance Test Procedure (FAT) is for the Westinghouse 100 Ton Hydraulic Trailer. The trailer will be used for the removal of the 101-SY pump. This procedure includes: safety check and safety procedures; pre-operation check out; startup; leveling trailer; functional/proofload test; proofload testing; and rolling load test.
10 CFR 34.45 - Operating and emergency procedures.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 1 2010-01-01 2010-01-01 false Operating and emergency procedures. 34.45 Section 34.45... REQUIREMENTS FOR INDUSTRIAL RADIOGRAPHIC OPERATIONS Radiation Safety Requirements § 34.45 Operating and emergency procedures. (a) Operating and emergency procedures must include, as a minimum, instructions in...
A Stepwise Canonical Procedure and the Shrinkage of Canonical Correlations.
ERIC Educational Resources Information Center
Rim, Eui-Do
A stepwise canonical procedure, including two selection indices for variable deletion and a rule for stopping the iterative procedure, was derived as a method of selecting core variables from predictors and criteria. The procedure was applied to simulated data varying in the degree of built in structures in population correlation matrices, number…
Long-length contaminated equipment burial containers fabrication process procedures
McCormick, W.A., Fluor Daniel Hanford
1997-03-11
These special process procedures cover the detailed step-by-step procedures required by the supplier who will manufacture the Long-Length Contaminated Equipment (LLCE) Burial Container design. Also included are detailed step-by-step procedures required by the disposal process for completion of the LLCE Burial Containers at Hanford.
34 CFR 303.208 - Public participation policies and procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 34 Education 2 2013-07-01 2013-07-01 false Public participation policies and procedures. 303.208... Public participation policies and procedures. (a) Application. At least 60 days prior to being submitted..., the lead agency— (1) Holds public hearings on the new policy or procedure (including any revision...
34 CFR 303.208 - Public participation policies and procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 34 Education 2 2012-07-01 2012-07-01 false Public participation policies and procedures. 303.208... Public participation policies and procedures. (a) Application. At least 60 days prior to being submitted..., the lead agency— (1) Holds public hearings on the new policy or procedure (including any revision...
34 CFR 303.208 - Public participation policies and procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 34 Education 2 2014-07-01 2013-07-01 true Public participation policies and procedures. 303.208... Public participation policies and procedures. (a) Application. At least 60 days prior to being submitted..., the lead agency— (1) Holds public hearings on the new policy or procedure (including any revision...
New Effective Multithreaded Matching Algorithms
Manne, Fredrik; Halappanavar, Mahantesh
2014-05-19
Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of less quality. We present a new simple 1 2 -approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1 2 -approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
Paradigms for Realizing Machine Learning Algorithms.
Agneeswaran, Vijay Srinivas; Tonpay, Pranay; Tiwary, Jayati
2013-12-01
The article explains the three generations of machine learning algorithms-with all three trying to operate on big data. The first generation tools are SAS, SPSS, etc., while second generation realizations include Mahout and RapidMiner (that work over Hadoop), and the third generation paradigms include Spark and GraphLab, among others. The essence of the article is that for a number of machine learning algorithms, it is important to look beyond the Hadoop's Map-Reduce paradigm in order to make them work on big data. A number of promising contenders have emerged in the third generation that can be exploited to realize deep analytics on big data.
A fast portable implementation of the Secure Hash Algorithm, III.
McCurley, Kevin S.
1992-10-01
In 1992, NIST announced a proposed standard for a collision-free hash function. The algorithm for producing the hash value is known as the Secure Hash Algorithm (SHA), and the standard using the algorithm in known as the Secure Hash Standard (SHS). Later, an announcement was made that a scientist at NSA had discovered a weakness in the original algorithm. A revision to this standard was then announced as FIPS 180-1, and includes a slight change to the algorithm that eliminates the weakness. This new algorithm is called SHA-1. In this report we describe a portable and efficient implementation of SHA-1 in the C language. Performance information is given, as well as tips for porting the code to other architectures. We conclude with some observations on the efficiency of the algorithm, and a discussion of how the efficiency of SHA might be improved.
CARVE--a constructive algorithm for real-valued examples.
Young, S; Downs, T
1998-01-01
A constructive neural-network algorithm is presented. For any consistent classification task on real-valued training vectors, the algorithm constructs a feedforward network with a single hidden layer of threshold units which implements the task. The algorithm, which we call CARVE, extends the "sequential learning" algorithm of Marchand et al. from Boolean inputs to the real-valued input case, and uses convex hull methods for the determination of the network weights. The algorithm is an efficient training scheme for producing near-minimal network solutions for arbitrary classification tasks. The algorithm is applied to a number of benchmark problems including Gorman and Sejnowski's sonar data, the Monks problems and Fisher's iris data. A significant application of the constructive algorithm is in providing an initial network topology and initial weights for other neural-network training schemes and this is demonstrated by application to backpropagation.
Kim, S.
1994-12-31
Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods, in a Lagrangian framework, an efficient way for choosing the algorithm parameter as well as the algorithm convergence are indicated. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.
Standards of neurosurgical procedures.
Steiger, H J
2001-01-01
Written specifications with regard to procedures performed, equipment used, and training of the involved personnel are widely used in the industry and aviation to guarantee constant quality. Similar systems are progressively being introduced to medicine. We have made an effort to standardize surgical procedures by introducing step-by-step guidelines and checklists. The current experience shows that a system of written standards is applicable to neurosurgery and that the use of checklists contributes to the prevention of forgetting essential details. Written standards and checklists are also a useful training tool within a university hospital and facilitate communication of essentials to the residents. Comparison with aviation suggests that standardization leads to a remarkable but nonetheless limited reduction of adverse incidents. PMID:11840739
Automatic design of decision-tree algorithms with evolutionary algorithms.
Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A
2013-01-01
This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.
Resuscitation algorithm for management of acute emergencies.
Shoemaker, W C; Hopkins, J A; Greenfield, S; Chang, P C; Umof, P; Shabot, M M; Spenler, C W; State, D
1978-10-01
Assuming that unrecognized or inadequately corrected hypovolemia results in higher mortality and morbidity rates, we developed a systematic approach to resuscitation that would: 1) identify criteria to aid in the recognition of hypovolemia and ensure the expeditious correction of this defect without interfering with diagnostic workup and management; 2) define criteria to prevent fluid overload which may jeopardize the patient's course, and 3) express these criteria in an explicit, systematic, patient care algorithm, ie, protocol, useful to both the resident and the practicing physician. We are now conducting prospective clinical trials with one service using the algorithm and the others acting as the control group. Preliminary results comparing patient outcomes suggest that the algorithm improves patient care by shortening resuscitation time and results in fewer hospital days, intensive care unit days, febrile days, and days on mechanical ventilation as well as reduced mortality. The algorithm provides a systematic plan to organize patient care so that the most urgently needed procedures are not delayed or overlooked.
An investigation of messy genetic algorithms
NASA Technical Reports Server (NTRS)
Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley
1990-01-01
Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings or artificial chromosomes and populations with the selective and juxtapositional power of reproduction and recombination to motivate a surprisingly powerful search heuristic in many problems. Despite their empirical success, there has been a long standing objection to the use of GAs in arbitrarily difficult problems. A new approach was launched. Results to a 30-bit, order-three-deception problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGAs). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to effect a solution to the fixed-coding problem of standard simple GAs. The results of the study of mGAs in problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.
Partially linearized algorithms in gyrokinetic particle simulation
Dimits, A.M.; Lee, W.W.
1990-10-01
In this paper, particle simulation algorithms with time-varying weights for the gyrokinetic Vlasov-Poisson system have been developed. The primary purpose is to use them for the removal of the selected nonlinearities in the simulation of gradient-driven microturbulence so that the relative importance of the various nonlinear effects can be assessed. It is hoped that the use of these procedures will result in a better understanding of the transport mechanisms and scaling in tokamaks. Another application of these algorithms is for the improvement of the numerical properties of the simulation plasma. For instance, implementations of such algorithms (1) enable us to suppress the intrinsic numerical noise in the simulation, and (2) also make it possible to regulate the weights of the fast-moving particles and, in turn, to eliminate the associated high frequency oscillations. Examples of their application to drift-type instabilities in slab geometry are given. We note that the work reported here represents the first successful use of the weighted algorithms in particle codes for the nonlinear simulation of plasmas.
An algorithm for linearizing convex extremal problems
Gorskaya, Elena S
2010-06-09
This paper suggests a method of approximating the solution of minimization problems for convex functions of several variables under convex constraints is suggested. The main idea of this approach is the approximation of a convex function by a piecewise linear function, which results in replacing the problem of convex programming by a linear programming problem. To carry out such an approximation, the epigraph of a convex function is approximated by the projection of a polytope of greater dimension. In the first part of the paper, the problem is considered for functions of one variable. In this case, an algorithm for approximating the epigraph of a convex function by a polygon is presented, it is shown that this algorithm is optimal with respect to the number of vertices of the polygon, and exact bounds for this number are obtained. After this, using an induction procedure, the algorithm is generalized to certain classes of functions of several variables. Applying the suggested method, polynomial algorithms for an approximate calculation of the L{sub p}-norm of a matrix and of the minimum of the entropy function on a polytope are obtained. Bibliography: 19 titles.
An algorithm for linearizing convex extremal problems
NASA Astrophysics Data System (ADS)
Gorskaya, Elena S.
2010-06-01
This paper suggests a method of approximating the solution of minimization problems for convex functions of several variables under convex constraints is suggested. The main idea of this approach is the approximation of a convex function by a piecewise linear function, which results in replacing the problem of convex programming by a linear programming problem. To carry out such an approximation, the epigraph of a convex function is approximated by the projection of a polytope of greater dimension. In the first part of the paper, the problem is considered for functions of one variable. In this case, an algorithm for approximating the epigraph of a convex function by a polygon is presented, it is shown that this algorithm is optimal with respect to the number of vertices of the polygon, and exact bounds for this number are obtained. After this, using an induction procedure, the algorithm is generalized to certain classes of functions of several variables. Applying the suggested method, polynomial algorithms for an approximate calculation of the L_p-norm of a matrix and of the minimum of the entropy function on a polytope are obtained. Bibliography: 19 titles.
Practical procedures: oxygen therapy.
Olive, Sandra
Knowing when to start patients on oxygen therapy can save lives, but ongoing assessment and evaluation must be carried out to ensure the treatment is safe and effective. This article outlines when oxygen therapy should be used and the procedures to follow. It also describes the delivery methods applicable to different patient groups, along with the appropriate target saturation ranges, and details relevant nurse competencies.
The Superintendent and Grievance Procedures.
ERIC Educational Resources Information Center
Kleinmann, Jack H.
Grievance adjustment between teachers and administrators is viewed as a misunderstood process. The problem is treated under four main headings: (1) Purposes and characteristics of an effective grievance procedure, (2) status of grievance procedures in education, (3) relationship of grievance procedures to professional negotiation procedures, and…
Painting with polygons: a procedural watercolor engine.
DiVerdi, Stephen; Krishnaswamy, Aravind; Měch, Radomír; Ito, Daichi
2013-05-01
Existing natural media painting simulations have produced high-quality results, but have required powerful compute hardware and have been limited to screen resolutions. Digital artists would like to be able to use watercolor-like painting tools, but at print resolutions and on lower end hardware such as laptops or even slates. We present a procedural algorithm for generating watercolor-like dynamic paint behaviors in a lightweight manner. Our goal is not to exactly duplicate watercolor painting, but to create a range of dynamic behaviors that allow users to achieve a similar style of process and result, while at the same time having a unique character of its own. Our stroke representation is vector based, allowing for rendering at arbitrary resolutions, and our procedural pigment advection algorithm is fast enough to support painting on slate devices. We demonstrate our technique in a commercially available slate application used by professional artists. Finally, we present a detailed analysis of the different vector-rendering technologies available.
Painting with polygons: a procedural watercolor engine.
DiVerdi, Stephen; Krishnaswamy, Aravind; Měch, Radomír; Ito, Daichi
2013-05-01
Existing natural media painting simulations have produced high-quality results, but have required powerful compute hardware and have been limited to screen resolutions. Digital artists would like to be able to use watercolor-like painting tools, but at print resolutions and on lower end hardware such as laptops or even slates. We present a procedural algorithm for generating watercolor-like dynamic paint behaviors in a lightweight manner. Our goal is not to exactly duplicate watercolor painting, but to create a range of dynamic behaviors that allow users to achieve a similar style of process and result, while at the same time having a unique character of its own. Our stroke representation is vector based, allowing for rendering at arbitrary resolutions, and our procedural pigment advection algorithm is fast enough to support painting on slate devices. We demonstrate our technique in a commercially available slate application used by professional artists. Finally, we present a detailed analysis of the different vector-rendering technologies available. PMID:23492376
Improving DTI tractography by including diagonal tract propagation.
Taylor, Paul A; Cho, Kuan-Hung; Lin, Ching-Po; Biswal, Bharat B
2012-01-01
Tractography algorithms have been developed to reconstruct likely WM pathways in the brain from diffusion tensor imaging (DTI) data. In this study, an elegant and simple means for improving existing tractography algorithms is proposed by allowing tracts to propagate through diagonal trajectories between voxels, instead of only rectilinearly to their facewise neighbors. A series of tests (using both real and simulated data sets) are utilized to show several benefits of this new approach. First, the inclusion of diagonal tract propagation decreases the dependence of an algorithm on the arbitrary orientation of coordinate axes and therefore reduces numerical errors associated with that bias (which are also demonstrated here). Moreover, both quantitatively and qualitatively, including diagonals decreases overall noise sensitivity of results and leads to significantly greater efficiency in scanning protocols; that is, the obtained tracts converge much more quickly (i.e., in a smaller amount of scanning time) to those of data sets with high SNR and spatial resolution. Importantly, the inclusion of diagonal propagation adds essentially no appreciable time of calculation or computational costs to standard methods. This study focuses on the widely-used streamline tracking method, FACT (fiber assessment by continuous tracking), and the modified method is termed "FACTID" (FACT including diagonals). PMID:22970125
Genetic algorithms in adaptive fuzzy control
NASA Technical Reports Server (NTRS)
Karr, C. Lucas; Harper, Tony R.
1992-01-01
Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust fuzzy membership functions in response to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific computer-simulated chemical system is used to demonstrate the ideas presented.
NASA Astrophysics Data System (ADS)
Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.
2015-09-01
We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.
Saleh, Marwan D; Eswaran, C; Mueen, Ahmed
2011-08-01
This paper focuses on the detection of retinal blood vessels which play a vital role in reducing the proliferative diabetic retinopathy and for preventing the loss of visual capability. The proposed algorithm which takes advantage of the powerful preprocessing techniques such as the contrast enhancement and thresholding offers an automated segmentation procedure for retinal blood vessels. To evaluate the performance of the new algorithm, experiments are conducted on 40 images collected from DRIVE database. The results show that the proposed algorithm performs better than the other known algorithms in terms of accuracy. Furthermore, the proposed algorithm being simple and easy to implement, is best suited for fast processing applications.
The Applications of Genetic Algorithms in Medicine.
Ghaheri, Ali; Shoar, Saeed; Naderan, Mohammad; Hoseini, Sayed Shahabuddin
2015-11-01
A great wealth of information is hidden amid medical research data that in some cases cannot be easily analyzed, if at all, using classical statistical methods. Inspired by nature, metaheuristic algorithms have been developed to offer optimal or near-optimal solutions to complex data analysis and decision-making tasks in a reasonable time. Due to their powerful features, metaheuristic algorithms have frequently been used in other fields of sciences. In medicine, however, the use of these algorithms are not known by physicians who may well benefit by applying them to solve complex medical problems. Therefore, in this paper, we introduce the genetic algorithm and its applications in medicine. The use of the genetic algorithm has promising implications in various medical specialties including radiology, radiotherapy, oncology, pediatrics, cardiology, endocrinology, surgery, obstetrics and gynecology, pulmonology, infectious diseases, orthopedics, rehabilitation medicine, neurology, pharmacotherapy, and health care management. This review introduces the applications of the genetic algorithm in disease screening, diagnosis, treatment planning, pharmacovigilance, prognosis, and health care management, and enables physicians to envision possible applications of this metaheuristic method in their medical career.]. PMID:26676060
The Applications of Genetic Algorithms in Medicine
Ghaheri, Ali; Shoar, Saeed; Naderan, Mohammad; Hoseini, Sayed Shahabuddin
2015-01-01
A great wealth of information is hidden amid medical research data that in some cases cannot be easily analyzed, if at all, using classical statistical methods. Inspired by nature, metaheuristic algorithms have been developed to offer optimal or near-optimal solutions to complex data analysis and decision-making tasks in a reasonable time. Due to their powerful features, metaheuristic algorithms have frequently been used in other fields of sciences. In medicine, however, the use of these algorithms are not known by physicians who may well benefit by applying them to solve complex medical problems. Therefore, in this paper, we introduce the genetic algorithm and its applications in medicine. The use of the genetic algorithm has promising implications in various medical specialties including radiology, radiotherapy, oncology, pediatrics, cardiology, endocrinology, surgery, obstetrics and gynecology, pulmonology, infectious diseases, orthopedics, rehabilitation medicine, neurology, pharmacotherapy, and health care management. This review introduces the applications of the genetic algorithm in disease screening, diagnosis, treatment planning, pharmacovigilance, prognosis, and health care management, and enables physicians to envision possible applications of this metaheuristic method in their medical career.] PMID:26676060
Comprehensive eye evaluation algorithm
NASA Astrophysics Data System (ADS)
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye disease. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
Quantum gate decomposition algorithms.
Slepoy, Alexander
2006-07-01
Quantum computing algorithms can be conveniently expressed in a format of a quantum logical circuits. Such circuits consist of sequential coupled operations, termed ''quantum gates'', or quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general ''quantum gates'' operating on an qubits, as composed of a sequence of generic elementary ''gates''.
2005-03-30
The Robotic Follow Algorithm enables allows any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and can be used with thermal or visual tracking as well as other tracking methods such as radio frequency tags.
Data Structures and Algorithms.
ERIC Educational Resources Information Center
Wirth, Niklaus
1984-01-01
Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)
ERIC Educational Resources Information Center
Drake, Michael
2011-01-01
One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…
Sampling Within k-Means Algorithm to Cluster Large Datasets
Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George
2011-08-01
Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study both on more varied test datasets as well as on real weather datasets. This is especially important considering that this preliminary study was performed on rather tame datasets. Also, these datasets should analyze the performance of the algorithm on varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. We could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimension, as the data becomes more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while becoming remarkably more efficient with time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
Computational algorithms to predict Gene Ontology annotations
2015-01-01
Background Gene function annotations, which are associations between a gene and a term of a controlled vocabulary describing gene functional features, are of paramount importance in modern biology. Datasets of these annotations, such as the ones provided by the Gene Ontology Consortium, are used to design novel biological experiments and interpret their results. Despite their importance, these sources of information have some known issues. They are incomplete, since biological knowledge is far from being definitive and it rapidly evolves, and some erroneous annotations may be present. Since the curation process of novel annotations is a costly procedure, both in economical and time terms, computational tools that can reliably predict likely annotations, and thus quicken the discovery of new gene annotations, are very useful. Methods We used a set of computational algorithms and weighting schemes to infer novel gene annotations from a set of known ones. We used the latent semantic analysis approach, implementing two popular algorithms (Latent Semantic Indexing and Probabilistic Latent Semantic Analysis) and propose a novel method, the Semantic IMproved Latent Semantic Analysis, which adds a clustering step on the set of considered genes. Furthermore, we propose the improvement of these algorithms by weighting the annotations in the input set. Results We tested our methods and their weighted variants on the Gene Ontology annotation sets of three model organism genes (Bos taurus, Danio rerio and Drosophila melanogaster ). The methods showed their ability in predicting novel gene annotations and the weighting procedures demonstrated to lead to a valuable improvement, although the obtained results vary according to the dimension of the input annotation set and the considered algorithm. Conclusions Out of the three considered methods, the Semantic IMproved Latent Semantic Analysis is the one that provides better results. In particular, when coupled with a proper
Fractal Landscape Algorithms for Environmental Simulations
NASA Astrophysics Data System (ADS)
Mao, H.; Moran, S.
2014-12-01
Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and diamond square algorithms, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond square algorithm, one can control the shape of landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low lying area due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth), and simulate planetary landscapes. Hence, it can be used as a tool to assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach in analyzing our world. The random algorithms used in terrain generation not only contribute to the generating the terrains themselves, but are also capable of simulating weather patterns.
ERIC Educational Resources Information Center
Arsenault, Cathy; Lemoyne, Gisele
2000-01-01
Analyzes a didactical sequence for the teaching of addition and subtraction procedures and algorithms. Uses didactical procedures by children in problem solving activities in order to gain a better understanding of the interaction between numbers, numeration, and operations knowledge which are involved in the construction of addition and…
Learning interpretive decision algorithm for severe storm forecasting support
Gaffney, J.E. Jr.; Racer, I.R.
1983-01-01
As part of its ongoing program to develop new and better forecasting procedures and techniques, the National Weather Service has initiated an effort in interpretive processing. Investigation has begun to determine the applicability of artificial intelligence (AI)/expert system technology to interpretive processing. This paper presents an expert system algorithm that is being investigated to support the forecasting of severe thunderstorms. 14 references.
Chain Gang: A Framegame for Teaching Algorithms and Heuristics.
ERIC Educational Resources Information Center
Thiagarajan, Sivasailam; Pasigna, Aida L.
1985-01-01
Describes basic structure of a framegame, Chain Gang, in which self-instructional modules teach a cognitive skill. Procedures are presented for loading new content into the game's basic framework to teach algorithms or heuristics and for game modification to suit different situations. Handouts used in the basic game are appended. (MBR)
Element-by-element factorization algorithms for heat conduction
NASA Technical Reports Server (NTRS)
Hughes, T. J. R.; Winget, J. M.; Park, K. C.
1983-01-01
Element-by-element solution strategies are developed for transient heat conduction problems. Results of numerical tests indicate the effectiveness of the procedures proposed. The small database requirements and attractive architectural features of the algorithms suggest considerable potential for solving large scale problems.
Phase correction algorithms for a snapshot hyperspectral imaging system
NASA Astrophysics Data System (ADS)
Chan, Victoria C.; Kudenov, Michael; Dereniak, Eustace
2015-09-01
We present image processing algorithms that improve spatial and spectral resolution on the Snapshot Hyperspectral Imaging Fourier Transform (SHIFT) spectrometer. Final measurements are stored in the form of threedimensional datacubes containing the scene's spatial and spectral information. We discuss calibration procedures, review post-processing methods, and present preliminary results from proof-of-concept experiments.
The evaluation of the OSGLR algorithm for restructurable controls
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.
1986-01-01
The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques and the OSGLR algorithm in particular is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implentation of the algorithm was briefly investigated.
Algorithms for reactions of nonholonomic constraints and servo-constraints
NASA Astrophysics Data System (ADS)
Slawianowski, J. J.
Various procedures for deriving equations of motion of constrained mechanical systems are discussed and compared. A geometric interpretation of the procedures is given, stressing both linear and nonlinear nonholonomic constraints. Certain qualitative differences are analyzed between models of nonholonomic dynamics based on different procedures. Two algorithms of particular interest are: (1) the d'Alembert principle and its Appell-Tshetajev generalization, and (2) the variational Hamiltonian principle with subsidiary conditions. It is argued that the Hamiltonian principle, although not accepted in traditional technical applications, is more promising in generalizations concerning systems with higher differential constraints, or the more general functional constraints appearing in feedback and control systems.
Schwarz-Based Algorithms for Compressible Flows
NASA Technical Reports Server (NTRS)
Tidriri, M. D.
1996-01-01
We investigate in this paper the application of Schwarz-based algorithms to compressible flows. First we study the combination of these methods with defect-correction procedures. We then study the effect on the Schwarz-based methods of replacing the explicit treatment of the boundary conditions by an implicit one. In the last part of this paper we study the combination of these methods with Newton-Krylov matrix-free methods. Numerical experiments that show the performance of our approaches are then presented.
Procedural Learning and Individual Differences in Language
Lee, Joanna C.; Tomblin, J. Bruce
2014-01-01
The aim of the current study was to examine different aspects of procedural memory in young adults who varied with regard to their language abilities. We selected a sample of procedural memory tasks, each of which represented a unique type of procedural learning, and has been linked, at least partially, to the functionality of the corticostriatal system. The findings showed that variance in language abilities is associated with performance on different domains of procedural memory, including the motor domain (as shown in the pursuit rotor task), the cognitive domain (as shown in the weather prediction task), and the linguistic domain (as shown in the nonword repetition priming task). These results implicate the corticostriatal system in individual differences in language. PMID:26190949
Transonic Wing Shape Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2002-01-01
A method for aerodynamic shape optimization based on a genetic algorithm approach is demonstrated. The algorithm is coupled with a transonic full potential flow solver and is used to optimize the flow about transonic wings including multi-objective solutions that lead to the generation of pareto fronts. The results indicate that the genetic algorithm is easy to implement, flexible in application and extremely reliable.
Pena-Cristóbal, Maite; Otero-Rey, Eva-María; Tomás, Inmaculada; Blanco-Carrión, Andrés
2016-01-01
Objectives To determine the diagnostic value of diascopy and other non-invasive clinical aids on recent differential diagnosis algorithms of oral mucosal pigmentations affecting subjects of any age. Material and Methods Data Sources: this systematic review was conducted by searching PubMed, Scopus, Dentistry & Oral Sciences Source and the Cochrane Library (2000-2015); Study Selection: two reviewers independently selected all types of English articles describing differential diagnosis algorithms of oral pigmentations and checked the references of finally included papers; Data Extraction: one reviewer performed the data extraction and quality assessment based on previously defined fields while the other reviewer checked their validity. Results Data Synthesis: eight narrative reviews and one single case report met the inclusion criteria. Diascopy was used on six algorithms (66.67%) and X-ray was included once (11.11%; 44.44% with text mentions); these were considered helpful tools in the diagnosis of intravascular and exogenous pigmentations, respectively. Surface rubbing was described once in the text (11.11%). Conclusions Diascopy was the most applied method followed by X-ray and surface rubbing. The limited scope of these procedures only makes them useful when a positive result is obtained, turning biopsy into the most recommended technique when diagnosis cannot be established on clinical grounds alone. Key words:Algorithm, differential diagnosis, flow chart, oral mucosa, oral pigmentation, systematic review. PMID:27703615
Genetic Algorithms and Local Search
NASA Technical Reports Server (NTRS)
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
Klaphake, Eric
2006-05-01
Rodents are commonly owned exotic animal pets that may be seen by veterinary practitioners. Although most owners presenting their animals do care about their pets, they may not be aware of the diagnostic possibilities and challenges that can be offered by rodents to the veterinarian. Understanding clinical anatomy, proper hand-ling technique, realistic management of emergency presentations,correct and feasible diagnostic sampling, anesthesia, and humane euthanasia procedures is important to enhancing the doctor-client-patient relationship, especially when financial constraints may be imposed by the owner. PMID:16759953
Surface cleanliness measurement procedure
Schroder, Mark Stewart; Woodmansee, Donald Ernest; Beadie, Douglas Frank
2002-01-01
A procedure and tools for quantifying surface cleanliness are described. Cleanliness of a target surface is quantified by wiping a prescribed area of the surface with a flexible, bright white cloth swatch, preferably mounted on a special tool. The cloth picks up a substantial amount of any particulate surface contamination. The amount of contamination is determined by measuring the reflectivity loss of the cloth before and after wiping on the contaminated system and comparing that loss to a previous calibration with similar contamination. In the alternative, a visual comparison of the contaminated cloth to a contamination key provides an indication of the surface cleanliness.
Radiometric correction procedure study
NASA Technical Reports Server (NTRS)
Colby, C.; Sands, R.; Murphrey, S.
1978-01-01
A comparison of MSS radiometric processing techniques identified as a preferred radiometric processing technique a procedure which equalizes the mean and standard deviation of detector-specific histograms of uncalibrated scene data. Evaluation of MSS calibration data demonstrated that the relationship between detector responses is essentially linear over the range of intensities typically observed in MSS data, and that the calibration wedge data possess a high degree of temporal stability. An analysis of the preferred radiometric processing technique showed that it could be incorporated into the MDP-MSS system without a major redesign of the system, and with minimal impact on system throughput.
CELT optics Alignment Procedure
NASA Astrophysics Data System (ADS)
Mast, Terry S.; Nelson, Jerry E.; Chanan, Gary A.; Noethe, Lothar
2003-01-01
The California Extremely Large Telescope (CELT) is a project to build a 30-meter diameter telescope for research in astronomy at visible and infrared wavelengths. The current optical design calls for a primary, secondary, and tertiary mirror with Ritchey-Chretién foci at two Nasmyth platforms. The primary mirror is a mosaic of 1080 actively-stabilized hexagonal segments. This paper summarizes a CELT report that describes a step-by-step procedure for aligning the many degrees of freedom of the CELT optics.
Multitree Algorithms for Large-Scale Astrostatistics
NASA Astrophysics Data System (ADS)
March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.
2012-03-01
Common astrostatistical operations. A number of common "subroutines" occur over and over again in the statistical analysis of astronomical data. Some of the most powerful, and computationally expensive, of these additionally share the common trait that they involve distance comparisons between all pairs of data points—or in some cases, all triplets or worse. These include: * All Nearest Neighbors (AllNN): For each query point in a dataset, find the k-nearest neighbors among the points in another dataset—naively O(N2) to compute, for O(N) data points. * n-Point Correlation Functions: The main spatial statistic used for comparing two datasets in various ways—naively O(N2) for the 2-point correlation, O(N3) for the 3-point correlation, etc. * Euclidean Minimum Spanning Tree (EMST): The basis for "single-linkage hierarchical clustering,"the main procedure for generating a hierarchical grouping of the data points at all scales, aka "friends-of-friends"—naively O(N2). * Kernel Density Estimation (KDE): The main method for estimating the probability density function of the data, nonparametrically (i.e., with virtually no assumptions on the functional form of the pdf)—naively O(N2). * Kernel Regression: A powerful nonparametric method for regression, or predicting a continuous target value—naively O(N2). * Kernel Discriminant Analysis (KDA): A powerful nonparametric method for classification, or predicting a discrete class label—naively O(N2). (Note that the "two datasets" may in fact be the same dataset, as in two-point autocorrelations, or the so-called monochromatic AllNN problem, or the leave-one-out cross-validation needed in kernel estimation.) The need for fast algorithms for such analysis subroutines is particularly acute in the modern age of exploding dataset sizes in astronomy. The Sloan Digital Sky Survey yielded hundreds of millions of objects, and the next generation of instruments such as the Large Synoptic Survey Telescope will yield roughly
... conditions, allergies and medications you’re taking, including herbal supplements and aspirin. You may be advised to stop ... doctor all medications that you are taking, including herbal supplements, and if you have any allergies, especially to ...
Reactive Collision Avoidance Algorithm
NASA Technical Reports Server (NTRS)
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
An improved algorithm for polar cloud-base detection by ceilometer over the ice sheets
NASA Astrophysics Data System (ADS)
Van Tricht, K.; Gorodetskaya, I. V.; Lhermitte, S.; Turner, D. D.; Schween, J. H.; Van Lipzig, N. P. M.
2014-05-01
Optically thin ice and mixed-phase clouds play an important role in polar regions due to their effect on cloud radiative impact and precipitation. Cloud-base heights can be detected by ceilometers, low-power backscatter lidars that run continuously and therefore have the potential to provide basic cloud statistics including cloud frequency, base height and vertical structure. The standard cloud-base detection algorithms of ceilometers are designed to detect optically thick liquid-containing clouds, while the detection of thin ice clouds requires an alternative approach. This paper presents the polar threshold (PT) algorithm that was developed to be sensitive to optically thin hydrometeor layers (minimum optical depth τ ≥ 0.01). The PT algorithm detects the first hydrometeor layer in a vertical attenuated backscatter profile exceeding a predefined threshold in combination with noise reduction and averaging procedures. The optimal backscatter threshold of 3 × 10-4 km-1 sr-1 for cloud-base detection near the surface was derived based on a sensitivity analysis using data from Princess Elisabeth, Antarctica and Summit, Greenland. At higher altitudes where the average noise level is higher than the backscatter threshold, the PT algorithm becomes signal-to-noise ratio driven. The algorithm defines cloudy conditions as any atmospheric profile containing a hydrometeor layer at least 90 m thick. A comparison with relative humidity measurements from radiosondes at Summit illustrates the algorithm's ability to significantly discriminate between clear-sky and cloudy conditions. Analysis of the cloud statistics derived from the PT algorithm indicates a year-round monthly mean cloud cover fraction of 72% (±10%) at Summit without a seasonal cycle. The occurrence of optically thick layers, indicating the presence of supercooled liquid water droplets, shows a seasonal cycle at Summit with a monthly mean summer peak of 40 % (±4%). The monthly mean cloud occurrence frequency