Science.gov

Sample records for algorithm development activities

  1. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The current priority is the algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data.

  2. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  3. Scheduling language and algorithm development study. Appendix: Study approach and activity summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.

  4. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

  5. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  6. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA)--an O(Λ^κ(ℓ² + κ)) sample-complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.

  7. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
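
    For context, a minimal Python sketch of the two-dimensional discrete-wavelet-transform fusion idea described above is given below, using PyWavelets. The fusion rule used here (average the approximation band, keep the larger-magnitude detail coefficients) and the synthetic input arrays are illustrative assumptions, not the report's exact implementation.

        # Sketch of 2-D DWT image fusion; the fusion rule is an assumed, common choice.
        import numpy as np
        import pywt

        def wavelet_fuse(img_a, img_b, wavelet="haar"):
            """Fuse two co-registered grayscale images of equal shape."""
            cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
            cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)
            # Approximation band: averaging preserves overall radiometry.
            cA = 0.5 * (cA_a + cA_b)
            # Detail bands: keep whichever coefficient has the larger magnitude (sharper edge).
            pick = lambda d_a, d_b: np.where(np.abs(d_a) >= np.abs(d_b), d_a, d_b)
            return pywt.idwt2((cA, (pick(cH_a, cH_b), pick(cV_a, cV_b), pick(cD_a, cD_b))), wavelet)

        # Synthetic stand-ins for co-registered Landsat TM and SPOT panchromatic chips.
        a, b = np.random.rand(128, 128), np.random.rand(128, 128)
        print(wavelet_fuse(a, b).shape)  # (128, 128)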

  8. HEAVY DUTY DIESEL VEHICLE LOAD ESTIMATION: DEVELOPMENT OF VEHICLE ACTIVITY OPTIMIZATION ALGORITHM

    EPA Science Inventory

    The Heavy-Duty Vehicle Modal Emission Model (HDDV-MEM) developed by the Georgia Institute of Technology (Georgia Tech) has a capability to model link-specific second-by-second emissions using speed/acceleration matrices. To estimate emissions, engine power demand calculated usin...

  9. Development of advanced algorithms to detect, characterize and forecast solar activities

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan

    Study of solar activity is an important part of space weather research. It is facing serious challenges because of the large data volume, which requires application of state-of-the-art machine learning and computer vision techniques. This dissertation targets two essential aspects of space weather research: automatic feature detection and forecasting of eruptive events. Feature detection includes solar filament detection and solar fibril tracing. A solar filament consists of a mass of gas suspended over the chromosphere by magnetic fields and seen as a dark, ribbon-shaped feature on the bright solar disk in Halpha (Hydrogen-alpha) full-disk solar images. In this dissertation, an automatic solar filament detection and characterization method is presented. The investigation illustrates that the statistical distribution of the Laplacian filter responses of a solar disk contains a special signature which can be used to identify the best threshold value for solar filament segmentation. Experimental results show that this property holds across different solar images obtained by different solar observatories. Evaluation of the proposed method shows that the accuracy rate for filament detection is more than 95% as measured by filament number and more than 99% as measured by filament area, which indicates that only a small fraction of tiny filaments are missing from the detection results. Comparisons indicate that the proposed method outperforms a previous method. Based on the proposed filament segmentation and characterization method, a filament tracking method is put forward, which is capable of tracking filaments throughout their disk passage. With filament tracking, the variation of filaments can be easily recorded. Solar fibrils are tiny dark threads of mass in Halpha images. It is generally believed that fibrils are magnetic field-aligned, primarily because the high electrical conductivity of the solar atmosphere freezes the ionized mass in
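
    As an illustration of the thresholding idea summarized above, the following Python sketch segments dark filament candidates by thresholding Laplacian filter responses over the solar disk. The percentile-based threshold and the minimum component size are placeholder assumptions standing in for the dissertation's statistical signature.

        # Illustrative filament segmentation by thresholding Laplacian responses.
        import numpy as np
        from scipy import ndimage

        def segment_filaments(disk_image, disk_mask, percentile=99.0, min_pixels=20):
            """Return a boolean mask of candidate filament pixels on the solar disk."""
            lap = ndimage.laplace(disk_image.astype(float))
            # Dark, ribbon-shaped features on a bright disk give strong positive responses;
            # choose the threshold from the on-disk response distribution (assumed rule).
            thresh = np.percentile(lap[disk_mask], percentile)
            candidates = (lap > thresh) & disk_mask
            # Discard tiny connected components unlikely to be filaments.
            labels, n = ndimage.label(candidates)
            sizes = ndimage.sum(candidates, labels, range(1, n + 1))
            return np.isin(labels, 1 + np.flatnonzero(sizes >= min_pixels))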

  10. Developing Scoring Algorithms

    Cancer.gov

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  11. ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.

    USGS Publications Warehouse

    Claire, Robert W.

    1984-01-01

    An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.
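
    The three operations named above can be illustrated with a short Python sketch; Shapely is used here only as a convenient modern stand-in for the kind of generic geometry engine the abstract envisions, and the example polygons are arbitrary.

        # Separation, overlap, and intersection over generic geometric primitives.
        from shapely.geometry import Polygon

        a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
        b = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])
        c = Polygon([(10, 10), (12, 10), (12, 12), (10, 12)])

        print(a.overlaps(b))           # True: shared interior, neither contains the other
        print(a.intersection(b).area)  # 4.0: area of the overlapping region
        print(a.distance(c))           # separation between disjoint features (> 0)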

  12. Infrared algorithm development for ocean observations

    NASA Technical Reports Server (NTRS)

    Brown, Otis B.

    1995-01-01

    Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared retrievals. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, and participation in MODIS (project) related activities. Efforts in this contract period have focused on radiative transfer modeling, evaluation of atmospheric correction methodologies, involvement in field studies, production and evaluation of new computer networking strategies, and objective analysis approaches.

  13. Progress in AMSR Snow Algorithm Development

    NASA Technical Reports Server (NTRS)

    Chang, Alfred; Koike, Toshio

    1998-01-01

    The Advanced Microwave Scanning Radiometer (AMSR) will be flown on board the Japanese Advanced Earth Observing Satellite-II (ADEOS-II) and the United States Earth Observing System (EOS) PM-1 satellite. AMSR is a passive microwave radiometer with frequencies ranging from 6.9 GHz to 89 GHz. It scans conically with a constant incidence angle of 55 deg at the Earth's surface. The swath width is about 1600 km. With a large antenna, AMSR will provide the best spatial resolution of any multi-frequency radiometer flown in space. This provides an opportunity to improve snow parameter retrieval. Accurate determination of snow parameters from space is a challenging effort. Over the years, many different techniques have been used to account for complicated snow-pack parameters such as density, stratigraphy, grain size, and temperature variation. Forest type, fractional forest cover, and land-use type also need to be considered in developing an improved retrieval algorithm. However, snow is a dynamic variable: snow-pack parameters keep changing once the snow is deposited on the Earth's surface. Currently, NASDA and NASA are developing AMSR snow retrieval algorithms. These algorithms are now being carefully tested and evaluated using SSM/I data. Due to the limited snow-pack data available for comparison, this activity is progressing slowly. However, it is clear that in order to improve the snow retrieval algorithm, it is necessary to model the metamorphism history of the snow-pack.
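
    For reference, the heritage brightness-temperature-difference relation of Chang et al. (developed for SMMR-class radiometers) is the kind of retrieval being refined here for AMSR; a minimal Python sketch follows. The 1.59 cm/K coefficient is the published SMMR-era value and is shown only for illustration, not as the AMSR algorithm under development.

        # Heritage spectral-difference snow retrieval (Chang-type), for illustration only.
        def snow_depth_cm(tb_19h, tb_37h):
            """Snow depth (cm) from 19 and 37 GHz horizontally polarized brightness
            temperatures (K); negative differences are clipped to zero (no snow)."""
            return max(0.0, 1.59 * (tb_19h - tb_37h))

        print(snow_depth_cm(240.0, 215.0))  # ~39.8 cm for a 25 K spectral difference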

  14. Evolutionary development of path planning algorithms

    SciTech Connect

    Hage, M

    1998-09-01

    This paper describes the use of evolutionary software techniques for developing both genetic algorithms and genetic programs. Genetic algorithms are evolved to solve a specific problem within a fixed and known environment. While genetic algorithms can evolve to become very optimized for their task, they often are very specialized and perform poorly if the environment changes. Genetic programs are evolved through simultaneous training in a variety of environments to develop a more general controller behavior that operates in unknown environments. Performance of genetic programs is less optimal than a specially bred algorithm for an individual environment, but the controller performs acceptably under a wider variety of circumstances. The example problem addressed in this paper is evolutionary development of algorithms and programs for path planning in nuclear environments, such as Chernobyl.

  15. Algorithm Development Library for Environmental Satellite Missions

    NASA Astrophysics Data System (ADS)

    Smith, D. C.; Grant, K. D.; Miller, S. W.; Jamilkowski, M. L.

    2012-12-01

    science will need to migrate into the operational system. In addition, as new techniques are found to improve, supplement, or replace existing products, these changes will also require implementation into the operational system. In the past, operationalizing science algorithms and integrating them into active systems often required months of work. In order to significantly shorten the time and effort required for this activity, Raytheon has developed the Algorithm Development Library (ADL). The ADL enables scientists and researchers to develop algorithms on their own platforms and provide them to Raytheon in a form that can be rapidly integrated directly into the operational baseline. As the JPSS CGS is a multi-mission ground system, algorithms are not restricted to Suomi NPP or JPSS missions. The ADL provides a development environment that any environmental remote sensing mission scientist can use to create algorithms that will plug into a JPSS CGS instantiation. This paper describes the ADL and how scientists and researchers can use it in their own environments.

  16. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
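
    The power-spectral-density workload metric mentioned above can be sketched in a few lines of Python; the 50 Hz sample rate and the synthetic control-input signal below are assumptions for illustration only.

        # Sketch of PSD analysis of a pilot control-input time series (Welch's method).
        import numpy as np
        from scipy.signal import welch

        fs = 50.0                                   # assumed control-input sample rate (Hz)
        t = np.arange(0, 60, 1 / fs)                # one minute of column input
        stick = 0.3 * np.sin(2 * np.pi * 0.4 * t) + 0.05 * np.random.randn(t.size)

        freqs, psd = welch(stick, fs=fs, nperseg=512)
        print(f"dominant control-input frequency: {freqs[np.argmax(psd)]:.2f} Hz")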

  17. Passive microwave algorithm development and evaluation

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1995-01-01

    The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.

  18. Operational algorithm development and refinement approaches

    NASA Astrophysics Data System (ADS)

    Ardanuy, Philip E.

    2003-11-01

    Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved-spatial-resolution multispectral and hyperspectral imagers and sounders. To achieve specified performances (e.g., measurement accuracy, precision, uncertainty, and stability), and to best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data [e.g., the Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS) and the Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retrieval Experiment (LORE)]. In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today's generation of operational platforms provide (well-characterized, known sensors and algorithms) with the greatly improved quality, opportunities, and risks that the next generation of operational sensors and algorithms offer. By using the best practices and lessons learned from heritage/groundbreaking activities, it is possible to implement an agile process that enables change while managing change. This approach combines a "known-risk" frozen baseline and preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance. This paper describes an objective, adaptive implementation roadmap that

  19. Connected-Health Algorithm: Development and Evaluation.

    PubMed

    Vlahu-Gjorgievska, Elena; Koceski, Saso; Kulev, Igor; Trajkovik, Vladimir

    2016-04-01

    Nowadays, there is growing interest in the adoption of novel ICT technologies in the field of medical monitoring and personal health care systems. This paper proposes the design of a connected-health algorithm inspired by the social computing paradigm. The purpose of the algorithm is to give a recommendation for performing a specific activity that will improve the user's health, based on the user's health condition and a set of knowledge derived from the history of the user and of users with similar attitudes. The algorithm could help users have greater confidence in choosing physical activities that will improve their health. The proposed algorithm has been experimentally validated using real data collected from a community of 1000 active users. The results showed that a recommended physical activity that contributed towards a weight loss of at least 0.5 kg is found in the first half of the ordered list of recommendations generated by the algorithm with probability > 0.6 at the 1% level of significance. PMID:26922593

  20. Infrared Algorithm Development for Ocean Observations with EOS/MODIS

    NASA Technical Reports Server (NTRS)

    Brown, Otis B.

    1997-01-01

    Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared measurements. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, development of experimental instrumentation, and participation in MODIS (project) related activities. Activities in this contract period have focused on radiative transfer modeling, evaluation of atmospheric correction methodologies, field campaigns, analysis of field data, and participation in MODIS meetings.

  1. System development of the Screwworm Eradication Data System (SEDS) algorithm

    NASA Technical Reports Server (NTRS)

    Arp, G.; Forsberg, F.; Giddings, L.; Phinney, D.

    1976-01-01

    The use of remotely sensed data is reported in the eradication of the screwworm and in the study of the role of the weather in the activity and development of the screwworm fly. As a result, the Screwworm Eradication Data System (SEDS) algorithm was developed.

  2. Infrared algorithm development for ocean observations with EOS/MODIS

    NASA Technical Reports Server (NTRS)

    Brown, Otis B.

    1994-01-01

    Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared retrievals. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, and participation in MODIS (project) related activities. Efforts in this contract period have focused on radiative transfer modeling and evaluation of atmospheric path radiance effects on SST estimation, exploration of involvement in ongoing field studies, evaluation of new computer networking strategies, and objective analysis approaches.

  3. Development of activity pencil beam algorithm using measured distribution data of positron emitter nuclei generated by proton irradiation of targets containing ¹²C, ¹⁶O, and ⁴⁰Ca nuclei in preparation of clinical application

    SciTech Connect

    Miyatake, Aya; Nishio, Teiji; Ogino, Takashi

    2011-10-15

    Purpose: The purpose of this study is to develop a new calculation algorithm that is satisfactory in terms of the requirements for both accuracy and calculation time for a simulation of imaging of the proton-irradiated volume in a patient body in clinical proton therapy. Methods: The activity pencil beam algorithm (APB algorithm), a new technique that applies the pencil beam algorithm generally used for proton dose calculations in proton therapy to the calculation of activity distributions, was developed as a calculation algorithm for the activity distributions formed by positron emitter nuclei generated from target nuclear fragment reactions. In the APB algorithm, activity distributions are calculated using an activity pencil beam kernel. The activity pencil beam kernel is constructed using measured activity distributions in the depth direction and calculations in the lateral direction. ¹²C, ¹⁶O, and ⁴⁰Ca nuclei were determined to be the major target nuclei constituting a human body that are of relevance for the calculation of activity distributions. In this study, "virtual positron emitter nuclei" was defined as the integral yield of the various positron emitter nuclei generated from each target nucleus by target nuclear fragment reactions with the irradiated proton beam. Compounds containing plenty of the target nuclei, namely polyethylene, water (including some gelatin), and calcium oxide, were irradiated using a proton beam. Depth activity distributions of virtual positron emitter nuclei generated in each compound from target nuclear fragment reactions were measured using a beam ON-LINE PET system mounted on a rotating gantry port (BOLPs-RGp). The measured activity distributions depend on depth or, in other words, energy. The irradiated proton beam energies were 138, 179, and 223 MeV, and the measurement time was about 5 h until the measured activity reached the background level. Furthermore, the activity pencil beam data

  4. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  5. An Experimental Method for the Active Learning of Greedy Algorithms

    ERIC Educational Resources Information Center

    Velazquez-Iturbide, J. Angel

    2013-01-01

    Greedy algorithms constitute an apparently simple algorithm design technique, but their learning goals are not simple to achieve. We present a didactic method aimed at promoting active learning of greedy algorithms. The method is focused on the concept of selection function, and is based on explicit learning goals. It mainly consists of an…

  6. Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz

    2009-01-01

    This slide presentation reviews the approach to the development of the Global Precipitation Measurement algorithm. This presentation includes information about the responsibilities for the development of the algorithm, and the calibration. Also included is information about the orbit, and the sun angle. The test of the algorithm code will be done with synthetic data generated from the Precipitation Processing System (PPS).

  7. Classifying Volcanic Activity Using an Empirical Decision Making Algorithm

    NASA Astrophysics Data System (ADS)

    Junek, W. N.; Jones, W. L.; Woods, M. T.

    2012-12-01

    Detection and classification of developing volcanic activity is vital to eruption forecasting. Timely information regarding an impending eruption would aid civil authorities in determining the proper response to a developing crisis. In this presentation, volcanic activity is characterized using an event tree classifier and a suite of empirical statistical models derived through logistic regression. Forecasts are reported in terms of the United States Geological Survey (USGS) volcano alert level system. The algorithm employs multidisciplinary data (e.g., seismic, GPS, InSAR) acquired by various volcano monitoring systems and source modeling information to forecast the likelihood that an eruption, with a volcanic explosivity index (VEI) > 1, will occur within a quantitatively constrained area. Logistic models are constructed from a sparse and geographically diverse dataset assembled from a collection of historic volcanic unrest episodes. Bootstrapping techniques are applied to the training data to allow for the estimation of robust logistic model coefficients. Cross validation produced a series of receiver operating characteristic (ROC) curves with areas ranging between 0.78 and 0.81, which indicates the algorithm has good predictive capabilities. The ROC curves also allowed for the determination of a false positive rate and optimum detection threshold for each stage of the algorithm. Forecasts for historic volcanic unrest episodes in North America and Iceland were computed and are consistent with the actual outcome of the events.
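
    A hedged Python sketch of the workflow described above (logistic models fit on bootstrap resamples and scored with ROC curves) is given below; the three features and the labels are synthetic placeholders, not the multidisciplinary unrest dataset used in the study.

        # Bootstrap-fitted logistic models scored by out-of-bag ROC AUC (synthetic data).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(80, 3))   # e.g., seismicity rate, GPS displacement, InSAR uplift
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=80) > 0).astype(int)

        coefs, aucs = [], []
        for _ in range(200):           # bootstrap resampling for robust coefficients
            idx = rng.integers(0, len(y), len(y))
            oob = np.setdiff1d(np.arange(len(y)), idx)
            if len(np.unique(y[idx])) < 2 or len(np.unique(y[oob])) < 2:
                continue
            model = LogisticRegression().fit(X[idx], y[idx])
            coefs.append(model.coef_.ravel())
            aucs.append(roc_auc_score(y[oob], model.predict_proba(X[oob])[:, 1]))

        print("mean coefficients:", np.mean(coefs, axis=0))
        print("mean out-of-bag ROC AUC:", round(float(np.mean(aucs)), 3))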

  8. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
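
    A small Python sketch of a third-order-polynomial gain of the kind described above follows: small inputs pass with nearly full gain while large inputs are compressed so the command never exceeds the motion-system limit. The coefficients are illustrative assumptions, not the values used in the project.

        # Third-order polynomial scaling of an aircraft input to respect a motion limit.
        import numpy as np

        def nonlinear_gain(u, u_max, y_max):
            """Map aircraft input u (assumed to saturate at u_max) to a command bounded by y_max."""
            x = np.clip(u / u_max, -1.0, 1.0)
            a1, a3 = 1.5, -0.5          # a1 + a3 = 1 guarantees y(+/-1) = +/-y_max
            return y_max * (a1 * x + a3 * x ** 3)

        u = np.linspace(-12.0, 12.0, 5)                  # commanded accelerations (arbitrary units)
        print(nonlinear_gain(u, u_max=10.0, y_max=6.0))  # output stays within +/-6.0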

  9. Development of Speckle Interferometry Algorithm and System

    SciTech Connect

    Shamsir, A. A. M.; Jafri, M. Z. M.; Lim, H. S.

    2011-05-25

    The electronic speckle pattern interferometry (ESPI) method is a whole-field, non-destructive measurement method widely used in industry, for example for the detection of defects on metal bodies, the detection of defects in integrated circuits in digital electronic components, and the preservation of priceless artwork. In this research field, the method is widely used to develop algorithms and new laboratory setups for implementing speckle pattern interferometry. In speckle interferometry, an optically rough test surface is illuminated with an expanded laser beam, creating a laser speckle pattern in the space surrounding the illuminated region. The speckle pattern is optically mixed with a second coherent light field that is either another speckle pattern or a smooth light field. This produces an interferometric speckle pattern that is detected by a sensor to measure the change in the speckle pattern due to the applied force. In this project, an experimental setup of ESPI is proposed to analyze a stainless steel plate using 632.8 nm (red) laser light.

  10. Development of a genetic algorithm for molecular scale catalyst design

    SciTech Connect

    McLeod, A.S.; Gladden, L.F.; Johnston, M.E.

    1997-04-01

    A genetic algorithm has been developed to determine the optimal design of a two-component catalyst for the diffusion-limited A + B → AB↑ reaction, in which each species is adsorbed specifically on one of two types of sites. Optimization of the distribution of catalytic sites on the surface is achieved by means of an evolutionary algorithm which repeatedly selects the more active surfaces from a population of possible solutions, leading to a gradual improvement in the activity of the catalyst surface. A Monte Carlo simulation is used to determine the activity of each of the catalyst surfaces. It is found that for a reacting mixture composed of equal amounts of each component, the optimal active-site distribution is that of a checkerboard, this solution being approximately 25% more active than a random site distribution. Study of a range of reactant compositions has shown the optimal distribution of catalytically active sites to be dependent on the ratio of A to B in the reacting mixture. The potential for application of the optimization method introduced here to other catalyst systems is discussed. 27 refs., 7 figs.

  11. A Developed ESPRIT Algorithm for DOA Estimation

    NASA Astrophysics Data System (ADS)

    Fayad, Youssef; Wang, Caiyun; Cao, Qunsheng; Hafez, Alaa El-Din Sayed

    2015-05-01

    A novel algorithm for estimating the direction of arrival (DOA) of a target has been developed, aiming to increase estimation accuracy and decrease calculation cost. It introduces time and space multiresolution into the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method (TS-ESPRIT) to realize a subspace approach that decreases errors caused by the model's nonlinearity. The efficacy of the proposed algorithm is verified using Monte Carlo simulation; the DOA estimation accuracy is evaluated against the closed-form Cramér-Rao bound (CRB), which reveals that the proposed algorithm's estimates are better than those of the normal ESPRIT methods, enhancing estimator performance.
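
    For context, a Python sketch of the standard (single-resolution) ESPRIT estimator on a uniform linear array is shown below; the paper's time-space multiresolution variant (TS-ESPRIT) is not reproduced, and the array geometry and synthetic signals are assumptions.

        # Standard ESPRIT direction-of-arrival estimation on a uniform linear array.
        import numpy as np

        def esprit_doa(X, n_sources, d_over_lambda=0.5):
            """X: (n_sensors, n_snapshots) complex array data. Returns DOAs in degrees."""
            R = X @ X.conj().T / X.shape[1]                 # sample covariance
            _, eigvecs = np.linalg.eigh(R)
            Es = eigvecs[:, -n_sources:]                    # signal subspace (largest eigenvalues)
            Psi = np.linalg.pinv(Es[:-1]) @ Es[1:]          # rotation between shifted subarrays
            phases = np.angle(np.linalg.eigvals(Psi))
            return np.degrees(np.arcsin(phases / (2 * np.pi * d_over_lambda)))

        # Synthetic check: two sources at -20 and 35 degrees, 8-element half-wavelength array.
        rng = np.random.default_rng(1)
        angles = np.radians([-20.0, 35.0])
        m, n = 8, 400
        A = np.exp(1j * 2 * np.pi * 0.5 * np.outer(np.arange(m), np.sin(angles)))
        S = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))
        X = A @ S + 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
        print(np.sort(esprit_doa(X, 2)))                    # approximately [-20., 35.]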

  12. Predicting mining activity with parallel genetic algorithms

    USGS Publications Warehouse

    Talaie, S.; Leigh, R.; Louis, S.J.; Raines, G.L.; Beyer, H.G.; O'Reilly, U.M.; Banzhaf, Arnold D.; Blum, W.; Bonabeau, C.; Cantu-Paz, E.W.

    2005-01-01

    We explore several different techniques in our quest to improve the overall model performance of a genetic algorithm calibrated probabilistic cellular automata. We use the Kappa statistic to measure correlation between ground truth data and data predicted by the model. Within the genetic algorithm, we introduce a new evaluation function sensitive to spatial correctness and we explore the idea of evolving different rule parameters for different subregions of the land. We reduce the time required to run a simulation from 6 hours to 10 minutes by parallelizing the code and employing a 10-node cluster. Our empirical results suggest that using the spatially sensitive evaluation function does indeed improve the performance of the model and our preliminary results also show that evolving different rule parameters for different regions tends to improve overall model performance. Copyright 2005 ACM.
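
    A short Python sketch of the Kappa agreement measure used above is given below, applied to two small synthetic categorical maps rather than the study's ground-truth data.

        # Cohen's kappa between observed and predicted categorical maps (synthetic example).
        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        truth     = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])   # observed activity per cell
        predicted = np.array([0, 0, 1, 0, 1, 0, 1, 0, 1, 0])   # cellular-automata prediction
        print("kappa:", cohen_kappa_score(truth, predicted))    # 1 = perfect, 0 = chance-level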

  13. Probabilistic structural analysis algorithm development for computational efficiency

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1991-01-01

    The PSAM (Probabilistic Structural Analysis Methods) program is developing a probabilistic structural risk assessment capability for the SSME components. An advanced probabilistic structural analysis software system, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), is being developed as part of the PSAM effort to accurately simulate stochastic structures operating under severe random loading conditions. One of the challenges in developing the NESSUS system is the development of the probabilistic algorithms that provide both efficiency and accuracy. The main probability algorithms developed and implemented in the NESSUS system are efficient, but approximate in nature. In the last six years, the algorithms have improved very significantly.

  14. Subsurface biological activity zone detection using genetic search algorithms

    SciTech Connect

    Mahinthakumar, G.; Gwo, J.P.; Moline, G.R.; Webb, O.F.

    1999-12-01

    Use of genetic search algorithms for detection of subsurface biological activity zones (BAZ) is investigated through a series of hypothetical numerical biostimulation experiments. Continuous injection of dissolved oxygen and methane with periodically varying concentration stimulates the cometabolism of indigenous methanotrophic bacteria. The observed breakthroughs of methane are used to deduce possible BAZ in the subsurface. The numerical experiments are implemented in a parallel computing environment to make possible the large number of simultaneous transport simulations required by the algorithm. The results show that genetic algorithms are very efficient in locating multiple activity zones, provided the observed signals adequately sample the BAZ.

  15. Advances in fracture algorithm development in GRIM

    NASA Astrophysics Data System (ADS)

    Cullis, I.; Church, P.; Greenwood, P.; Huntington-Thresher, W.; Reynolds, M.

    2003-09-01

    The numerical treatment of fracture processes has long been a major challenge in any hydrocode, but has been particularly acute in Eulerian hydrocodes. This is due to the difficulties in establishing a consistent process for treating failure and for the post-failure treatment, which is complicated by advection, mixed-cell, and interface issues, particularly post failure. This alone increases the complexity of incorporating and validating a failure model compared to a Lagrange hydrocode, where the numerical treatment is much simpler. This paper outlines recent significant progress in the incorporation of fracture models in GRIM and the advection of damage across cell boundaries within the mesh. This has allowed a much more robust treatment of fracture in an Eulerian frame of reference and has greatly expanded the scope of tractable dynamic fracture scenarios. The progress has been possible due to a careful integration of the fracture algorithm within the numerical integration scheme to maintain a consistent representation of the physics. The paper describes various applications, which demonstrate the robustness and efficiency of the scheme and highlight some of the future challenges.

  16. Developer Tools for Evaluating Multi-Objective Algorithms

    NASA Technical Reports Server (NTRS)

    Giuliano, Mark E.; Johnston, Mark D.

    2011-01-01

    Multi-objective algorithms for scheduling offer many advantages over the more conventional single-objective approach. By keeping user objectives separate instead of combined, more information is available to the end user to make trade-offs between competing objectives. Unlike single-objective algorithms, which produce a single solution, multi-objective algorithms produce a set of solutions, called a Pareto surface, where no solution is strictly dominated by another solution for all objectives. From the end-user perspective, a Pareto surface provides a tool for reasoning about trade-offs between competing objectives. From the perspective of a software developer, multi-objective algorithms provide an additional challenge. How can you tell if one multi-objective algorithm is better than another? This paper presents formal and visual tools for evaluating multi-objective algorithms and shows how the developer process of selecting an algorithm parallels the end-user process of selecting a solution for execution out of the Pareto surface.
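
    The non-dominated (Pareto) filtering step described above can be sketched in a few lines of Python; all objectives are assumed to be minimized and the candidate scores are synthetic.

        # Extract the Pareto surface: keep solutions no other solution beats on all objectives.
        import numpy as np

        def pareto_front(scores):
            """scores: (n_solutions, n_objectives), lower is better. Returns a boolean mask."""
            keep = np.ones(scores.shape[0], dtype=bool)
            for i in range(scores.shape[0]):
                dominates_i = (np.all(scores <= scores[i], axis=1) &
                               np.any(scores < scores[i], axis=1))
                if dominates_i.any():
                    keep[i] = False
            return keep

        scores = np.array([[1.0, 5.0], [2.0, 2.0], [3.0, 4.0], [4.0, 1.0]])
        print(pareto_front(scores))   # [ True  True False  True ]: (3, 4) is dominated by (2, 2)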

  17. Development and Evaluation of Algorithms for Breath Alcohol Screening

    PubMed Central

    Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael

    2016-01-01

    Breath alcohol screening is important for traffic safety, access control and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper is focusing on algorithms for the determination of breath alcohol concentration in diluted breath samples using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting and personalization to reduce estimation errors. Evaluation has been performed by using data from a previously conducted human study. It is concluded that these features in combination will significantly reduce the random error compared to the signal averaging algorithm taken alone. PMID:27043576
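
    A heavily hedged Python sketch of the dilution-compensation idea described above follows: the measured alcohol signal is scaled by the ratio of an assumed alveolar CO2 fraction to the measured CO2 fraction, and repeated samples are averaged to reduce random error. The 5.0% alveolar CO2 constant and the simple linear form are assumptions, not the paper's algorithms.

        # Sketch: CO2-based dilution compensation plus signal averaging (assumed form).
        import numpy as np

        ALVEOLAR_CO2_PERCENT = 5.0   # nominal end-expiratory CO2, assumed constant

        def compensated_brac(alcohol_signal, co2_percent):
            """Average dilution-compensated breath alcohol estimate over repeated samples."""
            return float(np.mean(alcohol_signal * (ALVEOLAR_CO2_PERCENT / co2_percent)))

        # Diluted samples with different dilution factors still give a consistent estimate.
        print(compensated_brac(np.array([0.08, 0.05, 0.11]), np.array([2.0, 1.3, 2.8])))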

  18. Development and Evaluation of Algorithms for Breath Alcohol Screening.

    PubMed

    Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael

    2016-01-01

    Breath alcohol screening is important for traffic safety, access control and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper is focusing on algorithms for the determination of breath alcohol concentration in diluted breath samples using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting and personalization to reduce estimation errors. Evaluation has been performed by using data from a previously conducted human study. It is concluded that these features in combination will significantly reduce the random error compared to the signal averaging algorithm taken alone. PMID:27043576

  19. Developing A Navier-Stokes Algorithm For Supercomputers

    NASA Technical Reports Server (NTRS)

    Swisshelm, Julie M.

    1992-01-01

    Report discusses development of algorithm for solution of Navier-Stokes equations of flow on parallel-processing supercomputers. Involves combination of prior techniques to form algorithm to compute flows in complicated three-dimensional configurations. Includes explicit finite-difference numerical-integration scheme applicable to flows represented by hierarchy of mathematical models ranging from Euler to full Navier-Stokes. Of interest to researchers looking for ways to structure problems for greater computational efficiency.

  20. Algorithm development for Maxwell's equations for computational electromagnetism

    NASA Technical Reports Server (NTRS)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.

  1. Development and Testing of Data Mining Algorithms for Earth Observation

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    The new algorithms developed under this project included a principled procedure for classification of objects, events, or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high-dimensional" problems require finding a minimal set of variables--called the Markov blanket--sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented, and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD-style algorithms to the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer-duration climate measurements of temperature teleconnections.

  2. Development and Validation of a Polar Cloud Algorithm for CERES

    NASA Technical Reports Server (NTRS)

    1999-01-01

    The objectives of this project, as described in the original proposal, were to develop an algorithm for diagnosing cloud properties over snow- and ice-covered surfaces, particularly at night, using satellite radiances from the Advanced Very High Resolution Radiometer (AVHRR) and High-resolution Infrared Radiation Sounder (HIRS) sensors. Products from this algorithm include a cloud mask and additional cloud properties such as cloud phase, amount, and height. The SIVIS software package, developed as a part of the CERES project, was originally the primary tool used to develop the algorithm, but as it is no longer supported we have had to pursue a new tool to enable the combination and analysis of collocated radiances from AVHRR and HIRS. This turned out to be a much larger endeavor than we expected, but we now have the data sets collocated (with many thanks to B. Baum for the fundamental code) and we have developed a nighttime cloud detection algorithm. Using this algorithm we have also computed realistic-looking cloud fractions from AVHRR brightness temperatures. A method to identify cloud phase has also been implemented. Atmospheric information from the TIROS Operational Vertical Sounder (TOVS) Polar Pathfinder Data Set, which includes temperature and moisture profiles as well as surface information, provides information required for determining cloud-top height.

  3. Active-passive correlation spectroscopy - A new technique for identifying ocean color algorithm spectral regions

    NASA Technical Reports Server (NTRS)

    Hoge, F. E.; Swift, R. N.

    1986-01-01

    A new active-passive airborne data correlation technique has been developed which allows the validation of existing in-water ocean color algorithms and the rapid search, identification, and evaluation of new sensor band locations and algorithm wavelength intervals. Thus far, applied only in conjunction with the spectral curvature algorithm (SCA), the active-passive correlation spectroscopy (APCS) technique shows that (1) the usual 490-nm (center-band) chlorophyll SCA could satisfactorily be placed anywhere within the nominal 460-510-nm interval, and (2) two other spectral regions, 645-660 and 680-695 nm, show considerable promise for chlorophyll pigment measurement. Additionally, the APCS method reveals potentially useful wavelength regions (at 600 and about 670 nm) of very low chlorophyll-in-water spectral curvature into which accessory pigment algorithms for phycoerythrin might be carefully positioned. In combination, the APCS and SCA methods strongly suggest that significant information content resides within the seemingly featureless ocean color spectrum.
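
    One generic way to quantify spectral curvature at a center band λ2 with neighbors λ1 and λ3 is the ratio L(λ2)² / (L(λ1)·L(λ3)); the Python sketch below uses this form around the 490-nm center band noted above. The ±30 nm neighbor spacing and the synthetic radiances are assumptions, and this is not asserted to be the exact SCA formulation of the paper.

        # Generic three-band spectral curvature measure (illustrative, not the exact SCA).
        def spectral_curvature(L1, L2, L3):
            """Curvature of upwelled radiance at the center band: >1 convex, <1 concave."""
            return L2 ** 2 / (L1 * L3)

        # Assumed bands 460/490/520 nm with synthetic water-leaving radiances.
        L460, L490, L520 = 1.10, 0.95, 0.88
        print(spectral_curvature(L460, L490, L520))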

  4. On the development of protein pKa calculation algorithms

    SciTech Connect

    Carstensen, Tommy; Farrell, Damien; Huang, Yong; Baker, Nathan A.; Nielsen, Jens E.

    2011-12-01

    Protein pKa calculation algorithms are typically developed to reproduce experimental pKa values and provide us with a better understanding of the fundamental importance of electrostatics for protein structure and function. However, the approximations and adjustable parameters employed in almost all pKa calculation methods mean that there is a risk that pKa calculation algorithms are 'over-fitted' to the available datasets, and that these methods therefore do not model protein physics realistically. We employ simulations of the protein pKa calculation algorithm development process to show that careful optimization procedures and non-biased experimental datasets must be applied to ensure a realistic description of the underlying physical terms. We furthermore investigate the effect of experimental noise and find a significant effect on the pKa calculation algorithm optimization landscape. Finally, we comment on strategies for ensuring the physical realism of protein pKa calculation algorithms and we assess the overall state of the field with a view to predicting future directions of development.

  5. Development and Application of a Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.

  6. An efficient algorithm to identify coordinately activated transcription factors.

    PubMed

    Hu, Haiyan

    2010-03-01

    Identification of transcription factor (TF) activities associated with a certain physiological/experimental condition is one of the preliminary steps to reconstruct transcriptional regulatory networks and to identify signal transduction pathways. TF activities are often indicated by the activities of their target genes. Existing studies on identifying TF activities through target genes usually assume the equivalence between co-regulation and co-expression. However, genes with correlated expression profiles may not be co-regulated. Meanwhile, although multiple TFs can be activated coordinately, there is a lack of efficient methods to identify coordinately activated TFs. In this paper, we propose an efficient algorithm embedding a dynamic programming procedure to identify a subset of TFs that are potentially coordinately activated under a given condition by utilizing ranked lists of differentially expressed target genes. Applying our algorithm to microarray expression data sets for a number of diseases, our approach found subsets of TFs that are highly likely associated with the given disease processes. PMID:20060041

  7. Using Hypertext To Develop an Algorithmic Approach to Teaching Statistics.

    ERIC Educational Resources Information Center

    Halavin, James; Sommer, Charles

    Hypertext and its more advanced form Hypermedia represent a powerful authoring tool with great potential for allowing statistics teachers to develop documents to assist students in an algorithmic fashion. An introduction to the use of Hypertext is presented, with an example of its use. Hypertext is an approach to information management in which…

  8. Development, Comparisons and Evaluation of Aerosol Retrieval Algorithms

    NASA Astrophysics Data System (ADS)

    de Leeuw, G.; Holzer-Popp, T.; Aerosol-cci Team

    2011-12-01

    The Climate Change Initiative (cci) of the European Space Agency (ESA) has brought together a team of European aerosol retrieval groups working on the development and improvement of aerosol retrieval algorithms. The goal of this cooperation is the development of methods to provide the best possible information on climate and climate change based on satellite observations. To achieve this, algorithms are characterized in detail as regards the retrieval approaches, the aerosol models used in each algorithm, cloud detection, and surface treatment. A round-robin intercomparison of results from the various participating algorithms serves to identify the best modules or combinations of modules for each sensor. Annual global datasets including their uncertainties will then be produced and validated. The project builds on 9 existing algorithms to produce spectral aerosol optical depth (AOD and Ångström exponent) as well as other aerosol information; two instruments are included to provide the absorbing aerosol index (AAI) and stratospheric aerosol information. The algorithms included are: 3 for ATSR (ORAC, developed by RAL / Oxford University; ADV, developed by FMI; and the SU algorithm, developed by Swansea University); 2 for MERIS (BAER by Bremen University and the ESA standard handled by HYGEOS); 1 for POLDER over ocean (LOA); 1 for synergetic retrieval (SYNAER by DLR); 1 for OMI retrieval of the absorbing aerosol index with averaging kernel information (KNMI); and 1 for GOMOS stratospheric extinction profile retrieval (BIRA). The first seven algorithms aim at the retrieval of the AOD. However, the algorithms differ in their approach, even for algorithms working with the same instrument such as ATSR or MERIS. To analyse the strengths and weaknesses of each algorithm several tests are made. The starting point for comparison and measurement of improvements is a retrieval run for 1 month, September 2008. The data from the same month are subsequently used for

  9. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    NASA Astrophysics Data System (ADS)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    Algorithm Development Library (ADL) is a framework that mimics the operational system IDPS (Interface Data Processing Segment) that is currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments that are on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results demonstrate that these algorithm updates improve science product quality.

  10. Datasets for radiation network algorithm development and testing

    SciTech Connect

    Rao, Nageswara S; Sen, Satyabrata; Berry, M. L..; Wu, Qishi; Grieme, M.; Brooks, Richard R; Cordone, G.

    2016-01-01

    The Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) program supported the development of networks of commercial-off-the-shelf (COTS) radiation counters for detecting, localizing, and identifying low-level radiation sources. Under this program, a series of indoor and outdoor tests were conducted with multiple source strengths and types, different background profiles, and various types of source and detector movements. Following the tests, network algorithms were replayed in various re-constructed scenarios using sub-networks. These measurements and algorithm traces together provide a rich collection of highly valuable datasets for testing current and next-generation radiation network algorithms, including the ones (to be) developed by broader R&D communities such as distributed detection, information fusion, and sensor networks. From this multi-terabyte IRSS database, we distilled out and packaged the first batch of canonical datasets for public release. They include measurements from ten indoor and two outdoor tests which represent increasingly challenging baseline scenarios for robustly testing radiation network algorithms.

  11. Developing and Implementing the Data Mining Algorithms in RAVEN

    SciTech Connect

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian

    2015-09-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. Post-processing and analyzing such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code models the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand the data, i.e. to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.
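
    To make the API idea above concrete, the following is a minimal, hypothetical sketch (in Python, with scikit-learn's KMeans standing in for one wrapped data mining algorithm) of how sampled simulation outputs might be grouped into patterns behind a small, uniform interface. The class and method names are illustrative only and are not RAVEN's actual API.

      # Minimal, hypothetical sketch of a data-mining API over sampled simulation
      # outputs; KMeans stands in for one of the wrapped algorithms. Not RAVEN's API.
      import numpy as np
      from sklearn.cluster import KMeans

      class ClusteringAPI:
          """Thin wrapper exposing a uniform fit/labels interface."""
          def __init__(self, n_clusters=3, random_state=0):
              self.model = KMeans(n_clusters=n_clusters, random_state=random_state, n_init=10)

          def fit(self, samples):
              # samples: (n_runs, n_quantities) array of outputs from the system/physics code
              self.model.fit(samples)
              return self

          def labels(self):
              return self.model.labels_

      # Example: group 1000 hypothetical simulation runs by two output quantities.
      rng = np.random.default_rng(0)
      outputs = np.vstack([rng.normal(loc, 0.1, size=(500, 2)) for loc in (0.0, 1.0)])
      api = ClusteringAPI(n_clusters=2).fit(outputs)
      print(np.bincount(api.labels()))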

  12. Geomagnetic Activity Forecasting Using Self-Learning Algorithms: Application in Space Weather Studies

    NASA Astrophysics Data System (ADS)

    Khalil, A. F.; Barakat, A. R.; McKee, M.

    2005-05-01

    The ability to forecast geomagnetic activity is becoming more important as human activity in space becomes more prevalent. For example, early warning of geomagnetic storms could help mitigate their harmful effects on space electronics and on electrical power lines. Moreover, recently developed space weather algorithms that utilize physics-based models require future values of Kp as an input in order to forecast the ionospheric behavior. Computational learning theory and data-driven modeling techniques are new and rapidly expanding areas of research that aim at developing efficient learning algorithms. Here we compare self-learning algorithms regarding their ability to forecast the level of geomagnetic activity, as represented by Kp. In particular, we consider the following algorithms: artificial neural networks, locally weighted projection regression, support vector machines, and relevance vector machines. Different parameters are considered, such as: (1) length of forecasting time, (2) type and size of input data, and (3) training set size. These learning machines are compared regarding their generalization capabilities and structure reliabilities. The relative strengths and limitations of these algorithms will be presented.
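
    As a rough illustration of such a comparison, the sketch below (an assumption-laden toy, not the study's setup) trains two of the mentioned learner families, support vector regression and a small neural network, to forecast a synthetic Kp-like index one step ahead from an 8-sample lag window and reports their mean absolute errors.

      # Hedged sketch: compare two self-learning regressors for one-step-ahead
      # forecasting of a Kp-like index. The synthetic series and the 8-lag input
      # window are illustrative assumptions, not the study's actual setup.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.neural_network import MLPRegressor
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(1)
      kp = np.cumsum(rng.normal(0, 0.4, 2000)) % 9       # synthetic 0-9 index

      lags = 8
      X = np.array([kp[i:i + lags] for i in range(len(kp) - lags)])
      y = kp[lags:]
      split = int(0.8 * len(X))

      for name, model in [("SVR", SVR(C=10.0)),
                          ("ANN", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))]:
          model.fit(X[:split], y[:split])
          mae = mean_absolute_error(y[split:], model.predict(X[split:]))
          print(f"{name}: MAE = {mae:.2f}")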

  13. Experimental evaluation of leaky least-mean-square algorithms for active noise reduction in communication headsets

    NASA Astrophysics Data System (ADS)

    Cartes, David A.; Ray, Laura R.; Collier, Robert D.

    2002-04-01

    An adaptive leaky normalized least-mean-square (NLMS) algorithm has been developed to optimize stability and performance of active noise cancellation systems. The research addresses LMS filter performance issues related to insufficient excitation, nonstationary noise fields, and time-varying signal-to-noise ratio. The adaptive leaky NLMS algorithm is based on a Lyapunov tuning approach in which three candidate algorithms, each of which is a function of the instantaneous measured reference input, measurement noise variance, and filter length, are shown to provide varying degrees of tradeoff between stability and noise reduction performance. Each algorithm is evaluated experimentally for reduction of low frequency noise in communication headsets, and stability and noise reduction performance are compared with that of traditional NLMS and fixed-leakage NLMS algorithms. Acoustic measurements are made in a specially designed acoustic test cell which is based on the original work of Ryan et al. [``Enclosure for low frequency assessment of active noise reducing circumaural headsets and hearing protection,'' Can. Acoust. 21, 19-20 (1993)] and which provides a highly controlled and uniform acoustic environment. The stability and performance of the active noise reduction system, including a prototype communication headset, are investigated for a variety of noise sources ranging from stationary tonal noise to highly nonstationary measured F-16 aircraft noise over a 20 dB dynamic range. Results demonstrate significant improvements in stability of Lyapunov-tuned LMS algorithms over traditional leaky or nonleaky normalized algorithms, while providing noise reduction performance equivalent to that of the NLMS algorithm for idealized noise fields.
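
    The following is a minimal NumPy sketch of a leaky NLMS weight update in a feedforward noise-cancellation arrangement. The fixed leakage factor and step size are placeholders; in the Lyapunov-tuned algorithms described above the leakage is instead adapted from the instantaneous reference input, the measurement-noise variance, and the filter length. The acoustic path and signals are synthetic.

      # Minimal leaky NLMS sketch (feedforward ANC). The fixed leakage 'gamma' and
      # step size 'mu' are placeholders; the Lyapunov-tuned variant adapts the
      # leakage from the measured reference and noise statistics.
      import numpy as np

      def leaky_nlms(x, d, L=64, mu=0.1, gamma=1e-3, eps=1e-8):
          """x: reference noise, d: noise at the error mic, returns the residual."""
          w = np.zeros(L)
          e = np.zeros(len(x))
          for n in range(L - 1, len(x)):
              u = x[n - L + 1:n + 1][::-1]     # most recent L reference samples
              y = w @ u                        # filter output (anti-noise estimate)
              e[n] = d[n] - y                  # residual heard at the error mic
              w = (1.0 - mu * gamma) * w + mu * e[n] * u / (u @ u + eps)
          return e

      rng = np.random.default_rng(0)
      x = rng.normal(size=5000)
      d = np.convolve(x, [0.5, 0.3, 0.1])[:5000]   # hypothetical acoustic path
      e = leaky_nlms(x, d)
      print("noise reduction: %.1f dB" % (10 * np.log10(np.mean(d[-1000:]**2) / np.mean(e[-1000:]**2))))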

  14. Development of microwave rainfall retrieval algorithm for climate applications

    NASA Astrophysics Data System (ADS)

    KIM, J. H.; Shin, D. B.

    2014-12-01

    With satellite datasets accumulated over decades, satellite-based data can contribute to sustained climate applications. Level-3 products from microwave sensors for climate applications can be obtained from several algorithms. For example, the Microwave Emission brightness Temperature Histogram (METH) algorithm produces level-3 rainfalls directly, whereas the Goddard profiling (GPROF) algorithm first generates instantaneous rainfalls and then a temporal and spatial averaging process leads to level-3 products. The rainfall algorithm developed in this study follows a similar approach of averaging instantaneous rainfalls. However, the algorithm is designed to produce instantaneous rainfalls at an optimal resolution showing reduced non-linearity in the brightness temperature (TB)-rain rate (R) relations. It is found that this resolution tends to effectively utilize emission channels, whose footprints are relatively larger than those of scattering channels. The algorithm is mainly composed of a-priori databases (DBs) and a Bayesian inversion module. The DB contains massive pairs of simulated microwave TBs and rain rates, obtained by WRF (version 3.4) and RTTOV (version 11.1) simulations. To improve the accuracy and efficiency of the retrieval process, a data mining technique is additionally considered. The entire DB is classified into eight types based on Köppen climate classification criteria using reanalysis data. Among these sub-DBs, the one sub-DB that presents the most similar physical characteristics is selected by considering the thermodynamics of the input data. When the Bayesian inversion is applied to the selected DB, instantaneous rain rates at 6-hour intervals are retrieved. The retrieved monthly mean rainfalls are statistically compared with CMAP and GPCP, respectively.
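
    A toy sketch of the Bayesian inversion step is given below: each database entry is weighted by a Gaussian likelihood of the observed brightness temperatures, and the posterior-mean rain rate is returned. The database values, channel count, and error model here are illustrative assumptions, not the WRF/RTTOV-generated DB.

      # Toy Bayesian inversion against an a-priori database of (TB, rain rate) pairs.
      # Gaussian, channel-independent observation errors are assumed purely for
      # illustration; the database values below are hypothetical, not WRF/RTTOV output.
      import numpy as np

      def bayesian_rain_rate(tb_obs, tb_db, rr_db, sigma=2.0):
          """Posterior-mean rain rate given observed brightness temperatures (K)."""
          # Weight each database entry by exp(-0.5 * ||TB_obs - TB_db||^2 / sigma^2).
          d2 = np.sum((tb_db - tb_obs) ** 2, axis=1)
          w = np.exp(-0.5 * d2 / sigma ** 2)
          return np.sum(w * rr_db) / (np.sum(w) + 1e-30)

      rng = np.random.default_rng(2)
      rr_db = rng.gamma(shape=1.5, scale=2.0, size=10000)                    # mm/h, a-priori DB
      tb_db = 280.0 - 3.0 * rr_db[:, None] + rng.normal(0, 1.5, (10000, 4))  # 4 toy channels
      tb_obs = 280.0 - 3.0 * 5.0 + np.zeros(4)                               # scene with ~5 mm/h
      print("retrieved rain rate: %.2f mm/h" % bayesian_rain_rate(tb_obs, tb_db, rr_db))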

  15. Oscillation Detection Algorithm Development Summary Report and Test Plan

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang

    2009-10-03

    Small signal stability problems are one of the major threats to grid stability and reliability in California and the western U.S. power grid. An unstable oscillatory mode can cause large-amplitude oscillations and may result in system breakup and large-scale blackouts. There have been several incidents of system-wide oscillations. Of them, the most notable is the August 10, 1996 western system breakup produced as a result of undamped system-wide oscillations. There is a great need for real-time monitoring of small-signal oscillations in the system. In power systems, a small-signal oscillation is the result of poor electromechanical damping. Considerable understanding and literature have been developed on the small-signal stability problem over the past 50+ years. These studies have been mainly based on a linearized system model and eigenvalue analysis of its characteristic matrix. However, their practical feasibility is greatly limited as power system models have been found inadequate in describing real-time operating conditions. Significant efforts have been devoted to monitoring system oscillatory behaviors from real-time measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision time-synchronized data needed for estimating oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to estimate system oscillation modes and their damping. Low damping indicates potential system stability issues. Oscillation alarms can be issued when the power system is lightly damped. A good oscillation alarm tool can provide time for operators to take remedial action and reduce the probability of a system breakup as a result of a light damping condition. Real-time oscillation monitoring requires ModeMeter algorithms to have the capability to work with various kinds of measurements: disturbance data (ringdown signals), noise probing data, and ambient data. Several measurement
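
    As a simple stand-in for the measurement-based mode estimators discussed above, the sketch below fits a damped sinusoid to a synthetic ringdown signal and reports the estimated mode frequency and damping ratio; the 0.25 Hz mode, 5% damping, and 30 samples/s PMU rate are assumptions.

      # Hedged sketch: estimate frequency and damping of one dominant oscillation mode
      # from ringdown-like data by least-squares fitting a damped sinusoid. This is a
      # simple stand-in for ModeMeter-type estimators; the signal below is synthetic.
      import numpy as np
      from scipy.optimize import curve_fit

      fs = 30.0                                  # PMU reporting rate (samples/s)
      t = np.arange(0, 20, 1 / fs)
      f0, zeta = 0.25, 0.05                      # true mode: 0.25 Hz, 5 % damping
      sigma = 2 * np.pi * f0 * zeta / np.sqrt(1 - zeta**2)
      y = np.exp(-sigma * t) * np.cos(2 * np.pi * f0 * t) \
          + 0.02 * np.random.default_rng(0).normal(size=t.size)

      def ringdown(t, a, s, f, phi):
          return a * np.exp(-s * t) * np.cos(2 * np.pi * f * t + phi)

      popt, _ = curve_fit(ringdown, t, y, p0=[1.0, 0.05, 0.3, 0.0])
      a_hat, s_hat, f_hat, _ = popt
      zeta_hat = s_hat / np.sqrt(s_hat**2 + (2 * np.pi * f_hat)**2)
      print(f"estimated mode: {f_hat:.3f} Hz, damping ratio {100 * zeta_hat:.1f} %")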

  16. Utilization of Ancillary Data Sets for SMAP Algorithm Development and Product Generation

    NASA Technical Reports Server (NTRS)

    ONeill, P.; Podest, E.; Njoku, E.

    2011-01-01

    Algorithms being developed for the Soil Moisture Active Passive (SMAP) mission require a variety of static and dynamic ancillary data. The selection of the most appropriate source for each ancillary data parameter is driven by a number of considerations, including accuracy, latency, availability, and consistency across all SMAP products and with SMOS (Soil Moisture Ocean Salinity). It is anticipated that initial selection of all ancillary datasets, which are needed for ongoing algorithm development activities on the SMAP algorithm testbed at JPL, will be completed within the year. These datasets will be updated as new or improved sources become available, and all selections and changes will be documented for the benefit of the user community. Wise choices in ancillary data will help to enable SMAP to provide new global measurements of soil moisture and freeze/thaw state at the targeted accuracy necessary to tackle hydrologically-relevant societal issues.

  17. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    EPA Science Inventory

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, there is increased mobility leading to a higher amount of traffic-related activity on a global scale. ...

  18. Development of a biomimetic robotic fish and its control algorithm.

    PubMed

    Yu, Junzhi; Tan, Min; Wang, Shuo; Chen, Erkui

    2004-08-01

    This paper is concerned with the design of a robotic fish and its motion control algorithms. A radio-controlled, four-link biomimetic robotic fish is developed using a flexible posterior body and an oscillating foil as a propeller. The swimming speed of the robotic fish is adjusted by modulating the joints' oscillation frequency, and its orientation is tuned by different joint deflections. Since the motion control of a robotic fish involves both the hydrodynamics of the fluid environment and the dynamics of the robot, it is very difficult to establish a precise mathematical model employing purely analytical methods. Therefore, the fish's motion control task is decomposed into two control systems. The online speed control implements a hybrid control strategy and a proportional-integral-derivative (PID) control algorithm. The orientation control system is based on a fuzzy logic controller. In our experiments, a point-to-point (PTP) control algorithm is implemented and an overhead vision system is adopted to provide real-time visual feedback. The experimental results confirm the effectiveness of the proposed algorithms.
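
    For illustration, a minimal discrete PID loop of the kind used for the speed control is sketched below; the gains and the first-order "swimming speed" response are hypothetical placeholders rather than the robot's identified dynamics.

      # Minimal discrete PID sketch for a speed-control loop. The gains and the toy
      # first-order plant below are hypothetical, not the robotic fish's dynamics.
      class PID:
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral = 0.0
              self.prev_err = 0.0

          def update(self, setpoint, measured):
              err = setpoint - measured
              self.integral += err * self.dt
              deriv = (err - self.prev_err) / self.dt
              self.prev_err = err
              return self.kp * err + self.ki * self.integral + self.kd * deriv

      pid = PID(kp=2.0, ki=3.0, kd=0.05, dt=0.05)
      speed, target = 0.0, 0.4                       # m/s
      for step in range(600):
          freq_cmd = pid.update(target, speed)       # commanded oscillation frequency
          speed += 0.1 * (0.3 * freq_cmd - speed)    # toy first-order speed response
      print(f"final speed: {speed:.3f} m/s (target {target} m/s)")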

  19. Development of antibiotic regimens using graph based evolutionary algorithms.

    PubMed

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

    This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimes. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems.

  20. Data inversion algorithm development for the Halogen Occultation Experiment

    NASA Technical Reports Server (NTRS)

    Gordley, Larry L.; Mlynczak, Martin G.

    1986-01-01

    The successful retrieval of atmospheric parameters from radiometric measurements requires not only the ability to do ideal radiometric calculations, but also a detailed understanding of instrument characteristics. Therefore a considerable amount of time was spent on instrument characterization in the form of test data analysis and mathematical formulation. Analyses of solar-to-reference interference (electrical cross-talk), detector nonuniformity, instrument balance error, electronic filter time-constants and noise character were conducted. A second area of effort was the development of techniques for the ideal radiometric calculations required for the Halogen Occultation Experiment (HALOE) data reduction. The computer code for these calculations must be extremely complex and fast. A scheme for meeting these requirements was defined and the algorithms needed for implementation are currently under development. A third area of work included consulting on the implementation of the Emissivity Growth Approximation (EGA) method of absorption calculation into a HALOE broadband radiometer channel retrieval algorithm.

  1. SMMR Simulator radiative transfer calibration model. 2: Algorithm development

    NASA Technical Reports Server (NTRS)

    Link, S.; Calhoon, C.; Krupp, B.

    1980-01-01

    Passive microwave measurements performed from Earth orbit can be used to provide global data on a wide range of geophysical and meteorological phenomena. A Scanning Multichannel Microwave Radiometer (SMMR) is being flown on the Nimbus-G satellite. The SMMR Simulator duplicates the frequency bands utilized in the spacecraft instruments through an amalgam of radiometer systems. The algorithm developed utilizes data from the fall 1978 NASA CV-990 Nimbus-G underflight test series and subsequent laboratory testing.

  2. Development of an Inverse Algorithm for Resonance Inspection

    SciTech Connect

    Lai, Canhai; Xu, Wei; Sun, Xin

    2012-10-01

    Resonance inspection (RI), which employs the natural frequency spectrum shift between the good and the anomalous part populations to detect defects, is a non-destructive evaluation (NDE) technique with many advantages, such as low inspection cost, high testing speed, and broad applicability to structures with complex geometry, compared to other contemporary NDE methods. It has already been widely used in the automobile industry for quality inspection of safety critical parts. Unlike some conventionally used NDE methods, the current RI technology is unable to provide details, i.e. location, dimension, or type, of the flaws in the discrepant parts. Such a limitation severely hinders its widespread application and further development. In this study, an inverse RI algorithm based on a maximum correlation function is proposed to quantify the location and size of flaws in a discrepant part. Dog-bone-shaped stainless steel samples with and without controlled flaws are used for algorithm development and validation. The results show that multiple flaws can be accurately pinpointed using the algorithms developed, and that the prediction accuracy decreases with increasing flaw number and decreasing distance between flaws.
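
    The maximum-correlation idea can be sketched as follows: the measured shift of the natural frequencies relative to a flaw-free part is correlated against a library of simulated flaw signatures, and the best-matching entry gives the estimated flaw location. The 12-mode signatures and candidate grid below are synthetic placeholders.

      # Hedged sketch of the maximum-correlation idea: correlate a measured
      # resonance-shift pattern against a library of simulated flaw signatures and
      # pick the best match. All signatures below are synthetic placeholders.
      import numpy as np

      rng = np.random.default_rng(3)
      n_modes, n_candidates = 12, 50
      # library[i] = fractional frequency shifts predicted for candidate flaw i
      library = rng.normal(0.0, 0.01, size=(n_candidates, n_modes))
      flaw_locations = np.linspace(0.0, 100.0, n_candidates)          # mm along the part

      true_idx = 23
      measured = library[true_idx] + rng.normal(0.0, 0.002, n_modes)  # noisy measurement

      # Normalized cross-correlation between the measurement and each library entry.
      def ncc(a, b):
          a, b = a - a.mean(), b - b.mean()
          return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

      scores = np.array([ncc(measured, sig) for sig in library])
      best = int(np.argmax(scores))
      print(f"best match: candidate {best}, location ~{flaw_locations[best]:.1f} mm")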

  3. Geothermal materials development activities

    SciTech Connect

    Kukacka, L.E.

    1993-06-01

    This ongoing R&D program is a part of the Core Research Category of the Department of Energy/Geothermal Division initiative to accelerate the utilization of geothermal resources. High-risk materials problems that, if successfully solved, will result in significant reductions in well drilling, fluid transport and energy conversion costs are emphasized. The project has already developed several advanced materials systems that are being used by the geothermal industry and by Northeastern Electric, Gas and Steam Utilities. Specific topics currently being addressed include lightweight CO2-resistant well cements, thermally conductive, scale- and corrosion-resistant liner systems, chemical systems for lost circulation control, elastomer-metal bonding systems, and corrosion mitigation at the Geysers. Efforts to enhance the transfer of the technologies developed in these activities to other sectors of the economy are also underway.

  4. Development of a two wheeled self balancing robot with speech recognition and navigation algorithm

    NASA Astrophysics Data System (ADS)

    Rahman, Md. Muhaimin; Ashik-E-Rasul, Haq, Nowab. Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh

    2016-07-01

    This paper discusses the modeling, construction, and development of the navigation algorithm of a two wheeled self balancing mobile robot in an enclosure. In this paper, we discuss the design of two of the main controller algorithms, namely PID algorithms, on the robot model. Simulation is performed in the SIMULINK environment. The controller is developed primarily for self-balancing of the robot and also its positioning. As for the navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot position. The navigation system needs to be calibrated before the navigation process starts. Almost all of the earlier template matching algorithms that can be found in the open literature can only trace the robot. The proposed algorithm, however, can also locate the position of other objects in the enclosure, such as furniture and tables. This enables the robot to know the exact location of every stationary object in the enclosure. Moreover, some additional features, such as Speech Recognition and Object Detection, are added. For Object Detection, the single-board computer Raspberry Pi is used. The system is programmed to analyze images captured via the camera, which are then processed through background subtraction, followed by active noise reduction.
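
    A hedged sketch of the template matching step, using OpenCV's matchTemplate on an overhead camera frame, is shown below; the file names and the 0.8 acceptance threshold are hypothetical, and the pixel-to-floor calibration mentioned above is omitted.

      # Hedged sketch of template matching for overhead-camera localization using
      # OpenCV's matchTemplate. File names and the 0.8 threshold are hypothetical.
      import cv2

      frame = cv2.imread("overhead_frame.png", cv2.IMREAD_GRAYSCALE)     # hypothetical path
      template = cv2.imread("robot_template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
      if frame is None or template is None:
          raise SystemExit("test images not found")

      result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
      _, max_val, _, max_loc = cv2.minMaxLoc(result)

      if max_val > 0.8:                          # accept only confident matches
          h, w = template.shape
          center = (max_loc[0] + w // 2, max_loc[1] + h // 2)
          print(f"robot located at pixel {center} (score {max_val:.2f})")
      else:
          print("robot not found in this frame")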

  5. Shared memory, cache, and frontwidth considerations in multifrontal algorithm development

    SciTech Connect

    Benner, R.E.

    1986-01-23

    A concurrent, multifrontal algorithm (Benner and Weigand 1986) for the solution of finite element equations was modified to better use the cache and shared memories on the ELXSI 6400, and to achieve better load balancing between 'child' processes via frontwidth reduction. The changes were also tailored to use distributed memory machines efficiently by making most data local to individual processors. The test code initially used 8 Mbytes of uncached shared memory and 155 cp (concurrent processor) sec (a speedup of 1.4) when run on 4 processors. The changes left only 50 Kbytes of uncached, and 470 Kbytes of cached, shared memory, plus 530 Kbytes of data local to each 'child' process. Total cp time was reduced to 57 sec and speedup increased to 2.8 on 4 processors. Based on those results, an addition to the ELXSI multitasking software, asynchronous I/O between processes, is proposed that would further decrease the shared memory requirements of the algorithm and make the ELXSI look like a distributed memory machine as far as algorithm development is concerned. This would make the ELXSI an extremely useful tool for further development of special-purpose, finite element computations. 16 refs., 8 tabs.

  6. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Conboy, Barbara (Technical Monitor)

    1999-01-01

    This separation has been logical thus far; however, as launch of AM-1 approaches, it must be recognized that many of these activities will shift emphasis from algorithm development to validation. For example, the second, third, and fifth bullets will become almost totally validation-focused activities in the post-launch era, providing the core of our experimental validation effort. Work under the first bullet will continue into the post-launch time frame, driven in part by algorithm deficiencies revealed as a result of validation activities. Prior to the start of the 1999 fiscal year (FY99) we were requested to prepare a brief plan for our FY99 activities. This plan is included as Appendix 1. The present report describes the progress made on our planned activities.

  7. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.

  8. The development of a whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Kay, F. J.

    1973-01-01

    The whole-body algorithm is envisioned as a mathematical model that utilizes human physiology to simulate the behavior of vital body systems. The objective of this model is to determine the response of selected body parameters within these systems to various input perturbations, or stresses. Perturbations of interest are exercise, chemical unbalances, gravitational changes and other abnormal environmental conditions. This model provides for a study of man's physiological response in various space applications, underwater applications, normal and abnormal workloads and environments, and the functioning of the system with physical impairments or decay of functioning components. Many methods or approaches to the development of a whole-body algorithm are considered. Of foremost concern is the determination of the subsystems to be included, the detail of the subsystems and the interaction between the subsystems.

  9. Algorithm for automatic forced spirometry quality assessment: technological developments.

    PubMed

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of the quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of an FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for automatically assessing the quality of FS was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by the ATS/ERS, was performed. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterizing the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.

  10. Collaborative workbench for cyberinfrastructure to accelerate science algorithm development

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Maskey, M.; Kuo, K.; Lynnes, C.

    2013-12-01

    There are significant untapped resources for information and knowledge creation within the Earth Science community in the form of data, algorithms, services, analysis workflows or scripts, and the related knowledge about these resources. Despite the huge growth in social networking and collaboration platforms, these resources often reside on an investigator's workstation or laboratory and are rarely shared. A major reason for this is that there are very few scientific collaboration platforms, and those that exist typically require the use of a new set of analysis tools and paradigms to leverage the shared infrastructure. As a result, adoption of these collaborative platforms for science research is inhibited by the high cost to an individual scientist of switching from his or her own familiar environment and set of tools to a new environment and tool set. This presentation will describe an ongoing project developing an Earth Science Collaborative Workbench (CWB). The CWB approach will eliminate this barrier by augmenting a scientist's current research environment and tool set to allow him or her to easily share diverse data and algorithms. The CWB will leverage evolving technologies such as commodity computing and social networking to design an architecture for scalable collaboration that will support the emerging vision of an Earth Science Collaboratory. The CWB is being implemented on the robust and open source Eclipse framework and will be compatible with widely used scientific analysis tools such as IDL. The myScience Catalog built into CWB will capture and track metadata and provenance about data and algorithms for the researchers in a non-intrusive manner with minimal overhead. Seamless interfaces to multiple Cloud services will support sharing algorithms, data, and analysis results, as well as access to storage and computer resources. A Community Catalog will track the use of shared science artifacts and manage collaborations among researchers.

  11. Leadership development in the age of the algorithm.

    PubMed

    Buckingham, Marcus

    2012-06-01

    By now we expect personalized content--it's routinely served up by online retailers and news services, for example. But the typical leadership development program still takes a formulaic, one-size-fits-all approach. And it rarely happens that an excellent technique can be effectively transferred from one leader to all others. Someone trying to adopt a practice from a leader with a different style usually seems stilted and off--a Franken-leader. Breakthrough work at Hilton Hotels and other organizations shows how companies can use an algorithmic model to deliver training tips uniquely suited to each individual's style. It's a five-step process: First, a company must choose a tool with which to identify each person's leadership type. Second, it should assess its best leaders, and third, it should interview them about their techniques. Fourth, it should use its algorithmic model to feed tips drawn from those techniques to developing leaders of the same type. And fifth, it should make the system dynamically intelligent, with user reactions sharpening the content and targeting of tips. The power of this kind of system--highly customized, based on peer-to-peer sharing, and continually evolving--will soon overturn the generic model of leadership development. And such systems will inevitably break through any one organization, until somewhere in the cloud the best leadership tips from all over are gathered, sorted, and distributed according to which ones suit which people best.

  13. The development of solution algorithms for compressible flows

    NASA Astrophysics Data System (ADS)

    Slack, David Christopher

    Three main topics were examined. The first is the development and comparison of time integration schemes on 2-D unstructured meshes. Both explicit and implicit solution schemes are presented. Cell-centered and cell-vertex finite volume upwind schemes using Roe's approximate Riemann solver are developed. The second topic involves an interactive adaptive remeshing algorithm which uses a frontal grid generator and is compared to a single grid calculation. The final topic examined is the capabilities developed for a structured 3-D code called GASP. The capabilities include: generalized chemistry and thermodynamic modeling, space marching, memory management through the use of binary C I/O, and algebraic and two-equation eddy viscosity turbulence modeling. Results are given for a Mach 1.7 3-D analytic forebody, a Mach 1.38 axisymmetric nozzle with hydrogen-air combustion, a Mach 14.15 deg ramp, and Mach 0.3 viscous flow over a flat plate.

  14. Communication: Active space decomposition with multiple sites: Density matrix renormalization group algorithm

    SciTech Connect

    Parker, Shane M.; Shiozaki, Toru

    2014-12-07

    We extend the active space decomposition method, recently developed by us, to more than two active sites using the density matrix renormalization group algorithm. The fragment wave functions are described by complete or restricted active-space wave functions. Numerical results are shown on a benzene pentamer and a perylene diimide trimer. It is found that the truncation errors in our method decrease almost exponentially with respect to the number of renormalization states M, allowing for numerically exact calculations (to a few μE_h or less) with M = 128 in both cases. This rapid convergence is because the renormalization steps are used only for the interfragment electron correlation.

  15. Algorithm for quantifying advanced carotid artery atherosclerosis in humans using MRI and active contours

    NASA Astrophysics Data System (ADS)

    Adams, Gareth; Vick, G. W., III; Bordelon, Cassius; Insull, William; Morrisett, Joel

    2002-05-01

    A new algorithm for measuring carotid artery volumes and estimating atherosclerotic plaque volumes from MRI images has been developed and validated using pressure-perfusion-fixed cadaveric carotid arteries. Our method uses an active contour algorithm with the generalized gradient vector field force as the external force to localize the boundaries of the artery on each MRI cross-section. Plaque volume is estimated by an automated algorithm based on estimating the normal wall thickness for each branch of the carotid. Triplicate volume measurements were performed by a single observer on thirty-eight pairs of cadaveric carotid arteries. The coefficient of variance (COV) was used to quantify measurement reproducibility. Aggregate volumes were computed for nine contiguous slices bounding the carotid bifurcation. The median (mean +/- SD) COV for the 76 aggregate arterial volumes was 0.93% (1.47% +/- 1.52%) for the lumen volume, 0.95% (1.06% +/- 0.67%) for the total artery volume, and 4.69% (5.39% +/- 3.97%) for the plaque volume. These results indicate that our algorithm provides repeatable measures of arterial volumes and a repeatable estimate of plaque volume of cadaveric carotid specimens through analysis of MRI images. The algorithm also significantly decreases the amount of time necessary to generate these measurements.
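
    The workflow can be illustrated with the sketch below, which delineates a roughly circular lumen boundary on a synthetic cross-section using scikit-image's active contour. Note that the study above uses a generalized gradient vector field external force, whereas skimage's snake is a simpler classical variant; the image and snake parameters are assumptions.

      # Hedged sketch: delineate a roughly circular lumen boundary on a synthetic
      # cross-section with scikit-image's active contour (a simpler snake than the
      # GGVF-driven contour used in the study). The image and parameters are toys.
      import numpy as np
      from skimage.draw import disk
      from skimage.filters import gaussian
      from skimage.segmentation import active_contour

      # Synthetic "MRI slice": bright vessel wall ring around a darker lumen.
      img = np.zeros((128, 128))
      img[disk((64, 64), 30)] = 1.0
      img[disk((64, 64), 18)] = 0.3
      img = gaussian(img, sigma=2)

      # Initialize the snake just outside the lumen edge and let it settle.
      theta = np.linspace(0, 2 * np.pi, 200)
      init = np.column_stack([64 + 23 * np.sin(theta), 64 + 23 * np.cos(theta)])
      snake = active_contour(img, init, alpha=0.015, beta=1.0, gamma=0.01)

      # A simple area estimate from the converged contour (shoelace formula).
      x, y = snake[:, 1], snake[:, 0]
      area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
      print(f"estimated lumen cross-sectional area ~ {area:.0f} pixels^2")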

  16. Active and passive computed tomography algorithm with a constrained conjugate gradient solution

    SciTech Connect

    Goodman, D.; Jackson, J. A.; Martz, H. E.; Roberson, G. P.

    1998-10-01

    An active and passive computed tomographic technique (A&PCT) has been developed at the Lawrence Livermore National Laboratory (LLNL). The technique uses an external radioactive source and active tomography to map the attenuation within a waste drum as a function of mono-energetic gamma-ray energy. Passive tomography is used to localize and identify specific radioactive waste within the same container. The passive data are corrected for attenuation using the active data, and this yields a quantitative assay of drum activity. A&PCT involves the development of a detailed system model that combines the data from the active scans with the geometry of the imaging system. Using the system model, iterative optimization techniques are used to reconstruct the image from the passive data. High-throughput requirements mean that measured emission levels in waste drums are too low to apply optimization techniques involving the usual Gaussian statistics. In this situation a Poisson distribution, typically used for cases with low counting statistics, is used to create an effective maximum likelihood estimation function. An optimization algorithm, Constrained Conjugate Gradient (CCG), is used to determine a solution for A&PCT quantitative assay. CCG, which was developed at LLNL, has proven to be an efficient and effective optimization method for solving limited-data problems. A detailed explanation of the algorithms used in developing the model and optimization codes is given.
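
    A toy sketch of the low-count assay problem is given below: a Poisson likelihood is maximized under a nonnegativity constraint. The classic MLEM multiplicative update is used here as a simple, well-known stand-in for the CCG solver, and the small random system matrix only mimics attenuation-corrected projection weights.

      # Toy sketch: Poisson maximum-likelihood emission reconstruction with
      # nonnegativity (preserved automatically by the multiplicative MLEM update).
      # MLEM is a stand-in for the CCG solver described above; all data are synthetic.
      import numpy as np

      rng = np.random.default_rng(4)
      n_pix, n_rays = 25, 60
      A = rng.uniform(0.0, 1.0, size=(n_rays, n_pix))   # toy projection/system matrix
      x_true = np.zeros(n_pix)
      x_true[[6, 12, 18]] = [5.0, 8.0, 3.0]             # three hot spots (activity)
      counts = rng.poisson(A @ x_true)                  # low-count passive measurement

      sensitivity = A.sum(axis=0)                       # A^T 1
      x = np.ones(n_pix)
      for _ in range(200):
          lam = A @ x + 1e-12                           # expected counts per ray
          x *= (A.T @ (counts / lam)) / sensitivity     # MLEM multiplicative update

      print("true activity :", x_true[[6, 12, 18]])
      print("reconstructed :", np.round(x[[6, 12, 18]], 1))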

  17. Advanced three-dimensional Eulerian hydrodynamic algorithm development

    SciTech Connect

    Rider, W.J.; Kothe, D.B.; Mosso, S.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The purpose of this project is to investigate, implement, and evaluate algorithms that have high potential for improving the robustness, fidelity and accuracy of three-dimensional Eulerian hydrodynamic simulations. Eulerian computations are necessary to simulate a number of important physical phenomena ranging from the molding process for metal parts to nuclear weapons safety issues to astrophysical phenomena such as those associated with Type II supernovae. A number of algorithmic issues were explored in the course of this research, including interface/volume tracking, surface physics integration, high resolution integration techniques, multilevel iterative methods, multimaterial hydrodynamics and coupling radiation with hydrodynamics. This project combines core strengths of several Laboratory divisions. The project has high institutional benefit given the renewed emphasis on numerical simulations in Science-Based Stockpile Stewardship and the Accelerated Strategic Computing Initiative and LANL's tactical goals related to high performance computing and simulation.

  18. Active Control of Automotive Intake Noise under Rapid Acceleration using the Co-FXLMS Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Hae-Jin; Lee, Gyeong-Tae; Oh, Jae-Eung

    Methods of reducing automotive intake noise can be classified into passive and active control techniques. Passive control, however, has a limited noise reduction effect in the low frequency range (below 500 Hz) and is limited by the space of the engine room. Active control can overcome these limitations of passive control. The active control technique mostly uses the Least-Mean-Square (LMS) algorithm, because the LMS algorithm can easily obtain the complex transfer function in real-time, particularly when the Filtered-X LMS (FXLMS) algorithm is applied to an active noise control (ANC) system. However, the convergence performance of the LMS algorithm decreases significantly when the FXLMS algorithm is applied to the active control of intake noise under rapidly accelerating driving conditions. Therefore, in this study, the Co-FXLMS algorithm was proposed to improve the control performance of the FXLMS algorithm during rapid acceleration. The Co-FXLMS algorithm is realized by using an estimate of the cross correlation between the adaptation error and the filtered input signal to control the step size. The performance of the Co-FXLMS algorithm is presented in comparison with that of the FXLMS algorithm. Experimental results show that active noise control using Co-FXLMS is effective in reducing automotive intake noise during rapid acceleration.
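
    A minimal FXLMS sketch is given below: the reference is filtered through an estimate of the secondary path before the weight update. The fixed normalized step size is a placeholder; the Co-FXLMS variant described above instead adapts the step from the cross-correlation between the adaptation error and this filtered reference. All paths and signals are hypothetical.

      # Minimal FXLMS sketch for feedforward ANC. The fixed normalized step 'mu' is a
      # placeholder for the cross-correlation-based step adaptation of Co-FXLMS;
      # the acoustic paths and reference signal below are hypothetical.
      import numpy as np

      rng = np.random.default_rng(5)
      N = 10000
      x = rng.normal(size=N)                     # reference (intake noise) signal
      P = np.array([0.6, 0.4, 0.2])              # hypothetical primary acoustic path
      S = np.array([0.5, 0.25])                  # hypothetical secondary (speaker) path
      S_hat = S.copy()                           # assume a perfect secondary-path model

      d = np.convolve(x, P)[:N]                  # disturbance at the error microphone
      xf = np.convolve(x, S_hat)[:N]             # filtered reference
      L, mu = 32, 0.05
      w = np.zeros(L)
      y = np.zeros(N)                            # control-filter output history
      e = np.zeros(N)

      for n in range(L, N):
          u = x[n - L + 1:n + 1][::-1]
          y[n] = w @ u                                        # anti-noise command
          ys = S[0] * y[n] + S[1] * y[n - 1]                  # command through secondary path
          e[n] = d[n] - ys                                    # residual at the error mic
          uf = xf[n - L + 1:n + 1][::-1]
          w += mu * e[n] * uf / (uf @ uf + 1e-8)              # normalized FXLMS update

      print("noise reduction: %.1f dB" % (10 * np.log10(np.mean(d[-2000:]**2) / np.mean(e[-2000:]**2))))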

  19. Stoffenmanager exposure model: development of a quantitative algorithm.

    PubMed

    Tielemans, Erik; Noy, Dook; Schinkel, Jody; Heussen, Henri; Van Der Schaaf, Doeke; West, John; Fransman, Wouter

    2008-08-01

    In The Netherlands, the web-based tool called 'Stoffenmanager' was initially developed to assist small- and medium-sized enterprises to prioritize and control risks of handling chemical products in their workplaces. The aim of the present study was to explore the accuracy of the Stoffenmanager exposure algorithm. This was done by comparing its semi-quantitative exposure rankings for specific substances with exposure measurements collected from several occupational settings to derive a quantitative exposure algorithm. Exposure data were collected using two strategies. First, we conducted seven surveys specifically for validation of the Stoffenmanager. Second, existing occupational exposure data sets were collected from various sources. This resulted in 378 and 320 measurements for solid and liquid scenarios, respectively. The Spearman correlation coefficients between Stoffenmanager scores and exposure measurements appeared to be good for handling solids (r(s) = 0.80, N = 378, P < 0.0001) and liquid scenarios (r(s) = 0.83, N = 320, P < 0.0001). However, the correlation for liquid scenarios appeared to be lower when calculated separately for sets of volatile substances with a vapour pressure >10 Pa (r(s) = 0.56, N = 104, P < 0.0001) and non-volatile substances with a vapour pressure < or =10 Pa (r(s) = 0.53, N = 216, P < 0.0001). The mixed-effect regression models with natural log-transformed Stoffenmanager scores as independent parameter explained a substantial part of the total exposure variability (52% for solid scenarios and 76% for liquid scenarios). Notwithstanding the good correlation, the data show substantial variability in exposure measurements given a certain Stoffenmanager score. The overall performance increases our confidence in the use of the Stoffenmanager as a generic tool for risk assessment. The mixed-effect regression models presented in this paper may be used for assessment of so-called reasonable worst case exposures. This evaluation is

  20. Development and evaluation of thermal model reduction algorithms for spacecraft

    NASA Astrophysics Data System (ADS)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Thereby compatibility with the thermal analysis tool ESATAN-TMS is of major concern, which restricts the useful application of these methods. Additional model reduction methods have been developed which account for these constraints. The Matrix Reduction method allows the reduced differential equation to approximate reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.

  1. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This was an exploratory study to enhance our understanding of problems involved in developing large scale applications in a heterogeneous distributed environment. It is likely that the large scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data is in different unit systems and the algorithms for integrating in time are different. In addition, the code for each application is likely to have been developed on different architectures and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.

  2. Understanding disordered systems through numerical simulation and algorithm development

    NASA Astrophysics Data System (ADS)

    Sweeney, Sean Michael

    Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising

  3. Mars Entry Atmospheric Data System Modelling and Algorithm Development

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; OKeefe, Stephen A.; Siemers, Paul; White, Brady; Engelund, Walter C.; Munk, Michelle M.

    2009-01-01

    The Mars Entry Atmospheric Data System (MEADS) is being developed as part of the Mars Science Laboratory (MSL), Entry, Descent, and Landing Instrumentation (MEDLI) project. The MEADS project involves installing an array of seven pressure transducers linked to ports on the MSL forebody to record the surface pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the total pressure, dynamic pressure, Mach number, angle of attack, and angle of sideslip. Secondary objectives are to estimate atmospheric winds by coupling the pressure measurements with the on-board Inertial Measurement Unit (IMU) data. This paper provides details of the algorithm development, MEADS system performance based on calibration, and uncertainty analysis for the aerodynamic and atmospheric quantities of interest. The work presented here is part of the MEDLI performance pre-flight validation and will culminate with processing flight data after Mars entry in 2012.

  4. Development of hybrid genetic algorithms for product line designs.

    PubMed

    Balakrishnan, P V Sundar; Gupta, Rakesh; Jacob, Varghese S

    2004-02-01

    In this paper, we investigate the efficacy of artificial intelligence (AI) based meta-heuristic techniques, namely genetic algorithms (GAs), for the product line design problem. This work extends previously developed methods for the single product design problem. We conduct a large scale simulation study to determine the effectiveness of such an AI based technique for providing good solutions and benchmark its performance against the current dominant approach of beam search (BS). We investigate the potential advantages of pursuing the avenue of developing hybrid models and then implement and study such hybrid models using two very distinct approaches: namely, seeding the initial GA population with the BS solution, and employing the BS solution as part of the GA operator's process. We go on to examine the impact of two alternate string representation formats on the quality of the solutions obtained by the above proposed techniques. We also explicitly investigate a critical managerial factor of attribute importance in terms of its impact on the solutions obtained by the alternate modeling procedures. The alternate techniques are then evaluated, using statistical analysis of variance, on a fairly large number of data sets as to the quality of the solutions obtained with respect to the state-of-the-art benchmark and in terms of their ability to provide multiple, unique product line options.
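
    The first hybridization route, seeding the GA's initial population with the BS solution, can be sketched as below; the binary attribute encoding, the toy fitness function, and the beam_search_solution placeholder are assumptions for illustration only.

      # Hedged sketch of seeding a GA's initial population with a beam-search (BS)
      # solution. The encoding, toy fitness, and BS seed below are placeholders,
      # not the product-line formulation used in the study.
      import numpy as np

      rng = np.random.default_rng(6)
      n_attrs, pop_size, gens = 20, 40, 100
      target = rng.integers(0, 2, n_attrs)                 # toy "ideal" product line
      fitness = lambda s: np.sum(s == target)              # stand-in for share-of-choices

      beam_search_solution = target.copy()
      beam_search_solution[:5] = 1 - beam_search_solution[:5]   # good-but-imperfect seed

      pop = rng.integers(0, 2, (pop_size, n_attrs))
      pop[0] = beam_search_solution                        # seeding step

      for _ in range(gens):
          scores = np.array([fitness(s) for s in pop])
          parents = pop[np.argsort(scores)[-pop_size // 2:]]          # truncation selection
          children = []
          while len(children) < pop_size - len(parents):
              a, b = parents[rng.integers(len(parents), size=2)]
              cut = rng.integers(1, n_attrs)
              child = np.concatenate([a[:cut], b[cut:]])               # one-point crossover
              flip = rng.random(n_attrs) < 0.02                        # mutation
              child[flip] = 1 - child[flip]
              children.append(child)
          pop = np.vstack([parents, children])

      print("best fitness:", max(fitness(s) for s in pop), "of", n_attrs)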

  5. A new radar technique for satellite rainfall algorithm development

    NASA Technical Reports Server (NTRS)

    Jameson, Arthur R.

    1987-01-01

    A potential new radar parameter was investigated for measuring rainfall, namely the summation of the phase shifts at horizontal and vertical polarizations due to propagation through precipitation. The proposed radar technique has several potential advantages over other approaches because it is insensitive to the drop size distribution and to the shapes of the raindrops. Such a parameter could greatly assist the development of satellite rainfall estimation algorithms by providing comparative measurements near the ground. It could also provide hydrologically useful information for such practical applications as urban hydrology. Results of the investigation showed that this parameter cannot be measured by radar. However, a closely related radar parameter, propagation differential phase shift, can be readily measured using a polarization diversity radar. It is recommended that propagation differential phase shift be further investigated and developed for radar monitoring of rainfall using a polarization agile radar. It is also recommended that a prototype multiple frequency microwave link be constructed for attenuation measurements not possible with existing radar systems.

  6. Algorithm development for Prognostics and Health Management (PHM).

    SciTech Connect

    Swiler, Laura Painton; Campbell, James E.; Doser, Adele Beatrice; Lowder, Kelly S.

    2003-10-01

    This report summarizes the results of a three-year LDRD project on prognostics and health management (PHM). Prognostics refers to the capability to predict the probability of system failure over some future time interval (an alternative definition is the capability to predict the remaining useful life of a system). Prognostics are integrated with health monitoring (through inspections, sensors, etc.) to provide an overall PHM capability that optimizes maintenance actions and results in higher availability at a lower cost. Our goal in this research was to develop PHM tools that could be applied to a wide variety of equipment (repairable, non-repairable, manufacturing, weapons, battlefield equipment, etc.) and require minimal customization to move from one system to the next. Thus, our approach was to develop a toolkit of reusable software objects/components and an architecture for their use. We have developed two software tools: an Evidence Engine and a Consequence Engine. The Evidence Engine integrates information from a variety of sources in order to take into account all the evidence that impacts a prognosis for system health. The Evidence Engine has the capability for feature extraction, trend detection, information fusion through Bayesian Belief Networks (BBN), and estimation of remaining useful life. The Consequence Engine involves algorithms to analyze the consequences of various maintenance actions. The Consequence Engine takes as input a maintenance and use schedule, spares information, and time-to-failure data on components, then generates maintenance and failure events, and evaluates performance measures such as equipment availability, mission capable rate, time to failure, and cost. This report summarizes the capabilities we have developed, describes the approach and architecture of the two engines, and provides examples of their use.

  7. Aerospace Activities and Language Development

    ERIC Educational Resources Information Center

    Jones, Robert M.; Piper, Martha

    1975-01-01

    Describes how science activities can be used to stimulate language development in the elementary grades. Two aerospace activities are described involving liquid nitrogen and the launching of a weather balloon which integrate aerospace interests into the development of language skills. (BR)

  8. Dataset exploited for the development and validation of automated cyanobacteria quantification algorithm, ACQUA.

    PubMed

    Gandola, Emanuele; Antonioli, Manuela; Traficante, Alessio; Franceschini, Simone; Scardi, Michele; Congestri, Roberta

    2016-09-01

    The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that host filamentous cyanobacteria strains in different environments. The presented data sets were used to estimate the abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA ("ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning" (Gandola et al., 2016) [1]). This strategy was used to assess the algorithm performance and to set up the denoising algorithm. Abundance and total length estimations were used for software development; to this aim we evaluated the efficiency of the statistical tools and mathematical algorithms described here. The image convolution with the Sobel filter was chosen to denoise input images from background signals, then spline curves and the least squares method were used to parameterize detected filaments and to recombine crossing and interrupted sections, aimed at performing precise abundance estimations and morphometric measurements.
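
    The Sobel denoising step can be sketched as follows on a synthetic micrograph: the edge magnitude from the Sobel convolution is thresholded to separate candidate filament pixels from background. The image and the 90th-percentile threshold are assumptions; the actual pipeline continues with spline parameterization and recombination of crossing or interrupted sections.

      # Hedged sketch of the Sobel edge step used to separate filament signal from
      # background. The synthetic image and the percentile threshold are assumptions.
      import numpy as np
      from scipy.ndimage import sobel

      # Synthetic micrograph: a faint diagonal "filament" over a noisy background.
      rng = np.random.default_rng(7)
      img = rng.normal(0.0, 0.05, size=(200, 200))
      rr = np.arange(40, 160)
      img[rr, rr] += 1.0
      img[rr, rr + 1] += 1.0

      gx, gy = sobel(img, axis=1), sobel(img, axis=0)
      grad = np.hypot(gx, gy)                         # edge magnitude (Sobel convolution)
      mask = grad > np.percentile(grad, 90)           # keep the strongest 10% of edges

      ys, xs = np.nonzero(mask)
      print(f"candidate filament pixels: {mask.sum()}, "
            f"extent x: {xs.min()}-{xs.max()}, y: {ys.min()}-{ys.max()}")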

  9. Algorithm development for deeply buried threat detection in GPR data

    NASA Astrophysics Data System (ADS)

    Reichman, Daniël.; Malof, Jordan M.; Collins, Leslie M.

    2016-05-01

    Ground penetrating radar (GPR) is a popular remote sensing modality for buried threat detection. Many algorithms have been developed to detect buried threats using GPR data. One on-going challenge with GPR is the detection of very deeply buried targets. In this work a detection approach is proposed that improves the detection of very deeply buried targets, and interestingly, shallow targets as well. First, it is shown that the signal of a target (the target "signature") is well localized in time, and well correlated with the target's burial depth. This motivates the proposed approach, where GPR data is split into two disjoint subsets: an early and late portion corresponding to the time at which shallow and deep target signatures appear, respectively. Experiments are conducted on real GPR data using the previously published histogram of oriented gradients (HOG) prescreener: a fast supervised processing method operated on HOG features. The results show substantial improvements in detection of very deeply buried targets (4.1% to 17.2%) and in overall detection performance (81.1% to 83.9%). Further, it is shown that the performance of the proposed approach is relatively insensitive to the time at which the data is split. These results suggest that other detection methods may benefit from depth-based processing as well.

  10. Toward Developing Genetic Algorithms to Aid in Critical Infrastructure Modeling

    SciTech Connect

    Not Available

    2007-05-01

    Today’s society relies upon an array of complex national and international infrastructure networks such as transportation, telecommunication, financial and energy. Understanding these interdependencies is necessary in order to protect our critical infrastructure. The Critical Infrastructure Modeling System, CIMS©, examines the interrelationships between infrastructure networks. CIMS© development is sponsored by the National Security Division at the Idaho National Laboratory (INL) in its ongoing mission for providing critical infrastructure protection and preparedness. A genetic algorithm (GA) is an optimization technique based on Darwin’s theory of evolution. A GA can be coupled with CIMS© to search for optimum ways to protect infrastructure assets. This includes identifying optimum assets to enforce or protect, testing the addition of or change to infrastructure before implementation, or finding the optimum response to an emergency for response planning. This paper describes the addition of a GA to infrastructure modeling for infrastructure planning. It first introduces the CIMS© infrastructure modeling software used as the modeling engine to support the GA. Next, the GA techniques and parameters are defined. Then a test scenario illustrates the integration with CIMS© and the preliminary results.
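
    For readers unfamiliar with the technique, the sketch below is a generic binary-encoded genetic algorithm (tournament selection, one-point crossover, bit-flip mutation); the toy fitness function simply counts 1-bits and stands in for a protection score that, in the paper's setting, would come from a CIMS© simulation run. Population size, rates and encoding are illustrative assumptions.

        import random

        def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=50,
                              crossover_rate=0.9, mutation_rate=0.02):
            """Generic binary-encoded GA: selection, crossover and mutation."""
            pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            best = max(pop, key=fitness)
            for _ in range(generations):
                # Tournament selection of parents.
                parents = [max(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
                children = []
                for p1, p2 in zip(parents[::2], parents[1::2]):
                    c1, c2 = p1[:], p2[:]
                    if random.random() < crossover_rate:      # one-point crossover
                        cut = random.randrange(1, n_bits)
                        c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                    for child in (c1, c2):                    # bit-flip mutation
                        for i in range(n_bits):
                            if random.random() < mutation_rate:
                                child[i] ^= 1
                    children += [c1, c2]
                pop = children
                best = max(pop + [best], key=fitness)
            return best

        # Toy objective: maximise the number of 1-bits (a stand-in for an
        # infrastructure protection score computed by a simulation engine).
        solution = genetic_algorithm(fitness=sum)
        print(solution, sum(solution))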

  11. Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Tanis, Fred J.

    1984-01-01

    A series of experiments have been conducted in the Great Lakes designed to evaluate the application of the NIMBUS-7 Coastal Zone Color Scanner (CZCS). Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric algorithm developed compared favorably with existing algorithms and was the only algorithm found to adequately predict the radiance variations in the 670 nm band. The atmospheric correction algorithm developed was designed to extract needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.

  12. Implementation of FFT Algorithm using DSP TMS320F28335 for Shunt Active Power Filter

    NASA Astrophysics Data System (ADS)

    Patel, Pinkal Jashvantbhai; Patel, Rajesh M.; Patel, Vinod

    2016-07-01

    This work presents simulation, analysis and experimental verification of a Fast Fourier Transform (FFT) algorithm for a shunt active power filter based on a three-level inverter. Different types of filters can be used for the elimination of harmonics in the power system. In this work, the FFT algorithm for reference current generation is discussed. The FFT control algorithm is verified using PSIM simulation results with a DLL block and C-code. Simulation results are compared with experimental results for the FFT algorithm implemented on the DSP TMS320F28335 for the shunt active power filter application.
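
    A sketch of the FFT-based reference-current idea in Python/NumPy rather than DSP C code: the fundamental component of the measured load current is reconstructed from its FFT bin, and the remaining harmonic content becomes the compensation reference that the active filter would inject. The sampling rate, fundamental frequency and synthetic waveform are illustrative assumptions.

        import numpy as np

        fs = 12800.0              # sampling frequency (Hz), illustrative
        f1 = 50.0                 # fundamental frequency (Hz)
        n = int(fs / f1)          # samples per fundamental cycle (256)
        t = np.arange(n) / fs

        # Synthetic distorted load current: fundamental plus 5th and 7th harmonics.
        i_load = (10 * np.sin(2 * np.pi * f1 * t)
                  + 2.0 * np.sin(2 * np.pi * 5 * f1 * t)
                  + 1.5 * np.sin(2 * np.pi * 7 * f1 * t))

        spectrum = np.fft.rfft(i_load)
        fundamental = np.zeros_like(spectrum)
        fundamental[1] = spectrum[1]             # bin 1 = one cycle per window = 50 Hz
        i_fundamental = np.fft.irfft(fundamental, n)
        i_reference = i_load - i_fundamental     # harmonic content to be cancelled

        print("RMS of harmonic reference: %.2f A" % np.sqrt(np.mean(i_reference ** 2)))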

  13. Status of GCOM-W1/AMSR2 development, algorithms, and products

    NASA Astrophysics Data System (ADS)

    Maeda, Takashi; Imaoka, Keiji; Kachi, Misako; Fujii, Hideyuki; Shibata, Akira; Naoki, Kazuhiro; Kasahara, Marehito; Ito, Norimasa; Nakagawa, Keizo; Oki, Taikan

    2011-11-01

    The Global Change Observation Mission (GCOM) consists of two polar orbiting satellite observing systems, GCOM-W (Water) and GCOM-C (Climate), and three generations to achieve global and long-term monitoring of the Earth. GCOM-W1 is the first satellite of the GCOM-W series and is scheduled to be launched in Japanese fiscal year 2011. The Advanced Microwave Scanning Radiometer-2 (AMSR2) will be the mission instrument of GCOM-W1. AMSR2 will extend the observations of the currently operating AMSR-E on the EOS Aqua platform. Development of GCOM-W1 and AMSR2 is progressing on schedule. Proto-flight testing (PFT) of AMSR2 was completed and the instrument was delivered to the GCOM-W1 satellite system. Currently, the GCOM-W1 system is under PFT at Tsukuba Space Center until summer 2011 before shipment to the launch site, Tanegashima Space Center. Development of retrieval algorithms has also been progressing with the collaboration of the principal investigators. Based on the algorithm comparison results, at-launch standard algorithms were selected and implemented into the processing system. These algorithms will be validated and updated during the initial calibration and validation phase. As an instrument calibration activity, a deep space calibration maneuver is planned during the initial checkout phase, to confirm the consistency of cold sky calibration and intra-scan biases. Maintaining and expanding the validation sites are also ongoing activities. A flux tower with observing instruments will be installed in the Murray-Darling basin in Australia, where the validation of other soil moisture instruments (e.g., SMOS and SMAP) is planned.

  14. Developing new algorithms for estimating river discharge from SWOT

    NASA Astrophysics Data System (ADS)

    Pavelsky, T. M.; Durand, M. T.

    2012-12-01

    Flow of water through rivers is a critical component of the global hydrologic cycle, yet discharge on many of the world's rivers remains poorly constrained by ground-based observations. The planned NASA/CNES Surface Water and Ocean Topography (SWOT) satellite mission will provide concurrent observations of inundated area, water surface elevation, and its spatial derivative (surface slope) for rivers wider than 100 m (and perhaps as narrow as 50 m), which will allow a step-change improvement in our ability to characterize river discharge from space. New discharge algorithms must be developed to incorporate SWOT's unprecedented observations. While ground-based discharge is usually measured at river cross-sections, SWOT will estimate discharge over river reaches of variable length. Cross-sectional discharge is often estimated using slope-area scaling methods such as Manning's Equation, and modified forms of these equations could be used to estimate reach-averaged discharge. While some of the parameters required to estimate discharge are measured directly by SWOT, others including baseflow depth and channel roughness (e.g. Manning's n) are not. Promising new methods are under development to estimate baseflow depth in selected river reaches by extrapolating width-stage relationships. In contrast, channel roughness has received relatively little attention. By combining SWOT observations of several reaches over multiple overpasses, however, it may be possible to simultaneously derive both depth and channel roughness from SWOT observations alone. The principle in this method is to start from mass and momentum conservation, apply a slope-area method such as Manning's equation, assume the roughness coefficient and bathymetry are temporally-invariant, then solve for the unknowns by constraining over a number of overpasses. In principle, only four overpasses are needed for this method, but in practice more will likely be needed to obtain an accurate solution; the actual number
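
    As a concrete illustration of the slope-area approach mentioned above, the sketch below evaluates Manning's equation, Q = (1/n)·A·R^(2/3)·S^(1/2), for a single reach approximated as a wide rectangular channel. The width, depth, slope and roughness values are arbitrary illustrative numbers; in the SWOT context, width and slope would be observed while depth and n must be inferred.

        def manning_discharge(width_m, depth_m, slope, n_roughness):
            """Reach-averaged discharge (m^3/s) from Manning's equation,
            approximating the channel as rectangular so that the hydraulic
            radius is area divided by wetted perimeter."""
            area = width_m * depth_m
            hydraulic_radius = area / (width_m + 2.0 * depth_m)
            return (1.0 / n_roughness) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

        # Illustrative reach: 150 m wide, 3 m deep, 5 cm/km slope, n = 0.03.
        q = manning_discharge(width_m=150.0, depth_m=3.0, slope=5e-5, n_roughness=0.03)
        print("estimated discharge: %.0f m^3/s" % q)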

  15. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    NASA Astrophysics Data System (ADS)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of a long-order ANC system using the FXLMS algorithm, frequency domain block ANC algorithms have been proposed in the past. These full block frequency domain ANC algorithms are associated with some disadvantages such as large block delay, quantization error due to computation of large size transforms and implementation difficulties in existing low-end DSP hardware. To overcome these shortcomings, the partitioned block ANC algorithm is newly proposed where the long length filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations are carried out for both the proposed partitioned block ANC algorithms to show their accuracy compared to the time domain FXLMS algorithm.
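
    For reference, a compact time-domain FXLMS loop, the baseline against which the partitioned frequency-domain algorithm is compared; the secondary-path coefficients, filter length and step size below are illustrative assumptions, and the partitioned frequency-domain machinery itself is not reproduced.

        import numpy as np

        L = 64                                   # adaptive ANC filter length (illustrative)
        s = np.array([0.0, 0.8, 0.4, 0.2])       # assumed "true" secondary path
        s_hat = s.copy()                         # secondary-path estimate used by FXLMS
        w = np.zeros(L)                          # control filter weights
        mu = 5e-4                                # LMS step size

        x_buf = np.zeros(L)                      # reference buffer for the control filter
        fx_buf = np.zeros(L)                     # filtered-reference buffer for the update
        y_buf = np.zeros(len(s))                 # anti-noise buffer for the secondary path

        errors = []
        for k in range(10000):
            x = np.sin(2 * np.pi * 0.01 * k)                 # tonal reference noise
            d = 0.9 * np.sin(2 * np.pi * 0.01 * k + 0.3)     # disturbance at the error mic
            x_buf = np.roll(x_buf, 1); x_buf[0] = x
            y = w @ x_buf                                    # anti-noise output
            y_buf = np.roll(y_buf, 1); y_buf[0] = y
            e = d + s @ y_buf                                # residual after secondary path
            fx = s_hat @ x_buf[:len(s_hat)]                  # filtered reference sample
            fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
            w -= mu * e * fx_buf                             # FXLMS weight update
            errors.append(e)

        print("residual power (last 1000 samples): %.4f" % np.mean(np.square(errors[-1000:])))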

  16. Deriving rules from activity diary data: A learning algorithm and results of computer experiments

    NASA Astrophysics Data System (ADS)

    Arentze, Theo A.; Hofman, Frank; Timmermans, Harry J. P.

    Activity-based models consider travel as a derived demand from the activities households need to conduct in space and time. Over the last 15 years, computational or rule-based models of activity scheduling have gained increasing interest in time-geography and transportation research. This paper argues that a lack of techniques for deriving rules from empirical data hinders the further development of rule-based systems in this area. To overcome this problem, this paper develops and tests an algorithm for inductively deriving rules from activity-diary data. The decision table formalism is used to exhaustively represent the theoretically possible decision rules that individuals may use in sequencing a given set of activities. Actual activity patterns of individuals are supplied to the system as examples. In an incremental learning process, the system progressively improves on the selection of rules used for reproducing the examples. Computer experiments based on simulated data are performed to fine-tune rule selection and rule value update functions. The results suggest that the system is effective and fairly robust to parameter settings. It is concluded, therefore, that the proposed approach opens up possibilities to derive empirically tested rule-based models of activity scheduling. Follow-up research will be concerned with testing the system on empirical data.

  17. Developing a Direct Search Algorithm for Solving the Capacitated Open Vehicle Routing Problem

    NASA Astrophysics Data System (ADS)

    Simbolon, Hotman

    2011-06-01

    In open vehicle routing problems, the vehicles are not required to return to the depot after completing service. In this paper, we present the first exact optimization algorithm for the open version of the well-known capacitated vehicle routing problem (CVRP). The strategy of releasing nonbasic variables from their bounds, combined with the "active constraint" method and the notion of superbasics, has been developed to meet efficiency requirements; this strategy is used to force the appropriate non-integer basic variables to move to their neighboring integer points. A study of criteria for choosing a nonbasic variable to work with in the integerizing strategy has also been made.

  18. A Wolf Pack Algorithm for Active and Reactive Power Coordinated Optimization in Active Distribution Network

    NASA Astrophysics Data System (ADS)

    Zhuang, H. M.; Jiang, X. J.

    2016-08-01

    This paper presents an active and reactive power dynamic optimization model for an active distribution network (ADN), whose control variables include the output of distributed generations (DGs), charge or discharge power of the energy storage system (ESS) and reactive power from capacitor banks. To solve the high-dimension nonlinear optimization model, a new heuristic swarm intelligent method, namely the wolf pack algorithm (WPA) with better global convergence and computational robustness, is adapted so that network loss minimization can be achieved. In this paper, the IEEE 33-bus system is used to show the effectiveness of the WPA technique compared with other techniques. Numerical tests on the modified IEEE 33-bus system show that WPA for active and reactive multi-period optimization of ADN is exact and effective.

  19. MODIS algorithm development and data visualization using ACTS

    NASA Technical Reports Server (NTRS)

    Abbott, Mark R.

    1992-01-01

    The study of the Earth as a system will require the merger of scientific and data resources on a much larger scale than has been done in the past. New methods of scientific research, particularly in the development of geographically dispersed, interdisciplinary teams, are necessary if we are to understand the complexity of the Earth system. Even the planned satellite missions themselves, such as the Earth Observing System, will require much more interaction between researchers and engineers if they are to produce scientifically useful data products. A key component in these activities is the development of flexible, high bandwidth data networks that can be used to move large amounts of data as well as allow researchers to communicate in new ways, such as through video. The capabilities of the Advanced Communications Technology Satellite (ACTS) will allow the development of such networks. The Pathfinder global AVHRR data set and the upcoming SeaWiFS Earthprobe mission would serve as a testbed in which to develop the tools to share data and information among geographically distributed researchers. Our goal is to develop a 'Distributed Research Environment' that can be used as a model for scientific collaboration in the EOS era. The challenge is to unite the advances in telecommunications with the parallel advances in computing and networking.

  20. An Active Learning Algorithm for Control of Epidural Electrostimulation.

    PubMed

    Desautels, Thomas A; Choe, Jaehoon; Gad, Parag; Nandra, Mandheerej S; Roy, Roland R; Zhong, Hui; Tai, Yu-Chong; Edgerton, V Reggie; Burdick, Joel W

    2015-10-01

    Epidural electrostimulation has shown promise for spinal cord injury therapy. However, finding effective stimuli on the multi-electrode stimulating arrays employed requires a laborious manual search of a vast space for each patient. Widespread clinical application of these techniques would be greatly facilitated by an autonomous, algorithmic system which chooses stimuli to simultaneously deliver effective therapy and explore this space. We propose a method based on GP-BUCB, a Gaussian process bandit algorithm. In n = 4 spinally transected rats, we implant epidural electrode arrays and examine the algorithm's performance in selecting bipolar stimuli to elicit specified muscle responses. These responses are compared with temporally interleaved intra-animal stimulus selections by a human expert. GP-BUCB successfully controlled the spinal electrostimulation preparation in 37 testing sessions, selecting 670 stimuli. These sessions included sustained autonomous operations (ten-session duration). Delivered performance with respect to the specified metric was as good as or better than that of the human expert. Despite receiving no information as to anatomically likely locations of effective stimuli, GP-BUCB also consistently discovered such a pattern. Further, GP-BUCB was able to extrapolate from previous sessions' results to make predictions about performance in new testing sessions, while remaining sufficiently flexible to capture temporal variability. These results provide validation for applying automated stimulus selection methods to the problem of spinal cord injury therapy.
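
    A minimal sketch of the Gaussian-process upper-confidence-bound idea underlying GP-BUCB, using scikit-learn on a one-dimensional toy "stimulus space"; the objective function, kernel and exploration weight are illustrative assumptions, and the batch/delayed-feedback handling that distinguishes GP-BUCB from plain GP-UCB is omitted.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        np.random.seed(0)

        def muscle_response(stimulus):
            """Toy stand-in for the evoked response to a stimulus parameter."""
            return np.exp(-(stimulus - 0.7) ** 2 / 0.02) + 0.05 * np.random.randn()

        candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)   # discretised stimulus space
        X, y = [[0.1], [0.9]], [muscle_response(0.1), muscle_response(0.9)]  # seed points
        beta = 2.0                                                # exploration weight

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-2)
        for t in range(15):
            gp.fit(np.array(X), np.array(y))
            mean, std = gp.predict(candidates, return_std=True)
            ucb = mean + beta * std                   # upper confidence bound acquisition
            x_next = candidates[np.argmax(ucb)][0]    # next stimulus to test
            X.append([x_next]); y.append(muscle_response(x_next))

        print("best stimulus found: %.3f (response %.3f)" % (X[int(np.argmax(y))][0], max(y)))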

  1. Research and Development. Laboratory Activities.

    ERIC Educational Resources Information Center

    Gallaway, Ann, Ed.

    Research and Development is a laboratory-oriented course that includes the appropriate common essential elements for industrial technology education plus concepts and skills related to research and development. This guide provides teachers of the course with learning activities for secondary students. Introductory materials include an…

  2. Human activity recognition based on feature selection in smart home using back-propagation algorithm.

    PubMed

    Fang, Hongqing; He, Lei; Si, Hao; Liu, Peng; Xie, Xiaolei

    2014-09-01

    In this paper, the back-propagation (BP) algorithm has been used to train a feed-forward neural network for human activity recognition in smart home environments, and an inter-class distance method for feature selection from observed motion sensor events is discussed and tested. The human activity recognition performance of the neural network trained with the BP algorithm is then evaluated and compared with other probabilistic algorithms: the Naïve Bayes (NB) classifier and the Hidden Markov Model (HMM). The results show that different feature datasets yield different activity recognition accuracy. The selection of unsuitable feature datasets increases the computational complexity and degrades the activity recognition accuracy. Furthermore, the neural network using the BP algorithm has relatively better human activity recognition performance than the NB classifier and the HMM.
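
    A small sketch of the approach using scikit-learn's MLPClassifier, which trains a feed-forward network by back-propagation (stochastic gradient descent on the cross-entropy loss); the synthetic feature windows, labels and network size are illustrative assumptions, not the paper's smart-home dataset.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        # Illustrative feature matrix: each row is a feature vector derived from a
        # window of motion-sensor events; each label is an activity class (0..3).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(400, 12))
        y = rng.integers(0, 4, size=400)
        X[y == 1, 0] += 2.0; X[y == 2, 3] -= 2.0; X[y == 3, 7] += 2.0   # separable classes

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        # Feed-forward network trained with back-propagation (SGD).
        clf = MLPClassifier(hidden_layer_sizes=(20,), solver="sgd",
                            learning_rate_init=0.05, max_iter=2000, random_state=0)
        clf.fit(X_tr, y_tr)
        print("activity recognition accuracy: %.2f" % clf.score(X_te, y_te))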

  3. A novel algorithm for detecting active propulsion in wheelchair users following spinal cord injury.

    PubMed

    Popp, Werner L; Brogioli, Michael; Leuenberger, Kaspar; Albisser, Urs; Frotzler, Angela; Curt, Armin; Gassert, Roger; Starkey, Michelle L

    2016-03-01

    Physical activity in wheelchair-bound individuals can be assessed by monitoring their mobility as this is one of the most intense upper extremity activities they perform. Current accelerometer-based approaches for describing wheelchair mobility do not distinguish between self- and attendant-propulsion and hence may overestimate total physical activity. The aim of this study was to develop and validate an inertial measurement unit based algorithm to monitor wheel kinematics and the type of wheelchair propulsion (self- or attendant-) within a "real-world" situation. Different sensor set-ups were investigated, ranging from a high precision set-up including four sensor modules with a relatively short measurement duration of 24 h, to a less precise set-up with only one module attached at the wheel exceeding one week of measurement because the gyroscope of the sensor was turned off. The "high-precision" algorithm distinguished self- and attendant-propulsion with accuracy greater than 93% whilst the long-term measurement set-up showed an accuracy of 82%. The estimation accuracy of kinematic parameters was greater than 97% for both set-ups. The possibility of having different sensor set-ups allows the use of the inertial measurement units as high precision tools for researchers as well as unobtrusive and simple tools for manual wheelchair users. PMID:26868046

  5. Decoding neural events from fMRI BOLD signal: a comparison of existing approaches and development of a new algorithm.

    PubMed

    Bush, Keith; Cisler, Josh

    2013-07-01

    Neuroimaging methodology predominantly relies on the blood oxygenation level dependent (BOLD) signal. While the BOLD signal is a valid measure of neuronal activity, variances in fluctuations of the BOLD signal are not only due to fluctuations in neural activity. Thus, a remaining problem in neuroimaging analyses is developing methods that ensure specific inferences about neural activity that are not confounded by unrelated sources of noise in the BOLD signal. Here, we develop and test a new algorithm for performing semiblind (i.e., no knowledge of stimulus timings) deconvolution of the BOLD signal that treats the neural event as an observable, but intermediate, probabilistic representation of the system's state. We test and compare this new algorithm against three other recent deconvolution algorithms under varied levels of autocorrelated and Gaussian noise, hemodynamic response function (HRF) misspecification and observation sampling rate. Further, we compare the algorithms' performance using two models to simulate BOLD data: a convolution of neural events with a known (or misspecified) HRF versus a biophysically accurate balloon model of hemodynamics. We also examine the algorithms' performance on real task data. The results demonstrated good performance of all algorithms, though the new algorithm generally outperformed the others (3.0% improvement) under simulated resting-state experimental conditions exhibiting multiple, realistic confounding factors (as well as 10.3% improvement on a real Stroop task). The simulations also demonstrate that the greatest negative influence on deconvolution accuracy is observation sampling rate. Practical and theoretical implications of these results for improving inferences about neural activity from fMRI BOLD signal are discussed.
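
    The convolution model referred to above can be sketched directly: a latent neural-event train is convolved with a canonical double-gamma HRF and noise is added to produce a synthetic BOLD series against which a deconvolution algorithm could be benchmarked. The HRF parameters, sampling interval and event rate below are common textbook choices, not those used in the paper.

        import numpy as np
        from scipy.stats import gamma

        tr = 2.0                                   # sampling interval (s), illustrative
        t = np.arange(0, 30, tr)

        def canonical_hrf(t):
            """Double-gamma haemodynamic response function (SPM-style shape)."""
            peak = gamma.pdf(t, 6)                 # positive peak around 5-6 s
            undershoot = gamma.pdf(t, 16) / 6.0    # post-stimulus undershoot around 16 s
            hrf = peak - undershoot
            return hrf / hrf.max()

        rng = np.random.default_rng(0)
        n = 200
        events = (rng.random(n) < 0.05).astype(float)        # sparse latent neural events
        bold = np.convolve(events, canonical_hrf(t))[:n]      # convolution model
        bold += 0.1 * rng.standard_normal(n)                  # additive measurement noise
        print("simulated BOLD series:", np.round(bold[:5], 3))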

  6. Item Selection for the Development of Short Forms of Scales Using an Ant Colony Optimization Algorithm

    ERIC Educational Resources Information Center

    Leite, Walter L.; Huang, I-Chan; Marcoulides, George A.

    2008-01-01

    This article presents the use of an ant colony optimization (ACO) algorithm for the development of short forms of scales. An example 22-item short form is developed for the Diabetes-39 scale, a quality-of-life scale for diabetes patients, using a sample of 265 diabetes patients. A simulation study comparing the performance of the ACO algorithm and…

  7. Environmental Monitoring Networks Optimization Using Advanced Active Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail; Volpi, Michele; Copa, Loris

    2010-05-01

    The problem of environmental monitoring networks optimization (MNO) belongs to one of the basic and fundamental tasks in spatio-temporal data collection, analysis, and modeling. There are several approaches to this problem, which can be considered as a design or redesign of monitoring network by applying some optimization criteria. The most developed and widespread methods are based on geostatistics (family of kriging models, conditional stochastic simulations). In geostatistics the variance is mainly used as an optimization criterion which has some advantages and drawbacks. In the present research we study an application of advanced techniques following from the statistical learning theory (SLT) - support vector machines (SVM) and the optimization of monitoring networks when dealing with a classification problem (data are discrete values/classes: hydrogeological units, soil types, pollution decision levels, etc.) is considered. SVM is a universal nonlinear modeling tool for classification problems in high dimensional spaces. The SVM solution is maximizing the decision boundary between classes and has a good generalization property for noisy data. The sparse solution of SVM is based on support vectors - data which contribute to the solution with nonzero weights. Fundamentally the MNO for classification problems can be considered as a task of selecting new measurement points which increase the quality of spatial classification and reduce the testing error (error on new independent measurements). In SLT this is a typical problem of active learning - a selection of the new unlabelled points which efficiently reduce the testing error. A classical approach (margin sampling) to active learning is to sample the points closest to the classification boundary. This solution is suboptimal when points (or generally the dataset) are redundant for the same class. In the present research we propose and study two new advanced methods of active learning adapted to the solution of
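
    A compact margin-sampling sketch with scikit-learn: an SVM is refit as the unlabelled point closest to the current decision boundary is added at each step, mimicking the selection of new measurement locations. The synthetic two-class "spatial" dataset and SVM settings are illustrative assumptions.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        # Illustrative spatial dataset: 2-D coordinates with two classes
        # (e.g. two hydrogeological units separated by a curved boundary).
        X_pool = rng.uniform(-1, 1, size=(500, 2))
        y_pool = (X_pool[:, 1] > 0.3 * np.sin(3 * X_pool[:, 0])).astype(int)

        # Seed the "monitoring network" with one labelled point from each class.
        labelled = [int(np.where(y_pool == 0)[0][0]), int(np.where(y_pool == 1)[0][0])]

        svm = SVC(kernel="rbf", C=10.0, gamma="scale")
        for step in range(20):
            svm.fit(X_pool[labelled], y_pool[labelled])
            # Margin sampling: pick the unlabelled point closest to the boundary.
            candidates = [i for i in range(len(X_pool)) if i not in labelled]
            distances = np.abs(svm.decision_function(X_pool[candidates]))
            labelled.append(candidates[int(np.argmin(distances))])

        print("accuracy after active learning: %.2f" % svm.score(X_pool, y_pool))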

  8. Status report: Data management program algorithm evaluation activity at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.

    1977-01-01

    An algorithm evaluation activity was initiated to study the problems associated with image processing by assessing the independent and interdependent effects of registration, compression, and classification techniques on LANDSAT data for several discipline applications. The objective of the activity was to make recommendations on selected applicable image processing algorithms in terms of accuracy, cost, and timeliness or to propose alternative ways of processing the data. As a means of accomplishing this objective, an Image Coding Panel was established. The conduct of the algorithm evaluation is described.

  9. Multi-objective decoupling algorithm for active distance control of intelligent hybrid electric vehicle

    NASA Astrophysics Data System (ADS)

    Luo, Yugong; Chen, Tao; Li, Keqiang

    2015-12-01

    The paper presents a novel active distance control strategy for intelligent hybrid electric vehicles (IHEV) with the purpose of guaranteeing an optimal performance in view of the driving functions, optimum safety, fuel economy and ride comfort. Considering the complexity of driving situations, the objects of safety and ride comfort are decoupled from that of fuel economy, and a hierarchical control architecture is adopted to improve the real-time performance and the adaptability. The hierarchical control structure consists of four layers: active distance control object determination, comprehensive driving and braking torque calculation, comprehensive torque distribution and torque coordination. The safety distance control and the emergency stop algorithms are designed to achieve the safety and ride comfort goals. The optimal rule-based energy management algorithm of the hybrid electric system is developed to improve the fuel economy. The torque coordination control strategy is proposed to regulate engine torque, motor torque and hydraulic braking torque to improve the ride comfort. This strategy is verified by simulation and experiment using a forward simulation platform and a prototype vehicle. The results show that the novel control strategy can achieve the integrated and coordinated control of its multiple subsystems, which guarantees top performance of the driving functions and optimum safety, fuel economy and ride comfort.

  10. The development of a simplified epithelial tissue phantom for the evaluation of an autofluorescence mitigation algorithm

    NASA Astrophysics Data System (ADS)

    Hou, Vivian W.; Yang, Chenying; Nelson, Leonard Y.; Seibel, Eric J.

    2014-03-01

    Previously we developed an ultrathin, flexible, multimodal scanning fiber endoscope (SFE) for concurrent white light and fluorescence imaging. Autofluorescence (AF) arising from endogenous fluorophores (primarily collagen in the esophagus) acts as a major confounder in fluorescence-aided detection. To address the issue of AF, a real-time mitigation algorithm was developed and has been shown to successfully remove AF during SFE imaging. To test our algorithm, we previously developed flexible, color-matched, synthetic phantoms featuring a homogeneous distribution of collagen. In order to more rigorously test the AF mitigation algorithm, a phantom that better mimicked the in-vivo distribution of collagen in tissue was developed.

  11. Development of a Smart Release Algorithm for Mid-Air Separation of Parachute Test Articles

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is currently developing an autonomous method to separate a capsule-shaped parachute test vehicle from an air-drop platform for use in the test program to develop and validate the parachute system for the Orion spacecraft. The CPAS project seeks to perform air-drop tests of an Orion-like boilerplate capsule. Delivery of the boilerplate capsule to the test condition has proven to be a critical and complicated task. In the current concept, the boilerplate vehicle is extracted from an aircraft on top of a Type V pallet and then separated from the pallet in mid-air. The attitude of the vehicles at separation is critical to avoiding re-contact and successfully deploying the boilerplate into a heatshield-down orientation. Neither the pallet nor the boilerplate has an active control system. However, the attitude of the mated vehicle as a function of time is somewhat predictable. CPAS engineers have designed an avionics system to monitor the attitude of the mated vehicle as it is extracted from the aircraft and command a release when the desired conditions are met. The algorithm includes contingency capabilities designed to release the test vehicle before undesirable orientations occur. The algorithm was verified with simulation and ground testing. The pre-flight development and testing is discussed and limitations of ground testing are noted. The CPAS project performed a series of three drop tests as a proof-of-concept of the release technique. These tests helped to refine the attitude instrumentation and software algorithm to be used on future tests. The drop tests are described in detail and the evolution of the release system with each test is described.

  12. A multi-resolution filtered-x LMS algorithm based on discrete wavelet transform for active noise control

    NASA Astrophysics Data System (ADS)

    Qiu, Z.; Lee, C.-M.; Xu, Z. H.; Sui, L. N.

    2016-01-01

    We have developed a new active control algorithm based on the discrete wavelet transform (DWT) for both stationary and non-stationary noise control. First, the Mallat pyramidal algorithm is introduced to implement the DWT, which can decompose the reference signal into several sub-bands with multi-resolution and provides a perfect reconstruction (PR) procedure. To reduce the extra computational complexity introduced by the DWT, an efficient strategy is proposed that updates the adaptive filter coefficients in the frequency domain using a fast Fourier transform (FFT). Based on the reference noise source, a 'Haar' wavelet is employed and, by decomposing the noise signal into two sub-bands, the proposed DWT-FFT-based FXLMS (DWT-FFT-FXLMS) algorithm has greatly reduced complexity and a better convergence performance compared to a time domain filtered-x least mean square (TD-FXLMS) algorithm. As a result of the outstanding time-frequency characteristics of wavelet analysis, the proposed DWT-FFT-FXLMS algorithm can effectively cancel both stationary and non-stationary noise, whereas the frequency domain FXLMS (FD-FXLMS) algorithm cannot.
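
    A short sketch of the 'Haar' sub-band decomposition step using PyWavelets (an implementation of the Mallat algorithm), including the perfect-reconstruction check mentioned above; the reference signal is illustrative, and the adaptive filtering that would follow in the ANC loop is omitted.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        n = 1024
        t = np.arange(n) / 8000.0
        # Illustrative reference noise: a tone plus broadband noise.
        x = np.sin(2 * np.pi * 400 * t) + 0.3 * rng.standard_normal(n)

        # One-level Haar DWT (Mallat algorithm): approximation and detail sub-bands.
        cA, cD = pywt.dwt(x, "haar")
        print("sub-band lengths:", len(cA), len(cD))          # each half the input length

        # Perfect-reconstruction check: the inverse DWT recovers the original signal.
        x_rec = pywt.idwt(cA, cD, "haar")
        print("max reconstruction error: %.2e" % np.max(np.abs(x - x_rec)))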

  13. Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and also descriptions and flowcharts for subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms have a nonlinear gain algorithm implemented that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree-of-freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.

  14. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.

  15. Application of activity pencil beam algorithm using measured distribution data of positron emitter nuclei for therapeutic SOBP proton beam

    SciTech Connect

    Miyatake, Aya; Nishio, Teiji

    2013-09-15

    Purpose: Recently, much research on imaging the clinical proton-irradiated volume using positron emitter nuclei based on target nuclear fragment reaction has been carried out. The purpose of this study is to develop an activity pencil beam (APB) algorithm for a simulation system for proton-activated positron-emitting imaging in clinical proton therapy using spread-out Bragg peak (SOBP) beams.Methods: The target nuclei of activity distribution calculations are {sup 12}C nuclei, {sup 16}O nuclei, and {sup 40}Ca nuclei, which are the main elements in a human body. Depth activity distributions with SOBP beam irradiations were obtained from the material information of ridge filter (RF) and depth activity distributions of compounds of the three target nuclei measured by BOLPs-RGp (beam ON-LINE PET system mounted on a rotating gantry port) with mono-energetic Bragg peak (MONO) beam irradiations. The calculated data of depth activity distributions with SOBP beam irradiations were sorted in terms of kind of nucleus, energy of proton beam, SOBP width, and thickness of fine degrader (FD), which were verified. The calculated depth activity distributions with SOBP beam irradiations were compared with the measured ones. APB kernels were made from the calculated depth activity distributions with SOBP beam irradiations to construct a simulation system using the APB algorithm for SOBP beams.Results: The depth activity distributions were prepared using the material information of RF and the measured depth activity distributions with MONO beam irradiations for clinical therapy using SOBP beams. With the SOBP width widening, the distal fall-offs of depth activity distributions and the difference from the depth dose distributions were large. The shapes of the calculated depth activity distributions nearly agreed with those of the measured ones upon comparison between the two. The APB kernels of SOBP beams were prepared by making use of the data on depth activity distributions with SOBP

  16. Design requirements and development of an airborne descent path definition algorithm for time navigation

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) function capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering functions are described.

  17. Effects of activity and energy budget balancing algorithm on laboratory performance of a fish bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; David, Solomon R.; Pothoven, Steven A.

    2012-01-01

    We evaluated the performance of the Wisconsin bioenergetics model for lake trout Salvelinus namaycush that were fed ad libitum in laboratory tanks under regimes of low activity and high activity. In addition, we compared model performance under two different model algorithms: (1) balancing the lake trout energy budget on day t based on lake trout energy density on day t and (2) balancing the lake trout energy budget on day t based on lake trout energy density on day t + 1. Results indicated that the model significantly underestimated consumption for both inactive and active lake trout when algorithm 1 was used and that the degree of underestimation was similar for the two activity levels. In contrast, model performance substantially improved when using algorithm 2, as no detectable bias was found in model predictions of consumption for inactive fish and only a slight degree of overestimation was detected for active fish. The energy budget was accurately balanced by using algorithm 2 but not by using algorithm 1. Based on the results of this study, we recommend the use of algorithm 2 to estimate food consumption by fish in the field. Our study results highlight the importance of accurately accounting for changes in fish energy density when balancing the energy budget; furthermore, these results have implications for the science of evaluating fish bioenergetics model performance and for more accurate estimation of food consumption by fish in the field when fish energy density undergoes relatively rapid changes.

  18. Update on Development of Mesh Generation Algorithms in MeshKit

    SciTech Connect

    Jain, Rajeev; Vanderzee, Evan; Mahadevan, Vijay

    2015-09-30

    MeshKit uses a graph-based design for coding all its meshing algorithms, which includes the Reactor Geometry (and mesh) Generation (RGG) algorithms. This report highlights the developmental updates of all the algorithms, results and future work. Parallel versions of algorithms, documentation and performance results are reported. RGG GUI design was updated to incorporate new features requested by the users; boundary layer generation and parallel RGG support were added to the GUI. Key contributions to the release, upgrade and maintenance of other SIGMA1 libraries (CGM and MOAB) were made. Several fundamental meshing algorithms for creating a robust parallel meshing pipeline in MeshKit are under development. Results and current status of automated, open-source and high quality nuclear reactor assembly mesh generation algorithms such as trimesher, quadmesher, interval matching and multi-sweeper are reported.

  19. Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.

    1997-01-01

    The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Proposed future developments by the authors in cueing algorithms are revealed. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.

  20. Evaluating Knowledge Structure-Based Adaptive Testing Algorithms and System Development

    ERIC Educational Resources Information Center

    Wu, Huey-Min; Kuo, Bor-Chen; Yang, Jinn-Min

    2012-01-01

    In recent years, many computerized test systems have been developed for diagnosing students' learning profiles. Nevertheless, it remains a challenging issue to find an adaptive testing algorithm to both shorten testing time and precisely diagnose the knowledge status of students. In order to find a suitable algorithm, four adaptive testing…

  1. Clustering algorithm evaluation and the development of a replacement for procedure 1. [for crop inventories

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Johnson, J. K.

    1979-01-01

    An efficient procedure which clusters data using a completely unsupervised clustering algorithm and then uses labeled pixels to label the resulting clusters or perform a stratified estimate using the clusters as strata is developed. Three clustering algorithms, CLASSY, AMOEBA, and ISOCLS, are compared for efficiency. Three stratified estimation schemes and three labeling schemes are also considered and compared.
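
    A compact sketch of the cluster-then-label idea: pixels are first clustered with a completely unsupervised algorithm (k-means here, standing in for CLASSY, AMOEBA or ISOCLS), and each cluster is then labelled by majority vote over a small set of labelled pixels. The synthetic multispectral data and cluster count are illustrative assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)

        # Illustrative multispectral pixels: three spectral classes in 4 bands.
        centers = rng.normal(size=(3, 4)) * 5
        X = np.vstack([c + rng.normal(size=(200, 4)) for c in centers])
        y_true = np.repeat([0, 1, 2], 200)

        # Step 1: completely unsupervised clustering of all pixels.
        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

        # Step 2: label each cluster by majority vote of a few labelled pixels.
        labelled_idx = rng.choice(len(X), size=30, replace=False)
        cluster_label = {}
        for c in range(3):
            members = labelled_idx[km.labels_[labelled_idx] == c]
            cluster_label[c] = np.bincount(y_true[members]).argmax() if len(members) else -1

        y_pred = np.array([cluster_label[c] for c in km.labels_])
        print("pixel labelling accuracy: %.2f" % (y_pred == y_true).mean())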

  2. Geologist's Field Assistant: Developing Image and Spectral Analyses Algorithms for Remote Science Exploration

    NASA Astrophysics Data System (ADS)

    Gulick, V. C.; Morris, R. L.; Bishop, J.; Gazis, P.; Alena, R.; Sierhuis, M.

    2002-03-01

    We are developing science analyses algorithms to interface with a Geologist's Field Assistant device to allow robotic or human remote explorers to better sense their surroundings during limited surface excursions. Our algorithms will interpret spectral and imaging data obtained by various sensors.

  3. Editorial Commentary: The Importance of Developing an Algorithm When Diagnosing Hip Pain.

    PubMed

    Coleman, Struan H

    2016-08-01

    The differential diagnosis of groin pain is broad and complex. Therefore, it is essential to develop an algorithm when differentiating the hip as a cause of groin pain from other sources. Selective injections in and around the hip can be helpful when making the diagnosis but are only one part of the algorithm.

  4. Development of Online Cognitive and Algorithm Tests as Assessment Tools in Introductory Computer Science Courses

    ERIC Educational Resources Information Center

    Avancena, Aimee Theresa; Nishihara, Akinori; Vergara, John Paul

    2012-01-01

    This paper presents the online cognitive and algorithm tests, which were developed in order to determine if certain cognitive factors and fundamental algorithms correlate with the performance of students in their introductory computer science course. The tests were implemented among Management Information Systems majors from the Philippines and…

  5. Development of a Behavioural Algorithm for Autonomous Spacecraft

    NASA Astrophysics Data System (ADS)

    Radice, G.

    manner with the environment through the use of sensors and actuators. As such, there is little computational effort required to implement such an approach, which is clearly of great benefit for limited micro-satellites. Rather than using complex world models, which have to be updated, the agent is allowed to exploit the dynamics of its environment for cues as to appropriate actions to take to achieve mission goals. The particular artificial agent implementation used here has been borrowed from studies of biological systems, where it has been used successfully to provide models of motivation and opportunistic behaviour. The so called "cue-deficit" action selection algorithm considers the micro-spacecraft to be a non linear dynamical system with a number of observable states. Using optimal control theory rules are derived which determine which of a finite repertoire of behaviours the satellite should select and perform. It will also be shown that in the event of hardware failures the algorithm will resequence the spacecraft actions to ensure survival while still meeting the mission goals, albeit in a degraded manner.

  6. Development of a novel constellation based landmark detection algorithm

    NASA Astrophysics Data System (ADS)

    Ghayoor, Ali; Vaidya, Jatin G.; Johnson, Hans J.

    2013-03-01

    Anatomical landmarks such as the anterior commissure (AC) and posterior commissure (PC) are commonly used by researchers for co-registration of images. In this paper, we present a novel, automated approach for landmark detection that combines morphometric constraining and statistical shape models to provide accurate estimation of landmark points. This method is made robust to large rotations in initial head orientation by extracting extra information of the eye centers using a radial Hough transform and exploiting the centroid of head mass (CM) using a novel estimation approach. To evaluate the effectiveness of this method, the algorithm is trained on a set of 20 images with manually selected landmarks, and a test dataset is used to compare the automatically detected against the manually detected landmark locations of the AC, PC, midbrain-pons junction (MPJ), and fourth ventricle notch (VN4). The results show that the proposed method is accurate as the average error between the automatically and manually labeled landmark points is less than 1 mm. Also, the algorithm is highly robust as it was successfully run on a large dataset that included different kinds of images with various orientation, spacing, and origin.

  7. Remote Sensing of Ocean Color in the Arctic: Algorithm Development and Comparative Validation. Chapter 9

    NASA Technical Reports Server (NTRS)

    Cota, Glenn F.

    2001-01-01

    The overall goal of this effort is to acquire a large bio-optical database, encompassing most environmental variability in the Arctic, to develop algorithms for phytoplankton biomass and production and other optically active constituents. A large suite of bio-optical and biogeochemical observations have been collected in a variety of high latitude ecosystems at different seasons. The Ocean Research Consortium of the Arctic (ORCA) is a collaborative effort between G.F. Cota of Old Dominion University (ODU), W.G. Harrison and T. Platt of the Bedford Institute of Oceanography (BIO), S. Sathyendranath of Dalhousie University and S. Saitoh of Hokkaido University. ORCA has now conducted 12 cruises and collected over 500 in-water optical profiles plus a variety of ancillary data. Observational suites typically include apparent optical properties (AOPs), inherent optical property (IOPs), and a variety of ancillary observations including sun photometry, biogeochemical profiles, and productivity measurements. All quality-assured data have been submitted to NASA's SeaWIFS Bio-Optical Archive and Storage System (SeaBASS) data archive. Our algorithm development efforts address most of the potential bio-optical data products for the Sea-Viewing Wide Field-of-view Sensor (SeaWiFS), Moderate Resolution Imaging Spectroradiometer (MODIS), and GLI, and provides validation for a specific areas of concern, i.e., high latitudes and coastal waters.

  8. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    USGS Publications Warehouse

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automata to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two dimensional cellular automata model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks - the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool in calibrating cellular automata for this application. Experience gained during the calibration of this cellular automata suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.

  9. An Optimal CDS Construction Algorithm with Activity Scheduling in Ad Hoc Networks.

    PubMed

    Penumalli, Chakradhar; Palanichamy, Yogesh

    2015-01-01

    A new energy-efficient optimal Connected Dominating Set (CDS) algorithm with activity scheduling for mobile ad hoc networks (MANETs) is proposed. This algorithm achieves energy efficiency by minimizing the Broadcast Storm Problem [BSP] while at the same time considering each node's remaining energy. The Connected Dominating Set is widely used as a virtual backbone or spine in mobile ad hoc networks [MANETs] or Wireless Sensor Networks [WSN]. The CDS of a graph representing a network has a significant impact on the efficient design of routing protocols in wireless networks. Here the CDS is constructed by a distributed algorithm with activity scheduling, based on the unit disk graph [UDG] model. Node mobility and residual energy (RE) are considered as parameters in the construction of a stable, optimal, energy-efficient CDS. The performance is evaluated at various node densities, various transmission ranges, and mobility rates. The theoretical analysis and simulation results of this algorithm are also presented, and they yield better results.
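
    For illustration, the sketch below builds a connected dominating set for a unit-disk-style graph from the internal nodes of a BFS spanning tree, a simple backbone construction; it is not the energy- and mobility-aware algorithm of the paper, and node parameters such as residual energy are omitted.

        import networkx as nx

        def spanning_tree_cds(graph):
            """Connected dominating set from the internal (non-leaf) nodes of a
            BFS spanning tree of a connected graph."""
            root = next(iter(graph.nodes))
            tree = nx.bfs_tree(graph, root)
            internal = {n for n in tree.nodes if tree.out_degree(n) > 0}
            return internal or {root}                 # single-node graph edge case

        # Illustrative ad hoc network as a random geometric (unit-disk style) graph.
        G = nx.random_geometric_graph(40, radius=0.35, seed=1)
        G = G.subgraph(max(nx.connected_components(G), key=len)).copy()  # giant component

        cds = spanning_tree_cds(G)
        print("backbone size:", len(cds), "of", G.number_of_nodes(), "nodes")
        print("dominating:", nx.is_dominating_set(G, cds),
              "connected:", nx.is_connected(G.subgraph(cds)))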

  10. An algorithm to detect fire activity using Meteosat: fine tuning and quality assessment

    NASA Astrophysics Data System (ADS)

    Amraoui, M.; DaCamara, C. C.; Ermida, S. L.

    2012-04-01

    Hot spot detection by means of sensors on-board geostationary satellites allows studying wildfire activity at hourly and even sub-hourly intervals, an advantage that cannot be met by polar orbiters. Since 1997, the Satellite Application Facility for Land Surface Analysis has been running an operational procedure that allows detecting active fires based on information from Meteosat-8/SEVIRI. This is the so-called Fire Detection and Monitoring (FD&M) product and the procedure takes advantage of the temporal resolution of SEVIRI (one image every 15 min), and relies on information from SEVIRI channels (namely 0.6, 0.8, 3.9, 10.8 and 12.0 μm) together with information on illumination angles. The method is based on heritage from contextual algorithms designed for polar, sun-synchronous instruments, namely NOAA/AVHRR and MODIS/TERRA-AQUA. A potential fire pixel is compared with the neighboring ones and the decision is made based on relative thresholds as derived from the pixels in the neighborhood. Generally speaking, the observed fire incidence compares well against hot spots extracted from the global daily active fire product developed by the MODIS Fire Team. However, values of probability of detection (POD) tend to be quite low, a result that may be partially explained by the finer resolution of MODIS. The aim of the present study is to make a systematic assessment of the impacts on POD and False Alarm Ratio (FAR) of the several parameters that are set in the algorithms. Such parameters range from the threshold values of brightness temperature in the IR3.9 and 10.8 channels that are used to select potential fire pixels up to the extent of the background grid and thresholds used to statistically characterize the radiometric departures of a potential pixel from the respective background. The impact of different criteria to identify pixels contaminated by clouds, smoke and sun glint is also evaluated. Finally, the advantages that may be brought to the algorithm by adding
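
    A toy sketch of the contextual test described above: a pixel is flagged as a potential fire when its 3.9 µm brightness temperature and the 3.9-10.8 µm difference exceed the mean of a surrounding background window by a multiple of its standard deviation. All threshold values, the window size and the synthetic temperatures are illustrative assumptions, not the operational FD&M settings.

        import numpy as np

        def contextual_fire_mask(bt39, bt108, window=7, k_sigma=4.0, min_bt39=310.0):
            """Flag potential fire pixels from 3.9 and 10.8 micron brightness temperatures."""
            diff = bt39 - bt108
            half = window // 2
            mask = np.zeros(bt39.shape, dtype=bool)
            for i in range(half, bt39.shape[0] - half):
                for j in range(half, bt39.shape[1] - half):
                    if bt39[i, j] < min_bt39:              # absolute pre-screening test
                        continue
                    bg39 = bt39[i-half:i+half+1, j-half:j+half+1]
                    bgdf = diff[i-half:i+half+1, j-half:j+half+1]
                    mask[i, j] = (bt39[i, j] > bg39.mean() + k_sigma * bg39.std() and
                                  diff[i, j] > bgdf.mean() + k_sigma * bgdf.std())
            return mask

        # Synthetic scene: ~300 K background with one hot anomaly.
        rng = np.random.default_rng(0)
        bt108 = 295.0 + rng.normal(0, 1, (50, 50))
        bt39 = bt108 + 5.0 + rng.normal(0, 1, (50, 50))
        bt39[25, 25] += 40.0                               # simulated active fire
        print("fire pixels detected at:", np.argwhere(contextual_fire_mask(bt39, bt108)))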

  11. Refinement and evaluation of helicopter real-time self-adaptive active vibration controller algorithms

    NASA Technical Reports Server (NTRS)

    Davis, M. W.

    1984-01-01

    A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework in developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that gives the capability to readily define many different configurations by selecting one of three different controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA) which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.

  12. Preliminary Development and Evaluation of Lightning Jump Algorithms for the Real-Time Detection of Severe Weather

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Previous studies have demonstrated that rapid increases in total lightning activity (intracloud + cloud-to-ground) are often observed tens of minutes in advance of the occurrence of severe weather at the ground. These rapid increases in lightning activity have been termed "lightning jumps." Herein, we document a positive correlation between lightning jumps and the manifestation of severe weather in thunderstorms occurring across the Tennessee Valley and Washington D.C. A total of 107 thunderstorms were examined in this study, with 69 of the 107 thunderstorms falling into the category of non-severe, and 38 into the category of severe. From the dataset of 69 isolated non-severe thunderstorms, an average peak 1 minute flash rate of 10 flashes/min was determined. A variety of severe thunderstorm types were examined for this study including an MCS, MCV, tornadic outer rainbands of tropical remnants, supercells, and pulse severe thunderstorms. Of the 107 thunderstorms, 85 thunderstorms (47 non-severe, 38 severe) from the Tennessee Valley and Washington D.C. were used to test six lightning jump algorithm configurations (Gatlin, Gatlin 45, 2(sigma), 3(sigma), Threshold 10, and Threshold 8). Performance metrics for each algorithm were then calculated, yielding encouraging results from the limited sample of 85 thunderstorms. The 2(sigma) lightning jump algorithm had a high probability of detection (POD; 87%), a modest false alarm rate (FAR; 33%), and a solid Heidke Skill Score (HSS; 0.75). A second and more simplistic lightning jump algorithm named the Threshold 8 lightning jump algorithm also shows promise, with a POD of 81% and a FAR of 41%. Average lead times to severe weather occurrence for these two algorithms were 23 minutes and 20 minutes, respectively. The overall goal of this study is to advance the development of an operationally-applicable jump algorithm that can be used with either total lightning observations made from the ground, or in the near future from space using the
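
    A sketch of the 2(sigma) jump logic as commonly described in the lightning-jump literature: the rate of change of the total flash rate (DFRDT) is compared with twice the standard deviation of the recent DFRDT history, subject to a minimum activity floor. The window length, the activity floor and the synthetic flash-rate series are illustrative assumptions.

        import numpy as np

        def two_sigma_jumps(flash_rate, history=5, min_rate=10.0):
            """Return indices where the flash-rate trend (DFRDT) exceeds twice
            the standard deviation of the preceding `history` DFRDT values."""
            dfrdt = np.diff(flash_rate)                    # change per time step
            jumps = []
            for i in range(history, len(dfrdt)):
                sigma = np.std(dfrdt[i-history:i])
                if flash_rate[i+1] >= min_rate and sigma > 0 and dfrdt[i] > 2.0 * sigma:
                    jumps.append(i + 1)
            return jumps

        # Illustrative 2-minute total (IC + CG) flash rates for one storm.
        rates = np.array([1, 2, 2, 3, 4, 5, 6, 8, 12, 25, 40, 38, 30, 22, 15, 10], float)
        print("lightning jumps at time steps:", two_sigma_jumps(rates))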

  13. New Combined L-band Active/Passive Soil Moisture Retrieval Algorithm Optimized for Argentine Plains

    NASA Astrophysics Data System (ADS)

    Bruscantini, C. A.; Grings, F. M.; Salvia, M.; Ferrazzoli, P.; Karszenbaum, H.

    2015-12-01

    The ability of L-band passive microwave satellite observations to provide soil moisture (mv) measurements is well known. Despite its high sensitivity to near-surface mv, radiometric technology suffers from a relatively low spatial resolution. Conversely, active microwave observations, despite their finer resolution, are difficult to interpret in terms of mv content due to the confounding effects of vegetation and roughness. There have been, and continue to be, strong motivations for satellite missions that carry both passive and active microwave instruments on board. This has also led to important contributions in algorithm development. In this line of work, the NASA-CONAE SAC-D/Aquarius mission carried an L-band radiometer and scatterometer. This was followed by the launch of the NASA SMAP (Soil Moisture Active Passive) mission, as well as several airborne campaigns that provide active and passive measurements. Within this framework, a new combined active/passive mv retrieval algorithm is proposed by deriving an analytical expression relating brightness temperature and radar backscattering using explicit semi-empirical models. Simple models (i.e., models that can be easily inverted and require relatively few ancillary parameters) were selected: the ω-τ model (Jackson et al., 1982, Water Resources Research) and a radar-only model (Narvekar et al., 2015, IEEE Transactions on Geoscience and Remote Sensing). A major challenge involves coupling the active and passive models to be consistent with observations. Coupling equations can be derived using theoretical active/passive high-order radiative transfer models, such as the 3D Numerical Method of Maxwell equations (Zhou et al., 2004, IEEE Transactions on Geoscience and Remote Sensing) and Tor Vergata (Ferrazzoli et al., 1995, Remote Sensing of Environment) models. In this context, different coupling equations can be optimized for different land covers using theoretical forward models with specific parametrization for each
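
    A minimal sketch of the zeroth-order ω-τ forward model of the kind cited above (Jackson et al.), for a single polarization; in a full retrieval the soil-moisture dependence enters through the rough-soil emissivity via a dielectric mixing model, which is omitted here.

        import numpy as np

        def tau_omega_tb(soil_temp_k, canopy_temp_k, emissivity, tau, omega, theta_deg):
            """Zeroth-order omega-tau brightness temperature (single polarization).

            emissivity : rough-soil emissivity (carries the soil-moisture signal).
            tau, omega : vegetation optical depth and single-scattering albedo.
            """
            gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))   # canopy transmissivity
            soil_term = emissivity * soil_temp_k * gamma
            veg_term = (1.0 - omega) * (1.0 - gamma) * canopy_temp_k * (1.0 + (1.0 - emissivity) * gamma)
            return soil_term + veg_term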

  14. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    NASA Astrophysics Data System (ADS)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.
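
    As an illustration of the kind of LUT bookkeeping such a wrapper performs (the file layout and names below are hypothetical, not the actual AeroADL scripts), a Python sketch that stages the requested version of each static LUT into an algorithm's run directory, with optional experimental overrides:

        from pathlib import Path
        import shutil

        def stage_luts(lut_library, run_dir, required, overrides=None):
            """Copy the requested version of each static LUT into the run directory
            expected by an ADL-style executable. `required` maps LUT name to the
            default version; `overrides` substitutes experimental versions.
            Directory layout and file names here are hypothetical."""
            overrides = overrides or {}
            run_dir = Path(run_dir)
            run_dir.mkdir(parents=True, exist_ok=True)
            for name, default_version in required.items():
                version = overrides.get(name, default_version)
                src = Path(lut_library) / name / version / (name + ".bin")
                shutil.copy(src, run_dir / (name + ".bin"))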

  15. Development of new flux splitting schemes. [computational fluid dynamics algorithms

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1992-01-01

    Maximizing both accuracy and efficiency has been the primary objective in designing a numerical algorithm for computational fluid dynamics (CFD). This is especially important for solutions of complex three dimensional systems of Navier-Stokes equations which often include turbulence modeling and chemistry effects. Recently, upwind schemes have been well received for their capability in resolving discontinuities. With this in mind, presented are two new flux splitting techniques for upwind differencing. The first method is based on High-Order Polynomial Expansions (HOPE) of the mass flux vector. The second new flux splitting is based on the Advection Upwind Splitting Method (AUSM). The calculation of the hypersonic conical flow demonstrates the accuracy of the splitting in resolving the flow in the presence of strong gradients. A second series of tests involving the two dimensional inviscid flow over a NACA 0012 airfoil demonstrates the ability of the AUSM to resolve the shock discontinuity at transonic speed. A third case calculates a series of supersonic flows over a circular cylinder. Finally, the fourth case deals with tests of a two dimensional shock wave/boundary layer interaction.
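
    In the spirit of the AUSM idea referenced above (splitting the interface Mach number and upwinding the convected state), a simplified Python sketch of the mass-flux construction for one cell interface; this is an illustrative reduction, not the authors' full scheme.

        def mach_split_plus(m):
            """Subsonic/supersonic split Mach number M+ (van Leer-type polynomial)."""
            return 0.25 * (m + 1.0) ** 2 if abs(m) <= 1.0 else 0.5 * (m + abs(m))

        def mach_split_minus(m):
            return -0.25 * (m - 1.0) ** 2 if abs(m) <= 1.0 else 0.5 * (m - abs(m))

        def interface_mass_flux(rho_l, u_l, a_l, rho_r, u_r, a_r):
            """Interface mass flux: the interface Mach number is the sum of the split
            left/right Mach numbers, and the convected state (rho*a) is taken from
            the upwind side according to its sign."""
            m_half = mach_split_plus(u_l / a_l) + mach_split_minus(u_r / a_r)
            upwind = rho_l * a_l if m_half >= 0.0 else rho_r * a_r
            return m_half * upwind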

  16. A novel algorithm for QSAR (quantitative structure-activity relationships)

    SciTech Connect

    Carter, S. ); Nikolic, S.; Trinajstic, N. )

    1989-01-01

    A novel approach to quantitative structure-activity relationships (QSAR) is proposed. It is based on the molecular descriptor named the stereo-identification (SID) number. The applicability of this approach to QSAR studies is tested on the aquatic toxicities of phenols against fathead minnows (Pimephales promelas). Our approach successfully reproduced the bioactivities of the phenols and is superior to the Hall-Kier model based on Randic's connectivity index.

  17. Developing Photo Activated Localization Microscopy

    NASA Astrophysics Data System (ADS)

    Hess, Harald

    2015-03-01

    Photo Activated Localization Microscopy, PALM, acquires super-resolution images by activating a subset of activatable fluorescent labels and estimating the center of each molecular label to sub-diffractive accuracy. When this process is repeated thousands of times for different subsets of molecules, an image can be rendered from all the center coordinates of the molecules. I will describe the circuitous story of its development that began with another super-resolution technique, NSOM, developed by my colleague Eric Betzig, who imaged single molecules at room temperature; later we spectrally resolved individual luminescent centers of quantum wells. These two observations inspired a generalized path to localization microscopy, but that path was abandoned because no really useful fluorescent labels were available. After a decade of nonacademic industrial pursuits and the subsequent freedom of unemployment, we came across a class of genetically expressible fluorescent proteins that were switchable or convertible, which enabled the concept to be implemented and be biologically promising. The past ten years have been very active, with many groups exploring applications and enhancements of this concept. Demonstrating significant biological relevance will be the metric of its success.
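
    To illustrate the localization step each activated molecule goes through, a minimal Python sketch that estimates a single emitter's sub-pixel center from a small background-subtracted camera region by intensity-weighted centroid; production PALM pipelines typically fit a 2D Gaussian instead.

        import numpy as np

        def localize_centroid(roi):
            """Sub-pixel (x, y) center of a single emitter in a small ROI.
            Assumes the background has already been subtracted."""
            roi = np.clip(np.asarray(roi, dtype=float), 0.0, None)
            total = roi.sum()
            ys, xs = np.indices(roi.shape)
            return (xs * roi).sum() / total, (ys * roi).sum() / total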

  18. Decoding neural events from fMRI BOLD signal: A comparison of existing approaches and development of a new algorithm

    PubMed Central

    Bush, Keith; Cisler, Josh

    2013-01-01

    Neuroimaging methodology predominantly relies on the blood oxygenation level dependent (BOLD) signal. While the BOLD signal is a valid measure of neuronal activity, variance in fluctuations of the BOLD signal is not due solely to fluctuations in neural activity. Thus, a remaining problem in neuroimaging analyses is developing methods that ensure specific inferences about neural activity that are not confounded by unrelated sources of noise in the BOLD signal. Here, we develop and test a new algorithm for performing semi-blind (i.e., no knowledge of stimulus timings) deconvolution of the BOLD signal that treats the neural event as an observable, but intermediate, probabilistic representation of the system’s state. We test and compare this new algorithm against three other recent deconvolution algorithms under varied levels of autocorrelated and Gaussian noise, hemodynamic response function (HRF) misspecification, and observation sampling rate (i.e., TR). Further, we compare the algorithms’ performance using two models to simulate BOLD data: a convolution of neural events with a known (or misspecified) HRF versus a biophysically accurate balloon model of hemodynamics. We also examine the algorithms’ performance on real task data. The results demonstrated good performance of all algorithms, though the new algorithm generally outperformed the others (3.0% improvement) under simulated resting state experimental conditions exhibiting multiple, realistic confounding factors (as well as 10.3% improvement on a real Stroop task). The simulations also demonstrate that the greatest negative influence on deconvolution accuracy is observation sampling rate. Practical and theoretical implications of these results for improving inferences about neural activity from fMRI BOLD signal are discussed. PMID:23602664
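
    As a baseline illustration of BOLD deconvolution (not the probabilistic algorithm of this record), a Python sketch that recovers a latent neural-event series from one voxel time series by ridge-regularized inversion of convolution with an assumed canonical HRF:

        import numpy as np
        from scipy.stats import gamma

        def canonical_hrf(tr, duration=30.0):
            """Double-gamma canonical HRF sampled at the repetition time (TR)."""
            t = np.arange(0.0, duration, tr)
            return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

        def deconvolve_ridge(bold, tr, lam=1.0):
            """Estimate neural events from a BOLD time series by ridge regression
            against a convolution (Toeplitz-style) design matrix built from the HRF."""
            bold = np.asarray(bold, dtype=float)
            h = canonical_hrf(tr)
            n = bold.size
            X = np.zeros((n, n))
            for j in range(n):                    # column j: HRF shifted to onset j
                seg = h[: n - j]
                X[j:j + seg.size, j] = seg
            return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ bold)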

  19. Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms

    SciTech Connect

    Birkholz, Adam B.; Schlegel, H. Bernhard

    2015-12-28

    The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster.
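
    A discrete estimate of the VRE described above (the line integral of the gradient norm along a path), evaluated on the standard Mueller-Brown surface; the basis-function parameterization and constraints of the VRC method itself are not reproduced in this Python sketch.

        import numpy as np

        # Standard Mueller-Brown surface parameters.
        A  = np.array([-200.0, -100.0, -170.0, 15.0])
        a  = np.array([-1.0, -1.0, -6.5, 0.7])
        b  = np.array([0.0, 0.0, 11.0, 0.6])
        c  = np.array([-10.0, -10.0, -6.5, 0.7])
        x0 = np.array([1.0, 0.0, -0.5, -1.0])
        y0 = np.array([0.0, 0.5, 1.5, 1.0])

        def mueller_brown_grad(x, y):
            """Analytic gradient of the Mueller-Brown potential at (x, y)."""
            dx, dy = x - x0, y - y0
            e = A * np.exp(a * dx**2 + b * dx * dy + c * dy**2)
            return np.array([np.sum(e * (2 * a * dx + b * dy)),
                             np.sum(e * (2 * c * dy + b * dx))])

        def variational_reaction_energy(path):
            """Trapezoidal estimate of the line integral of |grad V| along a path
            given as an (n, 2) array of points from reactant to product."""
            grads = np.array([np.linalg.norm(mueller_brown_grad(x, y)) for x, y in path])
            seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
            return np.sum(0.5 * (grads[:-1] + grads[1:]) * seg)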

  20. Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms.

    PubMed

    Birkholz, Adam B; Schlegel, H Bernhard

    2015-12-28

    The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster.

  1. Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms

    NASA Astrophysics Data System (ADS)

    Birkholz, Adam B.; Schlegel, H. Bernhard

    2015-12-01

    The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster.

  2. Development of an automatic identification algorithm for antibiogram analysis.

    PubMed

    Costa, Luan F R; da Silva, Eduardo S; Noronha, Victor T; Vaz-Moreira, Ivone; Nunes, Olga C; Andrade, Marcelino M de

    2015-12-01

    Diagnostic and microbiology laboratories routinely perform antibiogram analysis, which can present difficulties leading to misreadings and intra- and inter-reader deviations. The main goal of this work is an Automatic Identification Algorithm (AIA), proposed as a solution to overcome some issues associated with the disc diffusion method. AIA allows automatic scanning of inhibition zones obtained by antibiograms. More than 60 environmental isolates were subjected to susceptibility tests for 12 different antibiotics, for a total of 756 readings. Plate images were acquired and classified as standard or oddity. The inhibition zones were measured using the AIA and the results were compared with the reference method (human reading), using the weighted kappa index and statistical analysis to evaluate, respectively, inter-reader agreement and the correlation between AIA-based and human-based readings. Agreement was observed in 88% of cases, and 89% of the tests showed no difference or a difference of <4 mm between AIA and human analysis, with a correlation index of 0.85 for all images, 0.90 for standard images, and 0.80 for oddities, and no significant difference between the automatic and manual methods. AIA resolved some reading problems such as overlapping inhibition zones, imperfect microorganism seeding, non-homogeneity of the circumference, partial action of the antimicrobial, and formation of a second halo of inhibition. Furthermore, AIA proved to overcome some of the limitations observed in other automatic methods. Therefore, AIA may be a practical tool for automated reading of antibiograms in diagnostic and microbiology laboratories. PMID:26513468

  3. Ciliary motility activity measurement using a dense optical flow algorithm.

    PubMed

    Parrilla, Eduardo; Armengot, Miguel; Mata, Manuel; Cortijo, Julio; Riera, Jaime; Hueso, José L; Moratal, David

    2013-01-01

    Persistent respiratory syncytial virus (RSV) infections have been associated with the exacerbation of chronic inflammatory diseases, including chronic obstructive pulmonary disease (COPD). This virus infects the respiratory epithelium, leading to chronic inflammation, and induces the release of mucins and the loss of cilia activity, two factors that determine mucus clearance and the increase in sputum volume. In this study, an automatic method has been established to determine ciliary motility activity in cell cultures by means of optical flow computation, and has been applied to 136 control cultures and 144 RSV-infected cultures. The control group presented an average fraction of cell surface with ciliary motility per field of 41 ± 15% (mean ± standard deviation), while the infected group presented 11 ± 5% (Student's t-test, p < 0.001). The cutoff value for classifying an infected specimen was <17.89% (sensitivity 0.94, specificity 0.93). This methodology has proved to be a robust technique for evaluating ciliary motility in cell cultures. PMID:24110720
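
    As an illustration of the motility measurement (assumed parameters; not the authors' exact pipeline), a Python sketch using OpenCV's dense Farneback optical flow to estimate the fraction of the field showing ciliary motion across a sequence of grayscale frames:

        import cv2
        import numpy as np

        def motile_fraction(frames, mag_threshold=0.5):
            """Fraction of pixels showing motion between consecutive 8-bit grayscale
            frames, from dense optical flow. The magnitude threshold separating
            beating cilia from static background is a dataset-dependent assumption."""
            moving = np.zeros(frames[0].shape, dtype=bool)
            for prev, nxt in zip(frames[:-1], frames[1:]):
                flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                                    0.5, 3, 15, 3, 5, 1.2, 0)
                moving |= np.linalg.norm(flow, axis=2) > mag_threshold
            return moving.mean()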

  4. Characterizing interplanetary shocks for development and optimization of an automated solar wind shock detection algorithm

    NASA Astrophysics Data System (ADS)

    Cash, M. D.; Wrobel, J. S.; Cosentino, K. C.; Reinard, A. A.

    2014-06-01

    Human evaluation of solar wind data for interplanetary (IP) shock identification relies on both heuristics and pattern recognition, with the former lending itself to algorithmic representation and automation. Such detection algorithms can potentially alert forecasters of approaching shocks, providing increased warning of subsequent geomagnetic storms. However, capturing shocks with an algorithmic treatment alone is challenging, as past and present work demonstrates. We present a statistical analysis of 209 IP shocks observed at L1, and we use this information to optimize a set of shock identification criteria for use with an automated solar wind shock detection algorithm. In order to specify ranges for the threshold values used in our algorithm, we quantify discontinuities in the solar wind density, velocity, temperature, and magnetic field magnitude by analyzing 8 years of IP shocks detected by the SWEPAM and MAG instruments aboard the ACE spacecraft. Although automatic shock detection algorithms have previously been developed, in this paper we conduct a methodical optimization to refine shock identification criteria and present the optimal performance of this and similar approaches. We compute forecast skill scores for over 10,000 permutations of our shock detection criteria in order to identify the set of threshold values that yield optimal forecast skill scores. We then compare our results to previous automatic shock detection algorithms using a standard data set, and our optimized algorithm shows improvements in the reliability of automated shock detection.
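
    As a rough sketch of the kind of heuristic criteria being optimized (threshold values below are purely illustrative, not the optimized set from the study), a Python function that flags candidate shocks where up/downstream averages of the solar wind parameters jump simultaneously:

        import numpy as np

        def detect_shocks(density, speed, temperature, bmag, window=5,
                          thresholds=(1.2, 20.0, 1.3, 1.2)):
            """Return indices where density, temperature, and |B| jump by at least the
            given factors and speed increases by at least the given amount (km/s)
            between `window`-point upstream and downstream averages."""
            n_fac, dv_kms, t_fac, b_fac = thresholds
            flags = []
            for i in range(window, len(density) - window):
                up, dn = slice(i - window, i), slice(i, i + window)
                if (np.mean(density[dn]) > n_fac * np.mean(density[up]) and
                        np.mean(speed[dn]) - np.mean(speed[up]) > dv_kms and
                        np.mean(temperature[dn]) > t_fac * np.mean(temperature[up]) and
                        np.mean(bmag[dn]) > b_fac * np.mean(bmag[up])):
                    flags.append(i)
            return flags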

  5. A wearable sensor module with a neural-network-based activity classification algorithm for daily energy expenditure estimation.

    PubMed

    Lin, Che-Wei; Yang, Ya-Ting C; Wang, Jeen-Shing; Yang, Yi-Ching

    2012-09-01

    This paper presents a wearable module and neural-network-based activity classification algorithm for energy expenditure estimation. The purpose of our design is first to categorize physical activities with similar intensity levels, and then to construct energy expenditure regression (EER) models using neural networks in order to optimize the estimation performance. The classification of physical activities for EER model construction is based on the acceleration and ECG signal data collected by wearable sensor modules developed by our research lab. The proposed algorithm consists of procedures for data collection, data preprocessing, activity classification, feature selection, and construction of EER models using neural networks. In order to reduce the computational load and achieve satisfactory estimation performance, we employed sequential forward and backward search strategies for feature selection. Two representative neural networks, a radial basis function network (RBFN) and a generalized regression neural network (GRNN), were employed as EER models for performance comparisons. Our experimental results have successfully validated the effectiveness of our wearable sensor module and its neural-network-based activity classification algorithm for energy expenditure estimation. In addition, our results demonstrate the superior performance of GRNN as compared to RBFN.

  6. Assessing Activity Pattern Similarity with Multidimensional Sequence Alignment based on a Multiobjective Optimization Evolutionary Algorithm

    PubMed Central

    Kwan, Mei-Po; Xiao, Ningchuan; Ding, Guoxiang

    2015-01-01

    Due to the complexity and multidimensional characteristics of human activities, assessing the similarity of human activity patterns and classifying individuals with similar patterns remains highly challenging. This paper presents a new and unique methodology for evaluating the similarity among individual activity patterns. It conceptualizes multidimensional sequence alignment (MDSA) as a multiobjective optimization problem, and solves this problem with an evolutionary algorithm. The study utilizes sequence alignment to code multiple facets of human activities into multidimensional sequences, and to treat similarity assessment as a multiobjective optimization problem that aims to minimize the alignment cost for all dimensions simultaneously. A multiobjective optimization evolutionary algorithm (MOEA) is used to generate a diverse set of optimal or near-optimal alignment solutions. Evolutionary operators are specifically designed for this problem, and a local search method also is incorporated to improve the search ability of the algorithm. We demonstrate the effectiveness of our method by comparing it with a popular existing method called ClustalG using a set of 50 sequences. The results indicate that our method outperforms the existing method for most of our selected cases. The multiobjective evolutionary algorithm presented in this paper provides an effective approach for assessing activity pattern similarity, and a foundation for identifying distinctive groups of individuals with similar activity patterns. PMID:26190858

  7. The development of an algebraic multigrid algorithm for symmetric positive definite linear systems

    SciTech Connect

    Vanek, P.; Mandel, J.; Brezina, M.

    1996-12-31

    An algebraic multigrid algorithm for symmetric, positive definite linear systems is developed based on the concept of prolongation by smoothed aggregation. Coarse levels are generated automatically. We present a set of requirements motivated heuristically by a convergence theory. The algorithm then attempts to satisfy the requirements. Inputs to the method are the coefficient matrix and zero energy modes, which are determined from nodal coordinates and knowledge of the differential equation. Efficiency of the resulting algorithm is demonstrated by computational results on real world problems from solid elasticity, plate bending, and shells.
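
    For readers who want to experiment with the idea, smoothed-aggregation AMG is available off the shelf; a minimal Python sketch on a model SPD system (2-D Poisson) using the PyAMG package, noting that the setup described above additionally takes user-supplied zero-energy modes for elasticity problems:

        import numpy as np
        import pyamg

        A = pyamg.gallery.poisson((200, 200), format='csr')   # SPD model problem
        b = np.random.rand(A.shape[0])
        ml = pyamg.smoothed_aggregation_solver(A)              # coarse levels built automatically
        x = ml.solve(b, tol=1e-8)
        print(np.linalg.norm(b - A @ x))                       # residual check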

  8. CT liver volumetry using geodesic active contour segmentation with a level-set algorithm

    NASA Astrophysics Data System (ADS)

    Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard

    2010-03-01

    Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of a similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step schema. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetrics based on an automated scheme agreed excellently with "gold-standard" manual volumetrics (intra-class correlation coefficient was 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
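
    As a simplified stand-in for the fast-marching plus geodesic-active-contour pipeline described above (parameter values are assumptions, and the original work operates on the full 3-D volume), a Python sketch using the morphological geodesic active contour in scikit-image on a single CT slice:

        from skimage.segmentation import (inverse_gaussian_gradient,
                                          morphological_geodesic_active_contour,
                                          disk_level_set)

        def segment_slice(ct_slice):
            """Evolve a level-set contour toward strong boundaries in one CT slice.
            The inverse-gradient image plays the role of the speed function."""
            speed = inverse_gaussian_gradient(ct_slice.astype(float), alpha=100.0, sigma=2.0)
            init = disk_level_set(ct_slice.shape)      # rough initial surface
            return morphological_geodesic_active_contour(speed, 200, init_level_set=init,
                                                         smoothing=2, balloon=1)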

  9. Deciphering the Minimal Algorithm for Development and Information-genesis

    NASA Astrophysics Data System (ADS)

    Li, Zhiyuan; Tang, Chao; Li, Hao

    During development, cells with identical genomes acquire different fates in a highly organized manner. In order to decipher the principles underlying development, we used C. elegans as the model organism. Based on a large set of microscopy imaging, we first constructed a "standard worm" in silico: from the single zygotic cell to the roughly 500-cell stage, the lineage, position, cell-cell contact and gene expression dynamics were quantified for each cell in order to investigate principles underlying these extensive data. Next, we reverse-engineered the possible gene-gene/cell-cell interaction rules that are capable of running a dynamic model recapitulating the early fate decisions during C. elegans development. We further formalized C. elegans embryogenesis in the language of information genesis. Analysis of the data and model uncovered the global landscape of development in the cell fate space, suggested possible gene regulatory architectures and cell signaling processes, revealed diversity and robustness as the essential trade-offs in development, and demonstrated general strategies in building multicellular organisms.

  10. Development of HF radar inversion algorithm for spectrum estimation (HIAS)

    NASA Astrophysics Data System (ADS)

    Hisaki, Yukiharu

    2015-03-01

    A method for estimating ocean wave directional spectra using an HF (high-frequency) ocean radar was developed. This method builds on work conducted in previous studies. In the present method, ocean wave directional spectra are estimated on polar coordinates centered on the radar position, whereas the previous method estimated spectra on regular grids. This method can be applied to both single and multiple radar cases. The area for wave estimation is more flexible than that of the previous method. As the signal-to-noise (SN) ratios of Doppler spectra are critical for wave estimation, we develop a method to exclude low SN ratio Doppler spectra. The validity of the method is demonstrated by comparing results with in situ observed wave data that would be impossible to estimate by the methods of other groups.

  11. The Development of FPGA-Based Pseudo-Iterative Clustering Algorithms

    NASA Astrophysics Data System (ADS)

    Drueke, Elizabeth; Fisher, Wade; Plucinski, Pawel

    2016-03-01

    The Large Hadron Collider (LHC) in Geneva, Switzerland, is set to undergo major upgrades in 2025 in the form of the High-Luminosity Large Hadron Collider (HL-LHC). In particular, several hardware upgrades are proposed to the ATLAS detector, one of the two general purpose detectors. These hardware upgrades include, but are not limited to, a new hardware-level clustering algorithm, to be performed by a field programmable gate array, or FPGA. In this study, we develop that clustering algorithm and compare the output to a Python-implemented topoclustering algorithm developed at the University of Oregon. Here, we present the agreement between the FPGA output and expected output, with particular attention to the time required by the FPGA to complete the algorithm and other limitations set by the FPGA itself.

  12. Development of practical multiband algorithms for estimating land-surface temperature from EOS/MODIS data

    NASA Technical Reports Server (NTRS)

    Dozier, J.; Wan, Z.

    1994-01-01

    A practical multiband, hierarchical algorithm for estimating land-surface temperature from NASA's future Earth Observing System (EOS) instruments Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is developed through comprehensive, accurate radiative transfer simulations at moderate spectral steps of 1-5/cm for wide ranges of atmospheric and surface conditions. The algorithm will accept empirical or estimated information about the surface emissivity and reflectivity and the atmospheric temperature and water-vapor profiles. Ground-based and aircraft measurements are necessary to validate and improve the algorithm and to establish its quality. Its accuracy depends on the calibration accuracy of thermal infrared data, uncertainties in surface heterogeneity, and temperature-dependent atmospheric absorption coefficients. Better knowledge of land-surface spectral emissivities and more accurate coefficients for atmospheric molecular band absorption and water vapor continuum absorption are needed to develop global land-surface temperature algorithms accurate to 1-2 K.

  13. The development of a bearing spectral analyzer and algorithms to detect turbopump bearing wear from deflectometer and strain gage data

    NASA Astrophysics Data System (ADS)

    Martinez, Carol L.

    1992-07-01

    Over the last several years, Rocketdyne has actively developed condition and health monitoring techniques and their elements for rocket engine components, specifically high pressure turbopumps. Of key interest is the development of bearing signature analysis systems for real-time monitoring of the cryogen-cooled turbopump shaft bearings, which spin at speeds up to 36,000 RPM. These system elements include advanced bearing vibration sensors, signal processing techniques, wear mode algorithms, and integrated control software. Results of development efforts in the areas of signal processing and wear mode identification and quantification algorithms based on strain gage and deflectometer data are presented. Wear modes investigated include: inner race wear, cage pocket wear, outer race wear, differential ball wear, cracked inner race, and nominal wear.

  14. Millimeter-wave imaging radiometer data processing and development of water vapor retrieval algorithms

    NASA Technical Reports Server (NTRS)

    Chang, L. Aron

    1995-01-01

    This document describes the current status of Millimeter-wave Imaging Radiometer (MIR) data processing and the technical development of the first version of a water vapor retrieval algorithm. The algorithm is being used by the NASA/GSFC Microwave Sensors Branch, Laboratory for Hydrospheric Processes. It is capable of three-dimensional mapping of moisture fields using microwave data from the airborne MIR sensor and the spaceborne Special Sensor Microwave/T-2 (SSM/T-2) instrument.

  15. Applications of feature selection. [development of classification algorithms for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1976-01-01

    The use of satellite-acquired (LANDSAT) multispectral scanner (MSS) data to conduct an inventory of some crop of economic interest such as wheat over a large geographical area is considered in relation to the development of accurate and efficient algorithms for data classification. The dimension of the measurement space and the computational load for a classification algorithm are increased by the use of multitemporal measurements. Feature selection/combination techniques used to reduce the dimensionality of the problem are described.

  16. Real-time estimation of daily physical activity intensity by a triaxial accelerometer and a gravity-removal classification algorithm.

    PubMed

    Ohkawara, Kazunori; Oshima, Yoshitake; Hikihara, Yuki; Ishikawa-Takata, Kazuko; Tabata, Izumi; Tanaka, Shigeho

    2011-06-01

    We have recently developed a simple algorithm for the classification of household and locomotive activities using the ratio of unfiltered to filtered synthetic acceleration (gravity-removal physical activity classification algorithm, GRPACA) measured by a triaxial accelerometer. The purpose of the present study was to develop a new model for the immediate estimation of daily physical activity intensities using a triaxial accelerometer. A total of sixty-six subjects were randomly assigned into validation (n = 44) and cross-validation (n = 22) groups. All subjects performed fourteen activities while wearing a triaxial accelerometer in a controlled laboratory setting. During each activity, energy expenditure was measured by indirect calorimetry, and physical activity intensities were expressed as metabolic equivalents (MET). The validation group displayed strong relationships between measured MET and filtered synthetic accelerations for household (r = 0.907, P < 0.001) and locomotive (r = 0.961, P < 0.001) activities. In the cross-validation group, two GRPACA-based linear regression models provided highly accurate MET estimation for household and locomotive activities. Results were similar when equations were developed by non-linear regression or sex-specific linear or non-linear regressions. Sedentary activities were also accurately estimated by the specific linear regression classified from other activity counts. Therefore, the use of a triaxial accelerometer in combination with a GRPACA permits more accurate and immediate estimation of daily physical activity intensities, compared with previously reported cut-off classification models. This method may be useful for field investigations as well as for self-monitoring by general users.
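
    A toy Python sketch of the gravity-removal idea (all numeric constants below are invented for illustration; the published cut-offs and regression coefficients differ): compare the synthetic acceleration before and after high-pass filtering, use the ratio to label an epoch household or locomotive, then apply a per-class linear MET regression.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def classify_and_estimate_met(acc_xyz, fs, ratio_cutoff=1.15,
                                      loco_coef=(0.9, 1.2e-3), house_coef=(1.3, 2.5e-3)):
            """Label one epoch of triaxial accelerometer data (acc_xyz: n x 3, sampled
            at fs Hz) and return (label, estimated MET). Coefficients are illustrative."""
            synth = np.linalg.norm(acc_xyz, axis=1)
            b, a = butter(4, 0.7 / (fs / 2), btype='highpass')   # remove gravity component
            filtered = np.abs(filtfilt(b, a, synth))
            ratio = synth.mean() / (filtered.mean() + 1e-9)
            counts = filtered.sum()
            intercept, slope = house_coef if ratio > ratio_cutoff else loco_coef
            label = 'household' if ratio > ratio_cutoff else 'locomotive'
            return label, intercept + slope * counts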

  17. Development and application of unified algorithms for problems in computational science

    NASA Technical Reports Server (NTRS)

    Shankar, Vijaya; Chakravarthy, Sukumar

    1987-01-01

    A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering and laser-materials interaction accounting for melting.

  18. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1995-01-01

    An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm was carried out. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. The development of a multi-layer Monte Carlo radiative transfer code that includes polarization by molecular and aerosol scattering and wind-induced sea surface roughness has been completed. Comparison tests with an existing two-layer successive order of scattering code suggest that both codes are capable of producing top-of-atmosphere radiances with errors usually less than 0.1 percent. An initial set of simulations to study the effects of ignoring the polarization of the ocean-atmosphere light field, in both the development of the atmospheric correction algorithm and the generation of the lookup tables used for operation of the algorithm, has been completed. An algorithm was developed that can be used to invert the radiance exiting the top and bottom of the atmosphere to yield the columnar optical properties of the atmospheric aerosol under clear sky conditions over the ocean, for aerosol optical thicknesses as large as 2. The algorithm is capable of retrievals with such large optical thicknesses because all significant orders of multiple scattering are included.

  19. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  20. User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm.

    PubMed

    Bourobou, Serge Thomas Mickala; Yoo, Younghwan

    2015-01-01

    This paper discusses the possibility of recognizing and predicting user activities in an IoT (Internet of Things) based smart environment. Activity recognition is usually done in two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, their performance was limited because they focused on only one of the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify such varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the smart environment is trained to recognize and predict user activities inside a personal space by utilizing an artificial neural network based on Allen's temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities, as compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT-based smart home. PMID:26007738

  1. User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm.

    PubMed

    Bourobou, Serge Thomas Mickala; Yoo, Younghwan

    2015-01-01

    This paper discusses the possibility of recognizing and predicting user activities in an IoT (Internet of Things) based smart environment. Activity recognition is usually done in two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, their performance was limited because they focused on only one of the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify such varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the smart environment is trained to recognize and predict user activities inside a personal space by utilizing an artificial neural network based on Allen's temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities, as compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT-based smart home.

  2. Development of Fast Algorithms Using Recursion, Nesting and Iterations for Computational Electromagnetics

    NASA Technical Reports Server (NTRS)

    Chew, W. C.; Song, J. M.; Lu, C. C.; Weedon, W. H.

    1995-01-01

    In the first phase of our work, we have concentrated on laying the foundation to develop fast algorithms, including the use of recursive structure like the recursive aggregate interaction matrix algorithm (RAIMA), the nested equivalence principle algorithm (NEPAL), the ray-propagation fast multipole algorithm (RPFMA), and the multi-level fast multipole algorithm (MLFMA). We have also investigated the use of curvilinear patches to build a basic method of moments code where these acceleration techniques can be used later. In the second phase, which is mainly reported on here, we have concentrated on implementing three-dimensional NEPAL on a massively parallel machine, the Connection Machine CM-5, and have been able to obtain some 3D scattering results. In order to understand the parallelization of codes on the Connection Machine, we have also studied the parallelization of 3D finite-difference time-domain (FDTD) code with PML material absorbing boundary condition (ABC). We found that simple algorithms like the FDTD with material ABC can be parallelized very well allowing us to solve within a minute a problem of over a million nodes. In addition, we have studied the use of the fast multipole method and the ray-propagation fast multipole algorithm to expedite matrix-vector multiplication in a conjugate-gradient solution to integral equations of scattering. We find that these methods are faster than LU decomposition for one incident angle, but are slower than LU decomposition when many incident angles are needed as in the monostatic RCS calculations.

  3. Developing NASA's VIIRS LST and Emissivity EDRs using a physics based Temperature Emissivity Separation (TES) algorithm

    NASA Astrophysics Data System (ADS)

    Islam, T.; Hulley, G. C.; Malakar, N.; Hook, S. J.

    2015-12-01

    Land Surface Temperature and Emissivity (LST&E) data are acknowledged as critical Environmental Data Records (EDRs) by the NASA Earth Science Division. The current operational LST EDR for the recently launched Suomi National Polar-orbiting Partnership's (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) payload utilizes a split-window algorithm that relies on previously-generated fixed emissivity dependent coefficients and does not produce a dynamically varying and multi-spectral land surface emissivity product. Furthermore, this algorithm deviates from its MODIS counterpart (MOD11) resulting in a discontinuity in the MODIS/VIIRS LST time series. This study presents an alternative physics based algorithm for generation of the NASA VIIRS LST&E EDR in order to provide continuity with its MODIS counterpart algorithm (MOD21). The algorithm, known as temperature emissivity separation (TES) algorithm, uses a fast radiative transfer model - Radiative Transfer for (A)TOVS (RTTOV) in combination with an emissivity calibration model to isolate the surface radiance contribution retrieving temperature and emissivity. Further, a new water-vapor scaling (WVS) method is developed and implemented to improve the atmospheric correction process within the TES system. An independent assessment of the VIIRS LST&E outputs is performed against in situ LST measurements and laboratory measured emissivity spectra samples over dedicated validation sites in the Southwest USA. Emissivity retrievals are also validated with the latest ASTER Global Emissivity Database Version 4 (GEDv4). An overview and current status of the algorithm as well as the validation results will be discussed.

  4. Hypersonic Vehicle Propulsion System Control Model Development Roadmap and Activities

    NASA Technical Reports Server (NTRS)

    Stueber, Thomas J.; Le, Dzu K.; Vrnak, Daniel R.

    2009-01-01

    The NASA Fundamental Aeronautics Program Hypersonic project is directed towards fundamental research for two classes of hypersonic vehicles: highly reliable reusable launch systems (HRRLS) and high-mass Mars entry systems (HMMES). The objective of the hypersonic guidance, navigation, and control (GN&C) discipline team is to develop advanced guidance and control algorithms to enable efficient and effective operation of these challenging vehicles. The ongoing work at the NASA Glenn Research Center supports the hypersonic GN&C effort in developing tools to aid the design of advanced control algorithms that specifically address the propulsion system of the HRRLS-class vehicles. These tools are being developed in conjunction with complementary research and development activities in hypersonic propulsion at Glenn and elsewhere. This report is focused on obtaining control-relevant dynamic models of an HRRLS-type hypersonic vehicle propulsion system.

  5. Battery algorithm verification and development using hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    He, Yongsheng; Liu, Wei; Koch, Brian J.

    Battery algorithms play a vital role in hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), extended-range electric vehicles (EREVs), and electric vehicles (EVs). The energy management of hybrid and electric propulsion systems needs to rely on accurate information on the state of the battery in order to determine the optimal electric drive without abusing the battery. In this study, a cell-level hardware-in-the-loop (HIL) system is used to verify and develop state of charge (SOC) and power capability predictions of embedded battery algorithms for various vehicle applications. Two different batteries were selected as representative examples to illustrate the battery algorithm verification and development procedure. One is a lithium-ion battery with a conventional metal oxide cathode, which is a power battery for HEV applications. The other is a lithium-ion battery with an iron phosphate (LiFePO4) cathode, which is an energy battery for applications in PHEVs, EREVs, and EVs. The battery cell HIL testing provided valuable data and critical guidance to evaluate the accuracy of the developed battery algorithms, to accelerate battery algorithm future development and improvement, and to reduce hybrid/electric vehicle system development time and costs.
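
    As a baseline for the kind of SOC algorithm verified on such a rig, a minimal Python coulomb-counting sketch; embedded algorithms typically combine this with model-based correction (e.g., a Kalman filter on a cell equivalent-circuit model), which is omitted here.

        import numpy as np

        def coulomb_count_soc(current_a, dt_s, capacity_ah, soc0=1.0, eta=0.99):
            """State of charge from integrated current (discharge positive), clipped
            to [0, 1]. `eta` is an assumed coulombic efficiency."""
            dq_ah = np.cumsum(current_a) * dt_s / 3600.0
            return np.clip(soc0 - eta * dq_ah / capacity_ah, 0.0, 1.0)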

  6. Accuracy of Optimized Branched Algorithms to Assess Activity-Specific PAEE

    PubMed Central

    Edwards, Andy G.; Hill, James O.; Byrnes, William C.; Browning, Raymond C.

    2009-01-01

    PURPOSE To assess the activity-specific accuracy achievable by branched algorithm (BA) analysis of simulated daily-living physical activity energy expenditure (PAEE) within a sedentary population. METHODS Sedentary men (n=8) and women (n=8) first performed a treadmill calibration protocol, during which heart rate (HR), accelerometry (ACC), and PAEE were measured in 1-minute epochs. From these data, HR-PAEE and ACC-PAEE regressions were constructed and used in each of six analytic models to predict PAEE from ACC and HR data collected during a subsequent simulated daily-living protocol. Criterion PAEE was measured during both protocols via indirect calorimetry. The accuracy achieved by each model was assessed by the root mean square of the difference between model-predicted daily-living PAEE and the criterion daily-living PAEE (expressed here as % of mean daily-living PAEE). RESULTS Across the range of activities, an unconstrained post hoc optimized branched algorithm best predicted criterion PAEE. Estimates using individual calibration were generally more accurate than those using group calibration (14 vs. 16% error, respectively). These analyses also performed well within each of the six daily-living activities, but systematic errors appeared for several of those activities, which may be explained by an inability of the algorithm to simultaneously accommodate a heterogeneous range of activities. Analyses of mean square error by subject and activity suggest that optimization involving minimization of RMS for total daily-living PAEE is associated with decreased error between subjects but increased error between activities. CONCLUSION The performance of post hoc optimized branched algorithms may be limited by heterogeneity in the daily-living activities being performed. PMID:19952842

  7. Advancements in the Development of an Operational Lightning Jump Algorithm for GOES-R GLM

    NASA Technical Reports Server (NTRS)

    Shultz, Chris; Petersen, Walter; Carey, Lawrence

    2011-01-01

    Rapid increases in total lightning have been shown to precede the manifestation of severe weather at the surface. These rapid increases have been termed lightning jumps, and are the current focus of algorithm development for the GOES-R Geostationary Lightning Mapper (GLM). Recent lightning jump algorithm work has focused on evaluation of algorithms in three additional regions of the country, as well as markedly increasing the number of thunderstorms in order to evaluate each algorithm's performance on a larger population of storms. Lightning characteristics of just over 600 thunderstorms have been studied over the past four years. The 2(sigma) lightning jump algorithm continues to show the most promise for an operational lightning jump algorithm, with a probability of detection of 82%, a false alarm rate of 35%, a critical success index of 57%, and a Heidke Skill Score of 0.73 on the entire population of thunderstorms. Average lead time for the 2(sigma) algorithm on all severe weather is 21.15 minutes, with a standard deviation of +/- 14.68 minutes. Looking at tornadoes alone, the average lead time is 18.71 minutes, with a standard deviation of +/- 14.88 minutes. Moreover, removing the 2(sigma) lightning jumps that occur after a jump has already been detected, and before severe weather is detected at the ground, the 2(sigma) lightning jump algorithm's false alarm rate drops from 35% to 21%. Cold season, low-topped, and tropical environments cause problems for the 2(sigma) lightning jump algorithm, due to their relative dearth of lightning as compared to a supercellular or summertime airmass thunderstorm environment.

  8. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  9. Bobcat 2013: a hyperspectral data collection supporting the development and evaluation of spatial-spectral algorithms

    NASA Astrophysics Data System (ADS)

    Kaufman, Jason; Celenk, Mehmet; White, A. K.; Stocker, Alan D.

    2014-06-01

    The amount of hyperspectral imagery (HSI) data currently available is relatively small compared to other imaging modalities, and what is suitable for developing, testing, and evaluating spatial-spectral algorithms is virtually nonexistent. In this work, a significant amount of coincident airborne hyperspectral and high spatial resolution panchromatic imagery that supports the advancement of spatial-spectral feature extraction algorithms was collected to address this need. The imagery was collected in April 2013 for Ohio University by the Civil Air Patrol, with their Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) sensor. The target materials, shapes, and movements throughout the collection area were chosen such that evaluation of change detection algorithms, atmospheric compensation techniques, image fusion methods, and material detection and identification algorithms is possible. This paper describes the collection plan, data acquisition, and initial analysis of the collected imagery.

  10. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm, we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation, we also include the Automated Cloud Cover Assessment (ACCA) algorithm, which includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).

  11. Development of a fire detection algorithm for the COMS (Communication Ocean and Meteorological Satellite)

    NASA Astrophysics Data System (ADS)

    Kim, Goo; Kim, Dae Sun; Lee, Yang-Won

    2013-10-01

    Forest fires cause serious ecological and economic damage. South Korea is particularly vulnerable because mountainous terrain covers more than half of the country. South Korea recently launched COMS (Communication Ocean and Meteorological Satellite), a geostationary satellite. In this paper, we developed a forest fire detection algorithm using COMS data. Forest fire detection algorithms generally use the characteristics of the 4 and 11 micrometer brightness temperatures; our algorithm additionally uses LST (Land Surface Temperature). We validated the results of our fire detection algorithm using statistical data from the Korea Forest Service and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) images. We used data over South Korea on April 1 and 2, 2011, when both small and large forest fires occurred. The detection rate was 80% in terms of the frequency of the forest fires and 99% in terms of the damaged area. Considering the number of COMS channels and its low resolution, this is a remarkable outcome. To provide users with the results of our algorithm, we developed a smartphone application using JSP (Java Server Pages). This application can work regardless of the smartphone's operating system. The results may not generalize to other areas and dates because only two days of data were used; improving the accuracy of the algorithm will require analysis of long-term data in future work.
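
    To illustrate the style of test involved (threshold values are purely illustrative, not the tuned COMS configuration), a Python sketch of a per-pixel fire flag using the 4 and 11 micrometer brightness temperatures together with LST:

        import numpy as np

        def fire_mask(bt_4um, bt_11um, lst, bt4_min=315.0, dbt_min=10.0, lst_margin=15.0):
            """Flag pixels that are hot at 4 um, show a large 4-minus-11 um brightness
            temperature difference, and clearly exceed the background LST."""
            return ((bt_4um > bt4_min) &
                    (bt_4um - bt_11um > dbt_min) &
                    (bt_4um > lst + lst_margin))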

  12. Development of an Algorithm to Classify Colonoscopy Indication from Coded Health Care Data

    PubMed Central

    Adams, Kenneth F.; Johnson, Eric A.; Chubak, Jessica; Kamineni, Aruna; Doubeni, Chyke A.; Buist, Diana S.M.; Williams, Andrew E.; Weinmann, Sheila; Doria-Rose, V. Paul; Rutter, Carolyn M.

    2015-01-01

    Introduction: Electronic health data are potentially valuable resources for evaluating colonoscopy screening utilization and effectiveness. The ability to distinguish screening colonoscopies from exams performed for other purposes is critical for research that examines factors related to screening uptake and adherence, and the impact of screening on patient outcomes, but distinguishing between these indications in secondary health data proves challenging. The objective of this study is to develop a new and more accurate algorithm for identification of screening colonoscopies using electronic health data. Methods: Data from a case-control study of colorectal cancer with adjudicated colonoscopy indication were used to develop logistic regression-based algorithms. The proposed algorithms predict the probability that a colonoscopy was indicated for screening, with variables selected for inclusion in the models using the Least Absolute Shrinkage and Selection Operator (LASSO). Results: The algorithms had excellent classification accuracy in internal validation. The primary, restricted model had AUC=0.94, sensitivity=0.91, and specificity=0.82. The secondary, extended model had AUC=0.96, sensitivity=0.88, and specificity=0.90. Discussion: The LASSO approach enabled estimation of parsimonious algorithms that identified screening colonoscopies with high accuracy in our study population. External validation is needed to replicate these results and to explore the performance of these algorithms in other settings. PMID:26290883
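
    For readers unfamiliar with the LASSO-selected logistic model described above, the sketch below shows the general pattern with scikit-learn: an L1 penalty drives uninformative coefficients to zero, leaving a parsimonious classifier. The data, feature codes, and penalty strength are placeholders, not the study's variables or fitted model.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # X: coded health care features (diagnoses, procedures, demographics); y: 1 = screening
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(1000, 40)).astype(float)   # placeholder binary codes
    y = rng.integers(0, 2, size=1000)                        # placeholder adjudicated indication

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # The L1 (LASSO-style) penalty performs variable selection inside the fit.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X_tr, y_tr)

    print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
    print("nonzero coefficients:", np.count_nonzero(model.coef_))
    ```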

  13. A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller

    SciTech Connect

    Tapp, P.A.

    1992-04-01

    A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms are discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs.
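
    The closed-loop cycling approach mentioned above is in the spirit of classical Ziegler-Nichols ultimate-gain tuning. The sketch below shows that textbook rule together with a discrete PI update; it is an illustrative stand-in under that assumption, not the tuning rules actually used in the report.

    ```python
    def zn_pi_gains(ku, pu):
        """Classical Ziegler-Nichols PI tuning from closed-loop cycling.

        ku : ultimate proportional gain at which the loop oscillates steadily
        pu : period of that oscillation (seconds)
        Returns (Kp, Ti) for a proportional-plus-integral controller.
        """
        kp = 0.45 * ku
        ti = pu / 1.2
        return kp, ti

    def pi_step(kp, ti, error, integral, dt):
        """One discrete update of the PI law u = Kp * (e + (1/Ti) * integral(e))."""
        integral += error * dt
        u = kp * (error + integral / ti)
        return u, integral
    ```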

  15. TIGER: Development of Thermal Gradient Compensation Algorithms and Techniques

    NASA Technical Reports Server (NTRS)

    Hereford, James; Parker, Peter A.; Rhew, Ray D.

    2004-01-01

    In a wind tunnel facility, the direct measurement of forces and moments induced on the model is performed by a force measurement balance. The measurement balance is a precision-machined device that has strain gages at strategic locations to measure the strain (i.e., deformations) due to applied forces and moments. The strain gages convert the strain (and hence the applied force) to an electrical voltage that is measured by external instruments. To address the problem of thermal gradients on the force measurement balance, NASA-LaRC has initiated a research program called TIGER - Thermally-Induced Gradients Effects Research. The ultimate goals of the TIGER program are to: (a) understand the physics of the thermally-induced strain and its subsequent impact on load measurements and (b) develop a robust thermal gradient compensation technique. This paper will discuss the impact of thermal gradients on force measurement balances, specific aspects of the TIGER program (the design of a special-purpose balance, data acquisition and data analysis challenges), and give an overall summary.

  16. Development of a rule-based algorithm for rice cultivation mapping using Landsat 8 time series

    NASA Astrophysics Data System (ADS)

    Karydas, Christos G.; Toukiloglou, Pericles; Minakou, Chara; Gitas, Ioannis Z.

    2015-06-01

    In the framework of the ERMES project (FP7 66983), an algorithm for mapping rice cultivation extents using medium-high resolution satellite data was developed. ERMES (An Earth obseRvation Model based RicE information Service) aims to develop a prototype of a downstream service for rice yield modelling based on a combination of Earth Observation and in situ data. The algorithm was designed as a set of rules applied on a time series of Landsat 8 images, acquired throughout the rice cultivation season of 2014 from the plain of Thessaloniki, Greece. The rules rely on the use of spectral indices, such as the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), and the Normalized Seasonal Wetness Index (NSWI), extracted from the Landsat 8 dataset. The algorithm is subdivided into two phases: a) a hard classification phase, resulting in a binary map (rice/no-rice), where pixels are judged according to their performance in all the images of the time series, with index thresholds defined through a trial-and-error approach; b) a soft classification phase, resulting in a fuzzy map, by assigning scores to the pixels which passed (as `rice') the first phase. Finally, a user-defined threshold on the fuzzy score discriminates rice from no-rice pixels in the output map. The algorithm was tested in a subset of the Thessaloniki plain against a set of selected field data. The results indicated an overall accuracy of the algorithm higher than 97%. The algorithm was also applied in a study area in Spain (Valencia), and a preliminary test indicated a similar performance, i.e. about 98%. Currently, the algorithm is being modified so as to map rice extents early in the cultivation season (by the end of June), with a view to contributing more substantially to the rice yield prediction service of ERMES. Both algorithm modes (late and early) are planned to be tested in additional Mediterranean study areas, in Greece, Italy, and Spain.
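
    A minimal sketch of the two-phase, rule-based idea described above is given below: a hard pass that requires an early-season flooding signal (NDWI) followed by a vegetation peak (NDVI), and a soft pass that scores how consistently a pixel follows that pattern. The index thresholds, date split, and scoring are invented placeholders, not the ERMES rules.

    ```python
    import numpy as np

    def classify_rice(ndvi_ts, ndwi_ts, ndvi_peak=0.6, ndwi_flood=0.3, fuzzy_cut=0.5):
        """Two-phase rule-based rice mapping over an image time series.

        ndvi_ts, ndwi_ts : arrays of shape (n_dates, rows, cols), n_dates > 3
        Thresholds are placeholders chosen for illustration only.
        """
        # Phase 1 (hard): early-season flooding and a later vegetation peak.
        flooded_early = ndwi_ts[:3].max(axis=0) > ndwi_flood
        peak_later = ndvi_ts[3:].max(axis=0) > ndvi_peak
        candidate = flooded_early & peak_later

        # Phase 2 (soft): score candidates by how many dates support the rice pattern.
        support = ((ndwi_ts[:3] > ndwi_flood).sum(axis=0) +
                   (ndvi_ts[3:] > ndvi_peak).sum(axis=0)) / ndvi_ts.shape[0]
        fuzzy = np.where(candidate, support, 0.0)

        return fuzzy > fuzzy_cut, fuzzy   # final binary map and the fuzzy score map
    ```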

  17. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  18. North Sea development activity surges

    SciTech Connect

    Not Available

    1992-08-10

    This paper reports a burst of upstream activity by operators in the North Sea. Off the U.K.: Amoco (U.K.) Exploration Co. installed three jackets in its North Everest and Lomond fields. It also completed laying the Central Area Transmission System (CATS) pipeline, which will carry the fields' gas to shore. BP Exploration Operating Co. Ltd. installed the jacket for its Unity riser platform 5 1/2 km from its Forties Charlie platform. Conoco (U.K.) Ltd. tested a successful appraisal well in Britannia field in Block 15/30, about 130 miles northeast of Aberdeen. In the Norwegian North Sea, Saga Petroleum AS placed Snorre oil and gas field on production 6 weeks ahead of schedule and 1.5 billion kroner under budget, at a cost of 16.6 billion kroner; and downstream off the U.K., Phillips Petroleum Co. (U.K.) Ltd. awarded Allseas Marine Contractors SA, Essen, Belgium, a pipelay and trenching contract for its Ann field development project in Block 49/6a.

  19. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
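
    To make the basic loop concrete, here is a minimal genetic-algorithm sketch (tournament selection, one-point crossover, Gaussian mutation over a real-valued population). The fitness function and parameter values are arbitrary placeholders and are not part of the project described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def fitness(x):
        """Placeholder fitness: a smooth 2-D function with several local maxima."""
        return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1]) + 0.1 * x.sum(axis=1)

    def evolve(pop_size=50, n_genes=2, generations=40, mut_sigma=0.1):
        """Run a simple GA and return the best individual found (pop_size assumed even)."""
        pop = rng.uniform(-2, 2, size=(pop_size, n_genes))
        for _ in range(generations):
            fit = fitness(pop)
            # tournament selection: keep the better of two randomly drawn individuals
            i, j = rng.integers(pop_size, size=(2, pop_size))
            parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
            # one-point crossover applied to every other child
            children = parents.copy()
            cut = rng.integers(1, n_genes, size=pop_size)
            children[::2, :] = np.where(np.arange(n_genes) < cut[::2, None],
                                        parents[::2], parents[1::2])
            # Gaussian mutation
            pop = children + rng.normal(0.0, mut_sigma, size=children.shape)
        return pop[np.argmax(fitness(pop))]

    print("best individual found:", evolve())
    ```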

  20. Correlation signatures of wet soils and snows. [algorithm development and computer programming

    NASA Technical Reports Server (NTRS)

    Phillips, M. R.

    1972-01-01

    Interpretation, analysis, and development of algorithms have provided the necessary computational programming tools for soil data processing, data handling, and analysis. The algorithms developed thus far are adequate and have proven successful for several preliminary and fundamental applications such as software interfacing capabilities, probability distributions, grey level print plotting, contour plotting, isometric data displays, joint probability distributions, boundary mapping, channel registration, and ground scene classification. A description of an Earth Resources Flight Data Processor (ERFDP), which handles and processes earth resources data under a user's control, is provided.

  1. Better-than-the-best fusion algorithm with application in human activity recognition

    NASA Astrophysics Data System (ADS)

    Najjar, Nayeff; Gupta, Shalabh

    2015-05-01

    This paper introduces the Better-than-the-Best Fusion (BB-Fus) algorithm. BB-Fus is a simple and effective information fusion algorithm that combines information from different sources (be it sensors, features, or classifiers) to improve the Correct Classification Rate (CCR). In most classification problems, different sensors or features have different accuracies in separating different classes. Therefore, this paper constructs an optimal decision tree that isolates one class at a time using the sensor best able to separate that particular class. The paper shows that the decision tree improves the overall CCR compared to the use of any single sensor or feature for any 3-class classification problem. The efficiency of the BB-Fus algorithm is validated on the Opportunity data set for the human activity recognition problem, in which a set of 56 sensors is used, including a localization system, accelerometers, inertial measurement units, and magnetic sensors mounted on various body parts, as well as accelerometers and gyroscopes mounted on different objects. The CCR resulting from the BB-Fus algorithm is 96%, while the best single sensor achieved a CCR of 94%.
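
    One plausible reading of the per-class, best-sensor isolation idea is a greedy cascade: at each level, pick the (class, feature) pair whose single-feature classifier best isolates that class, then remove that class and repeat. The sketch below implements that reading with one-node decision stumps; it is an illustrative interpretation, not the published BB-Fus implementation.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def build_bb_tree(X, y):
        """Greedy one-vs-rest cascade over single features.

        X : (n_samples, n_features), y : integer class labels.
        Returns a list of (class_label, feature_index, fitted stump).
        """
        cascade, remaining = [], np.ones(len(y), dtype=bool)
        for _ in range(len(np.unique(y)) - 1):
            best = None
            for cls in np.unique(y[remaining]):
                target = (y[remaining] == cls).astype(int)
                for f in range(X.shape[1]):
                    stump = DecisionTreeClassifier(max_depth=1)
                    stump.fit(X[remaining][:, [f]], target)
                    acc = stump.score(X[remaining][:, [f]], target)
                    if best is None or acc > best[0]:
                        best = (acc, cls, f, stump)
            _, cls, f, stump = best
            cascade.append((cls, f, stump))
            remaining &= (y != cls)          # isolate this class, continue with the rest
        return cascade
    ```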

  2. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    PubMed Central

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models with textures of different dimensionality. The experimental results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures. PMID:24723812

  3. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    PubMed Central

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for an experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
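
    As a rough illustration of the geometric seeding stage that a K-Means-based decomposition (the first part of the K&K approach described above) might use, the sketch below clusters grid-cell coordinates into one subdomain per computing node, optionally weighted by an estimated per-cell workload. The Kernighan-Lin refinement and communication-cost model are omitted, and the function and parameters are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def seed_subdomains(nx, ny, n_nodes, workload=None):
        """Assign each cell of an nx-by-ny grid to one of n_nodes subdomains
        by clustering cell coordinates (optionally workload-weighted)."""
        xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
        coords = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
        weights = None if workload is None else workload.ravel()
        labels = KMeans(n_clusters=n_nodes, n_init=10, random_state=0).fit_predict(
            coords, sample_weight=weights)
        return labels.reshape(nx, ny)   # subdomain id per grid cell
    ```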

  6. Development and validation of evolutionary algorithm software as an optimization tool for biological and environmental applications.

    PubMed

    Sys, K; Boon, N; Verstraete, W

    2004-06-01

    A flexible, extendable tool for the optimization of (micro)biological processes and protocols using evolutionary algorithms was developed. It has been tested on three theoretical optimization problems: two two-dimensional problems, one with three maxima and one with five maxima, and a river autopurification optimization problem with boundary conditions. For each problem, different evolutionary parameter settings were used for the optimization. For each combination of evolutionary parameters, 15 generations were run 20 times. In all cases, the evolutionary algorithm gave rise to valuable results. Generally, the algorithms were able to detect the more stable sub-maximum even when less stable maxima existed; from a practical point of view, this is generally the more desirable behavior. The most important factors influencing the convergence process were the parameter value randomization rate and distribution. The software developed and described in this work is available for free.

  7. Applications and development of new algorithms for displacement analysis using InSAR time series

    NASA Astrophysics Data System (ADS)

    Osmanoglu, Batuhan

    -dimensional (3-D) phase unwrapping. Chapter 4 focuses on the unwrapping path. Unwrapping algorithms can be divided into two groups, path-dependent and path-independent algorithms. Path-dependent algorithms use local unwrapping functions applied pixel-by-pixel to the dataset. In contrast, path-independent algorithms use global optimization methods such as least squares, and return a unique solution. However, when aliasing and noise are present, path-independent algorithms can underestimate the signal in some areas due to global fitting criteria. Path-dependent algorithms do not underestimate the signal, but, as the name implies, the unwrapping path can affect the result. Comparison between existing path algorithms and a newly developed algorithm based on Fisher information theory was conducted. Results indicate that Fisher information theory does indeed produce lower misfit results for most tested cases. Chapter 5 presents a new time series analysis method based on 3-D unwrapping of SAR data using extended Kalman filters. Existing methods for time series generation using InSAR data employ special filters to combine two-dimensional (2-D) spatial unwrapping with one-dimensional (1-D) temporal unwrapping results. The new method, however, combines observations in azimuth, range and time for repeat pass interferometry. Due to the pixel-by-pixel characteristic of the filter, the unwrapping path is selected based on a quality map. This unwrapping algorithm is the first application of extended Kalman filters to the 3-D unwrapping problem. Time series analyses of InSAR data are used in a variety of applications with different characteristics. Consequently, it is difficult to develop a single algorithm that can provide optimal results in all cases, given that different algorithms possess a unique set of strengths and weaknesses. Nonetheless, filter-based unwrapping algorithms such as the one presented in this dissertation have the capability of joining multiple observations into a uniform

  8. Soil Moisture Active Passive (SMAP) Project Algorithm Theoretical Basis Document SMAP L1B Radiometer Data Product: L1B_TB

    NASA Technical Reports Server (NTRS)

    Piepmeier, Jeffrey; Mohammed, Priscilla; De Amici, Giovanni; Kim, Edward; Peng, Jinzheng; Ruf, Christopher; Hanna, Maher; Yueh, Simon; Entekhabi, Dara

    2016-01-01

    The purpose of the Soil Moisture Active Passive (SMAP) radiometer calibration algorithm is to convert Level 0 (L0) radiometer digital counts data into calibrated estimates of brightness temperatures referenced to the Earth's surface within the main beam. The algorithm theory in most respects is similar to what has been developed and implemented for decades for other satellite radiometers; however, SMAP includes two key features heretofore absent from most satellite borne radiometers: radio frequency interference (RFI) detection and mitigation, and measurement of the third and fourth Stokes parameters using digital correlation. The purpose of this document is to describe the SMAP radiometer and forward model, explain the SMAP calibration algorithm, including approximations, errors, and biases, provide all necessary equations for implementing the calibration algorithm and detail the RFI detection and mitigation process. Section 2 provides a summary of algorithm objectives and driving requirements. Section 3 is a description of the instrument and Section 4 covers the forward models, upon which the algorithm is based. Section 5 gives the retrieval algorithm and theory. Section 6 describes the orbit simulator, which implements the forward model and is the key for deriving antenna pattern correction coefficients and testing the overall algorithm.
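
    As background for the counts-to-brightness-temperature step mentioned above, the sketch below shows a generic two-point (hot/cold reference) radiometer calibration. It illustrates only the basic linear idea and is not the SMAP algorithm, which additionally handles RFI detection and the third and fourth Stokes parameters; all numbers are made up.

    ```python
    def counts_to_tb(counts, counts_cold, counts_hot, tb_cold, tb_hot):
        """Linear two-point calibration: map raw radiometer counts to brightness
        temperature using measurements of two reference targets.

        counts_cold, counts_hot : counts observed on the cold and hot references
        tb_cold, tb_hot         : known brightness temperatures of those references (K)
        """
        gain = (tb_hot - tb_cold) / (counts_hot - counts_cold)   # K per count
        offset = tb_cold - gain * counts_cold
        return gain * counts + offset

    # Example with made-up numbers: cold sky ~2.7 K, ambient load ~290 K.
    print(counts_to_tb(5200, 1000, 9000, 2.7, 290.0))
    ```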

  9. Developing Internal Controls through Activities

    ERIC Educational Resources Information Center

    Barnes, F. Herbert

    2009-01-01

    Life events can include the Tuesday afternoon cooking class with the group worker or the Saturday afternoon football game, but in the sense that Fritz Redl thought of them, these activities are only threads in a fabric of living that includes all the elements of daily life: playing, working, school-based learning, learning through activities,…

  10. Developments in the Aerosol Layer Height Retrieval Algorithm for the Copernicus Sentinel-4/UVN Instrument

    NASA Astrophysics Data System (ADS)

    Nanda, Swadhin; Sanders, Abram; Veefkind, Pepijn

    2016-04-01

    The Sentinel-4 mission is part of the European Commission's Copernicus programme, the goal of which is to provide geo-information to manage environmental assets, and to observe, understand and mitigate the effects of the changing climate. The Sentinel-4/UVN instrument design is motivated by the need to monitor trace gas concentrations and aerosols in the atmosphere from a geostationary orbit. The on-board instrument is a high resolution UV-VIS-NIR (UVN) spectrometer system that provides hourly radiance measurements over Europe and northern Africa with a spatial sampling of 8 km. The main application area of Sentinel-4/UVN is air quality. One of the data products being developed for Sentinel-4/UVN is the Aerosol Layer Height (ALH). The goal is to determine the height of aerosol plumes with a resolution of better than 0.5 - 1 km. The ALH product thus targets aerosol layers in the free troposphere, such as desert dust, volcanic ash, and biomass burning plumes. KNMI is tasked with the development of the ALH algorithm. Its heritage is the ALH algorithm developed by Sanders and De Haan (ATBD, 2016) for the TROPOMI instrument on board the Sentinel-5 Precursor mission that is to be launched in June or July 2016 (tentative date). The retrieval algorithm designed so far for the aerosol height product is based on the absorption characteristics of the oxygen-A band (759-770 nm). New aspects for Sentinel-4/UVN include the higher spectral resolution (0.116 nm compared to 0.4 nm for TROPOMI) and hourly observation from the geostationary orbit. The algorithm uses optimal estimation to obtain a spectral fit of the reflectance across the absorption band, while assuming a single uniform layer of fixed width to represent the aerosol vertical distribution. The state vector includes amongst other elements the height of this layer and its aerosol optical
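
    For reference, one Gauss-Newton iteration of an optimal-estimation fit of the kind mentioned above, in the standard Rodgers formulation, looks like the sketch below. The forward model, Jacobian, state vector, and covariances are placeholders; this is not the Sentinel-4/UVN implementation.

    ```python
    import numpy as np

    def oe_step(x, xa, y, forward, jacobian, Se, Sa):
        """One Gauss-Newton update of the optimal-estimation state vector.

        x  : current state (e.g., aerosol layer height, optical thickness, albedo)
        xa : a priori state;  y : measured reflectance spectrum
        forward(x)  -> modelled spectrum;  jacobian(x) -> dF/dx matrix
        Se, Sa : measurement and a priori covariance matrices
        """
        K = jacobian(x)
        Se_inv, Sa_inv = np.linalg.inv(Se), np.linalg.inv(Sa)
        A = K.T @ Se_inv @ K + Sa_inv
        b = K.T @ Se_inv @ (y - forward(x)) - Sa_inv @ (x - xa)
        return x + np.linalg.solve(A, b)
    ```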

  11. The design and development of signal-processing algorithms for an airborne x-band Doppler weather radar

    NASA Technical Reports Server (NTRS)

    Nicholson, Shaun R.

    1994-01-01

    Improved measurements of precipitation will aid our understanding of the role of latent heating on global circulations. Spaceborne meteorological sensors such as the planned precipitation radar and microwave radiometers on the Tropical Rainfall Measurement Mission (TRMM) provide for the first time a comprehensive means of making these global measurements. Pre-TRMM activities include development of precipitation algorithms using existing satellite data, computer simulations, and measurements from limited aircraft campaigns. Since the TRMM radar will be the first spaceborne precipitation radar, there is limited experience with such measurements, and only recently have airborne radars become available that can attempt to address the issue of the limitations of a spaceborne radar. There are many questions regarding how much attenuation occurs in various cloud types and the effect of cloud vertical motions on the estimation of precipitation rates. The EDOP program being developed by NASA GSFC will provide data useful for testing both rain-retrieval algorithms and the importance of vertical motions on the rain measurements. The purpose of this report is to describe the design and development of real-time embedded parallel algorithms used by EDOP to extract reflectivity and Doppler products (velocity, spectrum width, and signal-to-noise ratio) as the first step in the aforementioned goals.
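
    The Doppler products listed above (velocity, spectrum width, signal-to-noise proxy) are commonly derived with the pulse-pair estimator. The sketch below is a generic, single-gate version of that estimator for orientation only; it is not the EDOP embedded flight code, and the sign convention and width formula follow the usual Gaussian-spectrum assumption.

    ```python
    import numpy as np

    def pulse_pair(iq, prt, wavelength):
        """Pulse-pair Doppler estimates from complex (I/Q) samples of one range gate.

        iq         : 1-D complex array of successive pulse returns
        prt        : pulse repetition time (s);  wavelength : radar wavelength (m)
        Returns (mean radial velocity m/s, spectrum width m/s, power).
        """
        power = np.mean(np.abs(iq) ** 2)                 # zeroth lag (reflectivity proxy)
        r1 = np.mean(np.conj(iq[:-1]) * iq[1:])          # lag-1 autocorrelation
        velocity = -wavelength / (4 * np.pi * prt) * np.angle(r1)
        # spectrum width from the R0/|R1| ratio, assuming a Gaussian Doppler spectrum
        ratio = power / max(np.abs(r1), 1e-12)
        width = wavelength / (2 * np.pi * prt * np.sqrt(2)) * np.sqrt(max(np.log(ratio), 0.0))
        return velocity, width, power
    ```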

  12. Development of sub-daily erosion and sediment transport algorithms in SWAT

    Technology Transfer Automated Retrieval System (TEKTRAN)

    New Soil and Water Assessment Tool (SWAT) algorithms for simulation of stormwater best management practices (BMPs) such as detention basins, wet ponds, sedimentation filtration ponds, and retention irrigation systems are under development for modeling small/urban watersheds. Modeling stormwater BMPs...

  13. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1997-01-01

    The following accomplishments were made during the present reporting period: (1) We expanded our new method, for identifying the presence of absorbing aerosols and simultaneously performing atmospheric correction, to the point where it could be added as a subroutine to the MODIS water-leaving radiance algorithm; (2) We successfully acquired micro pulse lidar (MPL) data at sea during a cruise in February; (3) We developed a water-leaving radiance algorithm module for an approximate correction of the MODIS instrument polarization sensitivity; and (4) We participated in one cruise to the Gulf of Maine, a well known region for mesoscale coccolithophore blooms. We measured coccolithophore abundance, production and optical properties.

  14. Ocean observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1998-01-01

    Significant accomplishments made during the present reporting period: (1) We expanded our "spectral-matching" algorithm (SMA), for identifying the presence of absorbing aerosols and simultaneously performing atmospheric correction and derivation of the ocean's bio-optical parameters, to the point where it could be added as a subroutine to the MODIS water-leaving radiance algorithm; (2) A modification to the SMA that does not require detailed aerosol models has been developed. This is important as the requirement for realistic aerosol models has been a weakness of the SMA; and (3) We successfully acquired micro pulse lidar data in a Saharan dust outbreak during ACE-2 in the Canary Islands.

  15. Development and benefit analysis of a sector design algorithm for terminal dynamic airspace configuration

    NASA Astrophysics Data System (ADS)

    Sciandra, Vincent

    The National Airspace System (NAS) is the vast network of systems enabling safe and efficient air travel in the United States. It consists of a set of static sectors, each controlled by one or more air traffic controllers. Air traffic control is tasked with ensuring that all flights can depart and arrive on time and in a safe and efficient manner. However, skyrocketing demand will only increase the stress on an already inefficient system, causing massive delays. The current, static configuration of the NAS cannot possibly handle the future demand on the system safely and efficiently, especially since it is projected to triple by 2025. To overcome these issues, the Next Generation Air Transportation System (NextGen) is being enacted to increase the flexibility of the NAS. A major objective of NextGen is to implement Adaptable Dynamic Airspace Configuration (ADAC), which will dynamically allocate sectors to best fit the traffic in the area. Dynamically allocating sectors will allow resources such as controllers to be better distributed to meet traffic demands. Currently, most DAC research has involved the en route airspace. This leaves the terminal airspace, which accounts for a large amount of the overall NAS complexity, in need of work. Using a combination of methods used in en route sectorization, this thesis has developed an algorithm for the dynamic allocation of sectors in the terminal airspace. This algorithm is evaluated using metrics common in the evaluation of dynamic density, adapted for the unique challenges of the terminal airspace and used to measure workload on air traffic controllers. These metrics give a better view of controller workload than the number of aircraft alone. By comparing the test results with sectors currently used in the NAS using real traffic data, the algorithm-generated sectors can be quantitatively evaluated for improvement over the current sectorizations. This will be accomplished by testing the

  16. Development of a 2-D algorithm to simulate convection and phase transition efficiently

    NASA Astrophysics Data System (ADS)

    Evans, Katherine J.; Knoll, D. A.; Pernice, Michael

    2006-11-01

    We develop a Jacobian-Free Newton-Krylov (JFNK) method for the solution of a two-dimensional convection phase change model using the incompressible Navier-Stokes equation set and enthalpy as the energy conservation variable. The SIMPLE algorithm acts as a physics-based preconditioner to JFNK. This combined algorithm is compared to solutions using SIMPLE as the main solver. Algorithm performance is assessed for two benchmark problems of phase change convection of a pure material, one melting and one freezing. The JFNK-SIMPLE method is shown to be more efficient per time step and more robust at larger time steps. Overall CPU savings of more than an order of magnitude are realized.
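
    For readers unfamiliar with JFNK, the sketch below solves a toy nonlinear system with SciPy's matrix-free Newton-Krylov solver: the Jacobian is never formed, and Krylov iterations need only residual evaluations (Jacobian-vector products are approximated by finite differences). The residual function is a placeholder problem, and the physics-based SIMPLE preconditioning described in the abstract is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    def residual(u):
        """Placeholder nonlinear residual: 1-D diffusion with a cubic source term
        and Dirichlet boundary values u(0)=0, u(1)=1."""
        r = np.empty_like(u)
        r[0], r[-1] = u[0], u[-1] - 1.0
        r[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2] - 0.1 * u[1:-1] ** 3
        return r

    u0 = np.linspace(0.0, 1.0, 50)                       # initial guess
    solution = newton_krylov(residual, u0, method="lgmres", verbose=False)
    print("max residual:", np.abs(residual(solution)).max())
    ```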

  17. Development of the Landsat Data Continuity Mission Cloud Cover Assessment Algorithms

    USGS Publications Warehouse

    Scaramuzza, Pat; Bouchard, M.A.; Dwyer, J.L.

    2012-01-01

    The upcoming launch of the Operational Land Imager (OLI) will start the next era of the Landsat program. However, the automated cloud-cover assessment (CCA) algorithm used on Landsat 7 (ACCA) requires a thermal band and is thus not suited for OLI. There will be a thermal instrument on the Landsat Data Continuity Mission (LDCM), the Thermal Infrared Sensor, which may not be available during all OLI collections. This illustrates a need for CCA for LDCM in the absence of thermal data. To research possibilities for full-resolution OLI cloud assessment, a global data set of 207 Landsat 7 scenes with manually generated cloud masks was created. It was used to evaluate the ACCA algorithm, showing that the algorithm correctly classified 79.9% of a standard test subset of 3.95 x 10^9 pixels. The data set was also used to develop and validate two successor algorithms for use with OLI data: one derived from an off-the-shelf machine learning package and one based on ACCA but enhanced by a simple neural network. These comprehensive CCA algorithms were shown to correctly classify pixels as cloudy or clear 88.5% and 89.7% of the time, respectively.

  18. A comparison of two adaptive algorithms for the control of active engine mounts

    NASA Astrophysics Data System (ADS)

    Hillis, A. J.; Harrison, A. J. L.; Stoten, D. P.

    2005-08-01

    This paper describes work conducted in order to control automotive active engine mounts, consisting of a conventional passive mount and an internal electromagnetic actuator. Active engine mounts seek to cancel the oscillatory forces generated by the rotation of out-of-balance masses within the engine. The actuator generates a force dependent on a control signal from an algorithm implemented with a real-time DSP. The filtered-x least-mean-square (FXLMS) adaptive filter is used as a benchmark for comparison with a new implementation of the error-driven minimal controller synthesis (Er-MCSI) adaptive controller. Both algorithms are applied to an active mount fitted to a saloon car equipped with a four-cylinder turbo-diesel engine, and have no a priori knowledge of the system dynamics. The steady-state and transient performance of the two algorithms are compared and the relative merits of the two approaches are discussed. The Er-MCSI strategy offers significant computational advantages as it requires no cancellation path modelling. The Er-MCSI controller is found to perform in a fashion similar to the FXLMS filter—typically reducing chassis vibration by 50-90% under normal driving conditions.
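
    A minimal sketch of the filtered-x LMS (FXLMS) update used as the benchmark above is shown below. The secondary-path model, step size, and buffer lengths are placeholders, the estimated secondary path is reused as the true path for simplicity, and the Er-MCSI controller is not shown.

    ```python
    import numpy as np

    def fxlms(reference, disturbance, s_hat, n_taps=32, mu=1e-3):
        """Filtered-x LMS active vibration control (single channel, simulation).

        reference   : reference signal correlated with the engine disturbance
        disturbance : vibration that would be measured with no control action
        s_hat       : FIR estimate of the secondary (actuator-to-error-sensor) path,
                      also used here as the true secondary path for simplicity
        """
        w = np.zeros(n_taps)                                   # adaptive controller taps
        x_buf = np.zeros(n_taps)                               # reference history
        xf_buf = np.zeros(n_taps)                              # filtered-x history
        sx_buf = np.zeros(len(s_hat))                          # reference history for s_hat
        y_buf = np.zeros(len(s_hat))                           # actuator output history
        errors = np.zeros(len(reference))
        for n in range(len(reference)):
            x_buf = np.roll(x_buf, 1); x_buf[0] = reference[n]
            sx_buf = np.roll(sx_buf, 1); sx_buf[0] = reference[n]
            xf_buf = np.roll(xf_buf, 1); xf_buf[0] = s_hat @ sx_buf   # x filtered by s_hat
            y = w @ x_buf                                      # controller output
            y_buf = np.roll(y_buf, 1); y_buf[0] = y
            errors[n] = disturbance[n] + s_hat @ y_buf         # residual at the error sensor
            w -= mu * errors[n] * xf_buf                       # FXLMS weight update
        return w, errors
    ```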

  19. Activities to Encourage Speech and Language Development

    MedlinePlus

  20. Millimeter-Wave Imaging Radiometer (MIR) Data Processing and Development of Water Vapor Retrieval Algorithms

    NASA Technical Reports Server (NTRS)

    Chang, L. Aron

    1998-01-01

    This document is the final report of the Millimeter-wave Imaging Radiometer (MIR) Data Processing and Development of Water Vapor Retrieval Algorithms task. Volumes of radiometric data have been collected using airborne MIR measurements during a series of field experiments since May 1992. Calibrated brightness temperature data in the MIR channels are now available for studies of various hydrological parameters of the atmosphere and Earth's surface. Water vapor retrieval algorithms using multichannel MIR data as input were developed for the profiling of atmospheric humidity. The retrieval algorithms were also extended to three-dimensional mapping of the moisture field using the continuous observations provided by the airborne MIR or the spaceborne SSM/T-2 sensor. Validation studies for the water vapor retrieval were carried out through intercomparison of collocated and concurrent measurements from different instruments, including lidars and radiosondes. The developed MIR water vapor retrieval algorithm is capable of humidity profiling under meteorological conditions ranging from a clear column to moderately cloudy sky. Simulated water vapor retrieval studies using extended microwave channels near the 183 and 557 GHz strong absorption lines indicate the feasibility of humidity profiling up to layers in the upper troposphere and improved overall vertical resolution through the atmosphere.

  1. SPHERES as Formation Flight Algorithm Development and Validation Testbed: Current Progress and Beyond

    NASA Technical Reports Server (NTRS)

    Kong, Edmund M.; Saenz-Otero, Alvar; Nolet, Simon; Berkovitz, Dustin S.; Miller, David W.; Sell, Steve W.

    2004-01-01

    The MIT-SSL SPHERES testbed provides a facility for the development of algorithms necessary for the success of Distributed Satellite Systems (DSS). The initial development contemplated formation flight and docking control algorithms; SPHERES now supports the study of metrology, control, autonomy, artificial intelligence, and communications algorithms and their effects on DSS projects. To support this wide range of topics, the SPHERES design contemplated the need to support multiple researchers, as reflected in both the hardware and software designs. The SPHERES operational plan further facilitates the development of algorithms by multiple researchers, while the operational locations incrementally increase the ability of the tests to operate in a representative environment. In this paper, an overview of the SPHERES testbed is first presented. The SPHERES testbed serves as a model of the design philosophies that allow for the various research efforts being carried out on such a facility. The implementation of these philosophies is further highlighted in the three programs currently scheduled for testing onboard the International Space Station (ISS) and the three proposed for a re-flight mission: Mass Property Identification, Autonomous Rendezvous and Docking, and TPF Multiple Spacecraft Formation Flight in the first flight, and Precision Optical Pointing, Tethered Formation Flight, and Mars Orbit Sample Retrieval for the re-flight mission.

  2. An Improved Greedy Search Algorithm for the Development of a Phonetically Rich Speech Corpus

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-Song; Nakamura, Satoshi

    An efficient way to develop large-scale speech corpora is to collect phonetically rich ones that have high coverage of phonetic contextual units. The sentence set, usually called the minimum set, should have a small text size in order to reduce the collection cost. It can be selected by a greedy search algorithm from a large mother text corpus. With the inclusion of more and more phonetic contextual effects, the number of distinct phonetic contextual units increases dramatically, making the search a nontrivial issue. In order to improve the search efficiency, we previously proposed a so-called least-to-most-ordered greedy search based on the conventional algorithms. This paper evaluates these algorithms in order to show their different characteristics. The experimental results show that the least-to-most-ordered methods achieve smaller objective sets at significantly less computation time than the conventional ones. The algorithm has already been applied to the development of a number of speech corpora, including ATRPTH, a large-scale phonetically rich Chinese speech corpus that played an important role in developing our multi-language translation system.
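
    The conventional greedy selection that the paper builds on amounts to a set-cover heuristic: repeatedly pick the sentence that adds the most not-yet-covered phonetic units. A minimal sketch of that baseline is shown below with a toy, hypothetical diphone corpus; the least-to-most ordering refinement described in the paper is not reproduced.

    ```python
    def greedy_minimum_set(sentences, units_of):
        """Select a small sentence set covering all phonetic contextual units.

        sentences : list of sentence ids
        units_of  : dict mapping sentence id -> set of phonetic units it contains
        """
        uncovered = set().union(*units_of.values())
        selected = []
        while uncovered:
            # pick the sentence contributing the most not-yet-covered units
            best = max(sentences, key=lambda s: len(units_of[s] & uncovered))
            gain = units_of[best] & uncovered
            if not gain:
                break                      # remaining units cannot be covered
            selected.append(best)
            uncovered -= gain
        return selected

    # Toy usage with hypothetical diphone sets:
    corpus = {"s1": {"a+b", "b+c"}, "s2": {"b+c", "c+d"}, "s3": {"a+b", "c+d", "d+e"}}
    print(greedy_minimum_set(list(corpus), corpus))
    ```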

  3. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    NASA Technical Reports Server (NTRS)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience ran only on an Xbox 360. One of the desires was to take this experience and turn it into something that can be put on NASA's educational site for anyone to access. The current code works in the Unity game engine, which has cross-platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project could not be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.

  4. Utilization of Ancillary Data Sets for Conceptual SMAP Mission Algorithm Development and Product Generation

    NASA Technical Reports Server (NTRS)

    O'Neill, P.; Podest, E.

    2011-01-01

    The planned Soil Moisture Active Passive (SMAP) mission is one of the first Earth observation satellites being developed by NASA in response to the National Research Council's Decadal Survey, Earth Science and Applications from Space: National Imperatives for the Next Decade and Beyond [1]. Scheduled to launch late in 2014, the proposed SMAP mission would provide high resolution and frequent revisit global mapping of soil moisture and freeze/thaw state, utilizing enhanced Radio Frequency Interference (RFI) mitigation approaches to collect new measurements of the hydrological condition of the Earth's surface. The SMAP instrument design incorporates an L-band radar (3 km) and an L-band radiometer (40 km) sharing a single 6-meter rotating mesh antenna to provide measurements of soil moisture and landscape freeze/thaw state [2]. These observations would (1) improve our understanding of linkages between the Earth's water, energy, and carbon cycles, (2) benefit many application areas including numerical weather and climate prediction, flood and drought monitoring, agricultural productivity, human health, and national security, (3) help to address priority questions on climate change, and (4) potentially provide continuity with brightness temperature and soil moisture measurements from ESA's SMOS (Soil Moisture Ocean Salinity) and NASA's Aquarius missions. In the planned SMAP mission prelaunch time frame, baseline algorithms are being developed for generating (1) soil moisture products both from radiometer measurements on a 36 km grid and from combined radar/radiometer measurements on a 9 km grid, and (2) freeze/thaw products from radar measurements on a 3 km grid. These retrieval algorithms need a variety of global ancillary data, both static and dynamic, to run the retrieval models, constrain the retrievals, and provide flags for indicating retrieval quality. The choice of which ancillary dataset to use for a particular SMAP product would be based on a number of factors

  5. Femtosecond free-electron laser x-ray diffraction data sets for algorithm development.

    PubMed

    Kassemeyer, Stephan; Steinbrener, Jan; Lomb, Lukas; Hartmann, Elisabeth; Aquila, Andrew; Barty, Anton; Martin, Andrew V; Hampton, Christina Y; Bajt, Saša; Barthelmess, Miriam; Barends, Thomas R M; Bostedt, Christoph; Bott, Mario; Bozek, John D; Coppola, Nicola; Cryle, Max; DePonte, Daniel P; Doak, R Bruce; Epp, Sascha W; Erk, Benjamin; Fleckenstein, Holger; Foucar, Lutz; Graafsma, Heinz; Gumprecht, Lars; Hartmann, Andreas; Hartmann, Robert; Hauser, Günter; Hirsemann, Helmut; Hömke, André; Holl, Peter; Jönsson, Olof; Kimmel, Nils; Krasniqi, Faton; Liang, Mengning; Maia, Filipe R N C; Marchesini, Stefano; Nass, Karol; Reich, Christian; Rolles, Daniel; Rudek, Benedikt; Rudenko, Artem; Schmidt, Carlo; Schulz, Joachim; Shoeman, Robert L; Sierra, Raymond G; Soltau, Heike; Spence, John C H; Starodub, Dmitri; Stellato, Francesco; Stern, Stephan; Stier, Gunter; Svenda, Martin; Weidenspointner, Georg; Weierstall, Uwe; White, Thomas A; Wunderer, Cornelia; Frank, Matthias; Chapman, Henry N; Ullrich, Joachim; Strüder, Lothar; Bogan, Michael J; Schlichting, Ilme

    2012-02-13

    We describe femtosecond X-ray diffraction data sets of viruses and nanoparticles collected at the Linac Coherent Light Source. The data establish the first large benchmark data sets for coherent diffraction methods freely available to the public, to bolster the development of algorithms that are essential for developing this novel approach as a useful imaging technique. Applications are 2D reconstructions, orientation classification and finally 3D imaging by assembling 2D patterns into a 3D diffraction volume.

  7. Infrared active polarimetric imaging system controlled by image segmentation algorithms: application to decamouflage

    NASA Astrophysics Data System (ADS)

    Vannier, Nicolas; Goudail, François; Plassart, Corentin; Boffety, Matthieu; Feneyrou, Patrick; Leviandier, Luc; Galland, Frédéric; Bertaux, Nicolas

    2016-05-01

    We describe an active polarimetric imager with laser illumination at 1.5 µm that can generate any illumination and analysis polarization state on the Poincaré sphere. Thanks to its full polarization agility and to image analysis of the scene with an ultrafast active-contour based segmentation algorithm, it can perform adaptive polarimetric contrast optimization. We demonstrate the capacity of this imager to detect manufactured objects in different types of environments for such applications as decamouflage and hazardous object detection. We compare two imaging modes having different numbers of polarimetric degrees of freedom and underline the characteristics that a polarimetric imager aimed at this type of application should possess.

  8. Synthetic Molecular Machines for Active Self-Assembly: Prototype Algorithms, Designs, and Experimental Study

    NASA Astrophysics Data System (ADS)

    Dabby, Nadine L.

    Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits has led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast--all while remaining functional. This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of "active self-assembly" of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology's numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules. One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved. One might think that because a system is Turing-complete, capable of computing "anything," it can do any arbitrary task. But while it can simulate any digital computational problem, there are many behaviors that are not "computations" in a classical sense, and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface. Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors "energetically incomplete" programmable

  9. Development of sensor-based nitrogen recommendation algorithms for cereal crops

    NASA Astrophysics Data System (ADS)

    Asebedo, Antonio Ray

    Nitrogen (N) management is one of the most recognizable components of farming both within and outside the world of agriculture. Interest over the past decade has greatly increased in improving N management systems for corn (Zea mays) and winter wheat (Triticum aestivum) so that they achieve high NUE and high yield while remaining environmentally sustainable. Nine winter wheat experiments were conducted across seven locations from 2011 through 2013. The objectives of this study were to evaluate the impacts of fall-winter, Feekes 4, Feekes 7, and Feekes 9 N applications on winter wheat grain yield, grain protein, and total grain N uptake. Nitrogen treatments were applied as single or split applications in the fall-winter and top-dressed in the spring at Feekes 4, Feekes 7, and Feekes 9, with applied N rates ranging from 0 to 134 kg ha-1. Results indicate that Feekes 7 and 9 N applications provide more optimal combinations of grain yield, grain protein levels, and fertilizer N recovered in the grain when compared to comparable rates of N applied in the fall-winter or at Feekes 4. Winter wheat N management studies from 2006 through 2013 were utilized to develop sensor-based N recommendation algorithms for winter wheat in Kansas. Algorithm RosieKat v2.6 was designed for multiple N application strategies and utilized N reference strips for establishing N response potential. Algorithm NRS v1.5 addressed single top-dress N applications and does not require an N reference strip. In 2013, field validations of both algorithms were conducted at eight locations across Kansas. Results show algorithm RK v2.6 consistently provided highly efficient N recommendations for improving NUE while achieving high grain yield and grain protein. Without the use of the N reference strip, NRS v1.5 performed statistically on par with the KSU soil test N recommendation in terms of grain yield but with lower applied N rates. Six corn N fertigation experiments were conducted at KSU irrigated experiment fields from 2012

  10. Review and Analysis of Algorithmic Approaches Developed for Prognostics on CMAPSS Dataset

    NASA Technical Reports Server (NTRS)

    Ramasso, Emmanuel; Saxena, Abhinav

    2014-01-01

    Benchmarking of prognostic algorithms has been challenging due to limited availability of common datasets suitable for prognostics. In an attempt to alleviate this problem several benchmarking datasets have been collected by NASA's prognostic center of excellence and made available to the Prognostics and Health Management (PHM) community to allow evaluation and comparison of prognostics algorithms. Among those datasets are five C-MAPSS datasets that have been extremely popular due to their unique characteristics making them suitable for prognostics. The C-MAPSS datasets pose several challenges that have been tackled by different methods in the PHM literature. In particular, management of high variability due to sensor noise, effects of operating conditions, and presence of multiple simultaneous fault modes are some factors that have great impact on the generalization capabilities of prognostics algorithms. More than 70 publications have used the C-MAPSS datasets for developing data-driven prognostic algorithms. The C-MAPSS datasets are also shown to be well-suited for development of new machine learning and pattern recognition tools for several key preprocessing steps such as feature extraction and selection, failure mode assessment, operating conditions assessment, health status estimation, uncertainty management, and prognostics performance evaluation. This paper summarizes a comprehensive literature review of publications using C-MAPSS datasets and provides guidelines and references to further usage of these datasets in a manner that allows clear and consistent comparison between different approaches.

  11. DEVELOPMENT OF PROCESSING ALGORITHMS FOR OUTLIERS AND MISSING VALUES IN CONSTANT OBSERVATION DATA OF TRAFFIC VOLUMES

    NASA Astrophysics Data System (ADS)

    Hashimoto, Hiroyoshi; Kawano, Tomohiko; Momma, Toshiyuki; Uesaka, Katsumi

    The Ministry of Land, Infrastructure, Transport and Tourism of Japan is going to make maximum use of vehicle detectors installed on national roads around the country and to efficiently gather traffic volume data over wide areas by estimating traffic volumes within adjacent road sections based on the constant observation data obtained from the vehicle detectors. Efficient processing of outliers and missing values in constant observation data is needed in this process. Focusing on the processing of outlier and missing values, the authors have developed a series of algorithms to calculate hourly traffic volumes with the required accuracy based on measurement data obtained from vehicle detectors. The algorithms have been put to practical use. The main characteristic of these algorithms is that they use data accumulated in the past as well as data from constant observation devices in adjacent road sections. This paper describes the contents of the developed algorithms and clarifies their accuracy using actual observation data and by making comparisons with other methods.
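
    The sketch below illustrates the general idea only: screen an hourly volume against historical data for the same section, and when the value is missing or anomalous, estimate it from the adjacent section scaled by the historical ratio between the two sections. The z-score screen and the variable names are assumptions; the published algorithms and their accuracy criteria are not reproduced here.

```python
import numpy as np

def clean_hourly_volume(current, history_same_hour, adjacent_current,
                        adjacent_history_same_hour, z_max=3.0):
    """Return an hourly traffic volume with outliers/missing values replaced.

    current  -- observed volume for this section and hour (np.nan if missing)
    history_same_hour -- past volumes for the same section, hour and day type
    adjacent_current  -- observed volume on the adjacent section, same hour
    adjacent_history_same_hour -- past volumes on the adjacent section
    """
    hist = np.asarray(history_same_hour, dtype=float)
    mu, sigma = hist.mean(), hist.std(ddof=1)

    is_missing = np.isnan(current)
    is_outlier = (not is_missing) and sigma > 0 and abs(current - mu) > z_max * sigma

    if not (is_missing or is_outlier):
        return float(current)          # observation accepted as-is

    # Estimate from the adjacent section, scaled by the historical ratio
    # between the two sections (falls back to this section's own history).
    adj_hist = np.asarray(adjacent_history_same_hour, dtype=float)
    if not np.isnan(adjacent_current) and adj_hist.mean() > 0:
        return float(adjacent_current * hist.mean() / adj_hist.mean())
    return float(mu)

print(clean_hourly_volume(np.nan, [820, 860, 790, 845], 910, [880, 930, 860, 905]))
```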

  12. jClustering, an Open Framework for the Development of 4D Clustering Algorithms

    PubMed Central

    Mateos-Pérez, José María; García-Villalba, Carmen; Pascau, Javier; Desco, Manuel; Vaquero, Juan J.

    2013-01-01

    We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary. PMID:23990913

  13. Space-based Doppler lidar sampling strategies: Algorithm development and simulated observation experiments

    NASA Technical Reports Server (NTRS)

    Emmitt, G. D.; Wood, S. A.; Morris, M.

    1990-01-01

    Lidar Atmospheric Wind Sounder (LAWS) Simulation Models (LSM) were developed to evaluate the potential impact of global wind observations on the basic understanding of the Earth's atmosphere and on the predictive skills of current forecast models (GCM and regional scale). Fully integrated top to bottom LAWS Simulation Models for global and regional scale simulations were developed. The algorithm development incorporated the effects of aerosols, water vapor, clouds, terrain, and atmospheric turbulence into the models. Other additions include a new satellite orbiter, signal processor, line of sight uncertainty model, new Multi-Paired Algorithm and wind error analysis code. An atmospheric wind field library containing control fields, meteorological fields, phenomena fields, and new European Center for Medium Range Weather Forecasting (ECMWF) data was also added. The LSM was used to address some key LAWS issues and trades such as accuracy and interpretation of LAWS information, data density, signal strength, cloud obscuration, and temporal data resolution.

  14. Neural Networks algorithm development for polarimetric observations of above cloud aerosols (ACA)

    NASA Astrophysics Data System (ADS)

    Segal-Rosenhaimer, M.; Knobelspiesse, K. D.; Redemann, J.

    2015-12-01

    The direct and indirect radiative effects of above-cloud aerosols (ACA) are still highly uncertain in current climate assessments. Much of this uncertainty is observational, as most orbital remote sensing algorithms were not designed to simultaneously retrieve aerosol and cloud optical properties. Recently, several algorithms have been developed to infer ACA loading and properties using passive, single-view-angle instruments (OMI, MODIS). Yet these are not operational and still require rigorous validation. Multiangle polarimetric instruments like POLDER and RSP show promise for detection and quantification of ACA. However, the retrieval methods for polarimetric measurements entail some drawbacks, such as assuming homogeneity of the underlying cloud field for POLDER and requiring retrieved cloud effective radii as an input to the RSP scheme. In addition, these methods require computationally expensive RT calculations, which precludes real-time polarimetric data analysis during field campaigns. Here we describe the development of a new algorithm to retrieve atmospheric aerosol and cloud optical properties from observations by polarimetrically sensitive instruments using Neural Networks (NN), which are computationally efficient and fast enough to produce results in the field. This algorithm is specific to ACA and was developed primarily to support the ORACLES (ObseRvations of Aerosols above CLouds and their intEractionS) campaign, which will acquire measurements of ACA in the South-East Atlantic Ocean during episodes of absorbing aerosols above stratocumulus cloud decks in 2016-18. The algorithm will use a trained NN scheme for concurrent cloud and aerosol microphysical property retrievals that will be input to an optimal estimation method. We will discuss the overall retrieval scheme, focusing on the input variables. Specifically, we use principal component analysis (PCA) to examine the information content available to describe the simulated cloud scenes (with adequate noise
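
    A minimal sketch of a PCA-based information-content check of the kind mentioned above, using synthetic stand-in "scenes" rather than real simulated polarimetric data; the latent-factor structure, channel count, and noise level are arbitrary assumptions.

```python
import numpy as np

# Hypothetical stand-in for simulated multi-angle polarized reflectances:
# rows are cloud/aerosol scenes, columns are viewing-angle/wavelength channels.
rng = np.random.default_rng(0)
n_scenes, n_channels = 500, 60
latent = rng.normal(size=(n_scenes, 4))          # 4 underlying physical drivers
mixing = rng.normal(size=(4, n_channels))
signals = latent @ mixing + 0.05 * rng.normal(size=(n_scenes, n_channels))  # noise

# PCA via SVD of the mean-centred signal matrix.
centred = signals - signals.mean(axis=0)
_, s, _ = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Number of components needed to capture 99% of the variance is a rough
# proxy for the information content available to the neural network inputs.
n_components = int(np.searchsorted(np.cumsum(explained), 0.99) + 1)
print(f"{n_components} principal components explain 99% of the variance")
```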

  15. A novel fair active queue management algorithm based on traffic delay jitter

    NASA Astrophysics Data System (ADS)

    Wang, Xue-Shun; Yu, Shao-Hua; Dai, Jin-You; Luo, Ting

    2009-11-01

    In order to guarantee the quantity of data traffic delivered in the network, a congestion control strategy is adopted. Based on a study of many active queue management (AQM) algorithms, this paper proposes a novel active queue management algorithm named JFED. JFED can stabilize the queue length at a desirable level by adjusting the output traffic rate and calculating the packet drop probability from the buffer queue length and traffic jitter, and it accommodates bursty packet traffic by taking packet delay jitter into account, so that it can better serve media-data traffic flows. JFED imposes effective punishment on non-responsive flows using a fully stateless method. To verify the performance of JFED, it is implemented in NS2 and compared with RED and CHOKe with respect to different performance metrics. Simulation results show that the proposed JFED algorithm outperforms RED and CHOKe in stabilizing the instantaneous queue length and in fairness. It is also shown that JFED enables the link capacity to be fully utilized by stabilizing the queue length at a desirable level, while not incurring an excessive packet loss ratio.
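
    The exact JFED drop-probability formula is not given in the abstract; the sketch below only illustrates the general idea of a drop probability that ramps up with queue length (RED-like) and is temporarily relaxed for bursty traffic identified by delay jitter. All thresholds and parameter names are hypothetical.

```python
def drop_probability(queue_len, target_len, max_len, jitter, jitter_ref,
                     p_max=0.1):
    """Illustrative AQM drop probability (not the published JFED formula).

    Probability grows as the queue exceeds its target, and is relaxed for
    flows whose delay jitter indicates short-lived bursts rather than
    persistent congestion.
    """
    if queue_len <= target_len:
        return 0.0
    if queue_len >= max_len:
        return 1.0
    # Linear ramp between the target and the hard limit (RED-like).
    p = p_max * (queue_len - target_len) / (max_len - target_len)
    # Bursty traffic (high short-term jitter) gets a temporary reprieve.
    burst_relief = min(1.0, jitter / jitter_ref) if jitter_ref > 0 else 0.0
    return p * (1.0 - 0.5 * burst_relief)

print(drop_probability(queue_len=80, target_len=50, max_len=150,
                       jitter=4.0, jitter_ref=10.0))
```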

  16. Navy GTE seal development activity

    NASA Technical Reports Server (NTRS)

    Grala, Carl P.

    1993-01-01

    Under the auspices of the Integrated High Performance Turbine Engine Technology Initiative, the Naval Air Warfare Center conducts advanced development programs for demonstration in the next generation of air-breathing propulsion systems. Among the target technologies are gas path and lube oil seals. Two development efforts currently being managed by NAWCAD are the High Performance Compressor Discharge Film-Riding Face Seal and the Subsonic Core High Speed Air/Oil Seal. The High Performance Compressor Discharge Film-Riding Face Seal Program aims at reducing parasitic leakage through application of a film-riding face seal concept to the compressor discharge location of a Phase 2 IHPTET engine. An order-of-magnitude leakage reduction relative to current labyrinth seal configurations is expected. Performance goals for these seals are (1) 1200 F air temperature, (2) 800 feet-per-second surface velocity, and (3) 600 psi differential pressure. The two designs chosen for fabrication and rig test are a spiral groove and a Rayleigh step seal. Rig testing is currently underway. The Subsonic Core High Speed Air/Oil Seal Program is developing shaft-to-ground seals for next-generation propulsion systems that will minimize leakage and provide full life. Significantly higher rotor speeds and temperatures will be experienced. Technologies being exploited include hydrodynamic lift-assist features, ultra-lightweight designs, and improved cooling schemes. Parametric testing has been completed; a final seal design is entering the endurance test phase.

  17. Development of Outlier detection Algorithm Applicable to a Korean Surge-Gauge

    NASA Astrophysics Data System (ADS)

    Lee, Jun-Whan; Park, Sun-Cheon; Lee, Won-Jin; Lee, Duk Kee

    2016-04-01

    The Korea Meteorological Administration (KMA) is operating a surge-gauge (aerial ultrasonic type) at Ulleung-do to monitor tsunamis, and the National Institute of Meteorological Sciences (NIMS) of the KMA is developing a tsunami detection and observation system using this surge-gauge. Outliers resulting from transmission problems and from extreme events that change the water level temporarily are among the most common problems in tsunami detection. Unlike a spike, multipoint outliers are difficult to detect clearly. Most of the previous studies used statistical measures or signal-processing methods such as wavelet transforms and filters to detect multipoint outliers, and relied on a continuous dataset. However, as the focus has moved to near real-time operation with a dataset that contains gaps, these methods are no longer tenable. In this study, we developed an outlier detection algorithm applicable to the Ulleung-do surge gauge, where both multipoint outliers and missing data exist. Although it uses only 9-point data and two arithmetic operations (addition and subtraction), the newly developed keeping method makes the algorithm not only simple and fast but also effective on a non-continuous dataset. We calibrated 17 thresholds and conducted performance tests using three months of data from the Ulleung-do surge gauge. The results show that the newly developed despiking algorithm performs reliably in alleviating the outlier detection problem.
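
    A minimal sketch of a gap-tolerant, keeping-based despiking pass in the spirit described above. The single threshold, window length, and keeping rule here are illustrative assumptions, not the 17 calibrated KMA thresholds.

```python
import numpy as np

def despike(levels, threshold_cm=30.0, window=9):
    """Flag multipoint outliers in a gappy water-level series (a sketch only).

    levels -- 1-D array of water levels in cm, np.nan where data are missing
    """
    levels = np.asarray(levels, dtype=float)
    flags = np.zeros(levels.size, dtype=bool)
    half = window // 2
    kept = np.nan                                   # last accepted value ("keeping")
    for i, x in enumerate(levels):
        if np.isnan(x):
            continue                                # gaps are simply skipped
        nbrs = levels[max(0, i - half):i + half + 1]
        nbrs = nbrs[~np.isnan(nbrs)]
        ref = kept if not np.isnan(kept) else np.median(nbrs)
        # The comparison itself needs only additions and subtractions.
        if abs(x - ref) > threshold_cm:
            flags[i] = True                         # outlier: do not update "kept"
        else:
            kept = x
    return flags

data = [102, 103, np.nan, 101, 180, 182, 104, 103, np.nan, 102]
print(despike(data))   # flags the two-sample (multipoint) outlier at 180, 182
```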

  18. Sensitivity of cloud retrieval statistics to algorithm choices: Lessons learned from MODIS product development

    NASA Astrophysics Data System (ADS)

    Platnick, Steven; Ackerman, Steven; King, Michael; Zhang, Zhibo; Wind, Galina

    2013-04-01

    Cloud detection algorithms search for measurement signatures that differentiate a cloud-contaminated or "not-clear" pixel from the clear-sky background. These signatures can be spectral, textural or temporal in nature. The magnitude of the difference between the cloud and the background must exceed a threshold value for the pixel to be classified as having a not-clear FOV. All detection algorithms employ multiple tests ranging across some portion of the solar reflectance and/or infrared spectrum. However, a cloud is not a single, uniform object, but rather has a distribution of optical thickness and morphology. As a result, problems can arise when the distributions of cloud and clear-sky background characteristics overlap, making some test results indeterminate and/or leading to some amount of detection misclassification. Further, imager cloud retrieval statistics are highly sensitive to how a pixel identified as not-clear by a cloud mask is determined to be useful for cloud-top and optical retrievals based on 1-D radiative models. This presentation provides an overview of the different "choices" algorithm developers make in cloud detection algorithms and the impact on regional and global cloud amounts and fractional coverage, cloud type and property distributions. Lessons learned over the course of the MODIS cloud product development history are discussed. As an example, we will focus on the 1 km MODIS Collection 5 cloud optical retrieval algorithm (product MOD06/MYD06 for Terra and Aqua, respectively), which removed pixels associated with cloud edges as defined by immediate adjacency to clear-FOV MODIS cloud mask (MOD35/MYD35) pixels, as well as ocean pixels with partly cloudy elements in the 250 m MODIS cloud mask - part of the so-called Clear Sky Restoral algorithm. The Collection 6 algorithm attempts retrievals for these two types of partly cloudy pixel populations, but allows a user to isolate or filter out the populations. Retrieval sensitivities for these

  19. MEMS-based sensing and algorithm development for fall detection and gait analysis

    NASA Astrophysics Data System (ADS)

    Gupta, Piyush; Ramirez, Gabriel; Lie, Donald Y. C.; Dallas, Tim; Banister, Ron E.; Dentino, Andrew

    2010-02-01

    Falls by the elderly are highly detrimental to health, frequently resulting in injury, high medical costs, and even death. Using a MEMS-based sensing system, algorithms are being developed for detecting falls and monitoring the gait of elderly and disabled persons. In this study, wireless sensors utilizing Zigbee protocols were incorporated into planar shoe insoles and a waist-mounted device. The insole contains four sensors to measure pressure applied by the foot. A MEMS-based tri-axial accelerometer is embedded in the insert and a second one is utilized by the waist-mounted device. The primary fall detection algorithm is derived from the waist accelerometer. The differential acceleration is calculated from samples received in 1.5-s time intervals. This differential acceleration provides quantification via an energy index, from which one may characterize gait and identify fall events. Once a pre-determined index threshold is exceeded, the algorithm will classify an event as a fall or a stumble. The secondary algorithm is derived from frequency analysis techniques. The analysis consists of wavelet transforms conducted on the waist accelerometer data. The insole pressure data are then used to highlight discrepancies in the transforms, providing more accurate data for classifying gait and/or detecting falls. The range of the transform amplitude in the fourth iteration of a Daubechies-6 transform was found sufficient to detect and classify fall events.
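
    The sketch below illustrates the primary algorithm's idea only: sum the squared differential acceleration over 1.5-second windows to form an energy index and compare it against thresholds. The sampling rate and the fall/stumble thresholds are hypothetical, not the study's values.

```python
import numpy as np

def energy_index(acc_xyz, fs_hz=50, window_s=1.5):
    """Differential-acceleration energy index per window (illustrative only).

    acc_xyz -- (N, 3) array of waist tri-axial accelerations in g
    """
    acc = np.asarray(acc_xyz, dtype=float)
    diff = np.diff(acc, axis=0)                       # differential acceleration
    energy = np.sum(diff**2, axis=1)                  # per-sample energy
    win = int(window_s * fs_hz)
    n_win = energy.size // win
    return energy[:n_win * win].reshape(n_win, win).sum(axis=1)

def classify(index_value, stumble_thr=0.5, fall_thr=2.0):
    # Thresholds here are hypothetical placeholders, not published values.
    if index_value >= fall_thr:
        return "fall"
    if index_value >= stumble_thr:
        return "stumble"
    return "normal gait"

# Simulated ~3 s of quiet standing with an impact-like transient in window 2.
rng = np.random.default_rng(1)
acc = rng.normal(0.0, 0.01, size=(160, 3))
acc[100:104] += np.array([1.5, -1.2, 2.0])
for i, e in enumerate(energy_index(acc)):
    print(f"window {i}: index={e:.2f} -> {classify(e)}")
```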

  20. A Focus Group on Dental Pain Complaints with General Medical Practitioners: Developing a Treatment Algorithm.

    PubMed

    Carter, Ava Elizabeth; Carter, Geoff; Abbey, Robyn

    2016-01-01

    Objective. The differential diagnosis of pain in the mouth can be challenging for general medical practitioners (GMPs) as many different dental problems can present with similar signs and symptoms. This study aimed to create a treatment algorithm for GMPs to effectively and appropriately refer the patients and prescribe antibiotics. Design. The study consisted of qualitative focus group discussions. Setting and Subjects. Groups of GMPs within the Gold Coast and Brisbane urban and city regions. Outcome Measures. Content thematically analysed and treatment algorithm developed. Results. There were 5 focus groups with 8-9 participants per group. Addressing whether antibiotics should be given to patients with dental pain was considered very important to GMPs to prevent overtreatment and the creation of antibiotic resistance. Many practitioners were unsure of what the different forms of dental pain represent. 90% of the practitioners involved agreed that the treatment algorithm was useful to daily practice. Conclusion. Common dental complaints and infections are seldom surgical emergencies but can result in prolonged appointments for those GMPs who do not regularly deal with these issues. The treatment algorithm for referral processes and prescriptions was deemed easily downloadable, simple to interpret, and detailed but succinct enough for clinical use by GMPs. PMID:27462469

  1. Development of Algorithms for Control of Humidity in Plant Growth Chambers

    NASA Technical Reports Server (NTRS)

    Costello, Thomas A.

    2003-01-01

    Algorithms were developed to control humidity in plant growth chambers used for research on bioregenerative life support at Kennedy Space Center. The algorithms used the computed water vapor pressure (based on measured air temperature and relative humidity) as the process variable, with time-proportioned outputs to operate the humidifier and de-humidifier. Algorithms were based upon proportional-integral-derivative (PID) and Fuzzy Logic schemes and were implemented using I/O Control software (OPTO-22) to define and download the control logic to an autonomous programmable logic controller (PLC; Ultimate Ethernet Brain and assorted input-output modules, OPTO-22), which performed the monitoring and control logic processing, as well as the physical control of the devices that effected the targeted environment in the chamber. During limited testing, the PLCs successfully implemented the intended control schemes and attained a control resolution for humidity of less than 1%. The algorithms have potential to be used not only with autonomous PLCs but also within network-based supervisory control programs. This report documents unique control features that were implemented within the OPTO-22 framework and makes recommendations regarding future uses of the hardware and software for biological research by NASA.
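
    A minimal sketch of the PID variant with vapor pressure as the process variable and time-proportioned outputs. The Magnus saturation formula, the control period, and all gains are assumptions, not the values used in the Kennedy Space Center chambers.

```python
import math

def vapor_pressure_kpa(temp_c, rh_percent):
    """Actual water vapor pressure from air temperature and relative humidity,
    using the Magnus approximation (an assumption; the report does not state
    which saturation formula was used)."""
    es = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))   # saturation, kPa
    return es * rh_percent / 100.0

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def time_proportioned_outputs(control, period_s=10.0):
    """Map a signed control signal to humidifier / dehumidifier on-times."""
    on_time = min(abs(control), 1.0) * period_s
    return (on_time, 0.0) if control > 0 else (0.0, on_time)

pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=5.0)
setpoint = vapor_pressure_kpa(23.0, 70.0)            # target chamber condition
measured = vapor_pressure_kpa(23.0, 63.0)            # current condition
print(time_proportioned_outputs(pid.step(setpoint, measured)))
```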

  2. A Focus Group on Dental Pain Complaints with General Medical Practitioners: Developing a Treatment Algorithm

    PubMed Central

    Carter, Geoff; Abbey, Robyn

    2016-01-01

    Objective. The differential diagnosis of pain in the mouth can be challenging for general medical practitioners (GMPs) as many different dental problems can present with similar signs and symptoms. This study aimed to create a treatment algorithm for GMPs to effectively and appropriately refer the patients and prescribe antibiotics. Design. The study consisted of qualitative focus group discussions. Setting and Subjects. Groups of GMPs within the Gold Coast and Brisbane urban and city regions. Outcome Measures. Content thematically analysed and treatment algorithm developed. Results. There were 5 focus groups with 8-9 participants per group. Addressing whether antibiotics should be given to patients with dental pain was considered very important to GMPs to prevent overtreatment and the creation of antibiotic resistance. Many practitioners were unsure of what the different forms of dental pain represent. 90% of the practitioners involved agreed that the treatment algorithm was useful to daily practice. Conclusion. Common dental complaints and infections are seldom surgical emergencies but can result in prolonged appointments for those GMPs who do not regularly deal with these issues. The treatment algorithm for referral processes and prescriptions was deemed easily downloadable, simple to interpret, and detailed but succinct enough for clinical use by GMPs. PMID:27462469

  3. Development of AN Algorithmic Procedure for the Detection of Conjugate Fragments

    NASA Astrophysics Data System (ADS)

    Filippas, D.; Georgopoulos, A.

    2013-07-01

    The rapid development of Computer Vision has contributed to the widening of the techniques and methods utilized by archaeologists for the digitization and reconstruction of historic objects by automating the matching of fragments, small or large. This paper proposes a novel method for the detection of conjugate fragments, based mainly on their geometry. Subsequently, the application of the Fragmatch algorithm is presented, with an extensive analysis of both of its parts: the global and the partial matching of surfaces. The method proposed is based on the comparison of vectors and surfaces, performed linearly, for simplicity and speed. A series of simulations has been performed to test the limits of the algorithm with respect to scanning noise and accuracy, the number of scan points, the wear of the surfaces, and the diversity of shapes. Problems that have been encountered during the application of these examples are interpreted, and ways of dealing with them are proposed. In addition, a practical application is presented to test the algorithm in real conditions. Finally, the key points of this work are summarized, followed by an analysis of the advantages and disadvantages of the proposed Fragmatch algorithm along with proposals for future work.

  4. Development of an Aircraft Approach and Departure Atmospheric Profile Generation Algorithm

    NASA Technical Reports Server (NTRS)

    Buck, Bill K.; Velotas, Steven G.; Rutishauser, David K. (Technical Monitor)

    2004-01-01

    In support of the NASA Virtual Airspace Modeling and Simulation (VAMS) project, an effort was initiated to develop and test techniques for extracting meteorological data from landing and departing aircraft, and for building altitude-based profiles for key meteorological parameters from these data. The generated atmospheric profiles will be used as inputs to NASA's Aircraft Vortex Spacing System (AVOSS) Prediction Algorithm (APA) for benefits and trade analysis. A Wake Vortex Advisory System (WakeVAS) is being developed to apply weather and wake prediction and sensing technologies with procedures to reduce current wake separation criteria, when safe and appropriate, to increase airport operational efficiency. The purpose of this report is to document the initial theory and design of the Aircraft Approach and Departure Atmospheric Profile Generation Algorithm.

  5. Algorithm developments for the Euler equations with calculations of transonic flows

    NASA Technical Reports Server (NTRS)

    Goorjian, Peter M.

    1987-01-01

    A new algorithm has been developed for the Euler equations that uses flux vector splitting in combination with the concept of rotating the coordinate system to the local streamwise direction. Flux vector biasing is applied along the local streamwise direction and central differencing is used transverse to the flow direction. The flux vector biasing is switched from upwind for supersonic flow to downwind-biased for subsonic flow. This switching is based on the Mach number; hence the proper domain of dependence is used in the supersonic regions and the switching occurs across shock waves. The theoretical basis and the development of the formulas for flux vector splitting are presented. Then several one-dimensional calculations are presented of steady and unsteady transonic flows, which demonstrate the stability and accuracy of the algorithm. Finally results are shown for unsteady transonic flow over an airfoil. The pressure coefficient plots show sharp transonic shock profiles, and the Mach contour plots show smoothly varying contours.
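
    For readers unfamiliar with flux vector splitting, the sketch below applies the classical Steger-Warming splitting to a first-order 1-D Sod shock-tube problem. It demonstrates splitting the flux into forward- and backward-running parts according to the signs of the eigenvalues, but it is not the rotated-coordinate, Mach-switched biasing scheme described in the abstract; grid, time step, and run length are arbitrary choices.

```python
import numpy as np

GAMMA = 1.4

def split_flux(rho, u, p, sign):
    """Steger-Warming split flux F+ (sign=+1) or F- (sign=-1) for 1-D Euler."""
    a = np.sqrt(GAMMA * p / rho)
    lam = np.array([u, u + a, u - a])
    lam_s = 0.5 * (lam + sign * np.abs(lam))          # split eigenvalues
    l1, l2, l3 = lam_s
    g = GAMMA
    f = np.empty((3,) + np.shape(rho))
    f[0] = 2 * (g - 1) * l1 + l2 + l3
    f[1] = 2 * (g - 1) * l1 * u + l2 * (u + a) + l3 * (u - a)
    f[2] = ((g - 1) * l1 * u**2 + 0.5 * l2 * (u + a)**2 + 0.5 * l3 * (u - a)**2
            + (3 - g) / (2 * (g - 1)) * (l2 + l3) * a**2)
    return rho / (2 * g) * f

def primitives(U):
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1) * (E - 0.5 * rho * u**2)
    return rho, u, p

# Sod shock tube, first-order upwind: U_i -= dt/dx * (F_{i+1/2} - F_{i-1/2})
nx, dx, dt = 200, 1.0 / 200, 0.0005
rho = np.where(np.arange(nx) < nx // 2, 1.0, 0.125)
p = np.where(np.arange(nx) < nx // 2, 1.0, 0.1)
U = np.array([rho, np.zeros(nx), p / (GAMMA - 1)])

for _ in range(300):                                   # advance to t = 0.15
    r, u, pr = primitives(U)
    Fp, Fm = split_flux(r, u, pr, +1), split_flux(r, u, pr, -1)
    interface = Fp[:, :-1] + Fm[:, 1:]                 # F_{i+1/2}
    U[:, 1:-1] -= dt / dx * (interface[:, 1:] - interface[:, :-1])

print("density range after 0.15 s:", U[0].min(), U[0].max())
```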

  6. Forecasting of the development of professional medical equipment engineering based on neuro-fuzzy algorithms

    NASA Astrophysics Data System (ADS)

    Vaganova, E. V.; Syryamkin, M. V.

    2015-11-01

    The purpose of the research is the development of evolutionary algorithms for the assessment of promising scientific directions. The present study focuses mainly on evaluating the foresight possibilities for identification of technological peaks and emerging technologies in professional medical equipment engineering in Russia and worldwide, on the basis of intellectual property items and neural network modeling. An automated information system has been developed, consisting of modules that implement various classification methods to improve forecast accuracy, together with an algorithm for constructing a neuro-fuzzy decision tree. According to the study results, modern trends in this field will focus on personalized smart devices, telemedicine, biomonitoring, and «e-Health» and «m-Health» technologies.

  7. Data and software tools for gamma radiation spectral threat detection and nuclide identification algorithm development and evaluation

    NASA Astrophysics Data System (ADS)

    Portnoy, David; Fisher, Brian; Phifer, Daniel

    2015-06-01

    The detection of radiological and nuclear threats is extremely important to national security. The federal government is spending significant resources developing new detection systems and attempting to increase the performance of existing ones. The detection of illicit radionuclides that may pose a radiological or nuclear threat is a challenging problem complicated by benign radiation sources (e.g., cat litter and medical treatments), shielding, and large variations in background radiation. Although there is a growing acceptance within the community that concentrating efforts on algorithm development (independent of the specifics of fully assembled systems) has the potential for significant overall system performance gains, there are two major hindrances to advancements in gamma spectral analysis algorithms under the current paradigm: access to data and common performance metrics along with baseline performance measures. Because many of the signatures collected during performance measurement campaigns are classified, dissemination to algorithm developers is extremely limited. This leaves developers no choice but to collect their own data if they are lucky enough to have access to material and sensors. This is often combined with their own definition of metrics for measuring performance. These two conditions make it all but impossible for developers and external reviewers to make meaningful comparisons between algorithms. Without meaningful comparisons, performance advancements become very hard to achieve and (more importantly) recognize. The objective of this work is to overcome these obstacles by developing and freely distributing real and synthetically generated gamma-spectra data sets as well as software tools for performance evaluation with associated performance baselines to national labs, academic institutions, government agencies, and industry. At present, datasets for two tracks, or application domains, have been developed: one that includes temporal

  8. Spontaneous network activity and synaptic development

    PubMed Central

    Kerschensteiner, Daniel

    2014-01-01

    Throughout development, the nervous system produces patterned spontaneous activity. Research over the last two decades has revealed a core group of mechanisms that mediate spontaneous activity in diverse circuits. Many circuits engage several of these mechanisms sequentially to accommodate developmental changes in connectivity. In addition to shared mechanisms, activity propagates through developing circuits and neuronal pathways (i.e. linked circuits in different brain areas) in stereotypic patterns. Increasing evidence suggests that spontaneous network activity shapes synaptic development in vivo. Variations in activity-dependent plasticity may explain how similar mechanisms and patterns of activity can be employed to establish diverse circuits. Here, I will review common mechanisms and patterns of spontaneous activity in emerging neural networks and discuss recent insights into their contribution to synaptic development. PMID:24280071

  9. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1997-01-01

    Significant accomplishments made during the present reporting period are as follows: (1) We developed a new method for identifying the presence of absorbing aerosols and, simultaneously, performing atmospheric correction. The algorithm consists of optimizing the match between the top-of-atmosphere radiance spectrum and the result of models of both the ocean and aerosol optical properties; (2) We developed an algorithm for providing an accurate computation of the diffuse transmittance of the atmosphere given an aerosol model. A module for inclusion into the MODIS atmospheric-correction algorithm was completed; (3) We acquired reflectance data for oceanic whitecaps during a cruise on the RV Ka'imimoana in the Tropical Pacific (Manzanillo, Mexico to Honolulu, Hawaii). The reflectance spectrum of whitecaps was found to be similar to that for breaking waves in the surf zone measured by Frouin, Schwindling, and Deschamps; however, the drop in augmented reflectance from 670 to 860 nm was not as great, and the magnitude of the augmented reflectance was significantly less than expected; and (4) We developed a method for the approximate correction for the effects of the MODIS polarization sensitivity. The correction, however, requires adequate characterization of the polarization sensitivity of MODIS prior to launch.

  10. Estimating SWE globally using AMSR-E observations: validation and algorithm development

    NASA Astrophysics Data System (ADS)

    Kelly, R. E.; Foster, J. L.; Hall, D. K.; Tedesco, M.

    2007-12-01

    The Advanced Microwave Scanning Radiometer - EOS (AMSR-E) instrument aboard NASA's Aqua satellite mission is used to estimate daily, five-day maximum, and monthly average snow water equivalent (SWE) on a polar-projected 25 x 25 km grid. The five-day and monthly products are based on the daily product, which uses microwave brightness temperature observations at 10, 18, 36 and 89 GHz, two MODIS land cover products, and a snow density "climatology" based on Russian and Canadian data. The current version (B07) of the product implements a dynamic algorithm that evolved from a static approach based on the method of Chang et al. (1987). Retrievals are performed at the native spatial resolution to reflect the measurement "process". This paper describes validation and algorithm developments to the product based on an assessment of the B07 version. New developments to the algorithm that improve the detection capability and that better correct for vegetation are described. The paper also identifies a pathway that will enable the product to reach level 1 validation status.
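
    For orientation, the sketch below shows a static, Chang-et-al.-style brightness-temperature-difference retrieval of the kind the dynamic algorithm evolved from. The 4.8 mm/K coefficient and the crude forest-fraction correction are the commonly cited textbook forms and are assumptions here, not the operational B07 code.

```python
def swe_mm(tb_18h_k, tb_37h_k, forest_fraction=0.0, coeff_mm_per_k=4.8):
    """Static SWE estimate from 18 and 37 GHz horizontally polarized
    brightness temperatures (illustrative sketch only)."""
    dt = max(tb_18h_k - tb_37h_k, 0.0)      # scattering signal of dry snow
    swe = coeff_mm_per_k * dt
    if forest_fraction > 0:                 # crude vegetation compensation
        swe /= (1.0 - min(forest_fraction, 0.8))
    return swe

print(swe_mm(tb_18h_k=245.0, tb_37h_k=225.0, forest_fraction=0.3))  # ~137 mm
```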

  11. Development of a Distributed Routing Algorithm for a Digital Telephony Switch.

    NASA Astrophysics Data System (ADS)

    Al-Wakeel, Sami Saleh

    This research has developed a distributed routing algorithm and distributed control software to be implemented in modular digital telephone switching systems. The routing algorithm allows the routing information and the computer calculations for determining the route of switch calls to be divided evenly among the individual units of the digital switch, thus eliminating the need for complex centralized routing logic. In addition, a "routing language" for the storage of routing information has been developed that both compresses the routing information to conserve computer memory and speeds up the search through the routing information. A fully modular microprocessor-based digital switch that takes advantage of the routing algorithm was designed. The switch design achieves several objectives, including the reduction of digital telephone switch cost by taking full advantage of VLSI technology, enabling manufacture by developing countries. By utilizing the technical advantages of the distributed routing algorithm, the modular switch can easily reach a capacity of 400,000 lines without degrading the system call processing or exceeding the system loading limits. Distributed control software was also designed to provide the main software protocols and routines necessary for a fully modular telephone switch. The design has several advantages over normal stored program control switches since it eliminates the need for centralized control software and allows the switch units to operate in any signaling environment. As a result, the possibility of total system breakdown is reduced, the switch software can be easily tested or modified, and the switch can interface with any of the currently available communication technologies, namely cable, VHF, satellite, R-1 or R-2 trunks, and trunked radio phones. A second development of this research is a mathematical scheme to evaluate the performance of microprocessor-based digital telephone switches. The scheme evaluates various

  12. Developing an Algorithm to Identify History of Cancer Using Electronic Medical Records

    PubMed Central

    Clarke, Christina L.; Feigelson, Heather S.

    2016-01-01

    Introduction/Objective: The objective of this study was to develop an algorithm to identify Kaiser Permanente Colorado (KPCO) members with a history of cancer. Background: Tumor registries are used with high precision to identify incident cancer, but are not designed to capture prevalent cancer within a population. We sought to identify a cohort of adults with no history of cancer, and thus, we could not rely solely on the tumor registry. Methods: We included all KPCO members between the ages of 40–75 years who were continuously enrolled during 2013 (N=201,787). Data from the tumor registry, chemotherapy files, inpatient and outpatient claims were used to create an algorithm to identify members with a high likelihood of cancer. We validated the algorithm using chart review and calculated sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) for occurrence of cancer. Findings: The final version of the algorithm achieved a sensitivity of 100 percent and specificity of 84.6 percent for identifying cancer. If we relied on the tumor registry alone, 47 percent of those with a history of cancer would have been missed. Discussion: Using the tumor registry alone to identify a cohort of patients with prior cancer is not sufficient. In the final version of the algorithm, the sensitivity and PPV were improved when a diagnosis code for cancer was required to accompany oncology visits or chemotherapy administration. Conclusion: Electronic medical record (EMR) data can be used effectively in combination with data from the tumor registry to identify health plan members with a history of cancer. PMID:27195308
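
    The sketch below illustrates the kind of combination rule the abstract describes: the tumor registry is trusted outright, while oncology visits or chemotherapy records count only when accompanied by a cancer diagnosis code. The field names and the rule itself are simplified assumptions, not the validated KPCO algorithm.

```python
def has_cancer_history(member):
    """Illustrative combination rule with hypothetical field names.

    member -- dict with keys:
      'in_tumor_registry'      : bool
      'cancer_dx_codes'        : number of cancer diagnosis codes in claims
      'oncology_visits'        : number of oncology encounters
      'chemo_administrations'  : number of chemotherapy administration records
    """
    if member['in_tumor_registry']:
        return True
    # Oncology visits or chemotherapy count only when accompanied by a
    # cancer diagnosis code, which improved sensitivity and PPV in the study.
    supporting = member['oncology_visits'] > 0 or member['chemo_administrations'] > 0
    return member['cancer_dx_codes'] > 0 and supporting

print(has_cancer_history({'in_tumor_registry': False, 'cancer_dx_codes': 2,
                          'oncology_visits': 1, 'chemo_administrations': 0}))
```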

  13. Development and evaluation of an articulated registration algorithm for human skeleton registration

    NASA Astrophysics Data System (ADS)

    Yip, Stephen; Perk, Timothy; Jeraj, Robert

    2014-03-01

    Accurate registration over multiple scans is necessary to assess treatment response of bone diseases (e.g. metastatic bone lesions). This study aimed to develop and evaluate an articulated registration algorithm for whole-body skeleton registration in human patients. In articulated registration, whole-body skeletons are registered by auto-segmenting into individual bones using atlas-based segmentation, and then rigidly aligning them. Sixteen patients (weight = 80-117 kg, height = 168-191 cm) with advanced prostate cancer underwent pre- and mid-treatment PET/CT scans over a course of cancer therapy. Skeletons were extracted from the CT images by thresholding (HU>150). Skeletons were registered using the articulated, rigid, and deformable registration algorithms to account for position and postural variability between scans. The inter-observer agreement in the atlas creation, the agreement between the manually and atlas-based segmented bones, and the registration performances of all three registration algorithms were all assessed using the Dice similarity index: DSIobserver, DSIatlas, and DSIregister. The Hausdorff distance (dHausdorff) of the registered skeletons was also used for registration evaluation. Nearly negligible inter-observer variability was found in the bone atlas creation as the DSIobserver was 96 ± 2%. Atlas-based and manually segmented bones were in excellent agreement with DSIatlas of 90 ± 3%. Articulated (DSIregister = 75 ± 2%, dHausdorff = 0.37 ± 0.08 cm) and deformable registration algorithms (DSIregister = 77 ± 3%, dHausdorff = 0.34 ± 0.08 cm) considerably outperformed the rigid registration algorithm (DSIregister = 59 ± 9%, dHausdorff = 0.69 ± 0.20 cm) in the skeleton registration as the rigid registration algorithm failed to capture the skeleton flexibility in the joints. Despite superior skeleton registration performance, deformable registration algorithm failed to preserve the local rigidity of bones as over 60% of the
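
    The Dice similarity index used throughout this evaluation (DSIobserver, DSIatlas, DSIregister) can be computed as follows; the toy masks are for illustration only.

```python
import numpy as np

def dice_index(mask_a, mask_b):
    """Dice similarity index between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Toy 1-D example: two overlapping "bone" masks.
a = np.array([0, 1, 1, 1, 1, 0, 0, 0], dtype=bool)
b = np.array([0, 0, 1, 1, 1, 1, 0, 0], dtype=bool)
print(f"DSI = {dice_index(a, b):.2f}")   # 0.75
```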

  14. Development and evaluation of an articulated registration algorithm for human skeleton registration.

    PubMed

    Yip, Stephen; Perk, Timothy; Jeraj, Robert

    2014-03-21

    Accurate registration over multiple scans is necessary to assess treatment response of bone diseases (e.g. metastatic bone lesions). This study aimed to develop and evaluate an articulated registration algorithm for whole-body skeleton registration in human patients. In articulated registration, whole-body skeletons are registered by auto-segmenting into individual bones using atlas-based segmentation, and then rigidly aligning them. Sixteen patients (weight = 80-117 kg, height = 168-191 cm) with advanced prostate cancer underwent pre- and mid-treatment PET/CT scans over a course of cancer therapy. Skeletons were extracted from the CT images by thresholding (HU>150). Skeletons were registered using the articulated, rigid, and deformable registration algorithms to account for position and postural variability between scans. The inter-observer agreement in the atlas creation, the agreement between the manually and atlas-based segmented bones, and the registration performances of all three registration algorithms were all assessed using the Dice similarity index: DSIobserver, DSIatlas, and DSIregister. The Hausdorff distance (dHausdorff) of the registered skeletons was also used for registration evaluation. Nearly negligible inter-observer variability was found in the bone atlas creation as the DSIobserver was 96 ± 2%. Atlas-based and manually segmented bones were in excellent agreement with DSIatlas of 90 ± 3%. Articulated (DSIregister = 75 ± 2%, dHausdorff = 0.37 ± 0.08 cm) and deformable registration algorithms (DSIregister = 77 ± 3%, dHausdorff = 0.34 ± 0.08 cm) considerably outperformed the rigid registration algorithm (DSIregister = 59 ± 9%, dHausdorff = 0.69 ± 0.20 cm) in the skeleton registration as the rigid registration algorithm failed to capture the skeleton flexibility in the joints. Despite superior skeleton registration performance, deformable registration algorithm failed to preserve the local rigidity of bones as over 60% of the skeletons

  15. Computer Game Development as a Literacy Activity

    ERIC Educational Resources Information Center

    Owston, Ron; Wideman, Herb; Ronda, Natalia Sinitskaya; Brown, Christine

    2009-01-01

    This study examined computer game development as a pedagogical activity to motivate and engage students in curriculum-related literacy activities. We hypothesized that as a consequence, students would improve their traditional reading and writing skills as well as develop new digital literacy skills. Eighteen classes of grade 4 students were…

  16. Evidence-Based Skin Care: A Systematic Literature Review and the Development of a Basic Skin Care Algorithm.

    PubMed

    Lichterfeld, Andrea; Hauss, Armin; Surber, Christian; Peters, Tina; Blume-Peytavi, Ulrike; Kottner, Jan

    2015-01-01

    Patients in acute and long-term care settings receive daily routine skin care, including washing, bathing, and showering, often followed by application of lotions, creams, and/or ointments. These personal hygiene and skin care activities are integral parts of nursing practice, but little is known about their benefits or clinical efficacy. The aim of this article was to summarize the empirical evidence supporting basic skin care procedures and interventions and to develop a clinical algorithm for basic skin care. The electronic databases MEDLINE, EMBASE, and CINAHL were searched, and afterward a forward search was conducted using Scopus and Web of Science. In order to evaluate a broad range of basic skin care interventions, systematic reviews, intervention studies, guidelines, consensus statements, and best practice standards were also included in the analysis. One hundred twenty-one articles were read in full text; 41 documents were included in this report about skin care for prevention of dry skin, prevention of incontinence-associated dermatitis, and prevention of skin injuries. The methodological quality of the included publications was variable. Review results and expert input were used to create a clinical algorithm for basic skin care. A 2-step approach is proposed including general and special skin care. Interventions focus primarily on skin that is either too dry or too moist. The target groups for the algorithm are adult patients or residents with intact or preclinically damaged skin in care settings. The skin care algorithm is a first attempt to provide guidance for practitioners to improve basic skin care in clinical settings in order to maintain or increase skin health.

  17. Development of Analytical Algorithm for the Performance Analysis of Power Train System of an Electric Vehicle

    NASA Astrophysics Data System (ADS)

    Kim, Chul-Ho; Lee, Kee-Man; Lee, Sang-Heon

    Power train system design is one of the key R&D areas in the development process of a new automobile, because an optimally sized engine with an adaptable power transmission that meets the design requirements of the new vehicle can be obtained through the system design. For electric vehicle design in particular, a very reliable power train system design algorithm is required for energy efficiency. In this study, an analytical simulation algorithm is developed to estimate the driving performance of a designed power train system of an electric vehicle. The principal theory of the simulation algorithm is conservation of energy, combined with analytical and experimental data such as rolling resistance, aerodynamic drag, and the mechanical efficiency of the power transmission. From the analytical calculation results, the running resistance of a designed vehicle is obtained as a function of operating conditions such as road grade and vehicle speed. The tractive performance of the model vehicle with a given power train system is also calculated at each gear ratio of the transmission. Through analysis of these two calculation results, running resistance and tractive performance, the driving performance of a designed electric vehicle is estimated and used to evaluate the suitability of the designed power train system for the vehicle.
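
    The energy bookkeeping described above reduces to standard road-load and tractive-force relations; the sketch below uses textbook formulas with illustrative coefficients (rolling resistance, drag coefficient, driveline efficiency), not the study's measured values.

```python
import math

def running_resistance_n(mass_kg, speed_mps, grade_deg, c_rr=0.012,
                         cd=0.30, frontal_area_m2=2.2, air_density=1.2):
    """Road-load force: rolling + grade + aerodynamic drag (textbook form;
    all coefficients are illustrative assumptions)."""
    g = 9.81
    theta = math.radians(grade_deg)
    rolling = c_rr * mass_kg * g * math.cos(theta)
    grade = mass_kg * g * math.sin(theta)
    aero = 0.5 * air_density * cd * frontal_area_m2 * speed_mps**2
    return rolling + grade + aero

def tractive_force_n(motor_torque_nm, gear_ratio, final_drive, wheel_radius_m,
                     driveline_efficiency=0.9):
    """Tractive force at the wheels for a given motor torque and gear ratio."""
    return motor_torque_nm * gear_ratio * final_drive * driveline_efficiency / wheel_radius_m

v = 100 / 3.6                                    # 100 km/h in m/s
resistance = running_resistance_n(1500, v, grade_deg=2.0)
tractive = tractive_force_n(180, gear_ratio=3.0, final_drive=3.5, wheel_radius_m=0.3)
print(f"resistance {resistance:.0f} N, tractive {tractive:.0f} N, "
      f"surplus {tractive - resistance:.0f} N")
```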

  18. Algorithm and code development for unsteady three-dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru

    1994-01-01

    Aeroelastic tests involve considerable cost and risk. An aeroelastic wind-tunnel experiment is an order of magnitude more expensive than a parallel experiment involving only aerodynamics. By complementing the wind-tunnel experiments with numerical simulations, the overall cost of the development of aircraft can be considerably reduced. In order to accurately compute aeroelastic phenomena, it is necessary to solve the unsteady Euler/Navier-Stokes equations simultaneously with the structural equations of motion. These equations accurately describe the flow phenomena for aeroelastic applications. At ARC, a code, ENSAERO, is being developed for computing the unsteady aerodynamics and aeroelasticity of aircraft, and it solves the Euler/Navier-Stokes equations. The purpose of this cooperative agreement was to enhance ENSAERO in both algorithm and geometric capabilities. During the last five years, the algorithms of the code have been enhanced extensively by using high-resolution upwind algorithms and efficient implicit solvers. The zonal capability of the code has been extended from a one-to-one grid interface to a mismatching unsteady zonal interface. The geometric capability of the code has been extended from a single oscillating wing case to a full-span wing-body configuration with oscillating control surfaces. Each time a new capability was added, a proper validation case was simulated, and the capability of the code was demonstrated.

  19. Development of an algorithm to predict comfort of wheelchair fit based on clinical measures.

    PubMed

    Kon, Keisuke; Hayakawa, Yasuyuki; Shimizu, Shingo; Nosaka, Toshiya; Tsuruga, Takeshi; Matsubara, Hiroyuki; Nomura, Tomohiro; Murahara, Shin; Haruna, Hirokazu; Ino, Takumi; Inagaki, Jun; Kobayashi, Toshiki

    2015-09-01

    [Purpose] The purpose of this study was to develop an algorithm to predict the comfort of a subject seated in a wheelchair, based on common clinical measurements and without depending on verbal communication. [Subjects] Twenty healthy males (mean age: 21.5 ± 2 years; height: 171 ± 4.3 cm; weight: 56 ± 12.3 kg) participated in this study. [Methods] Each experimental session lasted for 60 min. The clinical measurements were obtained under 4 conditions (good posture, with and without a cushion; bad posture, with and without a cushion). Multiple regression analysis was performed to determine the relationship between a visual analogue scale and exercise physiology parameters (respiratory and metabolic), autonomic nervous parameters (heart rate, blood pressure, and salivary amylase level), and 3D-coordinate posture parameters (good or bad posture). [Results] For the equation (algorithm) to predict the visual analogue scale score, the adjusted multiple correlation coefficient was 0.72, the residual standard deviation was 1.2, and the prediction error was 12%. [Conclusion] The algorithm developed in this study could predict the comfort of a healthy male seated in a wheelchair with 72% accuracy. PMID:26504299
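
    The statistical machinery here is ordinary least-squares multiple regression predicting a visual analogue scale (VAS) score; the sketch below reproduces that machinery on synthetic stand-in data. The predictors are named after the study's measurement categories, but the data and coefficients are invented for illustration, not the published equation.

```python
import numpy as np

# Hypothetical stand-in data for the study's predictor categories.
rng = np.random.default_rng(42)
n = 80
X = np.column_stack([
    rng.normal(20, 3, n),      # respiration rate
    rng.normal(75, 8, n),      # heart rate
    rng.normal(30, 10, n),     # salivary amylase
    rng.integers(0, 2, n),     # posture flag (0 = bad, 1 = good)
])
vas = 6 - 1.5 * X[:, 3] + 0.05 * X[:, 1] - 0.1 * X[:, 0] + rng.normal(0, 1, n)

# Ordinary least-squares fit of VAS on the predictors (with an intercept).
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, vas, rcond=None)

def predict_vas(respiration, heart_rate, amylase, good_posture):
    return coef @ np.array([1.0, respiration, heart_rate, amylase, good_posture])

print(f"predicted VAS: {predict_vas(22, 80, 35, 0):.1f}")
```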

  20. Development of an algorithm to predict comfort of wheelchair fit based on clinical measures

    PubMed Central

    Kon, Keisuke; Hayakawa, Yasuyuki; Shimizu, Shingo; Nosaka, Toshiya; Tsuruga, Takeshi; Matsubara, Hiroyuki; Nomura, Tomohiro; Murahara, Shin; Haruna, Hirokazu; Ino, Takumi; Inagaki, Jun; Kobayashi, Toshiki

    2015-01-01

    [Purpose] The purpose of this study was to develop an algorithm to predict the comfort of a subject seated in a wheelchair, based on common clinical measurements and without depending on verbal communication. [Subjects] Twenty healthy males (mean age: 21.5 ± 2 years; height: 171 ± 4.3 cm; weight: 56 ± 12.3 kg) participated in this study. [Methods] Each experimental session lasted for 60 min. The clinical measurements were obtained under 4 conditions (good posture, with and without a cushion; bad posture, with and without a cushion). Multiple regression analysis was performed to determine the relationship between a visual analogue scale and exercise physiology parameters (respiratory and metabolic), autonomic nervous parameters (heart rate, blood pressure, and salivary amylase level), and 3D-coordinate posture parameters (good or bad posture). [Results] For the equation (algorithm) to predict the visual analogue scale score, the adjusted multiple correlation coefficient was 0.72, the residual standard deviation was 1.2, and the prediction error was 12%. [Conclusion] The algorithm developed in this study could predict the comfort of a healthy male seated in a wheelchair with 72% accuracy. PMID:26504299

  1. Development of Deterministic Disaggregation Algorithm for Remotely Sensed Soil Moisture Products

    NASA Astrophysics Data System (ADS)

    Shin, Y.; Mohanty, B. P.

    2011-12-01

    Soil moisture near the land surface and in the subsurface profile is an important issue for hydrology, agronomy, and meteorology. Soil moisture data are limited in their spatial and temporal coverage. To date, only point-scale soil moisture measurements are available to represent regional scales. A remote sensing (RS) scheme can be an alternative to direct measurement. However, the availability of RS datasets is limited by the scale discrepancy between the RS resolution and the local scale. A number of studies have been conducted to develop downscaling/disaggregation algorithms for extracting fine-scale soil moisture within a remote sensing product using stochastic methods. The stochastic downscaling/disaggregation schemes provide only soil texture information and sub-area fractions contained in an RS pixel, meaning that their specific locations are not identified. Thus, we developed a deterministic disaggregation algorithm (DDA) with a genetic algorithm (GA), adapting the inverse method for extracting/searching soil textures and their specific sub-pixel locations within an RS soil moisture product, under numerical experiments and field validations. This approach performs quite well in disaggregating/recognizing the soil textures and their specific locations within an RS soil moisture footprint compared to the results of the stochastic method. On the basis of these findings, we suggest that the DDA can be useful for improving the availability of RS products.

  2. Effects of Varying Epoch Lengths, Wear Time Algorithms, and Activity Cut-Points on Estimates of Child Sedentary Behavior and Physical Activity from Accelerometer Data

    PubMed Central

    Banda, Jorge A.; Haydel, K. Farish; Davila, Tania; Desai, Manisha; Haskell, William L.; Matheson, Donna; Robinson, Thomas N.

    2016-01-01

    Objective To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA). Methods 268 7-11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4-7 days. Data were processed and analyzed at epoch lengths of 1-, 5-, 10-, 15-, 30-, and 60-seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common sets of activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA when using the different epoch lengths, WT algorithms, and activity cut-points. Results WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm (p < .0001), but did not vary significantly by epoch length when using the ≥ 20 minute consecutive zero or Choi WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms (all p < .0001). Across all epoch lengths, minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA also varied significantly across all sets of activity cut-points with all three WT algorithms (all p < .0001). Conclusions The common practice of converting WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy.  PMID:26938240
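
    The sketch below shows the reprocessing step at issue: re-integrating 1-second counts into different epoch lengths and rescaling counts-per-minute cut-points to match the epoch length, which is exactly the conversion practice the study cautions about. The cut-point values are commonly cited child thresholds used here only as placeholders, and the synthetic counts are not study data.

```python
import numpy as np

def reintegrate(counts_1s, epoch_s):
    """Sum 1-second accelerometer counts into epochs of the given length."""
    counts_1s = np.asarray(counts_1s)
    n = (counts_1s.size // epoch_s) * epoch_s
    return counts_1s[:n].reshape(-1, epoch_s).sum(axis=1)

def minutes_by_intensity(epoch_counts, epoch_s, cut_points_cpm):
    """Minutes spent in each intensity band. cut_points_cpm are counts-per-minute
    thresholds (SB/LPA, LPA/MPA, MPA/VPA) rescaled to the epoch length."""
    scaled = np.asarray(cut_points_cpm) * epoch_s / 60.0
    bands = np.digitize(epoch_counts, scaled)      # 0=SB, 1=LPA, 2=MPA, 3=VPA
    return {name: (bands == i).sum() * epoch_s / 60.0
            for i, name in enumerate(["SB", "LPA", "MPA", "VPA"])}

rng = np.random.default_rng(3)
counts_1s = rng.poisson(20, size=3600)             # one hour of synthetic data
for epoch in (1, 15, 60):
    ep = reintegrate(counts_1s, epoch)
    print(epoch, minutes_by_intensity(ep, epoch, cut_points_cpm=[100, 2296, 4012]))
```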

  3. An Effort to Develop an Algorithm to Target Abdominal CT Scans for Patients After Gastric Bypass.

    PubMed

    Pernar, Luise I M; Lockridge, Ryan; McCormack, Colleen; Chen, Judy; Shikora, Scott A; Spector, David; Tavakkoli, Ali; Vernon, Ashley H; Robinson, Malcolm K

    2016-10-01

    Abdominal CT (abdCT) scans are frequently ordered for Roux-en-Y gastric bypass (RYGB) patients presenting to the emergency department (ED) with abdominal pain, but often do not reveal intra-abdominal pathology. We aimed to develop an algorithm for rational ordering of abdCTs. We retrospectively reviewed our institution's RYGB patients presenting acutely with abdominal pain, documenting clinical and laboratory data, and scan results. Associations of clinical parameters to abdCT results were examined for outcome predictors. Of 1643 RYGB patients who had surgery between 2005 and 2015, 355 underwent 387 abdCT scans. Based on abdCT, 48 (12 %) patients required surgery and 86 (22 %) another intervention. No clinical or laboratory parameter predicted imaging results. Imaging decisions for RYGB patients do not appear to be amenable to a simple algorithm, and patient work-up should be based on astute clinical judgment.

  4. Hybrid Neural-Network: Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics Developed and Demonstrated

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2002-01-01

    As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.

  5. A simulation environment for modeling and development of algorithms for ensembles of mobile microsystems

    NASA Astrophysics Data System (ADS)

    Fink, Jonathan; Collins, Tom; Kumar, Vijay; Mostofi, Yasamin; Baras, John; Sadler, Brian

    2009-05-01

    The vision for the Micro Autonomous Systems Technologies (MAST) program is to develop autonomous, multifunctional, collaborative ensembles of agile, mobile microsystems to enhance tactical situational awareness in urban and complex terrain for small unit operations. Central to this vision is the ability to have multiple heterogeneous autonomous assets function as a single cohesive unit that is adaptable, responsive to human commands, and resilient to adversarial conditions. This paper represents an effort to develop a simulation environment for studying control, sensing, communication, perception, and planning methodologies and algorithms.

  6. Development of a block Lanczos algorithm for free vibration analysis of spinning structures

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.; Lawson, C. L.

    1988-01-01

    This paper is concerned with the development of an efficient eigenproblem solution algorithm and an associated computer program for the economical solution of the free vibration problem of complex practical spinning structural systems. Thus, a detailed description of a newly developed block Lanczos procedure is presented in this paper that employs only real numbers in all relevant computations and also fully exploits sparsity of associated matrices. The procedure is capable of computing multiple roots and proves to be most efficient compared to other existing similar techniques.
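
    The block-Krylov idea behind such a solver can be illustrated with the small Python sketch below: an orthonormal block Krylov basis is built for a symmetric matrix using only real arithmetic, and Ritz values are extracted by a Rayleigh-Ritz projection. This is a dense toy version for illustration, not the sparse, spinning-structure block Lanczos procedure of the paper.

      import numpy as np

      def block_krylov_ritz(A, block_size=2, n_blocks=5, seed=0):
          """Ritz-value estimates from a block Krylov subspace of a symmetric matrix A.
          Dense, fully reorthogonalized toy version of the block-Lanczos idea."""
          rng = np.random.default_rng(seed)
          n = A.shape[0]
          X = rng.standard_normal((n, block_size))
          basis = []
          for _ in range(n_blocks):
              for Q in basis:                         # orthogonalize against earlier blocks
                  X = X - Q @ (Q.T @ X)
              Q, _ = np.linalg.qr(X)                  # orthonormalize the new block
              basis.append(Q)
              X = A @ Q                               # next block of the Krylov sequence
          V = np.hstack(basis)                        # orthonormal subspace basis
          T = V.T @ A @ V                             # small projected (Rayleigh-Ritz) matrix
          return np.sort(np.linalg.eigvalsh(T))       # real arithmetic throughout

      # Toy symmetric matrix: the largest Ritz values approach the largest
      # eigenvalues as more blocks are added.
      rng = np.random.default_rng(1)
      M = rng.standard_normal((200, 200))
      A = (M + M.T) / 2
      print("Ritz: ", np.round(block_krylov_ritz(A)[-3:], 2))
      print("exact:", np.round(np.sort(np.linalg.eigvalsh(A))[-3:], 2))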

  7. Developments of global greenhouse gas retrieval algorithm based on Optimal Estimation Method

    NASA Astrophysics Data System (ADS)

    Kim, W. V.; Kim, J.; Lee, H.; Jung, Y.; Boesch, H.

    2013-12-01

    After the industrial revolution, the atmospheric carbon dioxide concentration increased drastically over the last 250 years. It is still increasing, and a concentration above 400 ppm was measured at the Mauna Loa observatory for the first time, a value regarded as an important milestone. Understanding the sources, emissions, transport and sinks of global carbon dioxide is therefore unprecedentedly important. Currently, the Total Carbon Column Observing Network (TCCON) observes CO2 concentrations with ground-based instruments. However, the sites are few and concentrated in Europe and North America. Remote sensing of CO2 can supplement these limitations. The Greenhouse Gases Observing SATellite (GOSAT), launched in 2009, measures the column density of CO2, and other satellites are planned for launch within a few years. GOSAT provides valuable measurement data, but its low spatial resolution and the poor success rate of retrieval due to aerosol and cloud contamination limit the results to less than half of the globe. To improve data availability, accurate aerosol information is necessary, especially for the East Asia region, where the aerosol concentration is higher than in other regions. As a first step, we are developing a CO2 retrieval algorithm based on the optimal estimation method with VLIDORT, the vector discrete ordinate radiative transfer model. A prototype algorithm, developed by testing various combinations of state vectors to find the best combination, shows appropriate results and good agreement with TCCON measurements. To reduce the computational cost, low-stream interpolation is applied to the model simulation, and the simulation time is drastically reduced. In further work, the GOSAT CO2 retrieval algorithm will be combined with an accurate GOSAT-CAI aerosol retrieval algorithm to obtain more accurate results, especially for East Asia.
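
    For readers unfamiliar with the optimal estimation method (OEM), the Gauss-Newton iteration at the core of such retrievals can be sketched as below in Python. The forward model, prior, and covariances are generic illustrative stand-ins, not the GOSAT/VLIDORT configuration.

      import numpy as np

      def oem_retrieval(y, forward, jacobian, x_a, S_a, S_e, n_iter=10):
          """Gauss-Newton optimal estimation (Rodgers-style) update:
          x_{i+1} = x_a + (K'Se^-1 K + Sa^-1)^-1 K'Se^-1 [y - F(x_i) + K(x_i - x_a)]."""
          x = x_a.copy()
          S_a_inv, S_e_inv = np.linalg.inv(S_a), np.linalg.inv(S_e)
          for _ in range(n_iter):
              K = jacobian(x)
              rhs = K.T @ S_e_inv @ (y - forward(x) + K @ (x - x_a))
              x = x_a + np.linalg.solve(K.T @ S_e_inv @ K + S_a_inv, rhs)
          return x

      # Toy linear forward model y = K x with noise; the retrieval recovers x_true.
      rng = np.random.default_rng(0)
      K_true = rng.standard_normal((8, 3))
      x_true = np.array([2.0, -1.0, 0.5])
      y = K_true @ x_true + 0.01 * rng.standard_normal(8)
      x_hat = oem_retrieval(y, lambda x: K_true @ x, lambda x: K_true,
                            x_a=np.zeros(3), S_a=np.eye(3) * 10.0, S_e=np.eye(8) * 1e-4)
      print(np.round(x_hat, 3))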

  8. The development of line-scan image recognition algorithms for the detection of frass on mature tomatoes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this research, a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at two wavebands, 664 nm and 690 nm, for co...

  9. Algorithm development and verification of UASCM for multi-dimension and multi-group neutron kinetics model

    SciTech Connect

    Si, S.

    2012-07-01

    The Universal Algorithm of Stiffness Confinement Method (UASCM) for neutron kinetics model of multi-dimensional and multi-group transport equations or diffusion equations has been developed. The numerical experiments based on transport theory code MGSNM and diffusion theory code MGNEM have demonstrated that the algorithm has sufficient accuracy and stability. (authors)

  10. Illinois Career Development Month Ideas and Activities.

    ERIC Educational Resources Information Center

    Illinois State Board of Education, Springfield.

    This document is intended to help practitioners plan and implement activities for observance of Career Development Month in Illinois. Part 1 examines the following topics: the definitions of career development and education-to-careers; the rationale for devoting a month to career development; a career framework; and suggested Career Development…

  11. Microphysical particle properties derived from inversion algorithms developed in the framework of EARLINET

    NASA Astrophysics Data System (ADS)

    Müller, Detlef; Böckmann, Christine; Kolgotin, Alexei; Schneidenbach, Lars; Chemyakin, Eduard; Rosemann, Julia; Znak, Pavel; Romanov, Anton

    2016-10-01

    We present a summary on the current status of two inversion algorithms that are used in EARLINET (European Aerosol Research Lidar Network) for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on a manually controlled inversion of optical data which allows for detailed sensitivity studies. The algorithms allow us to derive particle effective radius as well as volume and surface area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies in which light absorption needs to be known with high accuracy. It is an extreme challenge to retrieve the real part with an accuracy better than 0.05 and the imaginary part with accuracy better than 0.005-0.1 or ±50 %. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. On the basis of a few exemplary simulations with synthetic optical data we discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work. One algorithm was used with the purpose of testing how well microphysical parameters can be derived if the real part of the complex refractive index is known to at least 0.05 or 0.1. The other algorithm was used to find out how well microphysical parameters can be derived if this constraint for the real part is not applied. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested

  12. A collaborative approach to developing an electronic health record phenotyping algorithm for drug-induced liver injury

    PubMed Central

    Overby, Casey Lynnette; Pathak, Jyotishman; Gottesman, Omri; Haerian, Krystl; Perotte, Adler; Murphy, Sean; Bruce, Kevin; Johnson, Stephanie; Talwalkar, Jayant; Shen, Yufeng; Ellis, Steve; Kullo, Iftikhar; Chute, Christopher; Friedman, Carol; Bottinger, Erwin; Hripcsak, George; Weng, Chunhua

    2013-01-01

    Objective To describe a collaborative approach for developing an electronic health record (EHR) phenotyping algorithm for drug-induced liver injury (DILI). Methods We analyzed types and causes of differences in DILI case definitions provided by two institutions—Columbia University and Mayo Clinic; harmonized two EHR phenotyping algorithms; and assessed the performance, measured by sensitivity, specificity, positive predictive value, and negative predictive value, of the resulting algorithm at three institutions except that sensitivity was measured only at Columbia University. Results Although these sites had the same case definition, their phenotyping methods differed by selection of liver injury diagnoses, inclusion of drugs cited in DILI cases, laboratory tests assessed, laboratory thresholds for liver injury, exclusion criteria, and approaches to validating phenotypes. We reached consensus on a DILI phenotyping algorithm and implemented it at three institutions. The algorithm was adapted locally to account for differences in populations and data access. Implementations collectively yielded 117 algorithm-selected cases and 23 confirmed true positive cases. Discussion Phenotyping for rare conditions benefits significantly from pooling data across institutions. Despite the heterogeneity of EHRs and varied algorithm implementations, we demonstrated the portability of this algorithm across three institutions. The performance of this algorithm for identifying DILI was comparable with other computerized approaches to identify adverse drug events. Conclusions Phenotyping algorithms developed for rare and complex conditions are likely to require adaptive implementation at multiple institutions. Better approaches are also needed to share algorithms. Early agreement on goals, data sources, and validation methods may improve the portability of the algorithms. PMID:23837993

  13. Advanced Technology Development for Active Acoustic Liners

    NASA Technical Reports Server (NTRS)

    Sheplak, Mark; Cattafesta, Louis N., III; Nishida, Toshikazu; Kurdila, Andrew J.

    2001-01-01

    Objectives include: (1) Develop electro-mechanical/acoustic models of a Helmholtz resonator possessing a compliant diaphragm coupled to a piezoelectric device; (2) Design and fabricate the energy reclamation module and active Helmholtz resonator; (3) Develop and build appropriate energy reclamation/storage circuit; (4) Develop and fabricate appropriate piezoelectric shunt circuit to tune the compliance of the active Helmholtz resonator via a variable capacitor; (5) Quantify energy reclamation module efficiency in a grazing-flow plane wave tube possessing known acoustic energy input; and (6) Quantify actively tuned Helmholtz resonator performance in grazing-flow plane wave tube for a white-noise input

  14. Synthetic Molecular Machines for Active Self-Assembly: Prototype Algorithms, Designs, and Experimental Study

    NASA Astrophysics Data System (ADS)

    Dabby, Nadine L.

    Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits has led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast--all while remaining functional. This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of "active self-assembly" of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology's numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules. One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved. One might think that because a system is Turing-complete, capable of computing "anything," that it can do any arbitrary task. But while it can simulate any digital computational problem, there are many behaviors that are not "computations" in a classical sense, and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface. Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors "energetically incomplete" programmable

  15. Development and evaluation of collision warning/collision avoidance algorithms using an errable driver model

    NASA Astrophysics Data System (ADS)

    Yang, Hsin-Hsiang; Peng, Huei

    2010-12-01

    Collision warning/collision avoidance (CW/CA) systems must be designed to work seamlessly with a human driver, providing warning or control actions when the driver's response (or lack of) is deemed inappropriate. The effectiveness of CW/CA systems working with a human driver needs to be evaluated thoroughly because of legal/liability and other (e.g. traffic flow) concerns. CW/CA systems tuned only under open-loop manoeuvres were frequently found to work unsatisfactorily with human-in-the-loop. However, tuning CW/CA systems with human drivers co-existing is slow and non-repeatable. Driver models, if constructed and used properly, can capture human/control interactions and accelerate the CW/CA development process. Design and evaluation methods for CW/CA algorithms can be categorised into three approaches, scenario-based, performance-based and human-centred. The strength and weakness of these approaches were discussed in this paper and a humanised errable driver model was introduced to improve the developing process. The errable driver model used in this paper is a model that emulates human driver's functions and can generate both nominal (error-free) and devious (with error) behaviours. The car-following data used for developing and validating the model were obtained from a large-scale naturalistic driving database. Three error-inducing behaviours were introduced: human perceptual limitation, time delay and distraction. By including these error-inducing behaviours, rear-end collisions with a lead vehicle were found to occur at a probability similar to traffic accident statistics in the USA. This driver model is then used to evaluate the performance of several existing CW/CA algorithms. Finally, a new CW/CA algorithm was developed based on this errable driver model.
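
    The notion of an errable driver model can be illustrated with the brief Python simulation below: a nominal feedback law on headway and relative speed is degraded by perception noise, a fixed reaction delay, and occasional distraction episodes. All parameter values and signal shapes are illustrative assumptions, not the paper's naturalistic-data model.

      import numpy as np

      def simulate_following(t_end=60.0, dt=0.1, delay=0.8, seed=0):
          """Car-following with an errable driver: noisy perception, reaction delay,
          and random distraction intervals during which no corrective action occurs."""
          rng = np.random.default_rng(seed)
          n, lag = int(t_end / dt), int(delay / dt)
          gap, v_rel = 30.0, 0.0                     # headway [m], relative speed [m/s]
          cmd_hist = [0.0] * (lag + 1)               # buffer of delayed commands
          distracted_until = 0.0
          gaps = []
          for k in range(n):
              t = k * dt
              if rng.random() < 0.002:               # occasional distraction episode
                  distracted_until = t + 2.0
              gap_perceived = gap * (1.0 + 0.05 * rng.standard_normal())
              if t < distracted_until:
                  cmd = 0.0                          # distracted: no corrective action
              else:
                  cmd = 0.3 * (gap_perceived - 25.0) + 0.8 * v_rel   # simple feedback law
              cmd_hist.append(float(np.clip(cmd, -6.0, 2.0)))
              a = cmd_hist[-lag - 1]                 # act on a delayed command
              lead_a = -2.0 if 20.0 < t < 23.0 else 0.0   # lead vehicle brakes briefly
              v_rel += (lead_a - a) * dt
              gap += v_rel * dt
              gaps.append(gap)
          return np.array(gaps)

      gaps = simulate_following()
      print("minimum headway [m]:", round(float(gaps.min()), 2))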

  16. An ECG-based Algorithm for the Automatic Identification of Autonomic Activations Associated with Cortical Arousal

    PubMed Central

    Basner, Mathias; Griefahn, Barbara; Müller, Uwe; Plath, Gernot; Samel, Alexander

    2007-01-01

    Objectives: EEG arousals are associated with autonomic activations. Visual EEG arousal scoring is time consuming and suffers from low interobserver agreement. We hypothesized that information on changes in heart rate alone suffices to predict the occurrence of cortical arousal. Methods: Two visual AASM EEG arousal scorings of 56 healthy subject nights (mean age 37.0 ± 12.8 years, 26 male) were obtained. For each of 5 heartbeats following the onset of 3581 consensus EEG arousals and of an equal number of control conditions, differences to a moving median were calculated and used to estimate likelihood ratios (LRs) for 10 categories of heartbeat differences. Comparable to 5 consecutive diagnostic tests, these LRs were used to calculate the probability of heart rate responses being associated with cortical arousals. Results: EEG and ECG arousal indexes agreed well across a wide range of decision thresholds, resulting in a receiver operating characteristic (ROC) with an area under the curve of 0.91. For the decision threshold chosen for the final analyses, a sensitivity of 68.1% and a specificity of 95.2% were obtained. ECG and EEG arousal indexes were poorly correlated (r = 0.19, P < 0.001, ICC = 0.186), which could in part be attributed to 3 outliers. The Bland-Altman plot showed an unbiased estimation of EEG arousal indexes by ECG arousal indexes with a standard deviation of ± 7.9 arousals per hour of sleep. In about two-thirds of all cases, ECG arousal scoring was matched by at least one (22.2%) or by both (42.5%) of the visual scorings. Sensitivity of the algorithm increased with increasing duration of EEG arousals. The ECG algorithm was also successfully validated with 30 different nights of 10 subjects (mean age 35.3 ± 13.6 years, 5 male). Conclusions: In its current version, the ECG algorithm cannot replace visual EEG arousal scoring. Sensitivity for detecting <10-s EEG arousals needs to be improved. However, in a nonclinical population, it may be valuable to
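
    The core of such a detector can be sketched in Python as follows: deviations of the heart rate from a moving median are binned, each of a few consecutive beats contributes a pre-estimated likelihood ratio, and the combined ratios give a probability that is compared with a decision threshold. The bin edges, likelihood ratios, prior odds, and synthetic signal below are placeholders, not the values estimated in the study.

      import numpy as np

      # Placeholder likelihood ratios per bin of heart-rate deviation (beats/min above
      # the moving median); in the study these were estimated from consensus scorings.
      BIN_EDGES = np.array([-5.0, 0.0, 5.0, 10.0, 20.0])
      LIKELIHOOD_RATIOS = np.array([0.2, 0.5, 1.5, 4.0, 10.0, 20.0])

      def moving_median(x, window=25):
          """Median of the surrounding window for each sample (simple, edge-padded)."""
          half = window // 2
          padded = np.pad(x, half, mode="edge")
          return np.array([np.median(padded[i:i + window]) for i in range(len(x))])

      def arousal_probability(hr, onset, n_beats=5, prior_odds=0.1):
          """Combine likelihood ratios of n_beats beats after `onset` (index into hr)."""
          baseline = moving_median(hr)
          odds = prior_odds
          for i in range(onset, min(onset + n_beats, len(hr))):
              dev = hr[i] - baseline[i]
              odds *= LIKELIHOOD_RATIOS[np.searchsorted(BIN_EDGES, dev)]
          return odds / (1.0 + odds)

      # Synthetic heart-rate series with a transient acceleration at beat 100.
      rng = np.random.default_rng(0)
      hr = 60 + rng.standard_normal(200)
      hr[100:105] += np.array([8, 12, 10, 6, 3])
      print("P(arousal) at event:  ", round(arousal_probability(hr, 100), 3))
      print("P(arousal) at control:", round(arousal_probability(hr, 50), 3))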

  17. Development of algorithms for understanding the temporal and spatial variability of the earth's radiation balance

    NASA Astrophysics Data System (ADS)

    Brooks, D. R.; Harrison, E. F.; Minnis, P.; Suttles, J. T.; Kandel, R. S.

    1986-05-01

    A brief description is given of how temporal and spatial variability in the earth's radiative behavior influences the goals of satellite radiation monitoring systems and how some previous systems have addressed the existing problems. Then, results of some simulations of radiation budget monitoring missions are presented. These studies led to the design of the Earth Radiation Budget Experiment (ERBE). A description is given of the temporal and spatial averaging algorithms developed for the ERBE data analysis. These algorithms are intended primarily to produce monthly averages of the net radiant exitance on regional, zonal, and global scales and to provide insight into the regional diurnal variability of radiative parameters such as albedo and long-wave radiant exitance. The algorithms are applied to scanner and nonscanner data for up to three satellites. Modeling of daily shortwave albedo and radiant exitance with satellite sampling that is insufficient to fully account for changing meteorology is discussed in detail. Studies performed during the ERBE mission and software design are reviewed. These studies provide quantitative estimates of the effects of temporally sparse and biased sampling on inferred diurnal and regional radiative parameters. Other topics covered include long-wave diurnal modeling, extraction of a regional monthly net clear-sky radiation budget, the statistical significance of observed diurnal variability, quality control of the analysis, and proposals for validating the results of ERBE time and space averaging.

  18. GLASS daytime all-wave net radiation product: Algorithm development and preliminary validation

    DOE PAGES

    Jiang, Bo; Liang, Shunlin; Ma, Han; Zhang, Xiaotong; Xiao, Zhiqiang; Zhao, Xiang; Jia, Kun; Yao, Yunjun; Jia, Aolin

    2016-03-09

    Mapping surface all-wave net radiation (Rn) is critically needed for various applications. Several existing Rn products from numerical models and satellite observations have coarse spatial resolutions, and their accuracies may not meet the requirements of land applications. In this study, we develop the Global LAnd Surface Satellite (GLASS) daytime Rn product at a 5 km spatial resolution. Its algorithm for converting shortwave radiation to all-wave net radiation using the Multivariate Adaptive Regression Splines (MARS) model is determined after comparison with three other algorithms. The validation of the GLASS Rn product based on high-quality in situ measurements in the United States shows a coefficient of determination value of 0.879, an average root mean square error value of 31.61 Wm-2, and an average bias of 17.59 Wm-2. Furthermore, we also compare our product/algorithm with another satellite product (CERES-SYN) and two reanalysis products (MERRA and JRA55), and find that the accuracy of the much higher spatial resolution GLASS Rn product is satisfactory. The GLASS Rn product from 2000 to the present is operational and freely available to the public.

  19. Development of a general learning algorithm with applications in nuclear reactor systems

    SciTech Connect

    Brittain, C.R.; Otaduy, P.J.; Perez, R.B.

    1989-12-01

    The objective of this study was development of a generalized learning algorithm that can learn to predict a particular feature of a process by observation of a set of representative input examples. The algorithm uses pattern matching and statistical analysis techniques to find a functional relationship between descriptive attributes of the input examples and the feature to be predicted. The algorithm was tested by applying it to a set of examples consisting of performance descriptions for 277 fuel cycles of Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR). The program learned to predict the critical rod position for the HFIR from core configuration data prior to reactor startup. The functional relationship bases its predictions on initial core reactivity, the number of certain targets placed in the center of the reactor, and the total exposure of the control plates. Twelve characteristic fuel cycle clusters were identified. Nine fuel cycles were diagnosed as having noisy data, and one could not be predicted by the functional relationship. 13 refs., 6 figs.

  20. Development of a new time domain-based algorithm for train detection and axle counting

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Meli, E.; Pugi, L.

    2015-12-01

    This paper presents an innovative train detection algorithm able to perform train localisation and, at the same time, to estimate the train speed, the crossing times at a fixed point of the track, and the axle number. The proposed solution uses the same approach to evaluate all these quantities, starting from generic track inputs directly measured on the track (for example, the vertical forces on the sleepers, the rail deformation and the rail stress). More particularly, all the inputs are processed through cross-correlation operations to extract the required information in terms of speed, crossing time instants and axle count. This approach has the advantage of being simple and less invasive than standard ones (it requires less equipment) and represents a more reliable and robust solution against numerical noise because it exploits the whole shape of the input signal and not only the peak values. A suitable and accurate multibody model of a railway vehicle and flexible track has also been developed by the authors to test the algorithm when experimental data are not available and, in general, under any operating conditions (fundamental to verify the algorithm accuracy and robustness). The railway vehicle chosen as benchmark is the Manchester Wagon, modelled in the Adams VI-Rail environment. The physical model of the flexible track has been implemented in the Matlab and Comsol Multiphysics environments. A simulation campaign has been performed to verify the performance and the robustness of the proposed algorithm, and the results are quite promising. The research has been carried out in cooperation with Ansaldo STS and ECM Spa.
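
    The cross-correlation idea can be illustrated with the Python sketch below: two simulated track signals recorded a known distance apart are cross-correlated, the lag of the correlation peak gives the train speed, and threshold crossings of one signal give the axle count. The pulse shapes, sensor spacing, and axle layout are illustrative assumptions, not the paper's measured track inputs.

      import numpy as np

      fs = 1000.0            # sampling rate [Hz]
      d_sensors = 5.0        # distance between the two measurement points [m]
      speed_true = 20.0      # train speed [m/s]
      axle_positions = np.array([0.0, 2.5, 15.0, 17.5])   # axle spacing along the train [m]

      t = np.arange(0, 3.0, 1.0 / fs)

      def sensor_signal(extra_delay):
          """Sum of Gaussian pulses, one per axle, passing the sensor."""
          sig = np.zeros_like(t)
          for pos in axle_positions:
              t_pass = 0.5 + pos / speed_true + extra_delay
              sig += np.exp(-((t - t_pass) ** 2) / (2 * 0.01 ** 2))
          return sig + 0.02 * np.random.default_rng(0).standard_normal(len(t))

      s1 = sensor_signal(0.0)
      s2 = sensor_signal(d_sensors / speed_true)          # same train, seen downstream

      # Speed from the lag of the cross-correlation peak between the two sensors.
      xcorr = np.correlate(s2, s1, mode="full")
      lag = (np.argmax(xcorr) - (len(t) - 1)) / fs
      print("estimated speed [m/s]:", round(d_sensors / lag, 2))

      # Axle count from rising threshold crossings of one signal.
      above = s1 > 0.5
      print("estimated axle count:", int(np.count_nonzero(above[1:] & ~above[:-1])))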

  1. Development and Validation of a Diabetic Retinopathy Referral Algorithm Based on Single-Field Fundus Photography

    PubMed Central

    Srinivasan, Sangeetha; Shetty, Sharan; Natarajan, Viswanathan; Sharma, Tarun; Raman, Rajiv

    2016-01-01

    Purpose (i) To develop a simplified algorithm to identify and refer diabetic retinopathy (DR) from single-field retinal images, specifically sight-threatening diabetic retinopathy, for appropriate care, and (ii) to determine the agreement and diagnostic accuracy of the algorithm as a pilot study among optometrists versus the “gold standard” (retinal specialist grading). Methods The severity of DR was scored based on colour photos using a colour-coded algorithm, which included the lesions of DR and the number of quadrants involved. A total of 99 participants underwent training followed by evaluation, and data from all 99 participants were analyzed. Fifty posterior pole 45 degree retinal images with all stages of DR were presented. Kappa scores (κ), areas under the receiver operating characteristic curves (AUCs), sensitivity and specificity were determined, with further comparison between working optometrists and optometry students. Results Mean age of the participants was 22 years (range: 19–43 years), 87% being women. Participants correctly identified 91.5% of images that required immediate referral (κ = 0.696), 62.5% of images requiring review after 6 months (κ = 0.462), and 51.2% of those requiring review after 1 year (κ = 0.532). The sensitivity and specificity of the optometrists were 91% and 78% for immediate referral, 62% and 84% for review after 6 months, and 51% and 95% for review after 1 year, respectively. The AUC was highest (0.855) for immediate referral, second highest (0.824) for review after 1 year, and 0.727 for review after 6 months. Optometry students performed better than the working optometrists for all grades of referral. Conclusions The diabetic retinopathy algorithm assessed in this work is a simple and fairly accurate method for appropriate referral based on single-field 45 degree posterior pole retinal images. PMID:27661981

  2. Development of Bio-Optical Algorithms for Geostationary Ocean Color Imager

    NASA Astrophysics Data System (ADS)

    Ryu, J.; Moon, J.; Min, J.; Palanisamy, S.; Han, H.; Ahn, Y.

    2007-12-01

    GOCI, the first Geostationary Ocean Color Imager, shall be operated in a staring-frame capture mode onboard the Communication, Ocean and Meteorological Satellite (COMS), tentatively scheduled for launch in 2008. The mission concept includes eight visible-to-near-infrared bands, 0.5 km pixel resolution, and a coverage region of 2,500 × 2,500 km centered on Korea. The GOCI is expected to provide SeaWiFS-quality observations for a single study area with an imaging interval of 1 hour from 10 am to 5 pm. In the GOCI swath area, the optical properties of the East Sea (typical of Case-I water) and the Yellow Sea and East China Sea (typical of Case-II water) are investigated. For developing the GOCI bio-optical algorithms in optically more complex waters, it is necessary to study and understand the optical properties around the Korean Sea. Radiometric measurements were made using WETLabs AC-S, TriOS RAMSES ACC/ARC, and ASD FieldSpec Pro Dual VNIR spectroradiometers. Seawater samples were collected concurrently with the radiometric measurements at about 300 points around the Korean Sea from 1998 to 2007. The absorption coefficients were determined using a Perkin-Elmer Lambda 19 dual-beam spectrophotometer. We analyzed the absorption coefficients of seawater constituents such as phytoplankton, Suspended Sediment (SS) and Dissolved Organic Matter (DOM). Two kinds of chlorophyll algorithms are developed using statistical regression and a fluorescence-based technique, considering the bio-optical properties of Case-II waters. Fluorescence measurements were related to in situ Chl-a concentrations to obtain the Flu(681), Flu(688) and Flu(area) algorithms, which were compared with those from standard spectral ratios of the remote sensing reflectance. A single-band algorithm is derived from the relationship between Rrs(555) and the in situ concentration. CDOM is estimated from absorption spectra and the spectral slope centered at 440 nm. These standard algorithms will be
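
    As a generic illustration of the statistical-regression branch of such bio-optical algorithms (not the GOCI algorithm itself), the Python sketch below fits a polynomial in the logarithm of a blue-to-green reflectance ratio to log chlorophyll, in the spirit of standard band-ratio algorithms; the synthetic match-up data and resulting coefficients are purely illustrative.

      import numpy as np

      # Synthetic "in situ" match-ups: Rrs band ratio vs. chlorophyll-a concentration.
      rng = np.random.default_rng(0)
      chl_true = 10 ** rng.uniform(-1.5, 1.5, size=200)            # 0.03-30 mg m^-3
      log_ratio = -0.35 * np.log10(chl_true) + 0.05 * rng.standard_normal(200)

      # Fit log10(Chl) as a cubic polynomial of log10(Rrs(blue)/Rrs(green)).
      coeffs = np.polyfit(log_ratio, np.log10(chl_true), deg=3)

      def chl_from_ratio(rrs_blue, rrs_green):
          """Band-ratio chlorophyll estimate from two Rrs bands (illustrative)."""
          x = np.log10(rrs_blue / rrs_green)
          return 10 ** np.polyval(coeffs, x)

      # Example: a pixel whose blue/green reflectance ratio is 0.7.
      print("Chl-a estimate [mg m^-3]:", round(float(chl_from_ratio(0.7, 1.0)), 2))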

  3. Performance evaluation of nonnegative matrix factorization algorithms to estimate task-related neuronal activities from fMRI data.

    PubMed

    Ding, Xiaoyu; Lee, Jong-Hwan; Lee, Seong-Whan

    2013-04-01

    Nonnegative matrix factorization (NMF) is a blind source separation (BSS) algorithm based on the distinct constraint of nonnegativity of the estimated parameters as well as of the measured data. In this study, given the potential feasibility of NMF for fMRI data, the four most popular NMF algorithms, corresponding to two types of update, (1) least-squares based updates [i.e., alternating least-squares NMF (ALSNMF) and projected gradient descent NMF] and (2) multiplicative updates (i.e., NMF based on Euclidean distance and NMF based on a divergence cost function), were investigated by using them to estimate task-related neuronal activities. These algorithms were applied first to individual data from a single subject and subsequently to group data sets from multiple subjects. On the single-subject level, although all four algorithms detected task-related activation from simulated data, the performance of the multiplicative-update NMFs deteriorated significantly when evaluated using visuomotor task fMRI data, for which they failed to estimate any task-related neuronal activities. In group-level analysis on both simulated and real fMRI data, ALSNMF outperformed the other three algorithms. These findings suggest that ALSNMF appears to be the most promising option among the tested NMF algorithms for extracting task-related neuronal activities from fMRI data.
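
    For reference, the two update families compared in the study can be sketched in a few lines of Python: the Euclidean multiplicative update and a projected alternating least-squares update for the factorization V ≈ WH. This generic NMF on random synthetic data is only a sketch of the update rules, not the authors' fMRI pipeline.

      import numpy as np

      def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
          """Lee-Seung multiplicative updates for the Euclidean NMF cost."""
          rng = np.random.default_rng(seed)
          W = rng.random((V.shape[0], rank))
          H = rng.random((rank, V.shape[1]))
          for _ in range(n_iter):
              H *= (W.T @ V) / (W.T @ W @ H + eps)
              W *= (V @ H.T) / (W @ H @ H.T + eps)
          return W, H

      def nmf_als(V, rank, n_iter=200, seed=0):
          """Alternating least squares with projection onto the nonnegative orthant."""
          rng = np.random.default_rng(seed)
          W = rng.random((V.shape[0], rank))
          H = rng.random((rank, V.shape[1]))
          for _ in range(n_iter):
              H = np.clip(np.linalg.lstsq(W, V, rcond=None)[0], 0, None)
              W = np.clip(np.linalg.lstsq(H.T, V.T, rcond=None)[0].T, 0, None)
          return W, H

      V = np.abs(np.random.default_rng(1).standard_normal((50, 40)))
      for name, fn in [("multiplicative", nmf_multiplicative), ("ALS", nmf_als)]:
          W, H = fn(V, rank=5)
          print(name, "reconstruction error:", round(float(np.linalg.norm(V - W @ H)), 3))

    On well-behaved synthetic data both variants reach similar reconstruction errors; the study's point is that their behaviour diverges on real fMRI data.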

  4. Development of Algorithms and Error Analyses for the Short Baseline Lightning Detection and Ranging System

    NASA Technical Reports Server (NTRS)

    Starr, Stanley O.

    1998-01-01

    NASA, at the John F. Kennedy Space Center (KSC), developed and operates a unique high-precision lightning location system to provide lightning-related weather warnings. These warnings are used to stop lightning-sensitive operations such as space vehicle launches and ground operations where equipment and personnel are at risk. The data is provided to the Range Weather Operations (45th Weather Squadron, U.S. Air Force) where it is used with other meteorological data to issue weather advisories and warnings for Cape Canaveral Air Station and KSC operations. This system, called Lightning Detection and Ranging (LDAR), provides users with a graphical display in three dimensions of 66 megahertz radio frequency events generated by lightning processes. The locations of these events provide a sound basis for the prediction of lightning hazards. This document provides the basis for the design approach and data analysis for a system of radio frequency receivers to provide azimuth and elevation data for lightning pulses detected simultaneously by the LDAR system. The intent is for this direction-finding system to correct and augment the data provided by LDAR and, thereby, increase the rate of valid data and to correct or discard any invalid data. This document develops the necessary equations and algorithms, identifies sources of systematic errors and means to correct them, and analyzes the algorithms for random error. This data analysis approach is not found in the existing literature and was developed to facilitate the operation of this Short Baseline LDAR (SBLDAR). These algorithms may also be useful for other direction-finding systems using radio pulses or ultrasonic pulse data.

  5. A robust active contour edge detection algorithm based on local Gaussian statistical model for oil slick remote sensing image

    NASA Astrophysics Data System (ADS)

    Jing, Yu; Wang, Yaxuan; Liu, Jianxin; Liu, Zhaoxia

    2015-08-01

    Edge detection is a crucial method for the location and quantity estimation of oil slicks when oil spills on the sea. In this paper, we present a robust active contour edge detection algorithm for oil spill remote sensing images. In the proposed algorithm, we define a local Gaussian data fitting energy term with spatially varying means and variances, and this data fitting energy term is introduced into a global minimization active contour (GMAC) framework. The energy function minimization is achieved quickly by a dual formulation of the weighted total variation norm. The proposed algorithm avoids the existence of local minima, does not require the definition of an initial contour, and is robust to the weak boundaries, high noise and severe intensity inhomogeneity existing in oil slick remote sensing images. Furthermore, the edge detection of the oil slick and the correction of intensity inhomogeneity are simultaneously achieved via the proposed algorithm. The experimental results show the superior performance of the proposed algorithm over state-of-the-art edge detection algorithms. In addition, the proposed algorithm can also deal with special images in which the object and background have the same intensity means but different variances.

  6. Language Development Activities through the Auditory Channel.

    ERIC Educational Resources Information Center

    Fitzmaurice, Peggy, Comp.; And Others

    Presented primarily for use with educable mentally retarded and learning disabled children are approximately 100 activities for language development through the auditory channel. Activities are grouped under the following three areas: receptive skills (auditory decoding, auditory memory, and auditory discrimination); expressive skills (auditory…

  7. Multidisciplinary Design, Analysis, and Optimization Tool Development using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2008-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center to automate the analysis and design process by leveraging existing tools such as NASTRAN, ZAERO and CFD codes to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but faces many challenges in large-scale, real-world application. This paper describes current approaches, recent results, and challenges for MDAO as demonstrated by our experience with the Ikhana fire pod design.

  8. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.

  9. Developing a data element repository to support EHR-driven phenotype algorithm authoring and execution.

    PubMed

    Jiang, Guoqian; Kiefer, Richard C; Rasmussen, Luke V; Solbrig, Harold R; Mo, Huan; Pacheco, Jennifer A; Xu, Jie; Montague, Enid; Thompson, William K; Denny, Joshua C; Chute, Christopher G; Pathak, Jyotishman

    2016-08-01

    The Quality Data Model (QDM) is an information model developed by the National Quality Forum for representing electronic health record (EHR)-based electronic clinical quality measures (eCQMs). In conjunction with the HL7 Health Quality Measures Format (HQMF), QDM contains core elements that make it a promising model for representing EHR-driven phenotype algorithms for clinical research. However, the current QDM specification is available only as descriptive documents suitable for human readability and interpretation, but not for machine consumption. The objective of the present study is to develop and evaluate a data element repository (DER) for providing machine-readable QDM data element service APIs to support phenotype algorithm authoring and execution. We used the ISO/IEC 11179 metadata standard to capture the structure for each data element, and leverage Semantic Web technologies to facilitate semantic representation of these metadata. We observed there are a number of underspecified areas in the QDM, including the lack of model constraints and pre-defined value sets. We propose a harmonization with the models developed in HL7 Fast Healthcare Interoperability Resources (FHIR) and Clinical Information Modeling Initiatives (CIMI) to enhance the QDM specification and enable the extensibility and better coverage of the DER. We also compared the DER with the existing QDM implementation utilized within the Measure Authoring Tool (MAT) to demonstrate the scalability and extensibility of our DER-based approach. PMID:27392645

  10. Development and evaluation of a predictive algorithm for telerobotic task complexity

    NASA Technical Reports Server (NTRS)

    Gernhardt, M. L.; Hunter, R. C.; Hedgecock, J. C.; Stephenson, A. G.

    1993-01-01

    There is a wide range of complexity in the various telerobotic servicing tasks performed in subsea, space, and hazardous material handling environments. Experience with telerobotic servicing has evolved into a knowledge base used to design tasks to be 'telerobot friendly.' This knowledge base generally resides in a small group of people. Written documentation and requirements are limited in conveying this knowledge base to serviceable equipment designers and are subject to misinterpretation. A mathematical model of task complexity based on measurable task parameters and telerobot performance characteristics would be a valuable tool to designers and operational planners. Oceaneering Space Systems and TRW have performed an independent research and development project to develop such a tool for telerobotic orbital replacement unit (ORU) exchange. This algorithm was developed to predict an ORU exchange degree of difficulty rating (based on the Cooper-Harper rating used to assess piloted operations). It is based on measurable parameters of the ORU, attachment receptacle and quantifiable telerobotic performance characteristics (e.g., link length, joint ranges, positional accuracy, tool lengths, number of cameras, and locations). The resulting algorithm can be used to predict task complexity as the ORU parameters, receptacle parameters, and telerobotic characteristics are varied.

  11. Estimating aquifer recharge in Mission River watershed, Texas: model development and calibration using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Uddameri, V.; Kuchanur, M.

    2007-01-01

    Soil moisture balance studies provide a convenient approach to estimate aquifer recharge when only limited site-specific data are available. A monthly mass-balance approach has been utilized in this study to estimate recharge in a small watershed in the coastal bend of South Texas. The developed lumped parameter model employs four adjustable parameters to calibrate model predicted stream runoff to observations at a gaging station. A new procedure was developed to correctly capture the intermittent nature of rainfall. The total monthly rainfall was assigned to a single-equivalent storm whose duration was obtained via calibration. A total of four calibrations were carried out using an evolutionary computing technique called genetic algorithms as well as the conventional gradient descent (GD) technique. Ordinary least squares and the heteroscedastic maximum likelihood error (HMLE) based objective functions were evaluated as part of this study as well. While the genetic algorithm based calibrations were relatively better in capturing the peak runoff events, the GD based calibration did slightly better in capturing the low flow events. Treating the Box-Cox exponent in the HMLE function as a calibration parameter did not yield better estimates and the study corroborates the suggestion made in the literature of fixing this exponent at 0.3. The model outputs were compared against available information and results indicate that the developed modeling approach provides a conservative estimate of recharge.

  12. The NASA Soil Moisture Active Passive (SMAP) Mission - Algorithm and Cal/Val Activities and Synergies with SMOS and Other L-Band Missions

    NASA Technical Reports Server (NTRS)

    Njoku, Eni; Entekhabi, Dara; O'Neill, Peggy; Jackson, Tom; Kellogg, Kent; Entin, Jared

    2011-01-01

    applicable to soil moisture measurement, such as Aquarius, SAO COM, and ALOS-2. The algorithms and data products for SMAP are being developed in the SMAP Science Data System (SDS) Testbed. The algorithms are developed and evaluated in the SDS Testbed using simulated SMAP observations as well as observational data from current airborne and spaceborne L-band sensors including SMOS. The SMAP project is developing a Calibration and Validation (Cal/Val) Plan that is designed to support algorithm development (pre-launch) and data product validation (post-launch). A key component of the Cal/Val Plan is the identification, characterization, and instrumentation of sites that can be used to calibrate and validate the sensor data (Level I) and derived geophysical products (Level 2 and higher). In this presentation we report on the development status of the SMAP data product algorithms, and the planning and implementation of the SMAP Cal/Val program. Several components of the SMAP algorithm development and Cal/Val plans have commonality with those of SMOS, and for this reason there are shared activities and resources that can be utilized between the missions, including in situ networks, ancillary data sets, and long-term monitoring sites.

  13. Density-matrix renormalization group algorithm with multi-level active space.

    PubMed

    Ma, Yingjin; Wen, Jing; Ma, Haibo

    2015-07-21

    The density-matrix renormalization group (DMRG) method, which can deal with a large active space composed of tens of orbitals, is nowadays widely used as an efficient addition to traditional complete active space (CAS)-based approaches. In this paper, we present the DMRG algorithm with a multi-level (ML) control of the active space based on chemical intuition-based hierarchical orbital ordering, which we call ML-DMRG, with its self-consistent field (SCF) variant ML-DMRG-SCF. Ground and excited state calculations of H2O, N2, indole, and Cr2, with comparisons to DMRG references using a fixed number of kept states (M), illustrate that ML-type DMRG calculations can obtain noticeable efficiency gains. It is also shown that the orbital re-ordering based on hierarchical multiple active subspaces may be beneficial for reducing computational time not only for ML-DMRG calculations but also for DMRG ones with fixed M values. PMID:26203012

  14. GASAKe: forecasting landslide activations by a genetic-algorithms based hydrological model

    NASA Astrophysics Data System (ADS)

    Terranova, O. G.; Gariano, S. L.; Iaquinta, P.; Iovine, G. G. R.

    2015-02-01

    GASAKe is a new hydrological model aimed at forecasting the triggering of landslides. The model is based on genetic algorithms and allows thresholds of landslide activation to be obtained from the set of historical occurrences and from the rainfall series. GASAKe can be applied either to single landslides or to sets of similar slope movements in a homogeneous environment. Calibration of the model is based on genetic algorithms and provides families of optimal, discretized solutions (kernels) that maximize the fitness function. Starting from these, the corresponding mobility functions (i.e. the predictive tools) can be obtained through convolution with the rain series. The base time of the kernel is related to the magnitude of the considered slope movement, as well as to the hydro-geological complexity of the site. Generally, smaller values are expected for shallow slope instabilities than for large-scale phenomena. Once validated, the model can be applied to estimate the timing of future landslide activations in the same study area, by employing recorded or forecasted rainfall series. Examples of the application of GASAKe to a medium-scale slope movement (the Uncino landslide at San Fili, in Calabria, Southern Italy) and to a set of shallow landslides (in the Sorrento Peninsula, Campania, Southern Italy) are discussed. In both cases, a successful calibration of the model has been achieved, despite unavoidable uncertainties concerning the dates of landslide occurrence. In particular, for the Sorrento Peninsula case, a fitness of 0.81 has been obtained by calibrating the model against 10 dates of landslide activation; in the Uncino case, a fitness of 1 (i.e. neither missing nor false alarms) has been achieved against 5 activations. As for temporal validation, the experiments performed by considering the extra dates of landslide activation have also proved satisfactory. In view of early-warning applications for civil protection purposes, the capability of the
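
    The mobility-function construction can be illustrated with the short Python sketch below: a discretized kernel (here an arbitrary decaying shape standing in for a GA-calibrated solution) is convolved with a daily rainfall series, and the resulting mobility function is checked against known activation dates. The kernel, rainfall series, and dates are illustrative assumptions, not the GASAKe calibration.

      import numpy as np

      # Discretized kernel: weight of rainfall fallen k days before the current day.
      # In GASAKe-like schemes this shape would be optimized by a genetic algorithm.
      kernel = np.exp(-np.arange(30) / 7.0)
      kernel /= kernel.sum()

      rng = np.random.default_rng(0)
      rain = rng.gamma(shape=0.4, scale=12.0, size=365)      # synthetic daily rainfall [mm]

      # Mobility function: convolution of the kernel with the antecedent rainfall.
      mobility = np.convolve(rain, kernel, mode="full")[: len(rain)]

      activation_days = [120, 300]                           # hypothetical landslide dates
      threshold = min(mobility[d] for d in activation_days)  # critical mobility threshold
      exceed_days = np.flatnonzero(mobility >= threshold)
      false_alarms = [d for d in exceed_days
                      if min(abs(d - a) for a in activation_days) > 3]
      print("threshold:", round(float(threshold), 2))
      print("days above threshold:", len(exceed_days), " false alarms:", len(false_alarms))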

  15. Development of ocean color algorithms for estimating chlorophyll-a concentrations and inherent optical properties using gene expression programming (GEP).

    PubMed

    Chang, Chih-Hua

    2015-03-01

    This paper proposes new inversion algorithms for the estimation of chlorophyll-a concentration (Chla) and the ocean's inherent optical properties (IOPs) from measurements of remote sensing reflectance (Rrs). With in situ data from the NASA bio-optical marine algorithm data set (NOMAD), inversion algorithms were developed with the novel gene expression programming (GEP) approach, which creates, manipulates and selects the most appropriate tree-structured functions based on evolutionary computing. The limitations and validity of the proposed algorithms are evaluated with simulated Rrs spectra with respect to NOMAD, and with a closure test for IOPs obtained at a single reference wavelength. The application of the GEP-derived algorithms is validated against in situ, synthetic and satellite match-up data sets compiled by NASA and the International Ocean Color Coordinating Group (IOCCG). The new algorithms provide Chla and IOP retrievals comparable to those derived by other state-of-the-art regression approaches and those obtained with the semi- and quasi-analytical algorithms, respectively. In practice, there are no significant differences between the GEP, support vector regression, and multilayer perceptron models in terms of overall performance. The GEP-derived algorithms are successfully applied in processing images taken by the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), generating Chla and IOP maps which show better details of developing algal blooms and give more information on the distribution of water constituents between different water bodies. PMID:25836776

  16. An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework

    NASA Astrophysics Data System (ADS)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong

    2016-07-01

    This paper describes the first part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of the aerosol optical properties, in which the major principal components (PCs) of surface reflectance are assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of the four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and the weighting coefficients for each PC of surface reflectance are added into the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of the Jacobians are validated against finite-difference calculations with relative error less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can yield the reconstruction of spectral surface reflectance with errors less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and root-mean-square error of less than 0.003), which suggests self-consistency in the
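
    The principal-component treatment of surface reflectance can be illustrated with the brief Python sketch below: a library of synthetic smooth spectra is decomposed with an SVD-based PCA and individual spectra are reconstructed from the first six components. The synthetic spectra are illustrative stand-ins, not the USGS digital spectral library used in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      wavelengths = np.linspace(400, 900, 120)               # nm

      # Synthetic "library": smooth spectra built from a handful of broad Gaussians.
      centers = rng.uniform(400, 900, size=(300, 4))
      amps = rng.uniform(0.05, 0.3, size=(300, 4))
      library = np.stack([
          (a[:, None] * np.exp(-((wavelengths - c[:, None]) / 120.0) ** 2)).sum(axis=0)
          for a, c in zip(amps, centers)
      ])

      mean = library.mean(axis=0)
      _, _, Vt = np.linalg.svd(library - mean, full_matrices=False)

      def reconstruct(spectrum, n_pc=6):
          """Project a spectrum onto the first n_pc principal components and rebuild it."""
          coeffs = Vt[:n_pc] @ (spectrum - mean)
          return mean + coeffs @ Vt[:n_pc]

      test = library[0]
      err = np.abs(reconstruct(test) - test).max() / test.max()
      print("max relative reconstruction error with 6 PCs:", round(float(err), 4))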

  17. Laboratory Activities for Developing Process Skills.

    ERIC Educational Resources Information Center

    Institute for Services to Education, Inc., Washington, DC.

    This workbook contains laboratory exercises designed for use in a college introductory biology course. Each exercise helps the student develop a basic science skill. The exercises are arranged in a hierarchical sequence suggesting the scientific method. Each skill facilitates the development of succeeding ones. Activities include Use of the…

  18. Development through Participation in Sociocultural Activity.

    ERIC Educational Resources Information Center

    Rogoff, Barbara; And Others

    1995-01-01

    Presents the theoretical position that as people participate in sociocultural activities, they contribute to the development of community practices that simultaneously contribute to the individuals' own development. Illustrates this argument using observations of the developmental processes of individual Girl Scouts and of community traditions of…

  19. Pedunculopontine Gamma Band Activity and Development.

    PubMed

    Garcia-Rill, Edgar; Luster, Brennon; Mahaffey, Susan; MacNicol, Melanie; Hyde, James R; D'Onofrio, Stasia M; Phillips, Cristy

    2015-12-03

    This review highlights the most important discovery in the reticular activating system in the last 10 years, the manifestation of gamma band activity in cells of the reticular activating system (RAS), especially in the pedunculopontine nucleus, which is in charge of waking and rapid eye movement (REM) sleep. The identification of different cell groups manifesting P/Q-type Ca(2+) channels that control waking vs. those that manifest N-type channels that control REM sleep provides novel avenues for the differential control of waking vs. REM sleep. Recent discoveries on the development of this system can help explain the developmental decrease in REM sleep and the basic rest-activity cycle.

  20. Enhanced surface activity of SnO2 thin film verified by LM algorithm

    NASA Astrophysics Data System (ADS)

    Choudhury, Sandip Paul; Kumari, Navnita; Bhattacharjee, Ayon

    2016-04-01

    Impedance studies were conducted on spray-deposited Cu-doped SnO2 thin films. Rietveld analysis provided evidence that no other phase exists due to doping. Controlled injection of ethanol vapor was performed to study the surface activity of these films at different temperatures. The Cole-Cole plots of the ethanol-exposed films and of the unexposed films were constructed at different temperatures and compared. The studies reveal that the electron scattering process was homogeneous in nature and that the film had a narrow relaxation time. The Levenberg-Marquardt algorithm with an unweighted function was used for the theoretical fitting of the Cole-Cole plots, which revealed a weakening of Fermi-level pinning.
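
    A Levenberg-Marquardt fit of impedance spectra to a Cole-Cole-type model can be sketched with scipy.optimize.least_squares (method='lm') as below; the model form, parameter values, and synthetic data are illustrative assumptions rather than the paper's measurements.

      import numpy as np
      from scipy.optimize import least_squares

      def cole_cole(params, omega):
          """Cole-Cole impedance: Z = R_s + R_p / (1 + (j*omega*tau)**alpha)."""
          r_s, r_p, tau, alpha = params
          return r_s + r_p / (1.0 + (1j * omega * tau) ** alpha)

      def residuals(params, omega, z_meas):
          z = cole_cole(params, omega)
          return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

      # Synthetic "measured" spectrum with a little noise.
      omega = np.logspace(1, 6, 60)                     # rad/s
      true = (50.0, 5000.0, 1e-4, 0.85)
      rng = np.random.default_rng(0)
      z_meas = cole_cole(true, omega) + 5.0 * (rng.standard_normal(60)
                                               + 1j * rng.standard_normal(60))

      fit = least_squares(residuals, x0=(10.0, 1000.0, 1e-3, 0.9),
                          args=(omega, z_meas), method="lm")
      print("fitted (R_s, R_p, tau, alpha):", np.round(fit.x, 5))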

  1. Optimisation of halogenase enzyme activity by application of a genetic algorithm.

    PubMed

    Muffler, Kai; Retzlaff, Marco; van Pée, Karl-Heinz; Ulber, Roland

    2007-01-10

    A genetic algorithm (GA) was applied to optimise an enzyme assay composition and thereby the activity of a recombinantly produced FADH(2)-dependent halogenating enzyme. The examined enzyme belongs to the class of halogenases and is capable of halogenating tryptophan regioselectively at position 5. The expressed trp-5-halogenase can therefore be an interesting tool in the manufacturing of serotonin precursors. The application of stochastic search strategies (e.g. GAs) is well suited for fast determination of the global optimum in multidimensional search spaces, where statistical approaches or even the popular classical one-factor-at-a-time method often fail by converging to local optima. The concentrations of six different medium components were optimised, and the maximum yield of the halogenated tryptophan could be increased from 3.5 up to 65%.
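
    A generic real-coded genetic algorithm of the kind used for such assay-composition optimisation is sketched below in Python. The fitness function is a smooth stand-in for the measured halogenation yield, and the population size, mutation width, and component bounds are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      N_COMPONENTS = 6                      # six medium-component concentrations
      LOWER, UPPER = 0.0, 10.0              # allowed concentration range (arbitrary units)

      def fitness(x):
          """Stand-in for a measured enzyme-activity response surface with one optimum."""
          target = np.array([2.0, 7.5, 1.0, 4.0, 6.0, 3.0])
          return np.exp(-np.sum((x - target) ** 2) / 20.0)

      def genetic_algorithm(pop_size=30, generations=60, mut_sigma=0.5):
          pop = rng.uniform(LOWER, UPPER, size=(pop_size, N_COMPONENTS))
          best, best_score = None, -np.inf
          for _ in range(generations):
              scores = np.array([fitness(ind) for ind in pop])
              if scores.max() > best_score:
                  best, best_score = pop[scores.argmax()].copy(), scores.max()
              # Tournament selection of parents.
              idx = rng.integers(0, pop_size, size=(pop_size, 2))
              winners = np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])
              parents = pop[winners]
              # Uniform crossover with a shuffled copy, then Gaussian mutation.
              mates = parents[rng.permutation(pop_size)]
              mask = rng.random((pop_size, N_COMPONENTS)) < 0.5
              children = np.where(mask, parents, mates)
              children = children + mut_sigma * rng.standard_normal(children.shape)
              pop = np.clip(children, LOWER, UPPER)
              pop[0] = best                              # elitism
          return best, best_score

      best_x, best_f = genetic_algorithm()
      print("best composition:", np.round(best_x, 2), " fitness:", round(float(best_f), 3))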

  2. Adaptive RSOV filter using the FELMS algorithm for nonlinear active noise control systems

    NASA Astrophysics Data System (ADS)

    Zhao, Haiquan; Zeng, Xiangping; He, Zhengyou; Li, Tianrui

    2013-01-01

    This paper presents a recursive second-order Volterra (RSOV) filter to solve the problems of signal saturation and other nonlinear distortions that occur in nonlinear active noise control systems (NANC) used for actual applications. Since this nonlinear filter based on an infinite impulse response (IIR) filter structure can model higher than second-order and third-order nonlinearities for systems where the nonlinearities are harmonically related, the RSOV filter is more effective in NANC systems with either a linear secondary path (LSP) or a nonlinear secondary path (NSP). Simulation results clearly show that the RSOV adaptive filter using the multichannel structure filtered-error least mean square (FELMS) algorithm can further greatly reduce the computational burdens and is more suitable to eliminate nonlinear distortions in NANC systems than a SOV filter, a bilinear filter and a third-order Volterra (TOV) filter.

  3. An intelligent active force control algorithm to control an upper extremity exoskeleton for motor recovery

    NASA Astrophysics Data System (ADS)

    Hasbullah Mohd Isa, Wan; Taha, Zahari; Mohd Khairuddin, Ismail; Majeed, Anwar P. P. Abdul; Fikri Muhammad, Khairul; Abdo Hashem, Mohammed; Mahmud, Jamaluddin; Mohamed, Zulkifli

    2016-02-01

    This paper presents the modelling and control of a two degree of freedom upper extremity exoskeleton by means of an intelligent active force control (AFC) mechanism. The Newton-Euler formulation was used in deriving the dynamic modelling of both the anthropometry based human upper extremity as well as the exoskeleton that consists of the upper arm and the forearm. A proportional-derivative (PD) architecture is employed in this study to investigate its efficacy performing joint-space control objectives. An intelligent AFC algorithm is also incorporated into the PD to investigate the effectiveness of this hybrid system in compensating disturbances. The Mamdani Fuzzy based rule is employed to approximate the estimated inertial properties of the system to ensure the AFC loop responds efficiently. It is found that the IAFC-PD performed well against the disturbances introduced into the system as compared to the conventional PD control architecture in performing the desired trajectory tracking.

  4. Development of novel algorithm and real-time monitoring ambulatory system using Bluetooth module for fall detection in the elderly.

    PubMed

    Hwang, J Y; Kang, J M; Jang, Y W; Kim, H

    2004-01-01

    A novel algorithm and a real-time ambulatory monitoring system for fall detection in elderly people are described. Our system comprises an accelerometer, a tilt sensor and a gyroscope. For real-time monitoring, we used Bluetooth. The accelerometer measures kinetic force, while the tilt sensor and gyroscope estimate body posture. We also propose an algorithm for fall detection using signals obtained from the chest-mounted system. To evaluate our system and algorithm, we experimented on three people aged over 26 years. Experiments with four cases (forward fall, backward fall, side fall and sit-stand) were repeated ten times each, and one experiment in daily life activity was performed for each subject. These experiments showed that our system and algorithm could distinguish between falling and daily life activity. Moreover, the accuracy of fall detection is 96.7%. Our system is especially adapted for long-term, real-time ambulatory monitoring of elderly people in emergency situations.
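
    The kind of decision logic such a system can use is illustrated by the simple Python sketch below: a fall is flagged when a large acceleration transient is followed by a sustained near-horizontal trunk posture. The thresholds and synthetic signals are illustrative assumptions, not the authors' algorithm parameters.

      import numpy as np

      FS = 50                       # sampling rate [Hz]
      IMPACT_G = 2.5                # acceleration-magnitude threshold [g]
      LYING_DEG = 60.0              # trunk tilt beyond this angle counts as lying

      def detect_fall(acc_mag, tilt_deg, fs=FS):
          """Flag a fall if an impact is followed within 1-3 s by sustained lying posture."""
          impacts = np.flatnonzero(acc_mag > IMPACT_G)
          for i in impacts:
              window = tilt_deg[i + fs: i + 3 * fs]          # 1-3 s after the impact
              if len(window) >= fs and np.mean(window > LYING_DEG) > 0.8:
                  return True
          return False

      # Synthetic signals: a sit-to-stand transient vs. a forward fall.
      t = np.arange(0, 6, 1 / FS)
      acc_stand = 1.0 + 0.8 * np.exp(-((t - 2.0) ** 2) / 0.02)     # peaks ~1.8 g, stays upright
      tilt_stand = np.full_like(t, 10.0)
      acc_fall = 1.0 + 2.5 * np.exp(-((t - 2.0) ** 2) / 0.02)      # peaks ~3.5 g
      tilt_fall = np.where(t > 2.1, 85.0, 10.0)                    # lying after the impact

      print("sit-stand classified as fall:", detect_fall(acc_stand, tilt_stand))
      print("forward fall classified as fall:", detect_fall(acc_fall, tilt_fall))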

  5. Participatory Design Activities and Agile Software Development

    NASA Astrophysics Data System (ADS)

    Kautz, Karlheinz

    This paper contributes to the studies of design activities in information systems development. It provides a case study of a large agile development project and focusses on how customers and users participated in agile development and design activities in practice. The investigated project utilized the agile method eXtreme Programming. Planning games, user stories and story cards, working software, and acceptance tests structured the customer and user involvement. We found genuine customer and user involvement in the design activities in the form of both direct and indirect participation in the agile development project. The involved customer representatives played informative, consultative, and participative roles in the project. This led to their functional empowerment— the users were enabled to carry out their work to their own satisfaction and in an effective, efficient, and economical manner.

  6. Millimeter-wave Imaging Radiometer (MIR) data processing and development of water vapor retrieval algorithms

    NASA Technical Reports Server (NTRS)

    Chang, L. Aron

    1995-01-01

    This document describes progress on the Millimeter-wave Imaging Radiometer (MIR) data processing task and the development of water vapor retrieval algorithms during the second six-month performance period. Aircraft MIR data from two 1995 field experiments were collected and processed with revised data processing software. Two revised versions of the water vapor retrieval algorithm were developed: one for executing the retrieval on a supercomputer platform, and one using pressure as the vertical coordinate. Two implementations incorporating products from other sensors into the water vapor retrieval system were completed, one using the Special Sensor Microwave Imager (SSM/I) and the other using the High-resolution Interferometer Sounder (HIS). Water vapor retrievals were performed for both airborne MIR data and spaceborne SSM/T-2 data from the TOGA/COARE, CAMEX-1, and CAMEX-2 field experiments. The climatology of water vapor during TOGA/COARE was examined using SSM/T-2 soundings and conventional rawinsondes.

  7. Developing algorithms for predicting protein-protein interactions of homology modeled proteins.

    SciTech Connect

    Martin, Shawn Bryan; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Roe, Diana C.

    2006-01-01

    The goal of this project was to examine the protein-protein docking problem, especially as it relates to homology-based structures, identify the key bottlenecks in current software tools, and evaluate and prototype new algorithms that may be developed to address these bottlenecks. This report describes the current challenges in the protein-protein docking problem: correctly predicting the binding site for the protein-protein interaction and correctly placing the sidechains. Two different and complementary approaches are taken. The first is to predict interaction sites prior to docking, using bioinformatics studies of protein-protein interactions to predict these interaction sites. The second is to improve validation of predicted complexes after docking, using an improved scoring function, incorporating a solvation term, to evaluate proposed docked poses. This scoring function demonstrates significant improvement over current state-of-the-art functions. Initial studies of both approaches are promising and argue for full development of these algorithms.

  8. Development of optimization model for sputtering process parameter based on gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.

    2016-07-01

    In the RF magnetron sputtering process, the desirable layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film has not reached its intended level, the experiments have to be repeated until the desired quality is met. This research proposes the Gravitational Search Algorithm (GSA) as an optimization model to reduce the time and cost spent on thin film fabrication. The optimization model's engine has been developed in Java. The model is based on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters: RF power, deposition time, oxygen flow rate and substrate temperature. The results are promising, and it can be concluded that the performance of the model is satisfactory for this parameter optimization problem. Future work could compare GSA with other nature-inspired algorithms and test them with various sets of data.
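
    The following Python sketch shows a bare-bones Gravitational Search Algorithm over four bounded parameters, analogous to the four deposition parameters named above; the objective function, bounds, and GSA settings are invented stand-ins (the actual model was implemented in Java).

      import numpy as np

      rng = np.random.default_rng(0)
      N, D, ITERS = 20, 4, 100                       # agents, parameters, iterations (assumed)
      low = np.array([50.0, 10.0, 5.0, 100.0])       # assumed lower bounds per parameter
      high = np.array([300.0, 120.0, 50.0, 500.0])   # assumed upper bounds per parameter
      target = np.array([200.0, 60.0, 20.0, 350.0])  # synthetic "best recipe", for demonstration only

      def cost(X):                                   # lower is better (stand-in for film-quality error)
          return np.sum(((X - target) / (high - low)) ** 2, axis=1)

      X = rng.uniform(low, high, size=(N, D))
      V = np.zeros((N, D))
      for t in range(ITERS):
          f = cost(X)
          best, worst = f.min(), f.max()
          m = (f - worst) / (best - worst + 1e-12)   # best agent gets mass 1, worst gets 0
          M = m / (m.sum() + 1e-12)
          G = 100.0 * np.exp(-20.0 * t / ITERS)      # decaying gravitational constant
          A = np.zeros((N, D))
          for i in range(N):                         # gravitational pull of every other agent
              for j in range(N):
                  if i != j:
                      diff = X[j] - X[i]
                      A[i] += rng.random() * G * M[j] * diff / (np.linalg.norm(diff) + 1e-12)
          V = rng.random((N, D)) * V + A             # velocity update with random memory weight
          X = np.clip(X + V, low, high)

      print("best parameter set found:", X[np.argmin(cost(X))].round(2))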

  9. Developing a Moving-Solid Algorithm for Simulating Tsunamis Induced by Rock Sliding

    NASA Astrophysics Data System (ADS)

    Chuang, M.; Wu, T.; Huang, C.; Wang, C.; Chu, C.; Chen, M.

    2012-12-01

    Landslide-generated tsunamis are among the most devastating natural hazards. However, the involvement of a moving obstacle and dynamic free-surface movement makes the numerical simulation a difficult task. To describe the fluid motion, we use a modified two-step projection method to decouple the velocity and pressure fields, with a 3D LES turbulence model. The free-surface movement is tracked by the volume of fluid (VOF) method (Wu, 2004). To describe the effect of the moving obstacle on the fluid, a new moving-solid algorithm (MSA) is developed. We combine ideas from the immersed boundary method (IBM) and the partial-cell treatment (PCT) for specifying the contacting speed on the solid face and for representing the obstacle blocking effect, respectively. By using the concept of the IBM, the cell-center and cell-face velocities can be specified arbitrarily. Because we move the solid obstacle on a fixed grid, the boundary of the solid seldom coincides with the cell faces, which makes it inappropriate to assign the solid boundary velocity to the cell faces. To overcome this problem, the PCT is adopted: the solid surface is conceptually coincident with the cell faces, and the cell-face velocity can be specified as the obstacle velocity. The advantage of this algorithm is a stable pressure field, which is extremely important for coupling with a force-balancing model that describes the solid motion. The model is therefore able to simulate incompressible high-speed fluid motion. To describe the solid motion, the discrete element method (DEM) is adopted. The solid movement at the new time step is predicted and divided into translation and rotation based on Newton's equations and Euler's equations, respectively. The details of the moving-solid algorithm are presented in this paper. The model is then applied to studying rock-slide-generated tsunamis, and the results are validated against laboratory data (Liu and Wu, 2005

  10. Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm

    NASA Astrophysics Data System (ADS)

    Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.

    2014-11-01

    A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network consisting of all its nodes, such that the sum of the weights of all its edges is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool using the well-known Prim's algorithm to construct the minimum spanning tree of a connected, undirected and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve complex network MST problems easily, efficiently and effectively. The selection of an appropriate algorithm is essential, otherwise it is very hard to obtain an optimal result. For a road transportation network, it is essential to find optimal results by considering all the necessary points based on a cost factor (time or distance). This paper solves the minimum spanning tree problem of a road network by finding its minimum span while considering all the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems, and location-allocation problems. Therefore, in this study we have developed a customized GIS tool using a Python script in ArcGIS software for the solution of the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. The tool has a number of advantages: users do not need deep knowledge of the subject, as the tool is user-friendly and gives access to varied information adapted to the needs of the users. This GIS tool for MST can be applied to a nationwide plan called the Prime Minister Gram Sadak Yojana in India to provide optimal all-weather road connectivity to unconnected villages (points). The tool is also useful for constructing highways or railways spanning several
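
    For reference, a textbook implementation of Prim's algorithm on a weight (adjacency) matrix, the rudimentary approach the GIS tool is built around, is sketched below; the small road-network matrix is illustrative.

      import math

      def prim_mst(W):
          """Return MST edges (u, v, weight) for a connected, undirected weight matrix W (0 = no edge)."""
          n = len(W)
          in_tree = [False] * n
          best_cost = [math.inf] * n      # cheapest known edge connecting each node to the tree
          best_from = [-1] * n
          best_cost[0] = 0.0
          edges = []
          for _ in range(n):
              u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best_cost[v])
              in_tree[u] = True
              if best_from[u] >= 0:
                  edges.append((best_from[u], u, W[best_from[u]][u]))
              for v in range(n):
                  if W[u][v] > 0 and not in_tree[v] and W[u][v] < best_cost[v]:
                      best_cost[v], best_from[v] = W[u][v], u
          return edges

      # Illustrative 5-junction road network with travel costs.
      W = [[0, 4, 0, 6, 0],
           [4, 0, 3, 2, 0],
           [0, 3, 0, 1, 7],
           [6, 2, 1, 0, 5],
           [0, 0, 7, 5, 0]]
      mst = prim_mst(W)
      print(mst, "total cost:", sum(w for _, _, w in mst))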

  11. Simple Algorithms for Distributed Leader Election in Anonymous Synchronous Rings and Complete Networks Inspired by Neural Development in Fruit Flies.

    PubMed

    Xu, Lei; Jeavons, Peter

    2015-11-01

    Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
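
    The toy simulation below illustrates only the probabilistic one-bit elimination dynamics in an anonymous complete network; the fly-inspired probability schedule and the termination details of the published algorithms are not reproduced.

      import random

      def elect(n_nodes, seed=1):
          """Each round, every remaining candidate broadcasts one bit with probability 1/2;
          candidates that stayed silent while hearing at least one broadcast withdraw."""
          rng = random.Random(seed)
          candidates = set(range(n_nodes))      # ids are simulation bookkeeping only (nodes are anonymous)
          rounds = 0
          while len(candidates) > 1:
              rounds += 1
              broadcasters = {v for v in candidates if rng.random() < 0.5}
              if broadcasters:                  # silence vs. "one or more messages" is all nodes can sense
                  candidates = broadcasters
          return next(iter(candidates)), rounds

      leader, rounds = elect(64)
      print(f"leader elected among 64 anonymous nodes in {rounds} rounds")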

  12. Development and Implementation of a Hardware In-the-Loop Test Bed for Unmanned Aerial Vehicle Control Algorithms

    NASA Technical Reports Server (NTRS)

    Nyangweso, Emmanuel; Bole, Brian

    2014-01-01

    Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.
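
    As an illustration of the kind of prediction such prognostic algorithms produce, the sketch below estimates time to end of discharge by simple coulomb counting from a current log; the capacity, load, and cutoff are assumptions, not EDGE 540T values.

      import numpy as np

      CAPACITY_AH = 5.0              # assumed pack capacity
      CUTOFF_FRACTION = 0.2          # assumed end-of-discharge threshold (20% charge remaining)

      def time_to_eod(current_log_a, dt_s, soc0=1.0):
          """Predict seconds until the state of charge reaches the cutoff, from a current log."""
          used_ah = np.sum(current_log_a) * dt_s / 3600.0
          soc = soc0 - used_ah / CAPACITY_AH
          mean_a = float(np.mean(current_log_a[-60:]))     # recent average load
          remaining_ah = (soc - CUTOFF_FRACTION) * CAPACITY_AH
          return max(0.0, remaining_ah / mean_a * 3600.0)

      log = np.full(600, 8.0)        # ten minutes of flight at a constant 8 A draw (1 Hz samples)
      print(f"predicted time to end of discharge: {time_to_eod(log, dt_s=1.0):.0f} s")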

  13. Development of region processing algorithm for HSTAMIDS: status and field test results

    NASA Astrophysics Data System (ADS)

    Ngan, Peter; Burke, Sean; Cresci, Roger; Wilson, Joseph N.; Gader, Paul; Ho, K. C.; Bartosz, Elizabeth; Duvoisin, Herbert

    2007-04-01

    The Region Processing Algorithm (RPA) has been developed by the Office of the Army Humanitarian Demining Research and Development (HD R&D) Program as part of improvements for the AN/PSS-14. The effort was a collaboration between the HD R&D Program, L-3 Communication CyTerra Corporation, University of Florida, Duke University and University of Missouri. RPA has been integrated into and implemented in a real-time AN/PSS-14. The subject unit was used to collect data and tested for its performance at three Army test sites within the United States of America. This paper describes the status of the technology and its recent test results.

  14. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1995-01-01

    Several significant accomplishments were made during the present reporting period. (1) Initial simulations to understand the applicability of the MODerate Resolution Imaging Spectrometer (MODIS) 1380 nm band for removing the effects of stratospheric aerosols and thin cirrus clouds were completed using a model for an aged volcanic aerosol. The results suggest that very simple procedures requiring no a priori knowledge of the optical properties of the stratospheric aerosol may be as effective as complex procedures requiring full knowledge of the aerosol properties, except the concentration which is estimated from the reflectance at 1380 nm. The limitations of this conclusion will be examined in the next reporting period; (2) The lookup tables employed in the implementation of the atmospheric correction algorithm have been modified in several ways intended to improve the accuracy and/or speed of processing. These have been delivered to R. Evans for implementation into the MODIS prototype processing algorithm for testing; (3) A method was developed for removal of the effects of the O2 'A' absorption band from SeaWiFS band 7 (745-785 nm). This is important in that SeaWiFS imagery will be used as a test data set for the MODIS atmospheric correction algorithm over the oceans; and (4) Construction of a radiometer, and associated deployment boom, for studying the spectral reflectance of oceanic whitecaps at sea was completed. The system was successfully tested on a cruise off Hawaii on which whitecaps were plentiful during October-November. This data set is now under analysis.

  15. The settling dynamics of flocculating mud-sand mixtures: Part 1—Empirical algorithm development

    NASA Astrophysics Data System (ADS)

    Manning, Andrew James; Baugh, John V.; Spearman, Jeremy R.; Pidduck, Emma L.; Whitehouse, Richard J. S.

    2011-03-01

    , and in most cases produced excessive over-estimations in MSF. The reason for these predictive errors was that this hybrid approach still treated mud and sand separately. This is potentially reasonable if the sediments are segregated and non-interactive, but appears to be unacceptable when the mud and sand are flocculating via an interactive matrix. The MSSV empirical model may be regarded as a `first stage' approximation for scientists and engineers either wishing to investigate mixed-sediment flocculation and its depositional characteristics in a quantifiable framework, or simulate mixed-sediment settling in a numerical sediment transport model where flocculation is occurring. The preliminary assessment concluded that in general when all the SPM and shear stress range data were combined, the net result indicated that the new mixed-sediment settling velocity empirical model was only in error by -3 to -6.7% across the experimental mud:sand mixture ratios. Tuning of the algorithm coefficients is required for the accurate prediction of depositional rates in a specific estuary, as was demonstrated by the algorithm calibration using data from Portsmouth Harbour. The development of a more physics-based model, which captures the essential features of the empirical MSSV model, would be more universally applicable.

  16. A Prototype Hail Detection Algorithm and Hail Climatology Developed with the Advanced Microwave Sounding Unit (AMSU)

    NASA Technical Reports Server (NTRS)

    Ferraro, Ralph; Beauchamp, James; Cecil, Dan; Heymsfeld, Gerald

    2015-01-01

    In previous studies published in the open literature, a strong relationship between the occurrence of hail and microwave brightness temperatures (primarily at 37 and 85 GHz) was documented. These studies were performed with the Nimbus-7 SMMR, the TRMM Microwave Imager (TMI) and, most recently, the Aqua AMSR-E sensor. This led to climatologies of hail frequency from TMI and AMSR-E; however, limitations include the geographical domain of the TMI sensor (35 S to 35 N) and the overpass time of the Aqua satellite (1:30 am/pm local time), both of which limit accurate mapping of hail events over the global domain and the full diurnal cycle. Nonetheless, these studies presented exciting new applications for passive microwave sensors. Since 1998, NOAA and EUMETSAT have been operating the AMSU-A/B and the MHS on several operational satellites: NOAA-15 through NOAA-19, and MetOp-A and -B. With multiple satellites in operation since 2000, the AMSU/MHS sensors provide near-global coverage every 4 hours, offering much greater temporal sampling than TRMM or AMSR-E. With similar observation frequencies near 30 and 85 GHz, and an additional three channels in the 183 GHz water vapor band, the potential exists to detect strong convection associated with severe storms on a more comprehensive time and space scale. In this study, we develop a prototype AMSU-based hail detection algorithm through the use of collocated satellite and surface hail reports over the continental U.S. for a 12-year period (2000-2011). Compared with the surface observations, the algorithm detects approximately 40 percent of hail occurrences. The simple threshold algorithm is then used to generate a hail climatology based on all available AMSU observations during 2000-11, stratified in several ways, including total hail occurrence by month (March through September), total annual occurrence, and the diurnal cycle. Independent comparisons are made to similar data sets derived from other
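
    The sketch below shows the general shape of a brightness-temperature threshold test for scattering-depressed pixels; the threshold value and sample pixels are illustrative only and are not the calibrated thresholds derived in the study.

      import numpy as np

      TB_THRESHOLD_K = 180.0    # assumed scattering-depression threshold (K), for illustration

      def flag_possible_hail(tb_85ghz_like):
          """Return a boolean mask of pixels whose brightness temperature is strongly depressed."""
          return np.asarray(tb_85ghz_like) < TB_THRESHOLD_K

      scene = np.array([[260.0, 240.0, 175.0],
                        [255.0, 168.0, 230.0]])
      print(flag_possible_hail(scene))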

  17. Microphysical particle properties derived from inversion algorithms developed in the framework of EARLINET

    NASA Astrophysics Data System (ADS)

    Müller, D.; Böckmann, C.; Kolgotin, A.; Schneidenbach, L.; Chemyakin, E.; Rosemann, J.; Znak, P.; Romanov, A.

    2015-12-01

    We present a summary on the current status of two inversion algorithms that are used in EARLINET for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on manually controlled inversion of optical data which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithms allow us to derive particle effective radius, and volume and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies in which light-absorption needs to be known with high accuracy. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high and low absorbing aerosols. We discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work on the basis of a few exemplary simulations with synthetic optical data. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested aerosol scenarios that are considered highly unlikely, e.g., the lidar ratios fall outside the commonly accepted range of values measured with Raman lidar, even though the underlying microphysical particle properties are not uncommon. The goal of this part of the study is to test robustness of the algorithms toward their ability to identify aerosol types that have not been measured so far, but cannot be ruled out based on our current knowledge of

  18. Description of ALARMA: the alarm algorithm developed for the Nuclear Car Wash

    SciTech Connect

    Luu, T; Biltoft, P; Church, J; Descalle, M; Hall, J; Manatt, D; Mauger, J; Norman, E; Petersen, D; Pruet, J; Prussin, S; Slaughter, D

    2006-11-28

    The goal of any alarm algorithm should be to provide the necessary tools to derive confidence limits on whether fissile materials are present in cargo containers. It should be able to extract these limits from (usually) noisy and/or weak data while maintaining a false alarm rate (FAR) that is economically suitable for port operations. It should also be able to perform its analysis within a reasonably short amount of time (i.e. {approx} seconds). To achieve this, it is essential that the algorithm be able to identify and subtract any interference signature that might otherwise be confused with a fissile signature. Lastly, the algorithm itself should be user-intuitive and user-friendly so that port operators with little or no experience with detection algorithms may use it with relative ease. In support of the Nuclear Car Wash project at Lawrence Livermore Laboratory, we have developed an alarm algorithm that satisfies the above requirements. The description of this alarm algorithm, dubbed ALARMA, is the purpose of this technical report. The experimental setup of the nuclear car wash has been well documented [1, 2, 3]. The presence of fissile materials is inferred by examining the {beta}-delayed gamma spectrum induced after a brief neutron irradiation of cargo, particularly in the high-energy region above approximately 2.5 MeV. In this region naturally occurring gamma rays are virtually non-existent. Thermal-neutron induced fission of {sup 235}U and {sup 239}Pu, on the other hand, leaves a unique {beta}-delayed spectrum [4]. This spectrum comes from decays of fission products having half-lives as long as 30 seconds, many of which have high Q-values. Since high-energy photons penetrate matter more freely, it is natural to look for unique fissile signatures in this energy region after neutron irradiation. The goal of this interrogation procedure is a 95% success rate of detection of as little as 5 kilograms of fissile material while retaining

  19. White Light Modeling, Algorithm Development, and Validation on the Micro-arcsecond Metrology Testbed

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Regher, Martin; Shen, Tsae Pyng

    2004-01-01

    The Space Interferometry Mission (SIM), scheduled for launch in early 2010, is an optical interferometer that will perform narrow-angle and global wide-angle astrometry with unprecedented accuracy, providing differential position accuracies of 1 μas and global accuracies of 4 μas in position, proper motion and parallax. The astrometric observations of the SIM instrument are performed via delay measurements provided by three Michelson-type, white light interferometers. Two 'guide' interferometers acquire fringes on bright guide stars in order to make highly precise measurements of variations in spacecraft attitude, while the third interferometer performs the science measurement. SIM derives its performance from a combination of precise fringe measurements of the interfered starlight (a few ten-thousandths of a wave) and very precise (tens of picometers) relative distance measurements made between a set of fiducials. The focus of the present paper is on the development and analysis of algorithms for accurate white light estimation, and on validating some of these algorithms on the MicroArcsecond Testbed.

  20. Development of an algorithm for production of inactivated arbovirus antigens in cell culture

    PubMed Central

    Goodman, C.H.; Russell, B.J.; Velez, J.O.; Laven, J.J.; Nicholson, W.L; Bagarozzi, D.A.; Moon, J.L.; Bedi, K.; Johnson, B.W.

    2015-01-01

    Arboviruses are medically important pathogens that cause human disease ranging from a mild fever to encephalitis. Laboratory diagnosis is essential to differentiate arbovirus infections from other pathogens with similar clinical manifestations. The Arboviral Diseases Branch (ADB) reference laboratory at the CDC Division of Vector-Borne Diseases (DVBD) produces reference antigens used in serological assays such as the virus-specific immunoglobulin M antibody-capture enzyme-linked immunosorbent assay (MAC-ELISA). Antigen production in cell culture has largely replaced the use of suckling mice; however, the methods are not directly transferable. The development of a cell culture antigen production algorithm for nine arboviruses from the three main arbovirus families, Flaviviridae, Togaviridae, and Bunyaviridae, is described here. Virus cell culture growth and harvest conditions were optimized, inactivation methods were evaluated, and concentration procedures were compared for each virus. Antigen performance was evaluated by the MAC-ELISA at each step of the procedure. The antigen production algorithm is a framework for standardization of methodology and quality control; however, a single antigen production protocol was not applicable to all arboviruses and needed to be optimized for each virus. PMID:25102428

  1. Development of an algorithm for production of inactivated arbovirus antigens in cell culture.

    PubMed

    Goodman, C H; Russell, B J; Velez, J O; Laven, J J; Nicholson, W L; Bagarozzi, D A; Moon, J L; Bedi, K; Johnson, B W

    2014-11-01

    Arboviruses are medically important pathogens that cause human disease ranging from a mild fever to encephalitis. Laboratory diagnosis is essential to differentiate arbovirus infections from other pathogens with similar clinical manifestations. The Arboviral Diseases Branch (ADB) reference laboratory at the CDC Division of Vector-Borne Diseases (DVBD) produces reference antigens used in serological assays such as the virus-specific immunoglobulin M antibody-capture enzyme-linked immunosorbent assay (MAC-ELISA). Antigen production in cell culture has largely replaced the use of suckling mice; however, the methods are not directly transferable. The development of a cell culture antigen production algorithm for nine arboviruses from the three main arbovirus families, Flaviviridae, Togaviridae, and Bunyaviridae, is described here. Virus cell culture growth and harvest conditions were optimized, inactivation methods were evaluated, and concentration procedures were compared for each virus. Antigen performance was evaluated by the MAC-ELISA at each step of the procedure. The antigen production algorithm is a framework for standardization of methodology and quality control; however, a single antigen production protocol was not applicable to all arboviruses and needed to be optimized for each virus.

  3. Development of TIF based figuring algorithm for deterministic pitch tool polishing

    NASA Astrophysics Data System (ADS)

    Yi, Hyun-Su; Kim, Sug-Whan; Yang, Ho-Soon; Lee, Yun-Woo

    2007-12-01

    Pitch is perhaps the oldest material used for optical polishing, leaving superior surface texture, and has been used widely on the optics shop floor. However, because of its unpredictable removal characteristics, pitch tool polishing has rarely been analysed quantitatively, and many optics shops rely heavily on the optician's "feel" even today. In order to bring a degree of process controllability to pitch tool polishing, we added motorized tool motions to a conventional Draper-type polishing machine and modelled the tool path in the absolute machine coordinate system. We then produced a number of tool influence functions (TIFs) both from an analytical model and from a series of experimental polishing runs using the pitch tool. The theoretical TIFs agreed well with the experimental TIFs, to a profile accuracy of 79% in terms of shape. A surface figuring algorithm was then developed in-house utilizing both theoretical and experimental TIFs. We are currently undertaking a series of trial figuring experiments to prove the performance of the polishing algorithm, and the early results indicate that highly deterministic material removal control with the pitch tool can be achieved to a certain level of form error. The machine renovation, TIF theory and experimental confirmation, and figuring simulation results are reported, together with implications for deterministic polishing.

  4. Development of a pharmacogenetic-guided warfarin dosing algorithm for Puerto Rican patients

    PubMed Central

    Ramos, Alga S; Seip, Richard L; Rivera-Miranda, Giselle; Felici-Giovanini, Marcos E; Garcia-Berdecia, Rafael; Alejandro-Cowan, Yirelia; Kocherla, Mohan; Cruz, Iadelisse; Feliu, Juan F; Cadilla, Carmen L; Renta, Jessica Y; Gorowski, Krystyna; Vergara, Cunegundo; Ruaño, Gualberto; Duconge, Jorge

    2012-01-01

    Aim This study was aimed at developing a pharmacogenetic-driven warfarin-dosing algorithm in 163 admixed Puerto Rican patients on stable warfarin therapy. Patients & methods A multiple linear-regression analysis was performed using the log-transformed effective warfarin dose as the dependent variable, and combining CYP2C9 and VKORC1 genotyping with other relevant nongenetic clinical and demographic factors as independent predictors. Results The model explained more than two-thirds of the observed variance in warfarin dose among Puerto Ricans, and also produced significantly better 'ideal dose' estimates than two previously published pharmacogenetic models and clinical algorithms, with the greatest benefit seen in patients ultimately requiring <7 mg/day. We also assessed the clinical validity of the model using an independent validation cohort of 55 Puerto Rican patients from Hartford, CT, USA (R2 = 51%). Conclusion Our findings provide the basis for planning prospective pharmacogenetic studies to demonstrate the clinical utility of genotyping warfarin-treated Puerto Rican patients.
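
    A schematic of the modelling step described above (ordinary least squares on the log-transformed stable dose) is sketched below with synthetic data; the predictor names, codings, and coefficients are placeholders, not the published model.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 163
      age = rng.integers(30, 85, n).astype(float)
      weight = rng.normal(75, 12, n)
      cyp2c9 = rng.integers(0, 2, n).astype(float)   # carrier of a reduced-function allele (assumed coding)
      vkorc1 = rng.integers(0, 2, n).astype(float)   # carrier of a low-dose VKORC1 allele (assumed coding)

      # Synthetic "true" relationship used only to generate example stable doses.
      log_dose = (1.8 - 0.005 * age + 0.004 * weight - 0.45 * cyp2c9 - 0.35 * vkorc1
                  + rng.normal(0, 0.15, n))

      # Multiple linear regression of log(dose) on genetic and clinical predictors.
      X = np.column_stack([np.ones(n), age, weight, cyp2c9, vkorc1])
      beta, *_ = np.linalg.lstsq(X, log_dose, rcond=None)
      pred = X @ beta
      r2 = 1 - np.sum((log_dose - pred) ** 2) / np.sum((log_dose - log_dose.mean()) ** 2)
      print("coefficients:", beta.round(3), " R^2 = %.2f" % r2)
      print("predicted dose (mg/day) for first patient:", round(float(np.exp(pred[0])), 2))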

  5. Search for Active-State Conformation of Drug Target GPCR Using Real-Coded Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Ishino, Yoko; Harada, Takanori; Aida, Misako

    G-Protein coupled receptors (GPCRs) comprise a large superfamily of proteins and are a target for nearly 50% of drugs in clinical use today. GPCRs have a unique structural motif, seven transmembrane helices, and it is known that agonists and antagonists dock with a GPCR in its ``active'' and ``inactive'' condition, respectively. Knowing conformations of both states is eagerly anticipated for elucidation of drug action mechanism. Since GPCRs are difficult to crystallize, the 3D structures of these receptors have not yet been determined by X-ray crystallography, except the inactive-state conformation of two proteins. The conformation of them enabled the inactive form of other GPCRs to be modeled by computer-aided homology modeling. However, to date, the active form of GPCRs has not been solved. This paper describes a novel method to predict the 3D structure of an active-state GPCR aiming at molecular docking-based virtual screening using real-coded genetic algorithm (real-coded GA), receptor-ligand docking simulations, and molecular dynamics (MD) simulations. The basic idea of the method is that the MD is first used to calculate an average 3D coordinates of all atoms of a GPCR protein against heat fluctuation on the pico- or nano- second time scale, and then real-coded GA involving receptor-ligand docking simulations functions to determine the rotation angle of each helix as a movement on wider time scale. The method was validated using human leukotriene B4 receptor BLT1 as a sample GPCR. Our study demonstrated that the established evolutionary search for the active state of the leukotriene receptor provided the appropriate 3D structure of the receptor to dock with its agonists.

  6. Development of a new metal artifact reduction algorithm by using an edge preserving method for CBCT imaging

    NASA Astrophysics Data System (ADS)

    Kim, Juhye; Nam, Haewon; Lee, Rena

    2015-07-01

    In CT (computed tomography) images, metal materials such as tooth supplements or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm by using an edge-preserving filter and the MATLAB program (Mathworks, version R2012a). The proposed algorithm consists of 6 steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and new image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added to the Shepp-Logan phantom to create metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. After these data had been obtained, the proposed algorithm was applied, and the results were compared with the original image (with metal artifacts, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were done. Compared with the original image with metal artifacts and with the image corrected by linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifacts. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
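
    The core interpolation step that such MAR pipelines build on can be sketched as sinogram in-painting across the segmented metal trace; the synthetic sinogram below is a placeholder, and the edge-preserving filtering stage of the proposed method is not reproduced here.

      import numpy as np

      def inpaint_metal_trace(sinogram, metal_mask):
          """Linearly interpolate across metal-affected detector bins for every projection view."""
          repaired = sinogram.copy()
          n_views, n_bins = sinogram.shape
          bins = np.arange(n_bins)
          for v in range(n_views):
              bad = metal_mask[v]
              if bad.any() and (~bad).sum() >= 2:
                  repaired[v, bad] = np.interp(bins[bad], bins[~bad], sinogram[v, ~bad])
          return repaired

      views, bins = 8, 64
      sino = np.tile(np.hanning(bins) * 10.0, (views, 1))      # smooth synthetic sinogram
      mask = np.zeros_like(sino, dtype=bool)
      mask[:, 30:34] = True                                    # pretend metal trace
      corrupted = sino.copy()
      corrupted[mask] += 50.0                                  # bright metal streaks
      repaired = inpaint_metal_trace(corrupted, mask)
      print("max residual error in repaired trace: %.3f" % np.abs(repaired - sino)[mask].max())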

  7. Development of a generally applicable morphokinetic algorithm capable of predicting the implantation potential of embryos transferred on Day 3

    PubMed Central

    Petersen, Bjørn Molt; Boel, Mikkel; Montag, Markus; Gardner, David K.

    2016-01-01

    STUDY QUESTION Can a generally applicable morphokinetic algorithm suitable for Day 3 transfers of time-lapse monitored embryos originating from different culture conditions and fertilization methods be developed for the purpose of supporting the embryologist's decision on which embryo to transfer back to the patient in assisted reproduction? SUMMARY ANSWER The algorithm presented here can be used independently of culture conditions and fertilization method and provides predictive power not surpassed by other published algorithms for ranking embryos according to their blastocyst formation potential. WHAT IS KNOWN ALREADY Generally applicable algorithms have so far been developed only for predicting blastocyst formation. A number of clinics have reported validated implantation prediction algorithms, which have been developed based on clinic-specific culture conditions and clinical environment. However, a generally applicable embryo evaluation algorithm based on actual implantation outcome has not yet been reported. STUDY DESIGN, SIZE, DURATION Retrospective evaluation of data extracted from a database of known implantation data (KID) originating from 3275 embryos transferred on Day 3 conducted in 24 clinics between 2009 and 2014. The data represented different culture conditions (reduced and ambient oxygen with various culture medium strategies) and fertilization methods (IVF, ICSI). The capability to predict blastocyst formation was evaluated on an independent set of morphokinetic data from 11 218 embryos which had been cultured to Day 5. PARTICIPANTS/MATERIALS, SETTING, METHODS The algorithm was developed by applying automated recursive partitioning to a large number of annotation types and derived equations, progressing to a five-fold cross-validation test of the complete data set and a validation test of different incubation conditions and fertilization methods. The results were expressed as receiver operating characteristics curves using the area under the

  8. Moving toward Teamwork through Professional Development Activities

    ERIC Educational Resources Information Center

    Fitzgerald, Meghan M.; Theilheimer, Rachel

    2013-01-01

    This qualitative study of three Head Start Centers analyzed surveys, interviews, and focus group data to determine how education coordinators, teachers, and teacher assistants believed professional development activities could support teamwork at their centers. The researchers sorted data related to teamwork into four categories: knowledge and…

  9. Development of a space activity suit

    NASA Technical Reports Server (NTRS)

    Annis, J. F.; Webb, P.

    1971-01-01

    The development of a series of prototype space activity suit (SAS) assemblies is discussed. The SAS is a new type of pressure suit designed especially for extravehicular activity. It consists of a set of carefully tailored elastic fabric garments which have been engineered to supply sufficient counterpressure to the body to permit subjects to breathe O2 at pressures up to 200 mm Hg without circulatory difficulty. A closed, positive pressure breathing system (PPBS) and a full bubble helmet were also developed to complete the system. The ultimate goal of the SAS is to improve the range of activity and decrease the energy cost of work associated with wearing conventional gas-filled pressure suits. Results are presented from both laboratory (1 atmosphere) and altitude chamber tests with subjects wearing various SAS assemblies. In laboratory tests lasting up to three hours, the SAS was worn while subjects breathed O2 at pressures up to 170 mm Hg without developing physiological problems. The only physiological symptoms apparent were a moderate tachycardia related to breathing pressures above 130 mm Hg, and a small collection of edema fluid in the hands. Both problems were considered to be related to areas of under-pressurization by the garments. These problems, it is suggested, can ultimately be corrected by the development of new elastic fabrics and tailoring techniques. The energy cost of activity, and the mobility and dexterity of subjects in the SAS, were found to be superior to those in comparable tests of subjects in full pressure suits.

  10. Developing Web Literacy in Collaborative Inquiry Activities

    ERIC Educational Resources Information Center

    Kuiper, Els; Volman, Monique; Terwel, Jan

    2009-01-01

    Although many children are technically skilled in using the Web, their competences to use it in a critical and meaningful way are usually less well developed. In this article, we report on a multiple case study focusing on the possibilities and limitations of collaborative inquiry activities as an appropriate context to acquire Web literacy skills…

  11. Active Learning through Toy Design and Development

    ERIC Educational Resources Information Center

    Sirinterlikci, Arif; Zane, Linda; Sirinterlikci, Aleea L.

    2009-01-01

    This article presents an initiative that is based on active learning pedagogy by engaging elementary and middle school students in the toy design and development field. The case study presented in this article is about student learning experiences during their participation in the TOYchallenge National Toy Design Competition. Students followed the…

  12. Interactive Video Training and Development Activity.

    ERIC Educational Resources Information Center

    Troy State Univ., AL.

    The Interactive Video Training and Development Activity of Troy State University (Troy, Alabama) is described in this report. The project has trained more than 30 people in the production of interactive video programs since its inception in 1983. Since 1985, training programs have been offered twice a year to individuals within and outside the…

  13. Child Development: An Active Learning Approach

    ERIC Educational Resources Information Center

    Levine, Laura E.; Munsch, Joyce

    2010-01-01

    Within each chapter of this innovative topical text, the authors engage students by demonstrating the wide range of real-world applications of psychological research connected to child development. In particular, the distinctive Active Learning features incorporated throughout the book foster a dynamic and personal learning process for students.…

  14. Developing Image Processing Meta-Algorithms with Data Mining of Multiple Metrics

    PubMed Central

    Cunha, Alexandre; Toga, A. W.; Parker, D. Stott

    2014-01-01

    People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation. PMID:24653748
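
    A minimal illustration of the meta-algorithm idea, ranking candidate results with a battery of metrics and picking the best combined rank, is given below using toy 1-D signals in place of registered images; the metrics and candidates are assumptions for demonstration.

      import numpy as np

      def mse(a, b):  return float(np.mean((a - b) ** 2))
      def mad(a, b):  return float(np.mean(np.abs(a - b)))
      def ncc(a, b):  return float(np.corrcoef(a, b)[0, 1])

      reference = np.sin(np.linspace(0, 6, 200))
      candidates = {f"shift_{s}": np.sin(np.linspace(0, 6, 200) + s) for s in (0.0, 0.1, 0.4)}

      # Build the metric table, rank candidates per metric, and combine the ranks.
      table = {name: {"mse": mse(reference, c), "mad": mad(reference, c), "ncc": ncc(reference, c)}
               for name, c in candidates.items()}

      def rank(metric, reverse):   # reverse=True when larger is better (e.g. correlation)
          order = sorted(table, key=lambda n: table[n][metric], reverse=reverse)
          return {name: i for i, name in enumerate(order)}

      ranks = [rank("mse", False), rank("mad", False), rank("ncc", True)]
      combined = {name: sum(r[name] for r in ranks) for name in table}
      print("metric table:", table)
      print("selected result:", min(combined, key=combined.get))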

  15. Development of Great Lakes algorithms for the Nimbus-G coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Tanis, F. J.; Lyzenga, D. R.

    1981-01-01

    A series of experiments in the Great Lakes designed to evaluate the application of the Nimbus G satellite Coastal Zone Color Scanner (CZCS) were conducted. Absorption and scattering measurement data were reduced to obtain a preliminary optical model for the Great Lakes. Available optical models were used in turn to calculate subsurface reflectances for expected concentrations of chlorophyll-a pigment and suspended minerals. Multiple nonlinear regression techniques were used to derive CZCS water quality prediction equations from Great Lakes simulation data. An existing atmospheric model was combined with a water model to provide the necessary simulation data for evaluation of the preliminary CZCS algorithms. A CZCS scanner model was developed which accounts for image distorting scanner and satellite motions. This model was used in turn to generate mapping polynomials that define the transformation from the original image to one configured in a polyconic projection. Four computer programs (FORTRAN IV) for image transformation are presented.

  16. Development of a Collins-type cryocooler floating piston control algorithm

    NASA Astrophysics Data System (ADS)

    Hogan, Jake; Hannon, Charles L.; Brisson, John

    2012-06-01

    The Collins-type cryocooler uses a floating piston design for the working fluid expansion. The piston floats between a cold volume, where the working fluid is expanded, and a warm volume. The piston is shuttled between opposite ends of the closed cylinder by opening and closing valves connecting several reservoirs at various pressures to the warm volume. Ideally, these pressures should be distributed between the high and low system pressure to gain good control of the piston motion. In this work, a numerical quasi-steady thermodynamic model is developed for the piston cycle. The model determines the steady state pressure distribution of the reservoirs for a given control algorithm. The results are then extended to show how valve timing modifications can be used to overcome helium leakage past the piston during operation.

  17. Development of a Low-Lift Chiller Controller and Simplified Precooling Control Algorithm - Final Report

    SciTech Connect

    Gayeski, N.; Armstrong, Peter; Alvira, M.; Gagne, J.; Katipamula, Srinivas

    2011-11-30

    KGS Buildings LLC (KGS) and Pacific Northwest National Laboratory (PNNL) have developed a simplified control algorithm and prototype low-lift chiller controller suitable for model-predictive control in a demonstration project of low-lift cooling. Low-lift cooling is a highly efficient cooling strategy conceived to enable low or net-zero energy buildings. A low-lift cooling system consists of a high efficiency low-lift chiller, radiant cooling, thermal storage, and model-predictive control to pre-cool thermal storage overnight on an optimal cooling rate trajectory. We call the properly integrated and controlled combination of these elements a low-lift cooling system (LLCS). This document is the final report for that project.

  18. Development of Web-Based Menu Planning Support System and its Solution Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Kashima, Tomoko; Matsumoto, Shimpei; Ishii, Hiroaki

    2009-10-01

    Recently, lifestyle-related diseases have become an object of public concern, while at the same time people are becoming more health conscious. We assume that insufficient circulation of knowledge about dietary habits is an essential factor contributing to lifestyle-related diseases. This paper focuses on everyday meals close to our daily life and proposes a well-balanced menu planning system as a preventive measure against lifestyle-related diseases. The system is developed with a Web-based frontend and provides multi-user services and menu information sharing capabilities like social networking services (SNS). The system is implemented on a Web server running Apache (HTTP server software), MySQL (database management system), and PHP (scripting language for dynamic Web pages). For the menu planning, a genetic algorithm is applied by formulating the problem as multidimensional 0-1 integer programming.
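
    A toy genetic algorithm for a 0-1 menu selection formulation of this kind is sketched below; the dish data, nutrition targets, and GA settings are invented for illustration and are not taken from the described system.

      import random

      random.seed(3)
      dishes = [  # (name, kcal, protein_g) -- illustrative values
          ("rice", 350, 6), ("grilled fish", 250, 30), ("salad", 80, 3),
          ("miso soup", 60, 4), ("tofu", 120, 12), ("fruit", 90, 1),
          ("omelette", 200, 14), ("noodles", 400, 10),
      ]
      TARGET_KCAL, TARGET_PROT = 700, 45

      def fitness(bits):
          kcal = sum(d[1] for d, b in zip(dishes, bits) if b)
          prot = sum(d[2] for d, b in zip(dishes, bits) if b)
          return -(abs(kcal - TARGET_KCAL) + 10 * abs(prot - TARGET_PROT))   # higher is better

      def evolve(pop_size=30, gens=60, p_mut=0.05):
          pop = [[random.randint(0, 1) for _ in dishes] for _ in range(pop_size)]
          for _ in range(gens):
              pop.sort(key=fitness, reverse=True)
              parents = pop[: pop_size // 2]
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, len(dishes))
                  child = a[:cut] + b[cut:]                                  # one-point crossover
                  child = [1 - g if random.random() < p_mut else g for g in child]   # bit-flip mutation
                  children.append(child)
              pop = parents + children
          return max(pop, key=fitness)

      best = evolve()
      print("selected menu:", [d[0] for d, b in zip(dishes, best) if b], "score:", fitness(best))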

  19. Develop algorithms to improve detectability of defects in Sonic IR imaging NDE

    NASA Astrophysics Data System (ADS)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2016-02-01

    Sonic infrared (IR) technology is relatively new in the NDE family. It is a fast, wide-area imaging method that combines ultrasound excitation and infrared imaging: the former applies ultrasound energy to induce frictional heating in defects, while the latter captures the IR emission from the target. This technology can detect both surface and subsurface defects, such as cracks and disbonds/delaminations, in various materials, whether metal/metal alloy or composite. However, certain defects may produce only a very small IR signature buried in noise or heating patterns. In such cases, effectively extracting the defect signals becomes critical to identifying the defects. In this paper, we present algorithms developed to improve the detectability of defects in Sonic IR.

  20. Algorithm and code development for unsteady three-dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru

    1993-01-01

    In the last two decades, there have been extensive developments in computational aerodynamics, which constitutes a major part of the general area of computational fluid dynamics. Such developments are essential to advance the understanding of the physics of complex flows, to complement expensive wind-tunnel tests, and to reduce the overall design cost of an aircraft, particularly in the area of aeroelasticity. Aeroelasticity plays an important role in the design and development of aircraft, particularly modern aircraft, which tend to be more flexible. Several phenomena that can be dangerous and limit the performance of an aircraft occur because of the interaction of the flow with flexible components. For example, an aircraft with highly swept wings may experience vortex-induced aeroelastic oscillations. Also, undesirable aeroelastic phenomena due to the presence and movement of shock waves occur in the transonic range. Aeroelastically critical phenomena, such as a low transonic flutter speed, have been known to occur through limited wind-tunnel tests and flight tests. Aeroelastic tests require extensive cost and risk. An aeroelastic wind-tunnel experiment is an order of magnitude more expensive than a parallel experiment involving only aerodynamics. By complementing the wind-tunnel experiments with numerical simulations the overall cost of the development of aircraft can be considerably reduced. In order to accurately compute aeroelastic phenomenon it is necessary to solve the unsteady Euler/Navier-Stokes equations simultaneously with the structural equations of motion. These equations accurately describe the flow phenomena for aeroelastic applications. At Ames a code, ENSAERO, is being developed for computing the unsteady aerodynamics and aeroelasticity of aircraft and it solves the Euler/Navier-Stokes equations. The purpose of this contract is to continue the algorithm enhancements of ENSAERO and to apply the code to complicated geometries. During the last year

  1. Hierarchical, multi-sensor based classification of daily life activities: comparison with state-of-the-art algorithms using a benchmark dataset.

    PubMed

    Leutheuser, Heike; Schuldhaus, Dominik; Eskofier, Bjoern M

    2013-01-01

    Insufficient physical activity is the 4th leading risk factor for mortality. Methods for assessing the individual daily life activity (DLA) are of major interest in order to monitor the current health status and to provide feedback about the individual quality of life. The conventional assessment of DLAs with self-reports induces problems like reliability, validity, and sensitivity. The assessment of DLAs with small and light-weight wearable sensors (e.g. inertial measurement units) provides a reliable and objective method. State-of-the-art human physical activity classification systems differ in e.g. the number and kind of sensors, the performed activities, and the sampling rate. Hence, it is difficult to compare newly proposed classification algorithms to existing approaches in literature and no commonly used dataset exists. We generated a publicly available benchmark dataset for the classification of DLAs. Inertial data were recorded with four sensor nodes, each consisting of a triaxial accelerometer and a triaxial gyroscope, placed on wrist, hip, chest, and ankle. Further, we developed a novel, hierarchical, multi-sensor based classification system for the distinction of a large set of DLAs. Our hierarchical classification system reached an overall mean classification rate of 89.6% and was diligently compared to existing state-of-the-art algorithms using our benchmark dataset. For future research, the dataset can be used in the evaluation process of new classification algorithms and could speed up the process of getting the best performing and most appropriate DLA classification system.
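
    A bare-bones, rule-based stand-in for the hierarchical idea (static vs. dynamic first, then finer classes) is sketched below; the features, thresholds, and synthetic windows are assumptions, whereas the published system uses trained classifiers on four inertial sensor nodes.

      import numpy as np

      def classify_window(acc_long):
          """acc_long: chest-sensor longitudinal-axis acceleration (g) over one analysis window."""
          var = float(np.var(acc_long))
          mean = float(np.mean(acc_long))
          if var < 0.01:                                # stage 1: static vs. dynamic
              return "lying" if mean < 0.5 else "standing/sitting"    # stage 2a: posture
          return "walking" if var < 0.5 else "running"                # stage 2b: intensity

      rng = np.random.default_rng(0)
      windows = {
          "quiet standing": rng.normal(1.0, 0.02, 256),
          "lying":          rng.normal(0.2, 0.02, 256),
          "walking":        1.0 + 0.3 * np.sin(np.linspace(0, 20, 256)),
          "running":        1.0 + 1.2 * np.sin(np.linspace(0, 40, 256)),
      }
      for name, acc in windows.items():
          print(f"{name:15s} -> {classify_window(acc)}")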

  2. Development of a new genetic algorithm to solve the feedstock scheduling problem in an anaerobic digester

    NASA Astrophysics Data System (ADS)

    Cram, Ana Catalina

    As worldwide environmental awareness grows, alternative sources of energy have become important for mitigating climate change. Biogas in particular reduces the greenhouse gas emissions that contribute to global warming and has the potential to provide 25% of the annual demand for natural gas in the U.S. In 2011, 55,000 metric tons of methane emissions were reduced and 301 metric tons of carbon dioxide emissions were avoided through the use of biogas alone. Biogas is produced by anaerobic digestion through the fermentation of organic material. It is mainly composed of methane, with a concentration ranging from 50 to 80%; carbon dioxide accounts for 20 to 50%, with small amounts of hydrogen, carbon monoxide and nitrogen. Biogas production systems are anaerobic digestion facilities, and the optimal operation of an anaerobic digester requires the scheduling of all batches from multiple feedstocks during a specific time horizon. The availability times, biomass quantities, biogas production rates and storage decay rates must all be taken into account for maximal biogas production to be achieved during the planning horizon. Little work has been done to optimize the scheduling of different types of feedstock in anaerobic digestion facilities to maximize the total biogas produced by these systems. Therefore, in the present thesis, a new genetic algorithm is developed with the main objective of obtaining the optimal sequence in which different feedstocks are processed and the optimal time to allocate to each feedstock in the digester, so as to maximize biogas production while considering different feedstock types, arrival times and decay rates. Moreover, all batches must be processed in the digester within a specified time, with the restriction that only one batch can be processed at a time. The developed algorithm is applied to three examples, and a comparison with results obtained in previous studies is presented.
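
    The sketch below shows a stripped-down evolutionary search (swap mutation only, no crossover) over batch sequences with an exponential storage-decay penalty, to illustrate the kind of objective involved; all feedstock numbers are invented, and the thesis algorithm itself is richer than this.

      import math, random

      random.seed(7)
      # (name, arrival_day, potential_biogas_m3, decay_per_day, processing_days) -- illustrative values
      batches = [("manure", 0, 120, 0.02, 3), ("silage", 1, 200, 0.05, 4),
                 ("food waste", 2, 150, 0.10, 2), ("glycerin", 4, 90, 0.01, 1),
                 ("straw", 5, 60, 0.03, 3)]

      def total_biogas(order):
          t, total = 0.0, 0.0
          for idx in order:
              name, arrival, potential, decay, dur = batches[idx]
              start = max(t, arrival)                     # cannot start before the batch arrives
              total += potential * math.exp(-decay * (start - arrival))   # decay while waiting
              t = start + dur                             # only one batch in the digester at a time
          return total

      def evolve(pop_size=40, gens=200, p_mut=0.2):
          pop = [random.sample(range(len(batches)), len(batches)) for _ in range(pop_size)]
          for _ in range(gens):
              pop.sort(key=total_biogas, reverse=True)
              survivors = pop[: pop_size // 2]
              children = []
              while len(survivors) + len(children) < pop_size:
                  child = random.choice(survivors)[:]
                  if random.random() < p_mut:             # swap mutation keeps the sequence a permutation
                      i, j = random.sample(range(len(child)), 2)
                      child[i], child[j] = child[j], child[i]
                  children.append(child)
              pop = survivors + children
          return max(pop, key=total_biogas)

      best = evolve()
      print("best sequence:", [batches[i][0] for i in best], "biogas:", round(total_biogas(best), 1), "m3")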

  3. Developing Multiple Diverse Potential Designs for Heat Transfer Utilizing Graph Based Evolutionary Algorithms

    SciTech Connect

    David J. Muth Jr.

    2006-09-01

    This paper examines the use of graph based evolutionary algorithms (GBEAs) to find multiple acceptable solutions for heat transfer in engineering systems during the optimization process. GBEAs are a type of evolutionary algorithm (EA) in which a topology, or geography, is imposed on an evolving population of solutions. The rates at which solutions can spread within the population are controlled by the choice of topology. As in nature, geography can be used to develop and sustain diversity within the solution population. Altering the choice of graph can create a more or less diverse population of potential solutions. The choice of graph also affects the convergence rate of the EA and the number of mating events required for convergence. The engineering system examined in this paper is a biomass-fueled cookstove used in developing nations for household cooking. In this cookstove, wood is combusted in a small combustion chamber and the resulting hot gases are used to heat the stove’s cooking surface. The spatial temperature profile of the cooking surface is determined by a series of baffles that direct the flow of hot gases. The optimization goal is to find baffle configurations that provide an even temperature distribution on the cooking surface. Often in engineering, the goal of optimization is not to find the single optimum solution but rather to identify a number of good solutions that can be used as a starting point for detailed engineering design. Because of this, a key aspect of evolutionary optimization is the diversity of the solutions found. The key conclusion of this paper is that GBEAs can be used to create the multiple good solutions needed to support engineering design.
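
    A minimal sketch of the graph-based idea, individuals placed on a ring graph and mating only with graph neighbours, is given below on a toy 1-D objective; all settings are placeholders for the cookstove baffle problem.

      import math, random

      random.seed(5)
      N, GENS = 24, 150

      def fitness(x):                                   # toy multimodal objective, higher is better
          return math.sin(5 * x) + 0.5 * math.sin(13 * x)

      pop = [random.uniform(0.0, 3.0) for _ in range(N)]     # one individual per ring node
      for _ in range(GENS):
          new_pop = pop[:]
          for i in range(N):
              j = random.choice([(i - 1) % N, (i + 1) % N])  # mate only with a ring neighbour
              child = (pop[i] + pop[j]) / 2 + random.gauss(0.0, 0.05)
              child = min(3.0, max(0.0, child))
              if fitness(child) > fitness(pop[i]):           # local replacement on node i only
                  new_pop[i] = child
          pop = new_pop

      best = max(pop, key=fitness)
      niches = {round(x, 1) for x in pop}
      print(f"best x = {best:.3f}, fitness = {fitness(best):.3f}, distinct niches kept: {len(niches)}")

    Restricting mating to graph neighbours slows the spread of any single genotype through the population, which is how the graph topology preserves the multiple distinct solutions the paper is after.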

  4. Pedunculopontine Gamma Band Activity and Development

    PubMed Central

    Garcia-Rill, Edgar; Luster, Brennon; Mahaffey, Susan; MacNicol, Melanie; Hyde, James R.; D’Onofrio, Stasia M.; Phillips, Cristy

    2015-01-01

    This review highlights the most important discovery in the reticular activating system in the last 10 years, the manifestation of gamma band activity in cells of the reticular activating system (RAS), especially in the pedunculopontine nucleus, which is in charge of waking and rapid eye movement (REM) sleep. The identification of different cell groups manifesting P/Q-type Ca2+ channels that control waking vs. those that manifest N-type channels that control REM sleep provides novel avenues for the differential control of waking vs. REM sleep. Recent discoveries on the development of this system can help explain the developmental decrease in REM sleep and the basic rest-activity cycle. PMID:26633526

  5. Pedunculopontine Gamma Band Activity and Development.

    PubMed

    Garcia-Rill, Edgar; Luster, Brennon; Mahaffey, Susan; MacNicol, Melanie; Hyde, James R; D'Onofrio, Stasia M; Phillips, Cristy

    2015-01-01

    This review highlights the most important discovery in the reticular activating system in the last 10 years, the manifestation of gamma band activity in cells of the reticular activating system (RAS), especially in the pedunculopontine nucleus, which is in charge of waking and rapid eye movement (REM) sleep. The identification of different cell groups manifesting P/Q-type Ca(2+) channels that control waking vs. those that manifest N-type channels that control REM sleep provides novel avenues for the differential control of waking vs. REM sleep. Recent discoveries on the development of this system can help explain the developmental decrease in REM sleep and the basic rest-activity cycle. PMID:26633526

  6. Detection of surface algal blooms using the newly developed algorithm surface algal bloom index (SABI)

    NASA Astrophysics Data System (ADS)

    Alawadi, Fahad

    2010-10-01

    Quantifying ocean colour properties has evolved over the past two decades from being able to merely detect biological activity to the ability to estimate chlorophyll concentration using optical satellite sensors like MODIS and MERIS. The production of chlorophyll spatial distribution maps is a good indicator of plankton biomass (primary production) and is useful for the tracing of oceanographic currents, jets and blooms, including harmful algal blooms (HABs). Depending on the type of HABs involved and the environmental conditions, if their concentration rises above a critical threshold, it can impact the flora and fauna of the aquatic habitat through the introduction of the so-called "red tide" phenomenon. The estimation of chlorophyll concentration is derived from quantifying the spectral relationship between the blue and the green bands reflected from the water column. This spectral relationship is employed in the standard ocean colour chlorophyll-a (Chlor-a) product, but is incapable of detecting certain macro-algal species that float near to or at the water surface in the form of dense filaments or mats. The ability to accurately identify algal formations that sometimes appear as oil spill look-alikes in satellite imagery contributes towards the reduction of false-positive incidents arising from oil spill monitoring operations. Such algal formations that occur in relatively high concentrations may experience, as in land vegetation, what is known as the "red-edge" effect. This phenomenon occurs at the highest reflectance slope between the maximum absorption in the red due to the surrounding ocean water and the maximum reflectance in the infra-red due to the photosynthetic pigments present in the surface algae. A new algorithm, termed the surface algal bloom index (SABI), has been proposed to delineate the spatial distributions of floating micro-algal species such as cyanobacteria or exposed inter-tidal vegetation such as seagrass. This algorithm was

  7. GASAKe: forecasting landslide activations by a genetic-algorithms-based hydrological model

    NASA Astrophysics Data System (ADS)

    Terranova, O. G.; Gariano, S. L.; Iaquinta, P.; Iovine, G. G. R.

    2015-07-01

    GASAKe is a new hydrological model aimed at forecasting the triggering of landslides. The model is based on genetic algorithms and allows one to obtain thresholds for the prediction of slope failures using dates of landslide activations and rainfall series. It can be applied to either single landslides or a set of similar slope movements in a homogeneous environment. Calibration of the model provides families of optimal, discretized solutions (kernels) that maximize the fitness function. Starting from the kernels, the corresponding mobility functions (i.e., the predictive tools) can be obtained through convolution with the rain series. The base time of the kernel is related to the magnitude of the considered slope movement, as well as to the hydro-geological complexity of the site. Generally, shorter base times are expected for shallow slope instabilities compared to larger-scale phenomena. Once validated, the model can be applied to estimate the timing of future landslide activations in the same study area, by employing measured or forecasted rainfall series. Examples of application of GASAKe to a medium-size slope movement (the Uncino landslide at San Fili, in Calabria, southern Italy) and to a set of shallow landslides (in the Sorrento Peninsula, Campania, southern Italy) are discussed. In both cases, a successful calibration of the model has been achieved, despite unavoidable uncertainties concerning the dates of occurrence of the slope movements. In particular, for the Sorrento Peninsula case, a fitness of 0.81 has been obtained by calibrating the model against 10 dates of landslide activation; in the Uncino case, a fitness of 1 (i.e., neither missing nor false alarms) has been achieved using five activations. As for temporal validation, the experiments performed by considering further dates of activation have also proved satisfactory. In view of early-warning applications for civil protection, the capability of the model to simulate the occurrences of the
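
    The central operation described here, obtaining a mobility function by convolving a calibrated kernel with the rainfall series and flagging exceedances of a threshold, can be sketched as follows; the kernel weights, rainfall values and threshold are made-up numbers, not the calibrated kernels from the paper.

    ```python
    import numpy as np

    rain = np.array([0.0, 12.5, 30.0, 5.0, 0.0, 45.0, 20.0, 0.0, 0.0, 60.0])  # daily rainfall (mm), illustrative
    kernel = np.array([0.5, 0.3, 0.15, 0.05])   # discretized kernel over a 4-day base time (assumed weights)

    # Mobility function: discrete convolution of the kernel with the antecedent rainfall series.
    mobility = np.convolve(rain, kernel)[: len(rain)]

    threshold = 30.0                             # an operational threshold would come from the calibration
    for day, m in enumerate(mobility):
        print(f"day {day:2d}  mobility {m:6.1f}" + ("  ALERT" if m >= threshold else ""))
    ```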

  8. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  9. Developing a modified SEBAL algorithm that is responsive to advection by using limited weather data

    NASA Astrophysics Data System (ADS)

    Mkhwanazi, Mcebisi

    The use of Remote Sensing ET algorithms in water management, especially for agricultural purposes, is increasing, and more models are being introduced. The Surface Energy Balance Algorithm for Land (SEBAL) and its variant, Mapping Evapotranspiration with Internalized Calibration (METRIC), are some of the models that are being widely used. While SEBAL has several advantages over other RS models, including that it does not require prior knowledge of soil, crop and other ground details, it has the downside of underestimating evapotranspiration (ET) on days when there is advection, which may be the case on most days in arid and semi-arid areas. METRIC, however, has been modified to account for advection, but in doing so it requires hourly weather data. In most developing countries, while accurate estimates of ET are required, the weather data necessary to use METRIC may not be available. This research was therefore meant to develop a modified version of SEBAL that requires only minimal weather data that may be available in these areas and still estimates ET accurately. The data used to develop this model were minimum and maximum temperatures, wind data, preferably the run of wind in the afternoon, and wet bulb temperature. These were used to quantify the advected energy that would increase ET in the field. This was a two-step process; the first was developing the model for standard conditions, described as a healthy cover of alfalfa, 40-60 cm tall and not short of water. Under standard conditions, when ET estimated using modified SEBAL was compared with lysimeter-measured ET, the modified SEBAL model had a Mean Bias Error (MBE) of 2.2% compared to -17.1% for the original SEBAL. The Root Mean Square Error (RMSE) was lower for the modified SEBAL model at 10.9% compared to 25.1% for the original SEBAL. The modified SEBAL model, developed on an alfalfa field in Rocky Ford, was then tested on other crops; beans and wheat. It was also tested on
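
    The error statistics reported above can be reproduced in a few lines; the ET values below are placeholders rather than the study's lysimeter data, and the normalisation by mean measured ET is one common convention that the abstract does not specify.

    ```python
    import numpy as np

    et_lysimeter = np.array([7.8, 8.4, 6.9, 9.1])   # measured ET (mm/day), placeholder values
    et_model     = np.array([8.0, 8.7, 7.1, 9.0])   # modified-SEBAL estimates, placeholder values

    mean_obs = np.mean(et_lysimeter)
    mbe  = 100 * np.mean(et_model - et_lysimeter) / mean_obs                  # Mean Bias Error (%)
    rmse = 100 * np.sqrt(np.mean((et_model - et_lysimeter) ** 2)) / mean_obs  # Root Mean Square Error (%)
    print(f"MBE  = {mbe:5.1f} %")
    print(f"RMSE = {rmse:5.1f} %")
    ```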

  10. Active vibration reduction of a flexible structure bonded with optimised piezoelectric pairs using half and quarter chromosomes in genetic algorithms

    NASA Astrophysics Data System (ADS)

    Daraji, A. H.; Hale, J. M.

    2012-08-01

    The optimal placement of sensors and actuators in active vibration control is limited by the number of candidates in the search space. The search space of a small structure discretized into one hundred elements for optimising the location of ten actuators gives 1.73 × 10^13 possible solutions, one of which is the global optimum. In this work, a new quarter and half chromosome technique based on symmetry is developed, by which the search space for optimisation of sensor/actuator locations in active vibration control of flexible structures may be greatly reduced. The technique is applied to the optimisation of eight and ten actuators located on a 500 × 500 mm square plate, for which the search space is reduced by up to 99.99%. The technique also allows the genetic algorithm program to update natural frequencies and mode shapes in each generation, so that the global optimal solution is found in a greatly reduced number of generations. An isotropic plate with piezoelectric sensor/actuator pairs bonded to its surface was investigated using the finite element method and Hamilton's principle based on first-order shear deformation theory. The placement and feedback gain of ten and eight sensor/actuator pairs were optimised for a cantilever and a clamped-clamped plate to attenuate the first six modes of vibration, using minimization of a linear quadratic index as the objective function.
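
    The quoted search-space size is the number of ways to choose actuator sites among the candidate elements; a quick check, together with a simplified reading of the symmetry-based reduction (restricting candidates to one quarter of the plate), is sketched below.

    ```python
    from math import comb

    full = comb(100, 10)       # 10 actuator sites chosen from 100 candidate elements
    quarter = comb(25, 10)     # candidates restricted to one quarter of the plate and mirrored by symmetry
    print(f"full search space:        {full:.3e}")   # ~1.73e13
    print(f"quarter-symmetry space:   {quarter:.3e}")
    print(f"reduction:                {100 * (1 - quarter / full):.4f} %")   # > 99.99 %
    ```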

  11. Development of novel active transport membrane devices

    SciTech Connect

    Laciak, D.V.

    1994-11-01

    Air Products has undertaken a research program to fabricate and evaluate gas separation membranes based upon promising "active-transport" (AT) materials recently developed in our laboratories. Active Transport materials are ionic polymers and molten salts which undergo reversible interaction or reaction with ammonia and carbon dioxide. The materials are useful for separating these gases from mixtures with hydrogen. Moreover, AT membranes have the unique property of possessing high permeability towards ammonia and carbon dioxide but low permeability towards hydrogen, and can thus be used to permeate these components from a gas stream while retaining hydrogen at high pressure.

  12. Production of proxy datasets in support of GOES-R algorithm development

    NASA Astrophysics Data System (ADS)

    Hillger, Don; Brummer, Renate; Grasso, Louie; Sengupta, Manajit; DeMaria, Robert; DeMaria, Mark

    2009-08-01

    Realistic simulated satellite imagery for GOES-R ABI, using state-of-the-art mesoscale modeling and accurate radiative transfer, is being produced at the Cooperative Institute for Research in the Atmosphere (CIRA) and used in developing and testing new products. Products which have been produced in support of the GOES-R Algorithm Working Group (AWG) include 6-hour imagery at 5-minute intervals for 4 GOES-R ABI bands (2.25 μm, 3.9 μm, 10.35 μm, and 11.2 μm) that include fire hotspots. The imagery was initially produced at 400 m resolution and a point-spread function was applied to the data to create ABI-resolution imagery. Also created was corresponding imagery for the current GOES at 2 bands (3.9 μm and 10.7 μm). These fire hotspots were simulated for 4 different cases over Kansas, Central America, and California. Additionally, high quality imagery for 10 GOES-R ABI bands (3.9 μm and higher) was produced for 4 extreme weather events. These simulations include a lake effect snow case, a severe weather case, Hurricane Wilma, and Hurricane Lili. All simulations for extreme weather events were also performed for the current GOES and compared with available imagery for quality control purposes. Future work focuses on the creation of additional fire proxy datasets including true-color imagery for 3 ABI visible bands. This project also supports the GOES-R AWG Aviation Team in their effort to test their convective initiation algorithm by providing simulated ABI datasets for bands between 2.25 μm and 13.3 μm for a severe weather case. In addition, simulated ABI was generated from MSG infrared (IR) window band imagery and corresponding simulated ABI for the 7 tropical cyclones from 2006-2008 that became hurricanes in the east Atlantic for evaluation of the GOES-R ADT algorithm conducted by the University of Wisconsin Cooperative Institute for Meteorological Satellite Studies (CIMSS).

  13. Development of a Near-Real Time Hail Damage Swath Identification Algorithm for Vegetation

    NASA Technical Reports Server (NTRS)

    Bell, Jordan R.; Molthan, Andrew L.; Schultz, Lori A.; McGrath, Kevin M.; Burks, Jason E.

    2015-01-01

    The Midwest is home to one of the world's largest agricultural growing regions. Between late May and early September, with irrigation and seasonal rainfall, these crops are able to reach full maturity. Using moderate to high resolution remote sensors, monitoring of the vegetation can be achieved using the red and near-infrared wavelengths. These wavelengths allow for the calculation of vegetation indices, such as the Normalized Difference Vegetation Index (NDVI). Vegetation growth and greenness in this region evolve uniformly as the growing season progresses. However, one of the biggest threats to Midwest vegetation during this period is thunderstorms that bring large hail and damaging winds. Hail and wind damage to crops can be very expensive to crop growers, and damage can be spread over long swaths associated with the tracks of the damaging storms. Damage to the vegetation is apparent in remotely sensed imagery and visible from space: changes develop slowly over time as lightly damaged crops wilt, or appear more readily if the storms strip material from the crops or destroy them completely. Previous work on identifying these hail damage swaths used manual interpretation of moderate and higher resolution satellite imagery. With the development of an automated and near-real time hail damage swath identification algorithm, detection can be improved and more damage indicators created in a faster and more efficient way. The automated detection of hail damage swaths will examine short-term, large changes in the vegetation by differencing near-real time eight-day NDVI composites and comparing them to post-storm imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard Terra and Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard Suomi NPP. In addition, land surface temperatures from these instruments will be examined as
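
    The change detection described rests on NDVI computed from red and near-infrared reflectance and differenced between a pre-storm composite and post-storm imagery; a minimal sketch follows, with synthetic reflectance arrays and an assumed drop threshold standing in for MODIS/VIIRS data and the operational criteria.

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red)

    # Synthetic 3x3 reflectance tiles standing in for an eight-day composite and a post-storm scene.
    red_pre,  nir_pre  = np.full((3, 3), 0.05), np.full((3, 3), 0.45)
    red_post = np.array([[0.05] * 3, [0.12] * 3, [0.05] * 3])
    nir_post = np.array([[0.45] * 3, [0.20] * 3, [0.45] * 3])

    delta = ndvi(nir_post, red_post) - ndvi(nir_pre, red_pre)
    hail_swath = delta < -0.3        # assumed threshold for a short-term, large NDVI drop
    print(hail_swath)
    ```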

  14. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees-of-freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm, which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to only the offset of the centroid of the cockpit relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.

  15. Development of algorithms for capacitance imaging techniques for fluidized bed flow fields. 1990 Annual report

    SciTech Connect

    Loudin, W.J.

    1991-01-01

    The objective of this research is to provide support for the instrumentation concept of a High Resolution Capacitance Imaging System (HRCIS). The work involves the development and evaluation of the mathematical theory and associated models and algorithms which reduce the electronic measurements to valid physical characterizations. The research and development require the investigation of techniques to solve large systems of equations based on capacitance measurements for various electrode configurations in order to estimate densities of materials in a cross-section of a fluidized bed. Capacitance measurements are made for 400 connections of the 32-electrode system; 400 corresponding electric-field curves are constructed by solving a second order partial differential equation. These curves are used to partition the circular disk into 193 regions called pixels, and the density of material in each pixel is to be estimated. Two methods of approximating densities have been developed and consideration of a third method has been initiated. One method (Method 1) is based on products of displacement currents for intersecting electric-field curves on a cross section. For each pixel one point of intersection is chosen, and the product of the capacitance measurements is found. Both the product and the square-root-of-product seem to yield good relative distribution of densities.

  16. Development of algorithms for capacitance imaging techniques for fluidized bed flow fields

    SciTech Connect

    Loudin, W.J.

    1991-01-01

    The objective of this research is to provide support for the instrumentation concept of a High Resolution Capacitance Imaging System (HRCIS). The work involves the development and evaluation of the mathematical theory and associated models and algorithms which reduce the electronic measurements to valid physical characterizations. The research and development require the investigation of techniques to solve large systems of equations based on capacitance measurements for various electrode configurations in order to estimate densities of materials in a cross-section of a fluidized bed. Capacitance measurements are made for 400 connections of the 32-electrode system; 400 corresponding electric-field curves are constructed by solving a second order partial differential equation. These curves are used to partition the circular disk into 193 regions called pixels, and the density of material in each pixel is to be estimated. Two methods of approximating densities have been developed and consideration of a third method has been initiated. One method (Method 1) is based on products of displacement currents for intersecting electric-field curves on a cross section. For each pixel one point of intersection is chosen, and the product of the capacitance measurements is found. Both the product and the square-root-of-product seem to yield good relative distribution of densities.

  17. Innovative approach in the development of computer assisted algorithm for spine pedicle screw placement.

    PubMed

    Solitro, Giovanni F; Amirouche, Farid

    2016-04-01

    Pedicle screws are typically used for fusion, percutaneous fixation, and as a means of gripping a spinal segment. The screws act as rigid and stable anchor points to bridge and connect with a rod as part of a construct. The foundation of the fusion is directly related to the placement of these screws. Malposition of pedicle screws causes intraoperative complications such as pedicle fractures and dural lesions and is a contributing factor to fusion failure. Computer assisted spine surgery (CASS) and patient-specific drill templates were developed to reduce this failure rate, but the trajectory of the screws remains a decision driven by anatomical landmarks that are often not easily defined. Current data show the need for a robust and reliable technique that prevents screw misplacement. Furthermore, there is a need to enhance screw insertion guides to overcome the distortion of anatomical landmarks, which is viewed as a limiting factor by current techniques. The objective of this study is to develop a method and mathematical lemmas that are fundamental to the development of computer algorithms for pedicle screw placement. Using the proposed methodology, we show how we can generate automated optimal safe screw insertion trajectories based on the identification of a set of intrinsic parameters. The results, obtained from the validation of the proposed method on two full thoracic segments, are similar to previous morphological studies. The simplicity of the method, being pedicle arch based, makes it applicable to vertebrae where landmarks are either not well defined, altered or distorted. PMID:26922675

  18. Genetic algorithms for the application of Activated Sludge Model No. 1.

    PubMed

    Kim, S; Lee, H; Kim, J; Kim, C; Ko, J; Woo, H; Kim, S

    2002-01-01

    The genetic algorithm (GA) has been integrated into the IWA ASM No. 1 to calibrate important stoichiometric and kinetic parameters. The evolutionary feature of the GA was used to configure the multiple local optima as well as the global optimum. The objective function of the optimization was designed to minimize the difference between estimated and measured effluent concentrations at the activated sludge system. Both steady state and dynamic data of the simulation benchmark were used for calibration using a denitrification layout. Depending upon the confidence intervals and objective functions, the proposed method provided distributions of the parameter space. Field data were collected and applied to validate the calibration capacity of the GA. Dynamic calibration was suggested to capture periodic variations of inflow concentrations. Also, in order to verify the proposed method in a real wastewater treatment plant, measured data sets for substrate concentrations were obtained from the Haeundae wastewater treatment plant and used to estimate parameters in the dynamic system. The simulation results with calibrated parameters matched well with the observed concentrations of effluent COD. PMID:11936660
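
    The objective function described, minimising the mismatch between simulated and measured effluent concentrations over a candidate parameter set, might look like the following in outline; the parameter names, the toy run_asm1 stand-in and the measured values are assumptions, not an actual ASM No. 1 implementation.

    ```python
    import numpy as np

    measured_cod = np.array([48.0, 52.0, 47.5, 50.0])   # effluent COD observations (mg/L), placeholders

    def run_asm1(params):
        """Stand-in for an ASM No. 1 simulation returning effluent COD at each sampling time."""
        y_h, mu_h = params                               # e.g. heterotrophic yield and max growth rate
        return np.full_like(measured_cod, 50.0 - 20.0 * (y_h - 0.6) - 2.0 * (mu_h - 4.0))  # toy response

    def objective(params):
        """Sum of squared differences between simulated and measured effluent concentrations;
        a GA would search the parameter space to minimise this value."""
        return float(np.sum((run_asm1(params) - measured_cod) ** 2))

    print(objective([0.67, 4.0]))
    ```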

  19. The development of a near-real time hail damage swath identification algorithm for vegetation

    NASA Astrophysics Data System (ADS)

    Bell, Jordan R.

    The central United States is primarily covered in agricultural lands with a growing season that peaks during the same time as the region's climatological maximum for severe weather. These severe thunderstorms can bring large hail that can cause extensive areas of crop damage, which can be difficult to survey from the ground. Satellite remote sensing can help with the identification of these damaged areas. This study examined three techniques for identifying damage using satellite imagery that could be used in the development of a near-real time algorithm formulated for the detection of damage to agriculture caused by hail. The three techniques: a short term Normalized Difference Vegetation Index (NDVI) change product, a modified Vegetation Health Index (mVHI) that incorporates both NDVI and land surface temperature (LST), and a feature detection technique based on NDVI and LST anomalies were tested on a single training case and five case studies. Skill scores were computed for each of the techniques during the training case and each case study. Among the best-performing case studies, the probability of detection (POD) for the techniques ranged from 0.527 - 0.742. Greater skill was noted for environments that occurred later in the growing season over areas where the land cover was consistently one or two types of uniform vegetation. The techniques struggled in environments where the land cover was not able to provide uniform vegetation, resulting in POD of 0.067 - 0.223. The feature detection technique was selected to be used for the near-real-time algorithm, based on the consistent performance throughout the entire growing season.
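
    The probability of detection quoted above is a standard contingency-table skill score; a small helper with made-up hit, miss and false-alarm counts illustrates the calculation.

    ```python
    def skill_scores(hits, misses, false_alarms):
        pod = hits / (hits + misses)                  # probability of detection
        far = false_alarms / (hits + false_alarms)    # false alarm ratio
        csi = hits / (hits + misses + false_alarms)   # critical success index
        return pod, far, csi

    # Illustrative pixel counts for one case study, not the thesis's actual contingency tables.
    pod, far, csi = skill_scores(hits=742, misses=258, false_alarms=310)
    print(f"POD={pod:.3f}  FAR={far:.3f}  CSI={csi:.3f}")
    ```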

  20. Development of the Tardivo Algorithm to Predict Amputation Risk of Diabetic Foot

    PubMed Central

    Tardivo, João Paulo; Baptista, Maurício S.; Correa, João Antonio; Adami, Fernando; Pinhal, Maria Aparecida Silva

    2015-01-01

    Diabetes is a chronic disease that affects almost 19% of the elderly population in Brazil and similar percentages around the world. Amputation of lower limbs in diabetic patients who present foot complications is a common occurrence, with a significant reduction in quality of life and heavy costs on the health system. Unfortunately, there is no easy protocol to define the conditions under which amputation should be considered. The main objective of the present study is to create a simple prognostic score to evaluate the diabetic foot, called the Tardivo Algorithm. Calculation of the score is based on three main factors: Wagner classification, signs of peripheral arterial disease (PAD), which is evaluated using the Peripheral Arterial Disease Classification, and the location of ulcers. The final score is obtained by multiplying the values of the individual factors. Patients with good peripheral vascularization received a value of 1, while clinical signs of ischemia received a value of 2 (PAD 2). Ulcer location was defined as forefoot, midfoot and hindfoot. The conservative treatment used in patients with scores below 12 was based on a recently developed Photodynamic Therapy (PDT) protocol; 85.5% of these patients presented a good outcome and avoided amputation. The results showed that scores of 12 or higher represented a significantly higher probability of amputation (odds ratio by logistic regression, 95% CI 12.2–1886.5). The Tardivo Algorithm is a simple prognostic score for the diabetic foot, easily accessible by physicians. It helps to determine the amputation risk and the best treatment, whether conservative or surgical management. PMID:26281044
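
    As described, the score is the product of three factors; a sketch of that arithmetic is given below. The numeric coding of ulcer location is an assumption made for illustration; only the Wagner grade, the PAD value of 1 or 2, the multiplicative rule and the threshold of 12 are stated above.

    ```python
    def tardivo_score(wagner_grade, pad, ulcer_location):
        """Product of the three factors described above.

        wagner_grade   : Wagner classification grade
        pad            : 1 = good peripheral vascularization, 2 = clinical signs of ischemia
        ulcer_location : coded here as forefoot=1, midfoot=2, hindfoot=3 (assumed coding)
        """
        location_value = {"forefoot": 1, "midfoot": 2, "hindfoot": 3}[ulcer_location]
        return wagner_grade * pad * location_value

    score = tardivo_score(wagner_grade=3, pad=2, ulcer_location="midfoot")
    print(score, "higher amputation risk" if score >= 12 else "conservative treatment considered")
    ```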

  1. Development and validation of a simple algorithm for initiation of CPAP in neonates with respiratory distress in Malawi

    PubMed Central

    Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth

    2015-01-01

    Background Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress, in developing countries including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory Distress and Y=Yes) CPAP, a simple algorithm to identify neonates with respiratory distress who would benefit from CPAP. Objective To validate the TRY CPAP algorithm for neonates with respiratory distress in a low-resource setting. Methods We constructed an algorithm using a combination of vital signs, tone and birth weight to determine the need for CPAP in neonates with respiratory distress. Neonates admitted to the neonatal ward of Queen Elizabeth Central Hospital, in Blantyre, Malawi, were assessed in a prospective, cross-sectional study. Nurses and paediatricians-in-training assessed neonates to determine whether they required CPAP using the TRY CPAP algorithm. To establish the accuracy of the TRY CPAP algorithm in evaluating the need for CPAP, their assessment was compared with the decision of a neonatologist blinded to the TRY CPAP algorithm findings. Results 325 neonates were evaluated over a 2-month period; 13% were deemed to require CPAP by the neonatologist. The inter-rater reliability with the algorithm was 0.90 for nurses and 0.97 for paediatricians-in-training using the neonatologist's assessment as the reference standard. Conclusions The TRY CPAP algorithm has the potential to be a simple and reliable tool to assist nurses and clinicians in identifying neonates who require treatment with CPAP in low-resource settings. PMID:25877290

  2. Activity at work, innovation and sustainable development.

    PubMed

    Béguin, P; Duarte, F; Lima, F; Pueyo, V

    2012-01-01

    The aim of this paper is to present and discuss a French-Brazilian project (CAPES-COFECUB) centered on the relations between sustainable development, innovation and the changes in work activities that accompany these innovations for sustainable development. Sustainable development calls for an integrated approach to three dimensions: social equity, economic viability and environmental sustainability. In order to achieve this integration, considerable innovation efforts are required. However, work, understood as a productive act, is largely absent from current research. Starting from the idea that work is a "fundamental need", the goal of this project is to propose innovative methods that can be used for designing production systems from the perspective of sustainable development. PMID:22316705

  3. Technology development activities supporting tank waste remediation

    SciTech Connect

    Bonner, W.F.; Beeman, G.H.

    1994-06-01

    This document summarizes work being conducted under the U.S. Department of Energy's Office of Technology Development (EM-50) in support of the Tank Waste Remediation System (TWRS) Program. The specific work activities are organized by the following categories: safety, characterization, retrieval, barriers, pretreatment, low-level waste, and high-level waste. In most cases, the activities presented here were identified as supporting tank remediation by EM-50 integrated program or integrated demonstration lead staff and the selections were further refined by contractor staff. Data sheets were prepared from DOE-HQ guidance to the field issued in September 1993. Activities were included if a significant portion of the work described provides technology potentially needed by TWRS; consequently, not all parts of each description necessarily support tank remediation.

  4. Historical development of active middle ear implants.

    PubMed

    Carlson, Matthew L; Pelosi, Stanley; Haynes, David S

    2014-12-01

    Active middle ear implants (AMEIs) are sophisticated technologies designed to overcome many of the shortcomings of conventional hearing aids, including feedback, distortion, and occlusion effect. Three AMEIs are currently approved by the US Food and Drug Administration for implantation in patients with sensorineural hearing loss. In this article, the history of AMEI technologies is reviewed, individual component development is outlined, past and current implant systems are described, and design and implementation successes and dead ends are highlighted. Past and ongoing challenges facing AMEI development are reviewed.

  5. Calibration and algorithm development for estimation of nitrogen in wheat crop using tractor mounted N-sensor.

    PubMed

    Singh, Manjeet; Kumar, Rajneesh; Sharma, Ankit; Singh, Bhupinder; Thind, S K

    2015-01-01

    The experiment was planned to investigate the tractor-mounted N-sensor (make: Yara International) for predicting nitrogen (N) in a wheat crop under different nitrogen levels. It was observed that, for the tractor-mounted N-sensor, the spectrometers can scan about 32% of the total crop area under consideration. An algorithm was developed using a linear relationship between the sensor sufficiency index (SIsensor) and SISPAD to calculate Napp as a function of SISPAD. There was a strong correlation between sensor attributes (sensor value, sensor biomass, and sensor NDVI) and the different N-levels. It was concluded that the tillering stage is the most prominent stage for predicting crop yield using sensor attributes, as compared to the other stages. The algorithms developed for the tillering and booting stages are useful for predicting N-application rates for the wheat crop. N-application rates predicted by the developed algorithm and the sensor value were almost the same for plots with different levels of N applied. PMID:25811039
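
    The core of the algorithm is a linear mapping from the sensor sufficiency index to the SPAD-based index, from which an N application rate is derived; a minimal sketch with assumed coefficients and an invented application rule follows, since the abstract does not give the regression constants.

    ```python
    def si_spad_from_sensor(si_sensor, a=0.95, b=0.02):
        """Assumed linear relationship SI_SPAD = a * SI_sensor + b; coefficients are placeholders."""
        return a * si_sensor + b

    def n_application_rate(si_spad, n_max=150.0):
        """Invented rule: apply more N (kg/ha) as the sufficiency index falls below 1."""
        return max(0.0, n_max * (1.0 - si_spad))

    si_spad = si_spad_from_sensor(si_sensor=0.82)
    print(f"SI_SPAD = {si_spad:.2f}, recommended N = {n_application_rate(si_spad):.1f} kg/ha")
    ```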

  6. Calibration and Algorithm Development for Estimation of Nitrogen in Wheat Crop Using Tractor Mounted N-Sensor

    PubMed Central

    Singh, Manjeet; Kumar, Rajneesh; Sharma, Ankit; Singh, Bhupinder; Thind, S. K.

    2015-01-01

    The experiment was planned to investigate the tractor-mounted N-sensor (make: Yara International) for predicting nitrogen (N) in a wheat crop under different nitrogen levels. It was observed that, for the tractor-mounted N-sensor, the spectrometers can scan about 32% of the total crop area under consideration. An algorithm was developed using a linear relationship between the sensor sufficiency index (SIsensor) and SISPAD to calculate the Napp as a function of SISPAD. There was a strong correlation between sensor attributes (sensor value, sensor biomass, and sensor NDVI) and the different N-levels. It was concluded that the tillering stage is the most prominent stage for predicting crop yield using sensor attributes, as compared to the other stages. The algorithms developed for the tillering and booting stages are useful for predicting N-application rates for the wheat crop. N-application rates predicted by the developed algorithm and the sensor value were almost the same for plots with different levels of N applied. PMID:25811039

  7. Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records

    PubMed Central

    MacRae, J; Darlow, B; McBain, L; Jones, O; Stubbe, M; Turner, N; Dowell, A

    2015-01-01

    Objective To develop a natural language processing software inference algorithm to classify the content of primary care consultations using electronic health record Big Data and subsequently test the algorithm's ability to estimate the prevalence and burden of childhood respiratory illness in primary care. Design Algorithm development and validation study. To classify consultations, the algorithm is designed to interrogate clinical narrative entered as free text, diagnostic (Read) codes created and medications prescribed on the day of the consultation. Setting Thirty-six consenting primary care practices from a mixed urban and semirural region of New Zealand. Three independent sets of 1200 child consultation records were randomly extracted from a data set of all general practitioner consultations in participating practices between 1 January 2008–31 December 2013 for children under 18 years of age (n=754 242). Each consultation record within these sets was independently classified by two expert clinicians as respiratory or non-respiratory, and subclassified according to respiratory diagnostic categories to create three ‘gold standard’ sets of classified records. These three gold standard record sets were used to train, test and validate the algorithm. Outcome measures Sensitivity, specificity, positive predictive value and F-measure were calculated to illustrate the algorithm's ability to replicate judgements of expert clinicians within the 1200 record gold standard validation set. Results The algorithm was able to identify respiratory consultations in the 1200 record validation set with a sensitivity of 0.72 (95% CI 0.67 to 0.78) and a specificity of 0.95 (95% CI 0.93 to 0.98). The positive predictive value of algorithm respiratory classification was 0.93 (95% CI 0.89 to 0.97). The positive predictive value of the algorithm classifying consultations as being related to specific respiratory diagnostic categories ranged from 0.68 (95% CI 0.40 to 1.00; other
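
    The validation metrics reported (sensitivity, specificity, positive predictive value and F-measure) can be computed directly from a confusion matrix; the counts below are placeholders, not the study's validation set.

    ```python
    def metrics(tp, fp, tn, fn):
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)                                    # positive predictive value
        f_measure = 2 * ppv * sensitivity / (ppv + sensitivity)
        return sensitivity, specificity, ppv, f_measure

    # Placeholder counts for respiratory vs non-respiratory classification of 1200 records.
    sens, spec, ppv, f1 = metrics(tp=260, fp=20, tn=820, fn=100)
    print(f"sensitivity={sens:.2f}  specificity={spec:.2f}  PPV={ppv:.2f}  F={f1:.2f}")
    ```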

  8. Ice surface temperature retrieval from AVHRR, ATSR, and passive microwave satellite data: Algorithm development and application

    NASA Technical Reports Server (NTRS)

    Key, Jeff; Maslanik, James; Steffen, Konrad

    1994-01-01

    One essential parameter used in the estimation of radiative and turbulent heat fluxes from satellite data is surface temperature. Sea and land surface temperature (SST and LST) retrieval algorithms that utilize the thermal infrared portion of the spectrum have been developed, with the degree of success dependent primarily upon the variability of the surface and atmospheric characteristics. However, little effort has been directed to the retrieval of the sea ice surface temperature (IST) in the Arctic and Antarctic pack ice or the ice sheet surface temperature over Antarctica and Greenland. The reason is not one of methodology, but rather our limited knowledge of atmospheric temperature, humidity, and aerosol vertical, spatial and temporal distributions, the microphysical properties of polar clouds, and the spectral characteristics of snow, ice, and water surfaces. Over the open ocean the surface is warm, dark, and relatively homogeneous. This makes SST retrieval, including cloud clearing, a fairly straightforward task. Over the ice, however, the surface within a single satellite pixel is likely to be highly heterogeneous, a mixture of ice of various thicknesses, open water, and snow cover in the case of sea ice. Additionally, the Arctic is cloudy - very cloudy - with typical cloud cover amounts ranging from 60-90 percent. There are few observations of cloud cover amounts over Antarctica. The goal of this research is to increase our knowledge of surface temperature patterns and magnitudes in both polar regions, by examining existing data and improving our ability to use satellite data as a monitoring tool. Four instruments are of interest in this study: the AVHRR, ATSR, SMMR, and SSM/I. Our objectives are as follows. Refine the existing AVHRR retrieval algorithm defined in Key and Haefliger (1992; hereafter KH92) and applied elsewhere. Develop a method for IST retrieval from ATSR data similar to the one used for SST. Further investigate the possibility of estimating

  9. Development of a lightning activity nowcasting tool

    NASA Astrophysics Data System (ADS)

    Karagiannidis, Athanassios; Lagouvardos, Kostas; Kotroni, Vassiliki

    2015-04-01

    Electrical phenomena inside thunderstorm clouds are a significant threat to numerous activities. Summertime convective activity is usually associated with local thermal instability, which is hard to predict using numerical weather prediction models. Despite their relatively small areal extent, these thunderstorms can be violent, resulting in infrastructure damage and loss of life. In the frame of the TALOS project, the National Observatory of Athens has developed a lightning activity nowcasting tool. This tool uses as its sole inputs (i) real-time infrared Meteosat Second Generation (MSG) imagery and (ii) real-time flashes provided by the VLF lightning detection system ZEUS, which is operated by the National Observatory of Athens. The MSG SEVIRI 10.8 and 6.2 μm channel data are utilized to produce three Interest Fields (IFs). These fields are the TB10.8 brightness temperature (indicative of cloud top glaciation), the TB6.2-TB10.8 difference (indicative of cloud depth) and the TB10.8 15-minute trend, referenced as "TB10.8trend" (indicative of the cloud growth rate). The latter is defined as the difference between two successive 15-minute images of TB10.8. When a predefined threshold value is surpassed, the delimited area is considered favorable for lightning activity. A statistical procedure is employed to identify the optimum threshold values for the three IFs, based on the performance of each one. The assessment of their efficiency showed that these three IFs can be used independently as predictors of lightning activity. However, in an effort to improve the tool's efficiency, a combined estimation is performed. When all three IFs agree that lightning activity is expected over an area, a Warning Level 3 (WL3) is issued. When two or one IFs indicate upcoming activity, a WL2 or WL1 is issued. The assessment of the efficiency of the combined IF tool showed that the combined estimation is more skillful than the individual IF estimations. In a
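
    The combination rule described, issuing a warning level equal to the number of interest fields exceeding their thresholds, is straightforward to sketch; the threshold values below are invented, whereas the operational ones come from the statistical optimisation described above.

    ```python
    def warning_level(tb108, tb62, tb108_trend,
                      thr_tb108=233.0, thr_diff=-10.0, thr_trend=-4.0):
        """Count how many interest fields flag a pixel as favourable for lightning (0 = none, 3 = WL3).

        tb108       : 10.8 um brightness temperature (K)
        tb62        : 6.2 um brightness temperature (K)
        tb108_trend : 15-minute change of TB10.8 (K); threshold values here are illustrative only.
        """
        flags = [
            tb108 <= thr_tb108,            # cold cloud top, indicative of glaciation
            (tb62 - tb108) >= thr_diff,    # small channel difference, indicative of a deep cloud
            tb108_trend <= thr_trend,      # rapid cooling, indicative of vigorous growth
        ]
        return sum(flags)

    print("WL", warning_level(tb108=228.0, tb62=224.0, tb108_trend=-6.0))
    ```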

  10. Stress and Androgen Activity During Fetal Development.

    PubMed

    Barrett, Emily S; Swan, Shanna H

    2015-10-01

    Prenatal stress is known to alter hypothalamic-pituitary-adrenal axis activity, and more recent evidence suggests that it may also affect androgen activity. In animal models, prenatal stress disrupts the normal surge of testosterone in the developing male, whereas in females, associations differ by species. In humans, studies show that (1) associations between prenatal stress and child outcomes are often sex-dependent, (2) prenatal stress predicts several disorders with notable sex differences in prevalence, and (3) prenatal exposure to stressful life events may be associated with masculinized reproductive tract development and play behavior in girls. In this minireview, we examine the existing literature on prenatal stress and androgenic activity and present new, preliminary data indicating that prenatal stress may also modify associations between prenatal exposure to diethylhexyl phthalate, (a synthetic, antiandrogenic chemical) and reproductive development in infant boys. Taken together, these data support the hypothesis that prenatal exposure to both chemical and nonchemical stressors may alter sex steroid pathways in the maternal-placental-fetal unit and ultimately alter hormone-dependent developmental endpoints. PMID:26241065

  11. Stress and Androgen Activity During Fetal Development

    PubMed Central

    Swan, Shanna H.

    2015-01-01

    Prenatal stress is known to alter hypothalamic-pituitary-adrenal axis activity, and more recent evidence suggests that it may also affect androgen activity. In animal models, prenatal stress disrupts the normal surge of testosterone in the developing male, whereas in females, associations differ by species. In humans, studies show that (1) associations between prenatal stress and child outcomes are often sex-dependent, (2) prenatal stress predicts several disorders with notable sex differences in prevalence, and (3) prenatal exposure to stressful life events may be associated with masculinized reproductive tract development and play behavior in girls. In this minireview, we examine the existing literature on prenatal stress and androgenic activity and present new, preliminary data indicating that prenatal stress may also modify associations between prenatal exposure to diethylhexyl phthalate, (a synthetic, antiandrogenic chemical) and reproductive development in infant boys. Taken together, these data support the hypothesis that prenatal exposure to both chemical and nonchemical stressors may alter sex steroid pathways in the maternal-placental-fetal unit and ultimately alter hormone-dependent developmental endpoints. PMID:26241065

  12. Identifying types of physical activity with a single accelerometer: evaluating laboratory-trained algorithms in daily life.

    PubMed

    Gyllensten, Illapha Cuba; Bonomi, Alberto G

    2011-09-01

    Accurate identification of physical activity types has been achieved in laboratory conditions using single-site accelerometers and classification algorithms. This methodology is then applied to free-living subjects to determine activity behavior. This study is aimed at analyzing the reproducibility of the accuracy of laboratory-trained classification algorithms in free-living subjects during daily life. A support vector machine (SVM), a feed-forward neural network (NN), and a decision tree (DT) were trained with data collected by a waist-mounted accelerometer during a laboratory trial. The reproducibility of the classification performance was tested on data collected in daily life using a multiple-site accelerometer augmented with an activity diary for 20 healthy subjects (age: 30 ± 9; BMI: 23.0 ± 2.6 kg/m(2)). Leave-one-subject-out cross validation of the training data showed accuracies of 95.1 ± 4.3%, 91.4 ± 6.7%, and 92.2 ± 6.6% for the SVM, NN, and DT, respectively. All algorithms showed a significantly decreased accuracy in daily life as compared to the reference truth represented by the IDEEA and diary classifications (75.6 ± 10.4%, 74.8 ± 9.7%, and 72.2 ± 10.3%; p < 0.05). In conclusion, cross validation of training data overestimates the accuracy of the classification algorithms in daily life.
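
    Leave-one-subject-out cross-validation, the scheme used to obtain the laboratory accuracies above, is a grouped split in which each subject's data are held out once; a sketch using scikit-learn follows, with synthetic features standing in for the accelerometer data.

    ```python
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))                 # synthetic accelerometer features
    y = rng.integers(0, 4, size=200)              # four activity classes
    subjects = np.repeat(np.arange(20), 10)       # 20 subjects, 10 windows each

    scores = []
    for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = SVC().fit(X[train], y[train])       # train on all subjects but one
        scores.append(clf.score(X[test], y[test]))
    print(f"leave-one-subject-out accuracy: {np.mean(scores):.2f}")
    ```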

  13. Examination of a genetic algorithm for the application in high-throughput downstream process development.

    PubMed

    Treier, Katrin; Berg, Annette; Diederich, Patrick; Lang, Katharina; Osberghaus, Anna; Dismer, Florian; Hubbuch, Jürgen

    2012-10-01

    Compared to traditional strategies, application of high-throughput experiments combined with optimization methods can potentially speed up downstream process development and increase our understanding of processes. In contrast to the method of Design of Experiments in combination with response surface analysis (RSA), optimization approaches like genetic algorithms (GAs) can be applied to identify optimal parameter settings in multidimensional optimization tasks. In this article, the performance of a GA was investigated using parameters applicable in high-throughput downstream process development. The influence of population size, the design of the initial generation and selection pressure on the optimization results was studied. To mimic typical experimental data, four mathematical functions were used for an in silico evaluation. The influence of GA parameters was minor on landscapes with only one optimum. On landscapes with several optima, parameters had a significant impact on GA performance and success in finding the global optimum. Premature convergence increased as the number of parameters and noise increased. RSA was shown to be comparable or superior for simple systems and low to moderate noise. For complex systems or high noise levels, RSA failed, while GA optimization represented a robust tool for process optimization. Finally, the effect of different objective functions is shown exemplarily for a refolding optimization of lysozyme.

  14. Evaluating the administration costs of biologic drugs: development of a cost algorithm.

    PubMed

    Tetteh, Ebenezer K; Morris, Stephen

    2014-12-01

    Biologic drugs, as with all other medical technologies, are subject to a number of regulatory, marketing, reimbursement (financing) and other demand-restricting hurdles applied by healthcare payers. One example is the routine use of cost-effectiveness analyses or health technology assessments to determine which medical technologies offer value-for-money. The manner in which these assessments are conducted suggests that, holding all else equal, the economic value of biologic drugs may be determined by how much is spent on administering these drugs or trade-offs between drug acquisition and administration costs. Yet, on the supply-side, it seems very little attention is given to how manufacturing and formulation choices affect healthcare delivery costs. This paper evaluates variations in the administration costs of biologic drugs, taking care to ensure consistent inclusion of all relevant cost resources. From this, it develops a regression-based algorithm with which manufacturers could possibly predict, during process development, how their manufacturing and formulation choices may impact on the healthcare delivery costs of their products. PMID:26208926

  15. Development of fast line scanning imaging algorithm for diseased chicken detection

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Chao, Kuanglin; Chen, Yud-Ren; Kim, Moon S.

    2005-11-01

    A hyperspectral line-scan imaging system for automated inspection of wholesome and diseased chickens was developed and demonstrated. The hyperspectral imaging system consisted of an electron-multiplying charge-coupled-device (EMCCD) camera and an imaging spectrograph. The system used a spectrograph to collect spectral measurements across a pixel-wide vertical linear field of view through which moving chicken carcasses passed. After a series of image calibration procedures, the hyperspectral line-scan images were collected for chickens on a laboratory simulated processing line. From spectral analysis, four key wavebands for differentiating between wholesome and systemically diseased chickens were selected: 413 nm, 472 nm, 515 nm, and 546 nm, and a reference waveband, 622 nm. The ratio of relative reflectance between each key wavelength and the reference wavelength was calculated as an image feature. A fuzzy logic-based algorithm utilizing the key wavebands was developed to identify individual pixels on the chicken surface exhibiting symptoms of systemic disease. Two differentiation methods were built to successfully differentiate 72 systemically diseased chickens from 65 wholesome chickens.
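
    The image feature described, the ratio of relative reflectance at each key waveband to the reference waveband, can be expressed directly; the pixel spectrum below is a placeholder, and the fuzzy-logic classification itself is not reproduced.

    ```python
    KEY_BANDS = [413, 472, 515, 546]   # nm
    REF_BAND = 622                     # nm

    def band_ratio_features(reflectance):
        """reflectance: dict mapping wavelength (nm) to relative reflectance for one pixel."""
        return {band: reflectance[band] / reflectance[REF_BAND] for band in KEY_BANDS}

    pixel = {413: 0.12, 472: 0.18, 515: 0.25, 546: 0.30, 622: 0.55}   # placeholder spectrum
    print(band_ratio_features(pixel))
    ```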

  16. Adaptive algorithm for active control of high-amplitude acoustic field in resonator

    NASA Astrophysics Data System (ADS)

    Červenka, M.; Bednařík, M.; Koníček, P.

    2008-06-01

    This work is concerned with the suppression of nonlinear effects in piston-driven acoustic resonators by means of a two-frequency driving technique. An iterative adaptive algorithm is proposed to calculate the parameters of the driving signal so that the amplitude of the second harmonic of the acoustic pressure is minimized. The functionality of the algorithm is verified first by means of a numerical model and then in a real computer-controlled experiment. The numerical and experimental results show that the proposed algorithm can be successfully used to generate a high-amplitude, shock-free acoustic field in resonators.
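
    A rough outline of such an iterative scheme: drive the resonator with a fundamental plus a second component, measure the second-harmonic amplitude, and adjust the second component's amplitude and phase to reduce it. The toy resonator response and the simple coordinate-descent step rule below are invented for illustration and differ from the paper's actual algorithm.

    ```python
    import numpy as np

    def measured_second_harmonic(a2, phi2):
        """Toy stand-in for the resonator: cancellation is best at an amplitude/phase
        of the auxiliary drive that the controller does not know in advance."""
        return abs(0.8 - a2 * np.cos(phi2 - 0.6)) + 0.2 * abs(a2 * np.sin(phi2 - 0.6))

    a2, phi2, step = 0.0, 0.0, 0.2
    for _ in range(200):                                   # iterative adaptation
        best = (a2, phi2)
        for da, dp in [(step, 0), (-step, 0), (0, step), (0, -step)]:
            if measured_second_harmonic(a2 + da, phi2 + dp) < measured_second_harmonic(*best):
                best = (a2 + da, phi2 + dp)
        if best == (a2, phi2):
            step *= 0.5                                    # shrink the step when no move improves
        a2, phi2 = best

    print(f"a2={a2:.3f}  phi2={phi2:.3f}  residual={measured_second_harmonic(a2, phi2):.4f}")
    ```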

  17. Development of an Innovative Algorithm for Aerodynamics-Structure Interaction Using Lattice Boltzmann Method

    NASA Technical Reports Server (NTRS)

    Mei, Ren-Wei; Shyy, Wei; Yu, Da-Zhi; Luo, Li-Shi; Rudy, David (Technical Monitor)

    2001-01-01

    The lattice Boltzmann equation (LBE) is a kinetic formulation which offers an alternative computational method capable of solving fluid dynamics for various systems. Major advantages of the method are that the solution for the particle distribution functions is explicit, easy to implement, and natural to parallelize. In this final report, we summarize the work accomplished in the past three years. Since most of this work has been published, the technical details can be found in the literature; only a brief summary is provided in this report. In this project, a second-order accurate treatment of the boundary condition in the LBE method is developed for a curved boundary and tested successfully in various 2-D and 3-D configurations. To evaluate the aerodynamic force on a body in the context of the LBE method, several force evaluation schemes have been investigated. A simple momentum exchange method is shown to give reliable and accurate values for the force on a body in both 2-D and 3-D cases. Various 3-D LBE models have been assessed in terms of efficiency, accuracy, and robustness. In general, accurate 3-D results can be obtained using LBE methods. The 3-D 19-bit model is found to be the best one among the 15-bit, 19-bit, and 27-bit LBE models. To achieve the desired grid resolution and to accommodate the far-field boundary conditions in aerodynamics computations, a multi-block LBE method is developed by dividing the flow field into various blocks, each having constant lattice spacing. Substantial contributions to the LBE method are also made through the development of a new, generalized lattice Boltzmann equation constructed in moment space in order to improve computational stability, detailed theoretical analysis of the stability, dispersion, and dissipation characteristics of the LBE method, and computational studies of high Reynolds number flows with singular gradients. Finally, a finite difference-based lattice Boltzmann method is

  18. Development and verification of an analytical algorithm to predict absorbed dose distributions in ocular proton therapy using Monte Carlo simulations.

    PubMed

    Koch, Nicholas C; Newhauser, Wayne D

    2010-02-01

    Proton beam radiotherapy is an effective and non-invasive treatment for uveal melanoma. Recent research efforts have focused on improving the dosimetric accuracy of treatment planning and overcoming the present limitation of relative analytical dose calculations. Monte Carlo algorithms have been shown to accurately predict dose per monitor unit (D/MU) values, but this has yet to be shown for analytical algorithms dedicated to ocular proton therapy, which are typically less computationally expensive than Monte Carlo algorithms. The objective of this study was to determine if an analytical method could predict absolute dose distributions and D/MU values for a variety of treatment fields like those used in ocular proton therapy. To accomplish this objective, we used a previously validated Monte Carlo model of an ocular nozzle to develop an analytical algorithm to predict three-dimensional distributions of D/MU values from pristine Bragg peaks and therapeutically useful spread-out Bragg peaks (SOBPs). Results demonstrated generally good agreement between the analytical and Monte Carlo absolute dose calculations. While agreement in the proximal region decreased for beams with less penetrating Bragg peaks compared with the open-beam condition, the difference was shown to be largely attributable to edge-scattered protons. A method for including this effect in any future analytical algorithm was proposed. Comparisons of D/MU values showed typical agreement to within 0.5%. We conclude that analytical algorithms can be employed to accurately predict absolute proton dose distributions delivered by an ocular nozzle.

  19. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy

    PubMed Central

    Zhong, Hualiang; Adams, Jeffrey; Glide-Hurst, Carri; Zhang, Hualin; Li, Haisen; Chetty, Indrin J.

    2016-01-01

    Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass-and energy congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms by 1.8% at the center of the tumor volume and by 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well to the TLD measurements with their differences <2.5% at four measured locations. When the VelocityAI DVF was used, their difference increased up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements which are slightly over the rate of 5% for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung tissues, supporting

  20. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy.

    PubMed

    Zhong, Hualiang; Adams, Jeffrey; Glide-Hurst, Carri; Zhang, Hualin; Li, Haisen; Chetty, Indrin J

    2016-01-01

    Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass-and energy congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms by 1.8% at the center of the tumor volume and by 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well to the TLD measurements with their differences <2.5% at four measured locations. When the VelocityAI DVF was used, their difference increased up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements which are slightly over the rate of 5% for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung tissues, supporting

  1. Active controlled studies in antibiotic drug development.

    PubMed

    Dane, Aaron

    2011-01-01

    The increasing concern over antibacterial resistance has been well documented, as has the relative lack of antibiotic development. This paradox is in part due to challenges with clinical development of antibiotics. Because of their rapid progression, untreated bacterial infections are associated with significant morbidity and mortality. As a consequence, placebo-controlled studies of new agents are unethical. Rather, pivotal development studies are mostly conducted using non-inferiority designs versus an active comparator. Further, infections caused by comparator-resistant isolates must usually be excluded from the trial programme. Unfortunately, the placebo-controlled data classically used in support of non-inferiority designs are largely unavailable for antibiotics. The only available data are from the 1930s and 1940s and their use is associated with significant concerns regarding constancy and assay sensitivity. Extended public debate on this challenge has led to proposed solutions by some in which these concerns are addressed by using very conservative approaches to trial design, endpoints and non-inferiority margins, in some cases leading to potentially impractical studies. To compound this challenge, different Regulatory Authorities seem to be taking different approaches to these key issues. If harmonisation does not occur, antibiotic development will become increasingly challenging, with the risk of further decreases in the amount of antibiotic drug development. However, with clarity on regulatory requirements and the ability to feasibly conduct global development programmes, it should be possible to bring much-needed additional antibiotics to patients.

  2. Active Thermal Control System Development for Exploration

    NASA Technical Reports Server (NTRS)

    Westheimer, David

    2007-01-01

    All space vehicles or habitats require thermal management to maintain a safe and operational environment for both crew and hardware. Active Thermal Control Systems (ATCS) perform the functions of acquiring heat from both crew and hardware within a vehicle, transporting that heat throughout the vehicle, and finally rejecting that energy into space. Almost all of the energy used in a space vehicle eventually turns into heat, which must be rejected in order to maintain an energy balance and temperature control of the vehicle. For crewed vehicles, Active Thermal Control Systems are pumped fluid loops that are made up of components designed to perform these functions. NASA has been actively developing technologies that will enable future missions or will provide significant improvements over state-of-the-art technologies. These technologies are targeted for application on the Crew Exploration Vehicle (CEV), or Orion, and a Lunar Surface Access Module (LSAM). The technologies that have been selected and are currently under development include: fluids that enable single-loop ATCS architectures, a gravity-insensitive vapor compression cycle heat pump, a sublimator with reduced sensitivity to feedwater contamination, an evaporative heat sink that can operate in multiple ambient pressure environments, a compact spray evaporator, and lightweight radiators that take advantage of carbon composites and advanced optical coatings.

  3. Development of an algorithm for heartbeats detection and classification in Holter records based on temporal and morphological features

    NASA Astrophysics Data System (ADS)

    García, A.; Romano, H.; Laciar, E.; Correa, R.

    2011-12-01

    In this work, a detection and classification algorithm for heartbeat analysis in Holter records was developed. First, a QRS complex detector was implemented, and the temporal and morphological characteristics of the detected complexes were extracted. A vector was built with these features; this vector is the input to the classification module, which is based on discriminant analysis. The beats were classified into three groups: Premature Ventricular Contraction beat (PVC), Atrial Premature Contraction beat (APC), and Normal Beat (NB). These beat categories represent the most important groups for commercial Holter systems. The developed algorithms were evaluated on 76 ECG records from two validated open-access databases, the MIT-BIH Arrhythmia Database and the MIT-BIH Supraventricular Arrhythmia Database. A total of 166,343 beats were detected and analyzed; the QRS detection algorithm provides a sensitivity of 99.69% and a positive predictive value of 99.84%. The classification stage gives sensitivities of 97.17% for NB, 97.67% for PVC, and 92.78% for APC.
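
    To make the classification step concrete, the sketch below (Python, not the authors' implementation) classifies beats into NB/PVC/APC from a few hypothetical temporal and morphological features using linear discriminant analysis, as the abstract describes; the feature definitions and synthetic values are illustrative assumptions.

    ```python
    # Hedged sketch: discriminant-analysis beat classification on assumed features.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    # Hypothetical per-beat features: [RR_prev/RR_mean, RR_next/RR_mean, QRS width (ms)]
    X = np.vstack([
        rng.normal([1.0, 1.0, 90.0], [0.05, 0.05, 8.0], size=(200, 3)),   # Normal (NB)
        rng.normal([0.7, 1.3, 140.0], [0.08, 0.08, 12.0], size=(60, 3)),  # PVC
        rng.normal([0.8, 1.1, 95.0], [0.07, 0.07, 9.0], size=(60, 3)),    # APC
    ])
    y = np.array([0] * 200 + [1] * 60 + [2] * 60)  # 0 = NB, 1 = PVC, 2 = APC

    clf = LinearDiscriminantAnalysis().fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```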

  4. Algorithm development for the retrieval of coastal water constituents from satellite Modular Optoelectronic Scanner images

    NASA Astrophysics Data System (ADS)

    Hetscher, Matthias; Krawczyk, Harald; Neumann, Andreas; Walzel, Thomas; Zimmermann, Gerhard

    1997-10-01

    DLR's imaging spectrometer, the Modular Optoelectronic Scanner (MOS), on the Indian remote sensing satellite IRS-P3 has been orbiting since March 1996. MOS consists of two spectrometers, one narrow-band spectrometer around 760 nm for retrieval of atmospheric parameters and a second one in the VIS/NIR region with an additional line camera at 1.6 micrometers. The instrument was especially designed for the remote sensing of coastal zone water and the determination and distinction of its constituents. MOS was developed and manufactured at the Institute of Space Sensor Technology (ISST) and launched in a joint effort with the Indian Space Research Organization (ISRO). The high spectral resolution of MOS offers the possibility of using the differences in spectral signatures of remote sensing objects for quantitative determination of geophysical parameters. At ISST, a linear estimator to derive water constituents and aerosol optical thickness has been developed, exploiting Principal Component Inversion (PCI) of modeled top-of-atmosphere and experimental radiance data sets. The estimator results in sets of weighting coefficients for each measurement band, depending on the geophysical situation. Because of systematic misinterpretations due to mismatches between the model and the real situation, further development involves the parallel improvement of the water models used and recalibration with in-situ data. The paper will present results of algorithm application for selected test sites along the European coasts. It will show the improvement of the estimated water constituents obtained by using region-specific model parameters. Derived maps of chlorophyll-like pigments, sediments, and aerosol optical thickness are presented.

  5. Development of adaptive noise reduction filter algorithm for pediatric body images in a multi-detector CT

    NASA Astrophysics Data System (ADS)

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Okita, Izumi; Ninomiya, Yuuji; Tomoshige, Yukihiro; Kurokawa, Takehiro; Ono, Yutaka; Nakamura, Yuko; Suzuki, Masayuki

    2008-03-01

    Recently, several kinds of post-processing image filters that reduce the noise of computed tomography (CT) images have been proposed. However, these image filters are mostly designed for adults. Because they are not very effective for small (< 20 cm) display fields of view (FOV), we cannot use them for pediatric body images (e.g., premature babies and infant children). We have developed a new noise reduction filter algorithm for pediatric body CT images. This algorithm is based on 3D post-processing in which the output pixel values are calculated by nonlinear interpolation in the z-direction on the original volumetric data sets. This algorithm does not require in-plane (axial) processing, so the in-plane spatial resolution does not change. From the phantom studies, our algorithm could reduce the SD by up to 40% without affecting the spatial resolution of the x-y plane and z-axis, and improved the CNR by up to 30%. This newly developed filter algorithm will be useful for diagnosis and radiation dose reduction in pediatric body CT imaging.
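
    The abstract gives no implementation details, so as a hedged illustration of the key idea (processing only along z so that in-plane resolution is preserved) the Python sketch below applies a z-only median filter to a synthetic noise volume; the filter choice and kernel size are assumptions, not the published algorithm.

    ```python
    # Illustrative z-direction-only filtering that leaves the axial (x-y) plane untouched.
    import numpy as np
    from scipy.ndimage import median_filter

    volume = np.random.default_rng(1).normal(0.0, 30.0, size=(64, 128, 128))  # (z, y, x) noise volume

    filtered = median_filter(volume, size=(3, 1, 1))  # kernel spans z only
    print("SD before:", round(volume.std(), 1), "SD after:", round(filtered.std(), 1))
    ```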

  6. Development, analysis, and testing of robust nonlinear guidance algorithms for space applications

    NASA Astrophysics Data System (ADS)

    Wibben, Daniel R.

    This work focuses on the analysis and application of various nonlinear, autonomous guidance algorithms that utilize sliding mode control to guarantee system stability and robustness. While the basis for the algorithms has previously been proposed, past efforts barely scratched the surface of the theoretical details and implications of these algorithms. Of the three algorithms that are the subject of this research, two are directly derived from optimal control theory and augmented using sliding mode control. Analysis of the derivation of these algorithms has shown that they are two different representations of the same result, one of which uses a simple error state model (Δr/Δv) and the other uses definitions of the zero-effort miss and zero-effort velocity (ZEM/ZEV) values. By investigating the dynamics of the defined sliding surfaces and their impact on the overall system, many implications have been deduced regarding the behavior of these systems which are noted to feature time-varying sliding modes. A formal finite time stability analysis has also been performed to theoretically demonstrate that the algorithms globally stabilize the system in finite time in the presence of perturbations and unmodeled dynamics. The third algorithm that has been subject to analysis is derived from a direct application of higher-order sliding mode control and Lyapunov stability analysis without consideration of optimal control theory and has been named the Multiple Sliding Surface Guidance (MSSG). Via use of reinforcement learning methods an optimal set of gains has been found that make the guidance perform similarly to an open-loop optimal solution. Careful side-by-side inspection of the MSSG and Optimal Sliding Guidance (OSG) algorithms has shown some striking similarities. A detailed comparison of the algorithms has demonstrated that though they are nearly indistinguishable at first glance, there are some key differences between the two algorithms and they are indeed

  7. Development of Variational Guiding Center Algorithms for Parallel Calculations in Experimental Magnetic Equilibria

    SciTech Connect

    Ellison, C. Leland; Finn, J. M.; Qin, H.; Tang, William M.

    2014-10-01

    Structure-preserving algorithms obtained via discrete variational principles exhibit strong promise for the calculation of guiding center test particle trajectories. The non-canonical Hamiltonian structure of the guiding center equations forms a novel and challenging context for geometric integration. To demonstrate the practical relevance of these methods, a prototypical variational midpoint algorithm is applied to an experimental magnetic equilibrium. The stability characteristics, conservation properties, and implementation requirements associated with the variational algorithms are addressed. Furthermore, computational run time is reduced for large numbers of particles by parallelizing the calculation on GPU hardware.

  8. Development of an algorithm to meaningfully interpret patterns in street-level methane concentrations

    NASA Astrophysics Data System (ADS)

    von Fischer, Joseph; Salo, Jessica; Griebenow, Claire; Bischak, Linde; Cooley, Daniel; Ham, Jay; Schumacher, Russ

    2013-04-01

    Methane (CH4) is an important greenhouse gas that has 70x greater heat forcing per molecule than CO2 over its ~10 year atmospheric residence time. Given this short residence time, there has been a surge of interest in mitigating anthropogenic CH4 sources because they will have a more immediate effect on warming rates. Recent observations of CH4 concentrations around the city of Boston reveal that natural gas distribution systems can have a very large number of leaks. However, there are a number of conceptual and practical challenges associated with interpretation of CH4 data gathered by car at the street level. In this presentation, we detail our efforts to develop an "algorithm" or set of standard practices for interpreting these patterns based on our own findings. At the most basic, we have evaluated approaches for vehicle driving patterns and management of the raw data. We also identify techniques for evaluating data quality and discerning when elevated CH4 may be due to other vehicles (e.g., CNG-powered city buses). We then compare methods for identifying "peaks" in CH4 concentration, and we discuss several approaches for relating concentration, space and wind data to emission rates. Finally, we provide some considerations for how the data from individual peaks might be aggregated to larger spatial scales.
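
    As a hedged sketch of one step in such a workflow (flagging concentration "peaks" above a slowly varying background in a mobile CH4 record), the Python fragment below uses a running low-percentile baseline and simple peak finding; the thresholds and window sizes are illustrative assumptions, not the presenters' values.

    ```python
    # Toy peak flagging for a drive-by CH4 series (ppm); parameters are assumed.
    import numpy as np
    from scipy.ndimage import percentile_filter
    from scipy.signal import find_peaks

    rng = np.random.default_rng(2)
    ch4 = 1.9 + rng.normal(0, 0.01, 2000)                                   # ambient background
    ch4[700:720] += 0.5 * np.exp(-0.5 * ((np.arange(20) - 10) / 4.0) ** 2)  # synthetic leak plume

    background = percentile_filter(ch4, percentile=10, size=301)  # slowly varying baseline
    excess = ch4 - background

    peaks, props = find_peaks(excess, height=0.05, distance=50)   # >= 0.05 ppm above background
    print("peak indices:", peaks, "heights (ppm):", np.round(props["peak_heights"], 3))
    ```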

  9. Development and evaluation of a new contoured cushion system with an optimized normalization algorithm.

    PubMed

    Li, Sujiao; Zhang, Zhengxiang; Wang, Jue

    2014-01-01

    Prevention of pressure sores remains a significant problem confronting spinal cord injury patients and the elderly with limited mobility. One vital aspect of this subject concerns the development of cushions to decrease pressure ulcers for seated patients, particularly those bound to wheelchairs. Here, we present a novel cushion system that employs the interface pressure distribution between the cushion and the buttocks to design a custom contoured foam cushion. An optimized normalization algorithm was proposed, with which the interface pressure distribution was transformed into the carving depth of the foam cushion according to the biomechanical characteristics of the foam. The shape and pressure-relief performance of the custom contoured foam cushions were investigated. The outcomes showed that the contoured shape of the personalized cushion matched the buttock contour very well. Moreover, the custom contoured cushion could alleviate pressure under the buttocks and increase subjective comfort and stability significantly. Furthermore, the fabrication method not only decreased the unit production cost but also simplified the manufacturing procedure. All in all, this prototype seat cushion would be an effective and economical way to prevent pressure ulcers. PMID:25227054
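
    A minimal sketch under assumed behavior: the published normalization also accounts for the foam's biomechanical response, but the essential mapping from measured interface pressure to a per-cell carving depth can be illustrated with a simple linear normalization in Python (the grid size, pressure values, and maximum depth are hypothetical).

    ```python
    # Toy pressure-to-carving-depth normalization; not the published algorithm.
    import numpy as np

    pressure = np.random.default_rng(3).uniform(10.0, 120.0, size=(16, 16))  # mmHg, hypothetical grid
    max_depth_mm = 40.0  # assumed carving limit for the foam block

    # Higher local pressure -> deeper cut, scaled linearly between the min and max pressures.
    p_min, p_max = pressure.min(), pressure.max()
    depth = max_depth_mm * (pressure - p_min) / (p_max - p_min)
    print("carving depth range (mm):", round(depth.min(), 1), "to", round(depth.max(), 1))
    ```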

  10. Development and Implementation of Image-based Algorithms for Measurement of Deformations in Material Testing

    PubMed Central

    Barazzetti, Luigi; Scaioni, Marco

    2010-01-01

    This paper presents the development and implementation of three image-based methods used to detect and measure the displacements of a vast number of points in the case of laboratory testing on construction materials. Starting from the needs of structural engineers, three ad hoc tools for crack measurement in fibre-reinforced specimens and 2D or 3D deformation analysis through digital images were implemented and tested. These tools make use of advanced image processing algorithms and can integrate or even substitute some traditional sensors employed today in most laboratories. In addition, the automation provided by the implemented software, the limited cost of the instruments and the possibility to operate with an indefinite number of points offer new and more extensive analysis in the field of material testing. Several comparisons with other traditional sensors widely adopted inside most laboratories were carried out in order to demonstrate the accuracy of the implemented software. Implementation details, simulations and real applications are reported and discussed in this paper. PMID:22163612

  11. High-order derivative spectroscopy for selecting spectral regions and channels for remote sensing algorithm development

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R., Jr.

    1999-12-01

    A remote sensing reflectance model, which describes the transfer of irradiant light within a plant canopy or water column, has previously been used to simulate the nadir-viewing reflectance of vegetation canopies and leaves under solar or artificial illumination, as well as water surface reflectance. Wavelength-dependent features such as canopy reflectance, leaf absorption, and canopy bottom reflectance, as well as water absorption and water bottom reflectance, have been used to simulate or generate synthetic canopy and water surface reflectance signatures. This paper describes how derivative spectroscopy can be utilized to invert the synthetic or modeled as well as measured reflectance signatures with the goal of selecting the optimal spectral channels or regions for these environmental media. Specifically, in this paper synthetic and measured reflectance signatures are used for selecting vegetative dysfunction variables for different plant species. The measured reflectance signatures as well as model-derived or synthetic signatures are processed using extremely fast higher-order derivative processing techniques, which filter the synthetic/modeled or measured spectra and automatically select the optimal channels for automatic and direct algorithm application. The higher-order derivative filtering technique makes use of a translating and dilating derivative spectroscopy signal processing (TDDS-SPR) approach based upon remote sensing science and radiative transfer theory. Thus the technique described, unlike other signal processing techniques being developed for hyperspectral signatures and associated imagery, is based upon radiative transfer theory rather than statistical or purely mathematical techniques such as wavelets.
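
    The TDDS-SPR technique itself is not reproduced here; as a loosely related illustration of ranking candidate channels by the magnitude of a higher-order derivative of a reflectance spectrum, a Savitzky-Golay fourth-derivative sketch in Python follows (the window length, polynomial order, and synthetic spectrum are assumptions).

    ```python
    # Rank spectral channels by smoothed fourth-derivative magnitude (illustrative only).
    import numpy as np
    from scipy.signal import savgol_filter

    wavelengths = np.arange(400, 901)  # nm
    rng = np.random.default_rng(4)
    reflectance = (0.3 + 0.1 * np.exp(-0.5 * ((wavelengths - 680) / 15.0) ** 2)
                   + rng.normal(0, 0.002, wavelengths.size))

    d4 = savgol_filter(reflectance, window_length=31, polyorder=5, deriv=4)
    best = wavelengths[np.argsort(np.abs(d4))[-5:]]  # five highest-magnitude channels
    print("candidate channels (nm):", np.sort(best))
    ```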

  12. Development of an algorithm to improve the accuracy of dose delivery in Gamma Knife radiosurgery

    NASA Astrophysics Data System (ADS)

    Cernica, George Dumitru

    2007-12-01

    Gamma Knife stereotactic radiosurgery has demonstrated decades of successful treatments. Despite its high spatial accuracy, the Gamma Knife's planning software, GammaPlan, uses a simple exponential as the TPR curve for all four collimator sizes, and a skull scaling device to acquire ruler measurements to interpolate a three-dimensional spline to model the patient's skull. The consequences of these approximations have not been previously investigated. The true TPR curves of the four collimators were measured by blocking 200 of the 201 sources with steel plugs. Additional attenuation was provided through the use of a 16 cm tungsten sphere, designed to enable beamlet measurements along one axis. TPR, PDD, and beamlet profiles were obtained using both an ion chamber and GafChromic EBT film for all collimators. Additionally, an in-house planning algorithm able to calculate the contour of the skull directly from an image set and implement the measured beamlet data in shot time calculations was developed. Clinical and theoretical Gamma Knife cases were imported into our algorithm. The TPR curves showed small deviations from a simple exponential curve, with average discrepancies under 1%, but with a maximum discrepancy of 2% found for the 18 mm collimator beamlet at shallow depths. The consequences for the PDD of the beamlets were slight, with a maximum of 1.6% found with the 18 mm collimator beamlet. Beamlet profiles of the 4 mm, 8 mm, and 14 mm collimators showed some underestimates of the off-axis ratio near the shoulders (up to 10%). The toes of the profiles were underestimated for all collimators, with differences up to 7%. Shot times were affected by up to 1.6% due to TPR differences, but clinical cases showed deviations of no more than 0.5%. The beamlet profiles affected the dose calculations more significantly, with shot time calculations differing by as much as 0.8%. The skull scaling affected the shot time calculations the most significantly, with differences of up to 5

  13. The cyclotron development activities at CIAE

    NASA Astrophysics Data System (ADS)

    Zhang, Tianjue; Li, Zhenguo; An, Shizhong; Yin, Zhiguo; Yang, Jianjun; Yang, Fang

    2011-12-01

    The cyclotron has an obvious advantage in offering high average current and beam power. Cyclotron development for various applications, e.g. radioactive ion-beam (RIB) generation, clean nuclear energy systems, medical diagnostics and isotope production, has been carried out at the China Institute of Atomic Energy (CIAE) for over 50 years. At the moment two cyclotrons are being built at CIAE: the 100 MeV CYCIAE-100 and the 14 MeV CYCIAE-14. Meanwhile, we are designing and proposing to build a number of cyclotrons with different energies, among them the CYCIAE-70, the CYCIAE-800, and an upgrade of the CYCIAE-CRM, which will increase its beam current to the mA level. This contribution presents an overall introduction to the cyclotron development activities conducted at CIAE, with different emphasis on each project in order to present its design and construction highlights.

  14. Cyfip1 Regulates Presynaptic Activity during Development

    PubMed Central

    Hsiao, Kuangfu; Harony-Nicolas, Hala; Buxbaum, Joseph D.

    2016-01-01

    Copy number variations encompassing the gene encoding Cyfip1 have been associated with a variety of human diseases, including autism and schizophrenia. Here we show that juvenile mice hemizygous for Cyfip1 have altered presynaptic function, enhanced protein translation, and increased levels of F-actin. In developing hippocampus, reduced Cyfip1 levels serve to decrease paired pulse facilitation and increase miniature EPSC frequency without a change in amplitude. Higher-resolution examination shows these changes to be caused primarily by an increase in presynaptic terminal size and enhanced vesicle release probability. Short hairpin-mediated knockdown of Cyfip1 coupled with expression of mutant Cyfip1 proteins indicates that the presynaptic alterations are caused by dysregulation of the WAVE regulatory complex. Such dysregulation occurs downstream of Rac1 as acute exposure to Rac1 inhibitors rescues presynaptic responses in culture and in hippocampal slices. The data serve to highlight an early and essential role for Cyfip1 in the generation of normally functioning synapses and suggest a means by which changes in Cyfip1 levels could impact the generation of neural networks and contribute to abnormal and maladaptive behaviors. SIGNIFICANCE STATEMENT Several developmental brain disorders have been associated with gene duplications and deletions that serve to increase or decrease levels of encoded proteins. Cyfip1 is one such protein, but the role it plays in brain development is poorly understood. We asked whether decreased Cyfip1 levels altered the function of developing synapses. The data show that synapses with reduced Cyfip1 are larger and release neurotransmitter more rapidly. These effects are due to Cyfip1's role in actin polymerization and are reversed by expression of a Cyfip1 mutant protein retaining actin regulatory function or by inhibiting Rac1. Thus, Cyfip1 has a more prominent early role regulating presynaptic activity during a stage of development when

  15. The Development of Several Electromagnetic Monitoring Strategies and Algorithms for Validating Pre-Earthquake Electromagnetic Signals

    NASA Astrophysics Data System (ADS)

    Bleier, T. E.; Dunson, J. C.; Roth, S.; Mueller, S.; Lindholm, C.; Heraud, J. A.

    2012-12-01

    QuakeFinder, a private research group in California, reports on the development of a 100+ station network consisting of 3-axis induction magnetometers and air conductivity sensors to collect and characterize pre-seismic electromagnetic (EM) signals. These signals are combined with daily signals collected from the GOES weather satellite infrared (IR) instrument to compare and correlate with the ground EM signals, both from actual earthquakes and from boulder stressing experiments. This presentation describes the efforts QuakeFinder has undertaken to automatically detect these pulse patterns using their historical data as a reference, and to develop other discriminative algorithms that can be used with air conductivity sensors and IR instruments from the GOES satellites. The overall, big-picture results of the QuakeFinder experiment are presented. In 2007, QuakeFinder discovered the occurrence of strong uni-polar pulses in their magnetometer coil data that increased in tempo dramatically prior to the M5.1 earthquake at Alum Rock, California. Suggestions that these pulses might have been lightning or power-line arcing did not fit with the data actually recorded, as was reported in Bleier [2009]. Then a second earthquake occurred near the same site on January 7, 2010, as was reported in Dunson [2011], and the pattern of pulse count increases before the earthquake occurred similarly to the 2007 event. There were fewer pulses, and their magnitude was smaller, both consistent with the fact that the earthquake was smaller (M4.0 vs M5.4) and farther away (7 km vs 2 km). At the same time, similar effects were observed at QuakeFinder's Tacna, Peru site before the May 5th, 2010 M6.2 earthquake and a cluster of several M4-5 earthquakes.

  16. Development of effluent removal prediction model efficiency in septic sludge treatment plant through clonal selection algorithm.

    PubMed

    Ting, Sie Chun; Ismail, A R; Malek, M A

    2013-11-15

    This study aims at developing a novel effluent removal management tool for septic sludge treatment plants (SSTP) using a clonal selection algorithm (CSA). The proposed CSA articulates the idea of utilizing an artificial immune system (AIS) to identify the behaviour of the SSTP, that is, one using sequencing batch reactor (SBR) technology for its treatment processes. The novelty of this study is the development of a predictive SSTP model for effluent discharge adopting the human immune system. Septic sludge from individual septic tanks and package plants is desludged and treated in the SSTP before the wastewater is discharged into a waterway. Sarawak, on the island of Borneo, is selected as the case study. Currently, there are only two SSTPs in Sarawak, namely the Matang SSTP and the Sibu SSTP, and they both use SBR technology. Monthly effluent discharges from 2007 to 2011 at the Matang SSTP are used in this study. Cross-validation is performed using data from the Sibu SSTP from April 2011 to July 2012. Both chemical oxygen demand (COD) and total suspended solids (TSS) in the effluent were analysed in this study. The model was validated and tested before forecasting the future effluent performance. The CSA-based SSTP model was simulated using MATLAB 7.10. The root mean square error (RMSE), mean absolute percentage error (MAPE), and correlation coefficient (R) were used as performance indexes. In this study, it was found that the proposed prediction model was successful up to 84 months for the COD and 109 months for the TSS. In conclusion, the proposed CSA-based SSTP prediction model is indeed beneficial as an engineering tool to forecast the long-run performance of the SSTP and, in turn, help prevent future degradation of the environmental balance in other towns in Sarawak.
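
    The abstract evaluates the model with RMSE, MAPE, and the correlation coefficient R; the small Python helper below simply illustrates those three performance indexes on made-up effluent values (it is not the CSA model itself).

    ```python
    # Performance indexes named in the abstract, computed on placeholder data.
    import numpy as np

    def performance_indexes(observed, predicted):
        observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
        rmse = np.sqrt(np.mean((observed - predicted) ** 2))
        mape = 100.0 * np.mean(np.abs((observed - predicted) / observed))
        r = np.corrcoef(observed, predicted)[0, 1]
        return rmse, mape, r

    obs = [120.0, 98.0, 110.0, 135.0]   # e.g. monthly effluent COD (mg/L), made-up values
    pred = [115.0, 102.0, 108.0, 140.0]
    print(performance_indexes(obs, pred))
    ```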

  17. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    PubMed

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-01

    In this work, performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. The matrix with 423 descriptors was used as input, and QSRR models based on selected descriptors were built using partial least squares (PLS), whereas root mean square error of prediction (RMSEP) was used as a fitness function for their selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, whereas GA is superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external testing set out of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All the sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.
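
    The nature-inspired optimizers compared in the abstract all minimize the same kind of fitness function; the Python sketch below shows what such an evaluation might look like: fit a PLS model on a candidate descriptor subset and return the RMSEP on a hold-out split. The data shapes follow the abstract (83 peptides, 423 descriptors), but the values and the split itself are synthetic assumptions.

    ```python
    # Hedged sketch of an RMSEP fitness evaluation for descriptor-subset selection.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    X = rng.normal(size=(83, 423))                              # synthetic descriptor matrix
    y = X[:, :9] @ rng.normal(size=9) + rng.normal(0, 0.1, 83)  # retention driven by 9 descriptors

    def rmsep(subset):
        Xtr, Xte, ytr, yte = train_test_split(X[:, subset], y, test_size=0.3, random_state=0)
        model = PLSRegression(n_components=min(5, len(subset))).fit(Xtr, ytr)
        return float(np.sqrt(np.mean((model.predict(Xte).ravel() - yte) ** 2)))

    print("RMSEP of a random 9-descriptor subset:", rmsep(list(rng.choice(423, 9, replace=False))))
    ```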

  18. Successive smoothing algorithm for constructing the semiempirical model developed at ONERA to predict unsteady aerodynamic forces. [aeroelasticity in helicopters

    NASA Technical Reports Server (NTRS)

    Petot, D.; Loiseau, H.

    1982-01-01

    Unsteady aerodynamic methods adopted for the study of aeroelasticity in helicopters are considered with focus on the development of a semiempirical model of unsteady aerodynamic forces acting on an oscillating profile at high incidence. The successive smoothing algorithm described leads to the model's coefficients in a very satisfactory manner.

  19. Detection of fruit-fly infestation in olives using X-ray imaging: Algorithm development and prospects

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An algorithm using a Bayesian classifier was developed to automatically detect olive fruit fly infestations in x-ray images of olives. The data set consisted of 249 olives with various degrees of infestation and 161 non-infested olives. Each olive was x-rayed on film and digital images were acquired...

  20. A preliminary report on the development of MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-07-01

    We describe three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. We present a tensor class for manipulating tensors which allows for tensor multiplication and 'matricization.' We have further added two classes for representing tensors in decomposed format: cp_tensor and tucker_tensor. We demonstrate the use of these classes by implementing several algorithms that have appeared in the literature.
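
    The report describes MATLAB classes; purely as an illustration of the "matricization" operation it mentions, here is a hedged NumPy equivalent that unfolds a 3-way array along a chosen mode.

    ```python
    # Mode-n matricization (unfolding) of a small 3-way tensor.
    import numpy as np

    def unfold(tensor, mode):
        """Move axis `mode` to the front and flatten the remaining axes."""
        return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

    T = np.arange(24).reshape(2, 3, 4)
    print(unfold(T, 0).shape, unfold(T, 1).shape, unfold(T, 2).shape)  # (2, 12) (3, 8) (4, 6)
    ```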

  1. The development of algorithms for parallel knowledge discovery using graphics accelerators

    NASA Astrophysics Data System (ADS)

    Zieliński, Paweł; Mulawka, Jan

    2011-10-01

    The paper addresses selected knowledge discovery algorithms. Different implementations have been verified on parallel platforms, including graphics accelerators using CUDA technology, multi-core microprocessors using OpenMP, and multiple graphics accelerators. Results of the investigations have been compared in terms of performance and scalability. Different types of data representation were also tested. The capabilities of both platforms are discussed using the classification algorithms k-nearest neighbors, support vector machines, and logistic regression.
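
    The paper benchmarks parallel implementations rather than listing code; as a serial point of reference, a plain NumPy k-nearest-neighbours classifier (one of the three algorithms named) might look like the hedged sketch below.

    ```python
    # Minimal serial k-NN baseline on synthetic data; not the paper's parallel code.
    import numpy as np

    def knn_predict(X_train, y_train, X_test, k=5):
        d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)  # pairwise distances
        nearest = np.argsort(d, axis=1)[:, :k]
        return np.array([np.bincount(y_train[idx]).argmax() for idx in nearest])

    rng = np.random.default_rng(6)
    X_train, y_train = rng.normal(size=(100, 4)), rng.integers(0, 2, 100)
    print(knn_predict(X_train, y_train, rng.normal(size=(5, 4))))
    ```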

  2. Developing JSequitur to Study the Hierarchical Structure of Biological Sequences in a Grammatical Inference Framework of String Compression Algorithms.

    PubMed

    Galbadrakh, Bulgan; Lee, Kyung-Eun; Park, Hyun-Seok

    2012-12-01

    Grammatical inference methods are expected to find grammatical structures hidden in biological sequences. One hopes that studies of grammar serve as an appropriate tool for theory formation. Thus, we have developed JSequitur for automatically generating the grammatical structure of biological sequences in an inference framework of string compression algorithms. Our original motivation was to find any grammatical traits of several cancer genes that can be detected by string compression algorithms. Through this research, we could not find any meaningful unique traits of the cancer genes yet, but we could observe some interesting traits with regard to the relationship among gene length, similarity of sequences, the patterns of the generated grammar, and compression rate.

  3. End-to-End Design, Development and Testing of GOES-R Level 1 and 2 Algorithms

    NASA Astrophysics Data System (ADS)

    Zaccheo, T.; Copeland, A.; Steinfelt, E.; Van Rompay, P.; Werbos, A.

    2012-12-01

    GOES-R is the next generation of the National Oceanic and Atmospheric Administration's (NOAA) Geostationary Operational Environmental Satellite (GOES) System, and it represents a new technological era in operational geostationary environmental satellite systems. GOES-R will provide advanced products, based on government-supplied algorithms, which describe the state of the atmosphere, land, and oceans over the Western Hemisphere. The Harris GOES-R Core Ground Segment (GS) Team will provide the ground processing software and infrastructure needed to produce and distribute these data products. As part of this effort, new or updated Level 1b and Level 2+ algorithms will be deployed in the GOES-R Product Generation (PG) Element. In this work, we describe the general approach currently being employed to migrate these Level 1b (L1b) and Level 2+ (L2+) GOES-R PG algorithms from government-provided scientific descriptions to their implementation as integrated software, and provide an overview of how Product Generation software works with the other elements of the Ground Segment to produce Level 1/Level 2+ end-products. In general, GOES-R L1b algorithms ingest reformatted raw sensor data and ancillary information to produce geo-located GOES-R L1b data, and GOES-R L2+ algorithms ingest L1b data and other ancillary/auxiliary/intermediate information to produce L2+ products such as aerosol optical depth, rainfall rate, derived motion winds, and snow cover. In this presentation we provide an overview of the algorithm development life cycle, the common Product Generation software architecture, and the common test strategies used to verify/validate the scientific implementation. This work will highlight the Software Integration and Test phase of the software life-cycle and the suite of automated test/analysis tools developed to ensure that the implemented algorithms meet the desired reproducibility. As part of this discussion we will summarize the results of our algorithm testing to date.

  4. Characterizing the Preturbulence Environment for Sensor Development, New Hazard Algorithms and NASA Experimental Flight Planning

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Lin, Yuh-Lang

    2004-01-01

    During the grant period, several tasks were performed in support of the NASA Turbulence Prediction and Warning Systems (TPAWS) program. The primary focus of the research was on characterizing the preturbulence environment by developing predictive tools and simulating atmospheric conditions that preceded severe turbulence. The goal of the research was to provide both a dynamical understanding of conditions that preceded turbulence and predictive tools in support of operational NASA B-757 turbulence research flights. The advancements in characterizing the preturbulence environment will be applied by NASA to sensor development for predicting turbulence onboard commercial aircraft. Numerical simulations with atmospheric models as well as multi-scale observational analyses provided insights into the environment organizing turbulence in a total of forty-eight specific case studies of severe, accident-producing turbulence on commercial aircraft. These accidents exclusively affected commercial aircraft. A paradigm was developed which diagnosed specific atmospheric circulation systems from the synoptic scale down to the meso-γ scale that preceded turbulence in both clear air and in proximity to convection. The emphasis was primarily on convective turbulence as that is what the TPAWS program is most focused on in terms of developing improved sensors for turbulence warning and avoidance. However, the dynamical paradigm also has applicability to clear air and mountain turbulence. This dynamical sequence of events was then employed to formulate and test new hazard prediction indices that were first tested in research simulation studies and then ultimately were further tested in support of the NASA B-757 turbulence research flights. The new hazard characterization algorithms were utilized in a Real Time Turbulence Model (RTTM) that was operationally employed to support the NASA B-757 turbulence research flights. Improvements in the RTTM were implemented in an

  5. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

    1985-01-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  6. Planning fuel-conservative descents in an airline environment using a small programmable calculator: algorithm development and flight test results

    SciTech Connect

    Knox, C.E.; Vicroy, D.D.; Simmon, D.A.

    1985-05-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.
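
    As a toy illustration only (the NASA algorithm additionally accounts for the Mach/airspeed schedule, gross weight, winds, and temperature), the back-of-the-envelope Python calculation below shows the basic geometry of locating a top-of-descent point ahead of a metering fix for an assumed constant descent angle.

    ```python
    # Hypothetical numbers; illustrates only the simple descent geometry.
    import math

    altitude_ft = 35000.0       # assumed cruise altitude
    fix_altitude_ft = 10000.0   # assumed crossing restriction at the metering fix
    descent_angle_deg = 3.0     # assumed constant descent path angle

    feet_per_nm = 6076.12
    distance_nm = (altitude_ft - fix_altitude_ft) / (math.tan(math.radians(descent_angle_deg)) * feet_per_nm)
    print(f"begin descent about {distance_nm:.0f} NM before the fix")
    ```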

  7. Development of a dose algorithm for the modified Panasonic UD-802 personal dosimeter used at Three Mile Island

    SciTech Connect

    Miklos, J. A.; Plato, P.

    1988-01-01

    During the fall of 1981, the personnel dosimetry group at GPU Nuclear Corporation at Three Mile Island (TMI) requested assistance from The University of Michigan (UM) in developing a dose algorithm for use at TMI-2. The dose algorithm had to satisfy the specific needs of TMI-2, particularly the need to distinguish beta-particle emitters of different energies, as well as having the capability of satisfying the requirements of the American National Standards Institute (ANSI) N13.11-1983 standard. A standard Panasonic UD-802 dosimeter was modified by having the plastic filter over element 2 removed. The dosimeter and hanger consist of the elements with a 14 mg/cm² density thickness and the filtrations shown. The hanger on this dosimeter had a double open window to facilitate monitoring for low-energy beta particles. The dose algorithm was written to satisfy the requirements of the ANSI N13.11-1983 standard, to include Tl-204 and mixtures of Tl-204 with Sr-90/Y-90 and Cs-137, and to include 81- and 200-keV average energy X-ray spectra. Stress tests were conducted to observe the algorithm's performance at low doses, under temperature and humidity variations, and its residual response following high-dose irradiations. The ability of the algorithm to determine dose from the beta particles of Pm-147 was also investigated.

  8. Development of a blended-control, predictor-corrector guidance algorithm for a crewed Mars aerocapture vehicle

    NASA Astrophysics Data System (ADS)

    Jits, Roman Yuryevich

    A robust blended-control guidance system for a crewed Mars aerocapture vehicle is developed. The key features of its guidance algorithm are the use of both bank-angle and angle-of-attack modulation to control the aerobraking vehicle, and the use of multiple controls (sequenced pairs of bank-angles and angles-of-attack) within its numeric predictor-corrector targeting routine. The guidance algorithm macrologic is based on extensive open loop trajectory analyses, described in the present research, which led to the selection of a blended-control scheme. A heuristic approach to recover from situations where no converged guidance solution could be found by the numeric predictor-corrector is implemented in the guidance algorithm, and has been successfully demonstrated in a large number of test runs. In this research both the outer and inner loop of the guidance and control system employ the POST (Program to Optimize Simulated Trajectories) computer code as the basic simulation module. At each guidance update, the inner loop solves the rigorous three-dimensional equations of motion and computes the control (bank-angle and angle-of-attack) sequence that is required to meet the required atmospheric exit conditions. Throughout the aerocapture trajectory, the guidance algorithm modifies this control sequence computed by the inner loop, and generates commanded controls for the vehicle, which, when implemented by the outer loop, meet an imposed g-load constraint of 5 Earth g's and compensate for unexpected off-nominal conditions. This blended-control, predictor-corrector guidance algorithm has been successfully developed, implemented and tested and has been shown to be capable of meeting the prescribed g-load constraint and guiding the vehicle to the desired exit conditions for a range of off-nominal factors much wider than those which could be accommodated by prior algorithms and bank-angle-only guidance.

  9. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems.

    PubMed

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality.
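
    A hedged Python sketch of the coarse-alignment step described (estimating the integer-pixel offset between the two channels with a 2-D cross-correlation); the FFT-based implementation and the synthetic images are our assumptions, not the paper's code.

    ```python
    # Estimate an integer-pixel shift from the peak of a circular cross-correlation.
    import numpy as np

    rng = np.random.default_rng(7)
    ref = rng.normal(size=(64, 64))
    moved = np.roll(ref, shift=(3, -5), axis=(0, 1))  # simulate a misaligned frame

    corr = np.fft.ifft2(np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    offset = tuple(p if p <= n // 2 else p - n for p, n in zip(peak, ref.shape))
    print("estimated offset (rows, cols):", offset)   # expected (3, -5)
    ```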

  10. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems

    PubMed Central

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045

  11. Diagnosis and treatment of acute ankle injuries: development of an evidence-based algorithm

    PubMed Central

    Polzer, Hans; Kanz, Karl Georg; Prall, Wolf Christian; Haasters, Florian; Ockert, Ben; Mutschler, Wolf; Grote, Stefan

    2011-01-01

    Acute ankle injuries are among the most common injuries in emergency departments. However, there are still no standardized examination procedures or evidence-based treatment. Therefore, the aim of this study was to systematically search the current literature, classify the evidence, and develop an algorithm for the diagnosis and treatment of acute ankle injuries. We systematically searched PubMed and the Cochrane Database for randomized controlled trials, meta-analyses, systematic reviews or, if applicable, observational studies and classified them according to their level of evidence. According to the currently available literature, the following recommendations have been formulated: i) the Ottawa Ankle/Foot Rule should be applied in order to rule out fractures; ii) physical examination is sufficient for diagnosing injuries to the lateral ligament complex; iii) classification into stable and unstable injuries is applicable and of clinical importance; iv) the squeeze, crossed-leg, and external rotation tests are indicative of injuries of the syndesmosis; v) magnetic resonance imaging is recommended to verify injuries of the syndesmosis; vi) stable ankle sprains have a good prognosis, while for unstable ankle sprains, conservative treatment is at least as effective as operative treatment without the related possible complications; vii) early functional treatment leads to the fastest recovery and the lowest rate of re-injury; viii) supervised rehabilitation reduces residual symptoms and re-injuries. Taking these recommendations into account, we present an applicable and evidence-based, step by step, decision pathway for the diagnosis and treatment of acute ankle injuries, which can be implemented in any emergency department or doctor's practice. It provides quality assurance for the patient and promotes confidence in the attending physician. PMID:22577506

  12. A real-time and self-calibrating algorithm based on triaxial accelerometer signals for the detection of human posture and activity.

    PubMed

    Curone, Davide; Bertolotti, Gian Mario; Cristiani, Andrea; Secco, Emanuele Lindo; Magenes, Giovanni

    2010-07-01

    Assessment of human activity and posture with triaxial accelerometers provides insightful information about functional ability: classification of human activities in rehabilitation and elderly surveillance contexts has already been proposed in the literature. Meanwhile, recent technological advances allow the development of miniaturized wearable devices, integrated within garments, which may extend this assessment to novel tasks, such as real-time remote surveillance of workers and emergency operators intervening in harsh environments. We present an algorithm for human posture and activity-level detection, based on the real-time processing of the signals produced by one wearable triaxial accelerometer. The algorithm is independent of the sensor orientation with respect to the body. Furthermore, it associates to its outputs a "reliability" value, representing the classification quality, so that alarms are raised only when genuinely dangerous conditions are detected. The system was tested on a customized device to estimate the computational resources needed for real-time functioning. Results exhibit an overall 96.2% accuracy when classifying both static and dynamic activities.
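
    Purely as an illustration of the orientation-independence point made in the abstract (the published classifier is more elaborate and also reports a reliability value), the Python sketch below labels fixed windows of triaxial data as static or dynamic by thresholding the variability of the acceleration magnitude; the sampling rate, window length, and threshold are assumed values.

    ```python
    # Orientation-independent static/dynamic labelling from the acceleration norm.
    import numpy as np

    def activity_level(acc_xyz, fs=50, window_s=2.0, threshold_g=0.05):
        """One 'static'/'dynamic' label per window of triaxial samples (in g)."""
        magnitude = np.linalg.norm(acc_xyz, axis=1)
        n = int(fs * window_s)
        return ["dynamic" if magnitude[i:i + n].std() > threshold_g else "static"
                for i in range(0, len(magnitude) - n + 1, n)]

    rng = np.random.default_rng(8)
    still = [0.0, 0.0, 1.0] + rng.normal(0, 0.01, (100, 3))     # device at rest
    walking = [0.0, 0.0, 1.0] + rng.normal(0, 0.2, (100, 3))    # crude motion noise
    print(activity_level(np.vstack([still, walking])))
    ```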

  13. Development of an Algorithm for MODIS and VIIRS Cloud Optical Property Data Record Continuity

    NASA Astrophysics Data System (ADS)

    Meyer, K.; Platnick, S. E.; Ackerman, S. A.; Heidinger, A. K.; Holz, R.; Wind, G.; Amarasinghe, N.; Marchant, B.

    2015-12-01

    The launch of Suomi NPP in the fall of 2011 began the next generation of U.S. operational polar orbiting environmental observations. Similar to MODIS, the VIIRS imager provides visible through IR observations at moderate spatial resolution with a 1330 LT equatorial crossing consistent with MODIS on the Aqua platform. However, unlike MODIS, VIIRS lacks key water vapor and CO2 absorbing channels used by the MODIS cloud algorithms for high cloud detection and cloud-top property retrievals. In addition, there is a significant change in the spectral location of the 2.1μm shortwave-infrared channel used by MODIS for cloud optical/microphysical retrievals. Given the instrument differences between MODIS EOS and VIIRS S-NPP/JPSS, we discuss our adopted method for merging the 15+ year MODIS observational record with VIIRS in order to generate cloud optical property data record continuity across the observing systems. The optical property retrieval code uses heritage algorithms that produce the existing MODIS cloud optical and microphysical properties product (MOD06). As explained in other presentations submitted to this session, the NOAA AWG/CLAVR-x cloud-top property algorithm and a common MODIS-VIIRS cloud mask feed into the optical property algorithm to account for the different channel sets of the two imagers. Data granule and aggregated examples for the current version of the algorithm will be shown.

  14. Algorithm and code development for unsteady three-dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru

    1991-01-01

    A streamwise upwind algorithm for solving the unsteady 3-D Navier-Stokes equations was extended to handle the moving grid system. It is noted that the finite volume concept is essential to extend the algorithm. The resulting algorithm is conservative for any motion of the coordinate system. Two extensions to an implicit method were considered and the implicit extension that makes the algorithm computationally efficient is implemented into Ames's aeroelasticity code, ENSAERO. The new flow solver has been validated through the solution of test problems. Test cases include three-dimensional problems with fixed and moving grids. The first test case shown is an unsteady viscous flow over an F-5 wing, while the second test considers the motion of the leading edge vortex as well as the motion of the shock wave for a clipped delta wing. The resulting algorithm has been implemented into ENSAERO. The upwind version leads to higher accuracy in both steady and unsteady computations than the previously used central-difference method does, while the increase in the computational time is small.

  15. Active optics control development at the LBT

    NASA Astrophysics Data System (ADS)

    Ashby, David S.; Biddick, Christopher; Hill, John M.

    2014-07-01

    The Large Binocular Telescope (LBT) is built around two 8.4 m-diameter primary mirrors placed with a centerline separation of 14.4 m in a common altitude/azimuth mount. Each side of the telescope can utilize a deployable prime focus instrument; alternatively, the beam can be directed to a Gregorian instrument by utilizing a deployable secondary mirror. The direct-Gregorian beam can be intercepted and redirected to several bent-Gregorian instruments by utilizing a deployable tertiary mirror. Two of the available bent-Gregorian instruments are interferometers, capable of coherently combining the beams from the two sides of the telescope. Active optics can utilize as many as 26 linearly independent degrees of freedom to position the primary, secondary and tertiary mirrors to control optical collimation while the telescope operates in its numerous observing modes. Additionally, by applying differential forces at 160 locations on each primary mirror, active optics controls the primary mirror figure. The authors explore the challenges associated with collimation and primary mirror figure control at the LBT and outline the ongoing related development aimed at optimizing image quality and preparing the telescope for interferometric operations.

  16. System ID modern control algorithms for active aerodynamic load control and impact on gearbox loading.

    SciTech Connect

    Berg, Jonathan Charles; Halse, Chris; Crowther, Ashley; Barlas, Thanasis; Wilson, David Gerald; Berg, Dale E.; Resor, Brian Ray

    2010-06-01

    Prior work on active aerodynamic load control (AALC) of wind turbine blades has demonstrated that appropriate use of this technology has the potential to yield significant reductions in blade loads, leading to a decrease in the cost of wind energy. While the general concept of AALC is usually discussed in the context of multiple sensors and active control devices (such as flaps) distributed over the length of the blade, most work to date has been limited to consideration of a single control device per blade with very basic Proportional Derivative controllers, due to limitations in the aeroservoelastic codes used to perform turbine simulations. This work utilizes a new aeroservoelastic code developed at Delft University of Technology to model the NREL/Upwind 5 MW wind turbine to investigate the relative advantage of utilizing multiple-device AALC. System identification techniques are used to identify the frequencies and shapes of turbine vibration modes, and these are used with modern control techniques to develop both Single-Input Single-Output (SISO) and Multiple-Input Multiple-Output (MIMO) LQR flap controllers. Comparison of simulation results with these controllers shows that the MIMO controller does yield some improvement over the SISO controller in fatigue load reduction, but additional improvement is possible with further refinement. In addition, a preliminary investigation shows that AALC has the potential to reduce off-axis gearbox loads, leading to reduced gearbox bearing fatigue damage and improved lifetimes.

  17. Time series analysis to identify thermal precursors and develop forecasting algorithms: case studies from Bezymianny, Shiveluch, Kliuchevskoi and Karymsky

    NASA Astrophysics Data System (ADS)

    van Manen, S. M.; Dehn, J.; Blake, S.

    2010-12-01

    Volcanic ash injected into aircraft routes poses a severe risk to both life and cargo, and can have a severe economic impact as exemplified by the recent Eyjafjallajokull eruption, which cost the airline industry approximately $200 million per day. Here we present detailed quantitative analyses of AVHRR (Advanced Very High Resolution Radiometer) thermal data from 1993-2008 from Bezymianny, Shiveluch, Kliuchevskoi and Karymsky (Russia). Quantitative analysis of long-term time series of thermal satellite data is an effective tool to monitor volcanic activity and identify potential thermal precursory signals. Bezymianny and Shiveluch have many outwardly similar characteristics: both erupt intermediate-composition magma, both have exploded in the past century with a lateral blast, and both now have similar dome volumes of approximately 0.3-0.4 km³. Both also have an almost continuous thermal presence, but their thermal signatures indicate highly different behaviour. At Bezymianny, a successful algorithm has been developed based on the trends observed prior to known explosions. It uses contextual, temporal and fixed-threshold approaches to analyze slope and intercept values of straight lines fitted through 30-day moving windows of AVHRR thermal data. However, this approach was not successful at Shiveluch. We suggest that the difference is due to the physical properties of their specific magmas, magma supply rates and subsurface structure. The greater extrusion rate observed at Shiveluch could inhibit gas exsolution, therefore resulting in more, but less well-defined, explosive activity than is observed at Bezymianny. Consistent thermal precursors were not observed at Kliuchevskoi or Karymsky. At Kliuchevskoi, fast magma ascent rates and relatively low magma viscosity are thought to prevent the generation of thermal precursors. Karymsky, on the other hand, shows a lot of thermal activity and has the highest long-term magma discharge rate of the four volcanoes, but the AVHRR data do
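
    A hedged sketch of the moving-window trend fit mentioned for the Bezymianny algorithm: fit a straight line to each 30-day window of a synthetic thermal time series and flag windows whose slope exceeds a threshold. The threshold and the data are illustrative assumptions, not the study's calibrated values.

    ```python
    # Windowed linear fits over a synthetic thermal series; flag a rising trend.
    import numpy as np

    rng = np.random.default_rng(9)
    days = np.arange(365)
    radiance = rng.normal(10.0, 0.5, days.size)
    radiance[300:330] += np.linspace(0.0, 5.0, 30)   # synthetic pre-explosion ramp-up

    window = 30
    for start in range(days.size - window + 1):
        t = days[start:start + window]
        slope, intercept = np.polyfit(t, radiance[start:start + window], 1)
        if slope > 0.1:                              # example threshold only
            print(f"day {t[-1]}: slope {slope:.2f} per day flagged")
            break
    ```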

  18. Towards developing robust algorithms for solving partial differential equations on MIMD machines

    NASA Technical Reports Server (NTRS)

    Saltz, J. H.; Naik, V. K.

    1985-01-01

    Methods for efficient computation of numerical algorithms on a wide variety of MIMD machines are proposed. These techniques reorganize the data dependency patterns to improve the processor utilization. The model problem finds the time-accurate solution to a parabolic partial differential equation discretized in space and implicitly marched forward in time. The algorithms are extensions of Jacobi and SOR. The extensions consist of iterating over a window of several timesteps, allowing efficient overlap of computation with communication. The methods increase the degree to which work can be performed while data are communicated between processors. The effect of the window size and of domain partitioning on system performance is examined by implementing the algorithm on a simulated multiprocessor system.
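
    The windowed relaxation idea can be illustrated serially on the 1D heat equation with backward-Euler time stepping: Jacobi sweeps are applied to a block of several timesteps at once (on a parallel machine this extra concurrent work is what overlaps with communication). The grid size, window length, and sweep count below are arbitrary illustrative choices.

    ```python
    import numpy as np

    def windowed_jacobi_heat(u0, r, nsteps, window=4, sweeps=200):
        """Backward-Euler marching of u_t = u_xx (Dirichlet boundaries held at 0)
        with Jacobi iterations applied over a window of timesteps; r = dt/dx**2."""
        u_prev = u0.copy()
        history = [u0.copy()]
        step = 0
        while step < nsteps:
            w = min(window, nsteps - step)
            u = [u_prev.copy() for _ in range(w)]      # guesses for the window
            for _ in range(sweeps):
                for k in range(w):
                    rhs = u_prev if k == 0 else u[k - 1]
                    unew = u[k].copy()
                    unew[1:-1] = (rhs[1:-1]
                                  + r * (u[k][:-2] + u[k][2:])) / (1.0 + 2.0 * r)
                    u[k] = unew
            u_prev = u[-1]
            history.extend(u)
            step += w
        return np.array(history)

    # Example: diffusion of a unit spike on a 21-point grid.
    u0 = np.zeros(21)
    u0[10] = 1.0
    solution = windowed_jacobi_heat(u0, r=0.5, nsteps=8, window=4)
    ```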

  19. Towards developing robust algorithms for solving partial differential equations on MIMD machines

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Naik, Vijay K.

    1988-01-01

    Methods for efficient computation of numerical algorithms on a wide variety of MIMD machines are proposed. These techniques reorganize the data dependency patterns to improve the processor utilization. The model problem finds the time-accurate solution to a parabolic partial differential equation discretized in space and implicitly marched forward in time. The algorithms are extensions of Jacobi and SOR. The extensions consist of iterating over a window of several timesteps, allowing efficient overlap of computation with communication. The methods increase the degree to which work can be performed while data are communicated between processors. The effect of the window size and of domain partitioning on system performance is examined by implementing the algorithm on a simulated multiprocessor system.

  20. Quantitative structure-activity relationship (QSAR) for insecticides: development of predictive in vivo insecticide activity models.

    PubMed

    Naik, P K; Singh, T; Singh, H

    2009-07-01

    Quantitative structure-activity relationship (QSAR) analyses were performed independently on data sets belonging to two groups of insecticides, namely the organophosphates and carbamates. Several types of descriptors including topological, spatial, thermodynamic, information content, lead likeness and E-state indices were used to derive quantitative relationships between insecticide activities and structural properties of chemicals. A systematic search approach based on missing value, zero value, simple correlation and multi-collinearity tests as well as the use of a genetic algorithm allowed the optimal selection of the descriptors used to generate the models. The QSAR models developed for both organophosphate and carbamate groups revealed good predictability with r(2) values of 0.949 and 0.838 as well as [image omitted] values of 0.890 and 0.765, respectively. In addition, a linear correlation was observed between the predicted and experimental LD(50) values for the test set data with r(2) of 0.871 and 0.788 for both the organophosphate and carbamate groups, indicating that the prediction accuracy of the QSAR models was acceptable. The models were also tested successfully from external validation criteria. QSAR models developed in this study should help further design of novel potent insecticides.

  1. CoSMIR Measurements in Support of GPM Algorithm Development and Validation

    NASA Astrophysics Data System (ADS)

    Skofronick Jackson, G.; Wang, J. R.

    2012-12-01

    The Conical Scanning Millimeter-wave Imaging Radiometer (CoSMIR) is an aircraft instrument flown to reduce assumptions in GPM (Global Precipitation Measurement) satellite retrieval algorithms and to provide validation data. This instrument plays the role of an airborne high-frequency simulator for the GPM mission's Microwave Imager (GMI) and has channels particularly sensitive to precipitating ice and snow in clouds. CoSMIR flew on the ER-2 in the Midlatitude Continental Convective Cloud Experiment (MC3E) field campaign in Oklahoma during April-June 2011 and on the DC-8 for the GPM Cold-season Precipitation Experiment (GCPEx) in January-February 2012 in Ontario, Canada. A unique feature of CoSMIR is that it is programmed to acquire radiometric measurements in both conical and cross-track scans nearly simultaneously. An advantage of this dual scanning is that it allows comparison/validation with satellite measurements in conical modes (e.g., TRMM, SSMIS) and cross-track modes (e.g., ATMS, MHS). The GPM Microwave Imager (GMI), scheduled for launch in early 2014, includes 13 channels ranging from 10 to 183±7 GHz. Precipitation retrievals over land with a microwave imager such as the Tropical Rainfall Measurement Mission (TRMM) Microwave Imager (TMI) often face difficulties in measuring precipitation because of the lack of surface radiometric contrast and homogeneity. GMI with additional high frequency channels, beyond those of TMI, should improve the retrieval skill over land surface. Furthermore, GMI could expand the global precipitation retrievals to include frozen hydrometeors such as graupel above the melting layer all the way to snow falling at the Earth's surface. To explore these opportunities and acquire data for algorithm development, CoSMIR has six channels that precisely match GMI high frequency channels with the following: 89 (dual-polarized), 165 (dual-polarized), 183.3±1, 183.3±3, and 183.3±7 GHz. During the MC3E field campaign, CoSMIR was on the ER

  2. THE DEVELOPMENT OF A PARAMETERIZED SCATTER REMOVAL ALGORITHM FOR NUCLEAR MATERIALS IDENTIFICATION SYSTEM IMAGING

    SciTech Connect

    Grogan, Brandon R

    2010-05-01

    This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the
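
    The parameterization step can be sketched as fitting a Gaussian to a simulated point scatter function and subtracting the fitted scatter from a measured profile. The profile shapes and numbers below are made-up illustrations, not NMIS data or the report's actual fitting procedure.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amplitude, center, sigma):
        """Gaussian model used to parameterize a point scatter function (PScF)."""
        return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2)

    rng = np.random.default_rng(0)

    # Hypothetical Monte Carlo PScF: scattered-neutron counts vs detector position.
    position = np.linspace(-10.0, 10.0, 41)                                   # cm
    scatter_counts = gaussian(position, 120.0, 0.0, 3.0) + rng.poisson(5, position.size)

    # Fit the Gaussian so the PScF is described by three parameters; the fitted
    # curve can then be removed from a measured profile without rerunning the
    # simulation for each new measurement.
    popt, _ = curve_fit(gaussian, position, scatter_counts, p0=(100.0, 0.0, 2.0))

    measured_profile = gaussian(position, 500.0, 0.0, 1.0) + gaussian(position, *popt)
    corrected_profile = measured_profile - gaussian(position, *popt)
    ```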

  3. The Development of a Parameterized Scatter Removal Algorithm for Nuclear Materials Identification System Imaging

    SciTech Connect

    Grogan, Brandon Robert

    2010-03-01

    This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using

  4. Development of a Near Real-Time Hail Damage Swath Identification Algorithm for Vegetation

    NASA Technical Reports Server (NTRS)

    Bell, Jordan R.; Molthan, Andrew L.; Schultz, Kori A.; McGrath, Kevin M.; Burks, Jason E.

    2015-01-01

    Every year in the Midwest and Great Plains, widespread greenness forms in conjunction with the latter part of the spring-summer growing season. This prevalent greenness forms as a result of the high concentration of agricultural areas whose crops reach maturity before the fall harvest. This time of year also coincides with an enhanced hail frequency for the Great Plains (Cintineo et al. 2012). These severe thunderstorms can bring damaging winds and large hail that can result in damage to the surface vegetation. The spatial extent of the damage can be a relatively small concentrated area or a vast swath of damage that is visible from space. These large areas of damage have been well documented over the years. In the late 1960s aerial photography was used to evaluate crop damage caused by hail. As satellite remote sensing technology has evolved, the identification of these hail damage streaks has increased. Satellites have made it possible to view these streaks in additional spectrums. Parker et al. (2005) used the Moderate Resolution Imaging Spectroradiometer (MODIS) to document two streaks that occurred in South Dakota, noting the potential impact that these streaks had on the surface temperature and associated surface fluxes that are affected by a change in temperature. Gallo et al. (2012) also used MODIS to examine the correlation between radar signatures and ground observations from storms that produced a hail damage swath in central Iowa. Finally, Molthan et al. (2013) identified hail damage streaks through MODIS, Landsat-7, and SPOT observations of different resolutions for the development of potential near-real-time applications. The manual analysis of hail damage streaks in satellite imagery is both tedious and time consuming, and may be inconsistent from event to event. This study focuses on development of an objective and automatic algorithm to detect these areas of damage in a more efficient and timely manner. This study utilizes the
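
    The automated detection goal stated above could, for example, be approached with a simple vegetation-index differencing test; the sketch below is only one plausible scheme with assumed thresholds, not the algorithm developed in the study.

    ```python
    import numpy as np

    def flag_hail_damage(ndvi_before, ndvi_after, drop_threshold=0.15,
                         min_ndvi_before=0.5):
        """Flag pixels whose NDVI drops sharply between pre-storm and post-storm
        composites, restricted to pixels that were healthy vegetation beforehand.
        Both thresholds are illustrative assumptions."""
        ndvi_before = np.asarray(ndvi_before, dtype=float)
        ndvi_after = np.asarray(ndvi_after, dtype=float)
        healthy_before = ndvi_before >= min_ndvi_before
        sharp_drop = (ndvi_before - ndvi_after) >= drop_threshold
        return healthy_before & sharp_drop
    ```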

  5. Algorithms for Performance, Dependability, and Performability Evaluation using Stochastic Activity Networks

    NASA Technical Reports Server (NTRS)

    Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.

    1997-01-01

    Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling by Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and that do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate-matrix on-the-fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate-matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory and computationally efficient technique for Gauss-Seidel based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
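
    The on-the-fly idea described above can be illustrated with a Gauss-Seidel solver that asks a generator function for one matrix row at a time instead of storing the matrix. The tridiagonal example generator is a stand-in; in the intended application the rows would be derived from the stochastic activity network model.

    ```python
    import numpy as np

    def gauss_seidel_on_the_fly(row_generator, b, n, sweeps=500, tol=1e-10):
        """Gauss-Seidel iteration for A x = b where row_generator(i) returns
        (column_indices, values) for row i of A, generated on demand."""
        x = np.zeros(n)
        for _ in range(sweeps):
            max_change = 0.0
            for i in range(n):
                cols, vals = row_generator(i)
                diag, acc = 0.0, b[i]
                for j, a_ij in zip(cols, vals):
                    if j == i:
                        diag = a_ij
                    else:
                        acc -= a_ij * x[j]
                new_xi = acc / diag
                max_change = max(max_change, abs(new_xi - x[i]))
                x[i] = new_xi
            if max_change < tol:
                break
        return x

    # Illustrative generator for a tridiagonal system standing in for model rows.
    def tridiag_row(i, n=50):
        cols, vals = [i], [2.0]
        if i > 0:
            cols.append(i - 1); vals.append(-1.0)
        if i < n - 1:
            cols.append(i + 1); vals.append(-1.0)
        return cols, vals

    x = gauss_seidel_on_the_fly(lambda i: tridiag_row(i, 50), np.ones(50), 50)
    ```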

  6. Activity concentration measurements using a conjugate gradient (Siemens xSPECT) reconstruction algorithm in SPECT/CT.

    PubMed

    Armstrong, Ian S; Hoffmann, Sandra A

    2016-11-01

    Quantitative single photon emission computed tomography (SPECT) shows potential in a number of clinical applications, and several vendors now provide software and hardware solutions to allow 'SUV-SPECT' to mirror metrics used in PET imaging. This brief technical report assesses the accuracy of activity concentration measurements using a new algorithm 'xSPECT' from Siemens Healthcare. SPECT/CT data were acquired from a uniform cylinder with 5, 10, 15 and 20 s/projection and a NEMA image quality phantom with 25 s/projection. The NEMA phantom had hot spheres filled with an 8 : 1 activity concentration relative to the background compartment. Reconstructions were performed using parameters defined by manufacturer presets available with the algorithm. The accuracy of activity concentration measurements was assessed. A dose calibrator-camera cross-calibration factor (CCF) was derived from the uniform phantom data. In uniform phantom images, a positive bias was observed, ranging from ∼6% in the lower count images to ∼4% in the higher-count images. On the basis of the higher-count data, a CCF of 0.96 was derived. As expected, considerable negative bias was measured in the NEMA spheres using region mean values whereas positive bias was measured in the four largest NEMA spheres. Nonmonotonically increasing recovery curves for the hot spheres suggested the presence of Gibbs edge enhancement from resolution modelling. Sufficiently accurate activity concentration measurements can easily be made on images reconstructed with the xSPECT algorithm without a CCF. However, the use of a CCF is likely to improve accuracy further. A manual conversion of voxel values into SUV should be possible, provided that the patient weight, injected activity and time between injection and imaging are all known accurately.

  7. Activity concentration measurements using a conjugate gradient (Siemens xSPECT) reconstruction algorithm in SPECT/CT.

    PubMed

    Armstrong, Ian S; Hoffmann, Sandra A

    2016-11-01

    Quantitative single photon emission computed tomography (SPECT) shows potential in a number of clinical applications, and several vendors now provide software and hardware solutions to allow 'SUV-SPECT' to mirror metrics used in PET imaging. This brief technical report assesses the accuracy of activity concentration measurements using a new algorithm 'xSPECT' from Siemens Healthcare. SPECT/CT data were acquired from a uniform cylinder with 5, 10, 15 and 20 s/projection and a NEMA image quality phantom with 25 s/projection. The NEMA phantom had hot spheres filled with an 8 : 1 activity concentration relative to the background compartment. Reconstructions were performed using parameters defined by manufacturer presets available with the algorithm. The accuracy of activity concentration measurements was assessed. A dose calibrator-camera cross-calibration factor (CCF) was derived from the uniform phantom data. In uniform phantom images, a positive bias was observed, ranging from ∼6% in the lower count images to ∼4% in the higher-count images. On the basis of the higher-count data, a CCF of 0.96 was derived. As expected, considerable negative bias was measured in the NEMA spheres using region mean values whereas positive bias was measured in the four largest NEMA spheres. Nonmonotonically increasing recovery curves for the hot spheres suggested the presence of Gibbs edge enhancement from resolution modelling. Sufficiently accurate activity concentration measurements can easily be made on images reconstructed with the xSPECT algorithm without a CCF. However, the use of a CCF is likely to improve accuracy further. A manual conversion of voxel values into SUV should be possible, provided that the patient weight, injected activity and time between injection and imaging are all known accurately. PMID:27501436
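
    The cross-calibration and manual SUV conversion mentioned above can be sketched as follows. The half-life (Tc-99m assumed), the unit handling, and the direction of the CCF convention are assumptions for illustration; they are not specified in the abstract.

    ```python
    import numpy as np

    def cross_calibration_factor(true_conc_bq_ml, image_mean_conc_bq_ml):
        """Dose calibrator-camera cross-calibration factor from a uniform phantom
        (assumed convention: the factor that scales image values onto the
        calibrator)."""
        return true_conc_bq_ml / image_mean_conc_bq_ml

    def voxel_to_suv(voxel_conc_bq_ml, injected_activity_mbq, patient_weight_kg,
                     minutes_since_injection, half_life_min=6.0 * 60.0, ccf=1.0):
        """Manual SUV conversion of a reconstructed voxel value: decay-correct the
        injected activity to scan time and normalize by body weight (1 g ~ 1 mL)."""
        remaining_bq = (injected_activity_mbq * 1e6 *
                        0.5 ** (minutes_since_injection / half_life_min))
        conc_bq_ml = voxel_conc_bq_ml * ccf
        return conc_bq_ml * patient_weight_kg * 1000.0 / remaining_bq

    suv = voxel_to_suv(voxel_conc_bq_ml=9000.0, injected_activity_mbq=740.0,
                       patient_weight_kg=75.0, minutes_since_injection=180.0,
                       ccf=0.96)
    ```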

  8. Utilization of Airborne and in Situ Data Obtained in SGP99, SMEX02, CLASIC and SMAPVEX08 Field Campaigns for SMAP Soil Moisture Algorithm Development and Validation

    NASA Technical Reports Server (NTRS)

    Colliander, Andreas; Chan, Steven; Yueh, Simon; Cosh, Michael; Bindlish, Rajat; Jackson, Tom; Njoku, Eni

    2010-01-01

    Field experiment data sets that include coincident remote sensing measurements and in situ sampling will be valuable in the development and validation of the soil moisture algorithms of NASA's future SMAP (Soil Moisture Active and Passive) mission. This paper presents an overview of the field experiment data collected from the SGP99, SMEX02, CLASIC and SMAPVEX08 campaigns. Common to these campaigns were observations of the airborne PALS (Passive and Active L- and S-band) instrument, which was developed to acquire radar and radiometer measurements at low frequencies. The combined set of PALS measurements and ground truth obtained from all these campaigns was studied. The investigation shows that the data set contains a range of soil moisture values collected under a limited number of conditions. The quality of both the PALS and ground truth data meets the needs of SMAP algorithm development and validation. The data set has already made a significant impact on the science behind the SMAP mission. The areas where complementary data would be most beneficial are also discussed.

  9. Development and validation of a linear recursive "Order-N" algorithm for the simulation of flexible space manipulator dynamics

    NASA Astrophysics Data System (ADS)

    Van Woerkom, P. Th. L. M.; de Boer, A.

    1995-01-01

    Robotic manipulators designed to operate on-board spacecraft and Space Stations are characterized by large spatial dimensions. The structural flexibility inherent in such manipulators introduces a noticeable and undesirable modification of the traditional rigid-body manipulator dynamics. As a result, the dynamics of the complete system comprising a flexible spacecraft or Space Station as a manipulator base, and an attached flexible manipulator, are also modified. Operational requirements related to high manoeuvre accuracy and modest manoeuvre duration create the need for careful modelling and simulation of the dynamics of such systems. The objective of this paper is to outline the development and validation of an advanced algorithm for the simulation of the dynamics of such flexible spacecraft/space manipulator systems. The requirements imposed during the development of the present prototype dynamics simulator led to the modification and implementation of an existing linear recursive algorithm ("Order-N" algorithm), which requires a computational effort proportional to the number of component bodies in the system. Starting with the Lagrange form of the d'Alembert principle, we first deduce a parametric form which is found to yield, amongst others, the basic forms of the Newton-Euler, the d'Alembert and the Gauss dynamics principles. It is then shown how the application of each of the latter three principles can be made to lead gracefully to the desired Order-N algorithm for the flexible multi-body system. The Order-N algorithm thus obtained and validated analytically forms the basis for the prototype simulator REALDYN, designed to permit numerical simulation of the algorithm on UNIX workstations. Verification, numerical integration and further validation tests have been carried out. Some of the results obtained during the validation exercises could not be explained readily, even in the case of simple multi-body systems. The use of test tools and physical

  10. Development of an algorithm to measure defect geometry using a 3D laser scanner

    NASA Astrophysics Data System (ADS)

    Kilambi, S.; Tipton, S. M.

    2012-08-01

    Current fatigue life prediction models for coiled tubing (CT) require accurate measurements of the defect geometry. Three-dimensional (3D) laser imaging has shown promise toward becoming a nondestructive, non-contacting method of surface defect characterization. Laser imaging provides a detailed photographic image of a flaw, in addition to a detailed 3D surface map from which its critical dimensions can be measured. This paper describes algorithms to determine defect characteristics, specifically depth, width, length and projected cross-sectional area. Curve-fitting methods were compared and implicit algebraic fits have higher probability of convergence compared to explicit geometric fits. Among the algebraic fits, the Taubin circle fit has the least error. The algorithm was able to extract the dimensions of the flaw geometry from the scanned data of CT to within a tolerance of about 0.127 mm, close to the tolerance specified for the laser scanner itself, compared to measurements made using traveling microscopes. The algorithm computes the projected surface area of the flaw, which could previously only be estimated from the dimension measurements and the assumptions made about cutter shape. Although shadows compromised the accuracy of the shape characterization, especially for deep and narrow flaws, the results indicate that the algorithm with laser scanner can be used for non-destructive evaluation of CT in the oil field industry. Further work is needed to improve accuracy, to eliminate shadow effects and to reduce radial deviation.
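
    The Taubin circle fit singled out above is a standard algebraic fit; a compact implementation following the usual Newton-on-characteristic-polynomial formulation is sketched below (a generic implementation, not the authors' code).

    ```python
    import numpy as np

    def taubin_circle_fit(x, y):
        """Taubin algebraic circle fit; returns (xc, yc, radius)."""
        x = np.asarray(x, float); y = np.asarray(y, float)
        xm, ym = x.mean(), y.mean()
        u, v = x - xm, y - ym
        z = u * u + v * v
        Muu, Mvv, Muv = (u * u).mean(), (v * v).mean(), (u * v).mean()
        Muz, Mvz, Mzz = (u * z).mean(), (v * z).mean(), (z * z).mean()
        Mz = Muu + Mvv
        cov_uv = Muu * Mvv - Muv * Muv
        var_z = Mzz - Mz * Mz
        a3 = 4.0 * Mz
        a2 = -3.0 * Mz * Mz - Mzz
        a1 = var_z * Mz + 4.0 * cov_uv * Mz - Muz * Muz - Mvz * Mvz
        a0 = (Muz * (Muz * Mvv - Mvz * Muv)
              + Mvz * (Mvz * Muu - Muz * Muv) - var_z * cov_uv)
        t, p = 0.0, a0                      # Newton's method on the cubic
        for _ in range(50):
            dp = a1 + t * (2.0 * a2 + 3.0 * a3 * t)
            t_new = t - p / dp
            if not np.isfinite(t_new) or t_new == t:
                break
            p_new = a0 + t_new * (a1 + t_new * (a2 + t_new * a3))
            if abs(p_new) >= abs(p):
                break
            t, p = t_new, p_new
        det = t * t - t * Mz + cov_uv
        xc = (Muz * (Mvv - t) - Mvz * Muv) / det / 2.0
        yc = (Mvz * (Muu - t) - Muz * Muv) / det / 2.0
        radius = np.sqrt(xc * xc + yc * yc + Mz)
        return xc + xm, yc + ym, radius

    # Quick check on a partial arc of a known circle.
    theta = np.linspace(0.0, 1.5 * np.pi, 40)
    print(taubin_circle_fit(3.0 + 2.0 * np.cos(theta), -1.0 + 2.0 * np.sin(theta)))
    ```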

  11. Developments of aerosol retrieval algorithm for Geostationary Environmental Monitoring Spectrometer (GEMS) and the retrieval accuracy test

    NASA Astrophysics Data System (ADS)

    KIM, M.; Kim, J.; Jeong, U.; Ahn, C.; Bhartia, P. K.; Torres, O.

    2013-12-01

    A scanning UV-Visible spectrometer, the GEMS (Geostationary Environment Monitoring Spectrometer) onboard the GEO-KOMPSAT2B (Geostationary Korea Multi-Purpose Satellite), is planned to be launched in geostationary orbit in 2018. The GEMS employs hyper-spectral imaging with 0.6 nm resolution to observe solar backscatter radiation in the UV and Visible range. In the UV range, the low surface contribution to the backscattered radiation and the strong interaction between aerosol absorption and molecular scattering can be advantageous in retrieving aerosol optical properties such as aerosol optical depth (AOD) and single scattering albedo (SSA). Taking this advantage, the OMI UV aerosol algorithm has provided information on absorbing aerosol (Torres et al., 2007; Ahn et al., 2008). This study presents a UV-VIS algorithm to retrieve AOD and SSA from GEMS. The algorithm is based on the general inversion method, which uses a pre-calculated look-up table with assumed aerosol properties and measurement conditions. To assess the retrieval accuracy, the error of the look-up table method introduced by the interpolation of pre-calculated radiances is estimated using a reference dataset, and the uncertainties associated with aerosol type and height are evaluated. Also, the GEMS aerosol algorithm is tested with measured normalized radiances from OMI, a provisional dataset for GEMS measurements, and the results are compared with values from AERONET measurements over Asia. Additionally, the method for simultaneous retrieval of AOD and aerosol height is discussed.

  12. FY-3C/VIRR SST algorithm and cal/val activities at NSMC/CMA

    NASA Astrophysics Data System (ADS)

    Wang, Sujuan; Cui, Peng; Zhang, Peng; Ran, Maonong; Lu, Feng; Wang, Weihe

    2014-11-01

    The National Satellite Meteorological Center (NSMC)/CMA global sea surface temperature (SST) data are derived from measurements made by the Visible and Infrared Radiometer (VIRR) on board the FY-3 series polar orbiting satellites. Quality-controlled in situ data from iQUAM (STAR/NESDIS/NOAA) are used in the FY-3B/C VIRR matching procedure. The monthly matchup database (MDB) has been created from FY-3C VIRR measurements paired with coincident SST measurements from buoys since November 2013. The satellite sensor's brightness temperature and buoy SST pairs are included in the MDB if they are coincident within 3 km in space and 1 hour in time. Least-squares regression is used for estimating the first-guess coefficients and SST residuals. Outliers are removed using a median±2STD criterion, and the final coefficients are estimated by robust regression. A set of SST regression formalisms was tested based on the NOAA-19/AVHRR 2010 MDB. The tests show that the split-window nonlinear SST (NLSST) performs best for daytime and the triple-window MCSST (TCSST) performs best for nighttime, which agrees with STAR/NESDIS results. The same regression analysis was also applied to the FY-3C/VIRR MDB. Comparing three daytime SST algorithms and five nighttime SST algorithms, the best algorithms for retrieving FY-3C/VIRR SST are NLSST for daytime and TCSST for nighttime. Comparison of the coefficients of the nighttime TCSST algorithm shows that, for FY-3B/C VIRR SST, the contribution of the 3.7 μm band is smaller than that of the split-window bands. The performance of the 3.7 μm band of FY-3C/VIRR is better than that of FY-3B/VIRR, but worse than that of NOAA-19/AVHRR.
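
    The daytime regression step can be illustrated with a least-squares fit of the standard split-window NLSST form against buoy matchups, followed by the median±2STD screening described above. The functional form shown is the commonly used NLSST expression and is an assumption here; the exact formalisms tested in the study are not reproduced in the abstract.

    ```python
    import numpy as np

    def fit_nlsst(t11, t12, sst_first_guess, sat_zenith_deg, sst_buoy):
        """Fit NLSST coefficients by ordinary least squares:
        SST = a0 + a1*T11 + a2*(T11 - T12)*Tfg + a3*(T11 - T12)*(sec(theta) - 1)."""
        dt = t11 - t12
        sec_term = 1.0 / np.cos(np.radians(sat_zenith_deg)) - 1.0
        design = np.column_stack([np.ones_like(t11), t11,
                                  dt * sst_first_guess, dt * sec_term])
        coeffs, *_ = np.linalg.lstsq(design, sst_buoy, rcond=None)
        residuals = sst_buoy - design @ coeffs
        return coeffs, residuals

    def median_2std_mask(residuals, n_std=2.0):
        """Keep matchups whose residual lies within median +/- n_std * STD."""
        return np.abs(residuals - np.median(residuals)) <= n_std * np.std(residuals)
    ```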

  13. Development of aerosol retrieval algorithm for Geostationary Environmental Monitoring Spectrometer (GEMS)

    NASA Astrophysics Data System (ADS)

    Kim, Mijin; Kim, Jhoon; Park, Sang Seo; Jeong, Ukkyo; Ahn, Changwoo; Bhartia, Pawan. K.; Torres, Omar; Song, Chang-Keun; Han, Jin-Seok

    2014-05-01

    A scanning UV-Visible spectrometer, the GEMS (Geostationary Environment Monitoring Spectrometer) onboard the GEO-KOMPSAT2B (Geostationary Korea Multi-Purpose Satellite), is planned to be launched in geostationary orbit in 2018. The GEMS employs hyper-spectral imaging with 0.6 nm resolution to observe solar backscatter radiation in the UV and Visible range. In the UV range, the low surface contribution to the backscattered radiation and the strong interaction between aerosol absorption and molecular scattering can be advantageous in retrieving aerosol optical properties such as aerosol optical depth (AOD) and single scattering albedo (SSA). This study presents a UV-VIS algorithm to retrieve AOD and SSA from GEMS. The algorithm is based on the general inversion method, which uses a pre-calculated look-up table (LUT) with assumed aerosol properties and measurement conditions. To calculate the LUT, aerosol optical properties over Asia [70°E-145°E, 0°N-50°N] are obtained from AERONET inversion data (level 2.0) at 46 AERONET sites and are applied to VLIDORT (Spurr, 2006). Because the backscattered radiance in the UV-Visible range is significantly sensitive to the absorptivity and size distribution of the aerosol loading, aerosol types are classified from AERONET inversion data using the aerosol classification method suggested in Lee et al. (2010). The LUTs are then calculated with average optical properties for each aerosol type. The GEMS aerosol algorithm is tested with the OMI level-1B dataset, a provisional dataset for GEMS measurements. The aerosol type for each measured scene is selected using both UVAI and VISAI, and AOD and SSA are simultaneously retrieved by comparing the radiance simulated for the selected aerosol type with the measured value. The AOD and SSA retrieved from the GEMS aerosol algorithm match OMI products well, although the retrieved AOD is slightly higher than the OMI value. To detect cloud pixels, a spatial standard deviation test of radiance is applied in the
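
    The look-up-table inversion at the core of the retrieval can be sketched as a grid search over pre-calculated radiances; real retrievals interpolate between LUT nodes and include geometry and surface dimensions, which are omitted here, and all numbers below are made up.

    ```python
    import numpy as np

    def retrieve_aod_ssa(measured_radiance, lut_radiance, aod_grid, ssa_grid):
        """Return the (AOD, SSA) LUT node whose pre-calculated radiances best
        match the measurement in a least-squares sense.
        lut_radiance has shape (n_aod, n_ssa, n_wavelengths)."""
        cost = np.sum((lut_radiance - measured_radiance) ** 2, axis=-1)
        i, j = np.unravel_index(np.argmin(cost), cost.shape)
        return aod_grid[i], ssa_grid[j]

    # Illustrative two-wavelength LUT over a small AOD x SSA grid.
    aod_grid = np.linspace(0.1, 2.0, 20)
    ssa_grid = np.linspace(0.85, 1.0, 16)
    lut = (0.05 + 0.02 * aod_grid[:, None, None]
           - 0.01 * (1.0 - ssa_grid[None, :, None])) * np.ones((1, 1, 2))
    aod, ssa = retrieve_aod_ssa(np.array([0.08, 0.08]), lut, aod_grid, ssa_grid)
    ```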

  14. Structured interview for mild traumatic brain injury after military blast: inter-rater agreement and development of diagnostic algorithm.

    PubMed

    Walker, William C; Cifu, David X; Hudak, Anne M; Goldberg, Gary; Kunz, Richard D; Sima, Adam P

    2015-04-01

    The existing gold standard for diagnosing a suspected previous mild traumatic brain injury (mTBI) is clinical interview. But it is prone to bias, especially for parsing the physical versus psychological effects of traumatic combat events, and its inter-rater reliability is unknown. Several standardized TBI interview instruments have been developed for research use but have similar limitations. Therefore, we developed the Virginia Commonwealth University (VCU) retrospective concussion diagnostic interview, blast version (VCU rCDI-B), and undertook this cross-sectional study aiming to 1) measure agreement among clinicians' mTBI diagnosis ratings, 2) using clinician consensus develop a fully structured diagnostic algorithm, and 3) assess accuracy of this algorithm in a separate sample. Two samples (n = 66; n = 37) of individuals within 2 years of experiencing blast effects during military deployment underwent semistructured interview regarding their worst blast experience. Five highly trained TBI physicians independently reviewed and interpreted the interview content and gave blinded ratings of whether or not the experience was probably an mTBI. Paired inter-rater reliability was extremely variable, with kappa ranging from 0.194 to 0.825. In sample 1, the physician consensus prevalence of probable mTBI was 84%. Using these diagnosis ratings, an algorithm was developed and refined from the fully structured portion of the VCU rCDI-B. The final algorithm considered certain symptom patterns more specific for mTBI than others. For example, an isolated symptom of "saw stars" was deemed sufficient to indicate mTBI, whereas an isolated symptom of "dazed" was not. The accuracy of this algorithm, when applied against the actual physician consensus in sample 2, was almost perfect (correctly classified = 97%; Cohen's kappa = 0.91). In conclusion, we found that highly trained clinicians often disagree on historical blast-related mTBI determinations. A fully structured interview

  15. Structured interview for mild traumatic brain injury after military blast: inter-rater agreement and development of diagnostic algorithm.

    PubMed

    Walker, William C; Cifu, David X; Hudak, Anne M; Goldberg, Gary; Kunz, Richard D; Sima, Adam P

    2015-04-01

    The existing gold standard for diagnosing a suspected previous mild traumatic brain injury (mTBI) is clinical interview. But it is prone to bias, especially for parsing the physical versus psychological effects of traumatic combat events, and its inter-rater reliability is unknown. Several standardized TBI interview instruments have been developed for research use but have similar limitations. Therefore, we developed the Virginia Commonwealth University (VCU) retrospective concussion diagnostic interview, blast version (VCU rCDI-B), and undertook this cross-sectional study aiming to 1) measure agreement among clinicians' mTBI diagnosis ratings, 2) using clinician consensus develop a fully structured diagnostic algorithm, and 3) assess accuracy of this algorithm in a separate sample. Two samples (n = 66; n = 37) of individuals within 2 years of experiencing blast effects during military deployment underwent semistructured interview regarding their worst blast experience. Five highly trained TBI physicians independently reviewed and interpreted the interview content and gave blinded ratings of whether or not the experience was probably an mTBI. Paired inter-rater reliability was extremely variable, with kappa ranging from 0.194 to 0.825. In sample 1, the physician consensus prevalence of probable mTBI was 84%. Using these diagnosis ratings, an algorithm was developed and refined from the fully structured portion of the VCU rCDI-B. The final algorithm considered certain symptom patterns more specific for mTBI than others. For example, an isolated symptom of "saw stars" was deemed sufficient to indicate mTBI, whereas an isolated symptom of "dazed" was not. The accuracy of this algorithm, when applied against the actual physician consensus in sample 2, was almost perfect (correctly classified = 97%; Cohen's kappa = 0.91). In conclusion, we found that highly trained clinicians often disagree on historical blast-related mTBI determinations. A fully structured interview

  16. Lightning Jump Algorithm Development for the GOES-R Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    Schultz, E.; Schultz, C.; Chronis, T.; Stough, S.; Carey, L.; Calhoun, K.; Ortega, K.; Stano, G.; Cecil, D.; Bateman, M.; Goodman, S.

    2014-01-01

    Current work on the lightning jump algorithm to be used in GOES-R Geostationary Lightning Mapper (GLM)'s data stream is multifaceted due to the intricate interplay between the storm tracking, GLM proxy data, and the performance of the lightning jump itself. This work outlines the progress of the last year, where analysis and performance of the lightning jump algorithm with automated storm tracking and GLM proxy data were assessed using over 700 storms from North Alabama. The cases analyzed coincide with previous semi-objective work performed using total lightning mapping array (LMA) measurements in Schultz et al. (2011). Analysis shows that key components of the algorithm (flash rate and sigma thresholds) have the greatest influence on the performance of the algorithm when validating using severe storm reports. Automated objective analysis using the GLM proxy data has shown probability of detection (POD) values around 60% with false alarm rates (FAR) around 73% using similar methodology to Schultz et al. (2011). However, when applying verification methods similar to those employed by the National Weather Service, POD values increase slightly (69%) and FAR values decrease (63%). The relationship between storm tracking and lightning jump has also been tested in a real-time framework at NSSL. This system includes fully automated tracking by radar alone, real-time LMA and radar observations and the lightning jump. Results indicate that the POD is strong at 65%. However, the FAR is significantly higher than in Schultz et al. (2011) (50-80% depending on various tracking/lightning jump parameters) when using storm reports for verification. Given known issues with Storm Data, the performance of the real-time jump algorithm is also being tested with high density radar and surface observations from the NSSL Severe Hazards Analysis & Verification Experiment (SHAVE).
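
    The flash-rate and sigma thresholds referred to above can be illustrated with a sigma-level jump test on a cell's flash-rate time series. The 2-sigma multiplier, the 10 flashes-per-minute floor, and the five-period history follow the commonly cited configuration and are assumptions here, not the operational GLM settings.

    ```python
    import numpy as np

    def lightning_jumps(flash_rate, sigma_mult=2.0, min_flash_rate=10.0, history=5):
        """Flag analysis periods where the rate of change of total flash rate
        (DFRDT) exceeds sigma_mult times its standard deviation over the
        preceding `history` periods, subject to a minimum flash rate."""
        flash_rate = np.asarray(flash_rate, dtype=float)
        dfrdt = np.diff(flash_rate)
        jumps = np.zeros(len(flash_rate), dtype=bool)
        for k in range(history + 1, len(dfrdt)):
            threshold = sigma_mult * dfrdt[k - history:k].std()
            if flash_rate[k + 1] >= min_flash_rate and dfrdt[k] > threshold:
                jumps[k + 1] = True
        return jumps

    rates = [2, 3, 4, 5, 6, 8, 9, 30, 45, 40, 38]   # flashes/min, illustrative
    print(lightning_jumps(rates))
    ```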

  17. Passive microwave remote sensing of rainfall with SSM/I: Algorithm development and implementation

    NASA Technical Reports Server (NTRS)

    Ferriday, James G.; Avery, Susan K.

    1994-01-01

    A physically based algorithm sensitive to emission and scattering is used to estimate rainfall using the Special Sensor Microwave/Imager (SSM/I). The algorithm is derived from radiative transfer calculations through an atmospheric cloud model specifying vertical distributions of ice and liquid hydrometeors as a function of rain rate. The algorithm is structured in two parts: SSM/I brightness temperatures are screened to detect rainfall and are then used in rain-rate calculation. The screening process distinguishes between nonraining background conditions and emission and scattering associated with hydrometeors. Thermometric temperature and polarization thresholds determined from the radiative transfer calculations are used to detect rain, whereas the rain-rate calculation is based on a linear function fit to a linear combination of channels. Separate calculations for ocean and land account for different background conditions. The rain-rate calculation is constructed to respond to both emission and scattering, to reduce extraneous atmospheric and surface effects, and to correct for beam filling. The resulting SSM/I rain-rate estimates are compared to three precipitation radars as well as to a dynamically simulated rainfall event. Global estimates from the SSM/I algorithm are also compared to continental and shipboard measurements over a 4-month period. The algorithm is found to accurately describe both localized instantaneous rainfall events and global monthly patterns over both land and ocean. Over land the 4-month mean difference between SSM/I and the Global Precipitation Climatology Center continental rain gauge database is less than 10%. Over the ocean, the mean difference between SSM/I and the Legates and Willmott global shipboard rain gauge climatology is less than 20%.

  18. Algorithm and simulation development in support of response strategies for contamination events in air and water systems.

    SciTech Connect

    Waanders, Bart Van Bloemen

    2006-01-01

    Chemical/Biological/Radiological (CBR) contamination events pose a considerable threat to our nation's infrastructure, especially in large internal facilities, external flows, and water distribution systems. Because physical security can only be enforced to a limited degree, deployment of early warning systems is being considered. However, to achieve reliable and efficient functionality, several complex questions must be answered: (1) where should sensors be placed, (2) how can sparse sensor information be efficiently used to determine the location of the original intrusion, (3) what are the model and data uncertainties, (4) how should these uncertainties be handled, and (5) how can our algorithms and forward simulations be sufficiently improved to achieve real-time performance? This report presents the results of a three-year algorithm and application development effort to support the identification, mitigation, and risk assessment of CBR contamination events. The main thrust of this investigation was to develop (1) computationally efficient algorithms for strategically placing sensors, (2) an identification process for contamination events based on sparse observations, (3) characterization of uncertainty through accurate demand forecasts and investigation of uncertain simulation model parameters, (4) risk assessment capabilities, and (5) reduced-order modeling methods. The development effort was focused on water distribution systems, large internal facilities, and outdoor areas.

  19. Recent Electric Propulsion Development Activities for NASA Science Missions

    NASA Technical Reports Server (NTRS)

    Pencil, Eric J.

    2009-01-01

    Electric propulsion development throughout NASA is managed primarily by the In-Space Propulsion Technology Project at the NASA Glenn Research Center for the Science Mission Directorate. The objective of the Electric Propulsion project area is to develop near-term electric propulsion technology to enhance or enable science missions while minimizing risk and cost to the end user. Major hardware tasks include developing NASA's Evolutionary Xenon Thruster (NEXT), developing a long-life High Voltage Hall Accelerator (HIVHAC), developing an advanced feed system, and developing cross-platform components. The objective of the NEXT task is to advance next-generation ion propulsion technology readiness. The baseline NEXT system consists of a high-performance, 7-kW ion thruster; a high-efficiency, 7-kW power processor unit (PPU); a highly flexible advanced xenon propellant management system (PMS); a lightweight engine gimbal; and key elements of a digital control interface unit (DCIU) including software algorithms. This design approach was selected to provide future NASA science missions with the greatest value in mission performance benefit at a low total development cost. The objective of the HIVHAC task is to advance the Hall thruster technology readiness for science mission applications. The task seeks to increase specific impulse, throttle-ability and lifetime to make Hall propulsion systems applicable to deep space science missions. The primary application focus for the resulting Hall propulsion system would be cost-capped missions, such as competitively selected, Discovery-class missions. The objective of the advanced xenon feed system task is to demonstrate novel manufacturing techniques that will significantly reduce mass, volume, and footprint size of xenon feed systems over conventional feed systems. This task has focused on the development of a flow control module, which consists of a three-channel flow system based on a piezo-electrically actuated

  20. TH-E-BRE-07: Development of Dose Calculation Error Predictors for a Widely Implemented Clinical Algorithm

    SciTech Connect

    Egan, A; Laub, W

    2014-06-15

    Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high-risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. And third, AAA overestimates dose in regions of very low density (< 0.2 g/cm³). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: Error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. Algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head and neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking, this method can be developed into a clinical tool that may be able to help estimate AAA dose calculation errors and indicate when it might be advisable to use Monte Carlo calculations.
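
    The abstract does not give the exact definitions of the three indices, so the sketch below shows only one plausible form of a low-density predictor: the fraction of in-beam voxels below the 0.2 g/cm³ density quoted above. It is a hypothetical stand-in, not the authors' LDI.

    ```python
    import numpy as np

    def low_density_index(density_gcc, beam_mask, threshold=0.2):
        """Fraction of voxels traversed by the beam whose mass density is below
        `threshold` g/cm^3, the regime where AAA is reported to overestimate dose."""
        density = np.asarray(density_gcc, dtype=float)
        in_beam = np.asarray(beam_mask, dtype=bool)
        low = (density < threshold) & in_beam
        return low.sum() / max(in_beam.sum(), 1)
    ```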

  1. A novel hybrid classification model of genetic algorithms, modified k-Nearest Neighbor and developed backpropagation neural network.

    PubMed

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered as the most common and effective methods in classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above mentioned methods are presented. The purpose is to benefit from the synergies obtained by combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strength of each algorithm, and is an approach to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, first, feature ranking techniques such as the Fisher's discriminant ratio and class separability criteria were used to prioritize features. Second, the obtained results that included arrays of the top-ranked features were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on optimum arrays of the features selected by genetic algorithms. The performance of the proposed model was compared with thirteen well-known classification models based on seven datasets. Furthermore, the statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best results reported for state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that performance of the
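
    The GA-plus-classifier combination described above can be sketched with a plain genetic algorithm over binary feature masks whose fitness is the cross-validated accuracy of a kNN classifier. This is a generic illustration on a public dataset; the study additionally ranks features beforehand and uses modified kNN and backpropagation variants that are not reproduced here.

    ```python
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)

    def fitness(mask, X, y):
        """Cross-validated accuracy of kNN restricted to the selected features."""
        if mask.sum() == 0:
            return 0.0
        return cross_val_score(KNeighborsClassifier(5), X[:, mask], y, cv=3).mean()

    def ga_feature_selection(X, y, pop_size=20, generations=15, p_mut=0.05):
        """Truncation selection, uniform crossover, and bit-flip mutation over
        binary feature masks."""
        n_features = X.shape[1]
        pop = rng.random((pop_size, n_features)) < 0.5
        for _ in range(generations):
            scores = np.array([fitness(ind, X, y) for ind in pop])
            parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                child = np.where(rng.random(n_features) < 0.5, a, b)
                child ^= rng.random(n_features) < p_mut
                children.append(child)
            pop = np.vstack([parents, children])
        scores = np.array([fitness(ind, X, y) for ind in pop])
        return pop[np.argmax(scores)]

    X, y = load_breast_cancer(return_X_y=True)
    print("selected features:", np.flatnonzero(ga_feature_selection(X, y)))
    ```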

  2. A novel hybrid classification model of genetic algorithms, modified k-Nearest Neighbor and developed backpropagation neural network.

    PubMed

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered as the most common and effective methods in classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above mentioned methods are presented. The purpose is to benefit from the synergies obtained by combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strength of each algorithm, and is an approach to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, first, feature ranking techniques such as the Fisher's discriminant ratio and class separability criteria were used to prioritize features. Second, the obtained results that included arrays of the top-ranked features were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on optimum arrays of the features selected by genetic algorithms. The performance of the proposed model was compared with thirteen well-known classification models based on seven datasets. Furthermore, the statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best results reported for state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that performance of the

  3. A Novel Hybrid Classification Model of Genetic Algorithms, Modified k-Nearest Neighbor and Developed Backpropagation Neural Network

    PubMed Central

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered as the most common and effective methods in classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above mentioned methods are presented. The purpose is to benefit from the synergies obtained by combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strength of each algorithm, and is an approach to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, first, feature ranking techniques such as the Fisher's discriminant ratio and class separability criteria were used to prioritize features. Second, the obtained results that included arrays of the top-ranked features were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on optimum arrays of the features selected by genetic algorithms. The performance of the proposed model was compared with thirteen well-known classification models based on seven datasets. Furthermore, the statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best results reported for state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that performance of the

  4. Application of genetic algorithm-kernel partial least square as a novel nonlinear feature selection method: activity of carbonic anhydrase II inhibitors.

    PubMed

    Jalali-Heravi, Mehdi; Kyani, Anahita

    2007-05-01

    This paper introduces the genetic algorithm-kernel partial least square (GA-KPLS) as a novel nonlinear feature selection method. This technique combines genetic algorithms (GAs) as powerful optimization methods with KPLS as a robust nonlinear statistical method for variable selection. This feature selection method is combined with an artificial neural network to develop a nonlinear QSAR model for predicting activities of a series of substituted aromatic sulfonamides as carbonic anhydrase II (CA II) inhibitors. Eight simple one- and two-dimensional descriptors were selected by GA-KPLS and considered as inputs for developing artificial neural networks (ANNs). These parameters represent the role of the acceptor-donor pair, hydrogen bonding, hydrosolubility and lipophilicity of the active sites and also the size of the inhibitors on inhibitor-isozyme interaction. The accuracy of the 8-4-1 networks was illustrated by the validation techniques of leave-one-out (LOO) and leave-multiple-out (LMO) cross-validation and Y-randomization. The superiority of this method (GA-KPLS-ANN) over the linear one (MLR) in a previous work, and also over GA-PLS-ANN, in which a linear feature selection method was used, indicates that the GA-KPLS approach is a powerful method for variable selection in nonlinear systems. PMID:17316919

  5. Credentialing Activities in the Youth Development Field, 1997.

    ERIC Educational Resources Information Center

    National Collaboration for Youth, Washington, DC.

    This report describes credentialing activities that seek to establish standards and promote professional development in the youth development field. Part 1, Federal and State Legislative Activities, focuses on: legislation promoting youth development activities and programs; welfare reform and the need for youth development and after-school…

  6. Algorithm Development and Validation of CDOM Properties for Estuarine and Continental Shelf Waters Along the Northeastern U.S. Coast

    NASA Technical Reports Server (NTRS)

    Mannino, Antonio; Novak, Michael G.; Hooker, Stanford B.; Hyde, Kimberly; Aurin, Dick

    2014-01-01

    An extensive set of field measurements has been collected throughout the continental margin of the northeastern U.S. from 2004 to 2011 to develop and validate ocean color satellite algorithms for the retrieval of the absorption coefficient of chromophoric dissolved organic matter (aCDOM) and CDOM spectral slopes for the 275:295 nm and 300:600 nm spectral ranges (S275:295 and S300:600). Remote sensing reflectance (Rrs) measurements computed from in-water radiometry profiles along with aCDOM(λ) data are applied to develop several types of algorithms for the SeaWiFS and MODIS-Aqua ocean color satellite sensors, which involve least squares linear regression of aCDOM(λ) with (1) Rrs band ratios, (2) quasi-analytical algorithm-based (QAA-based) products of total absorption coefficients, (3) multiple Rrs bands within a multiple linear regression (MLR) analysis, and (4) the diffuse attenuation coefficient (Kd). The relative error (mean absolute percent difference; MAPD) for the MLR retrievals of aCDOM(275), aCDOM(355), aCDOM(380), aCDOM(412) and aCDOM(443) for our study region ranges from 20.4% to 23.9% for MODIS-Aqua and 27.3% to 30% for SeaWiFS. Because of the narrower range of CDOM spectral slope values, the MAPD for the MLR S275:295 and QAA-based S300:600 algorithms is much lower: 9.9% and 8.3% for SeaWiFS and 8.7% and 6.3% for MODIS, respectively. Seasonal and spatial MODIS-Aqua and SeaWiFS distributions of aCDOM, S275:295 and S300:600 processed with these algorithms are consistent with field measurements and the processes that impact CDOM levels along the continental shelf of the northeastern U.S. Several satellite data processing factors correlate with higher uncertainty in satellite retrievals of aCDOM, S275:295 and S300:600 within the coastal ocean, including solar zenith angle, sensor viewing angle, and the atmospheric products applied for atmospheric corrections. Algorithms that include ultraviolet Rrs bands provide a better fit to field measurements than
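
    The MLR variant described above can be sketched as an ordinary multiple linear regression of aCDOM on several Rrs bands. The log-log form and the band handling below are assumptions for illustration; the fitted band sets and coefficients are given in the paper, not in the abstract.

    ```python
    import numpy as np

    def fit_acdom_mlr(rrs_bands, acdom):
        """Fit log10(aCDOM) = b0 + sum_i b_i * log10(Rrs_i) by least squares.
        rrs_bands: (n_samples, n_bands); acdom: (n_samples,)."""
        X = np.column_stack([np.ones(len(acdom)), np.log10(rrs_bands)])
        coeffs, *_ = np.linalg.lstsq(X, np.log10(acdom), rcond=None)
        return coeffs

    def apply_acdom_mlr(coeffs, rrs_bands):
        """Apply fitted coefficients to new Rrs spectra."""
        X = np.column_stack([np.ones(len(rrs_bands)), np.log10(rrs_bands)])
        return 10.0 ** (X @ coeffs)

    def mapd(predicted, observed):
        """Mean absolute percent difference, the error metric quoted above."""
        return 100.0 * np.mean(np.abs(predicted - observed) / observed)
    ```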

  7. Development of a Real-Time Pulse Processing Algorithm for TES-Based X-Ray Microcalorimeters

    NASA Technical Reports Server (NTRS)

    Tan, Hui; Hennig, Wolfgang; Warburton, William K.; Doriese, W. Bertrand; Kilbourne, Caroline A.

    2011-01-01

    We report here a real-time pulse processing algorithm for superconducting transition-edge sensor (TES) based x-ray microcalorimeters. TES-based microcalorimeters offer ultra-high energy resolutions, but the small volume of each pixel requires that large arrays of identical microcalorimeter pixels be built to achieve sufficient detection efficiency. That in turn requires that as much pulse processing as possible be performed at the front end of the readout electronics to avoid transferring large amounts of data to a host computer for post-processing. Therefore, a real-time pulse processing algorithm that not only can be implemented in the readout electronics but also achieves satisfactory energy resolutions is desired. We have developed an algorithm that can be easily implemented in hardware. We then tested the algorithm offline using several data sets acquired with an 8 x 8 Goddard TES x-ray calorimeter array and a 2 x 16 NIST time-division SQUID multiplexer. We obtained an average energy resolution of close to 3.0 eV at 6 keV for the multiplexed pixels while preserving over 99% of the events in the data sets.

  8. Development of sleep apnea syndrome screening algorithm by using heart rate variability analysis and support vector machine.

    PubMed

    Nakayama, Chikao; Fujiwara, Koichi; Matsuo, Masahiro; Kano, Manabu; Kadotani, Hiroshi

    2015-08-01

    Although sleep apnea syndrome (SAS) is a common sleep disorder, most patients with sleep apnea are undiagnosed and untreated because it is difficult for patients themselves to notice SAS in daily life. Polysomnography (PSG) is the gold standard test for sleep disorder diagnosis; however, PSG cannot be performed in many hospitals. This fact motivates us to develop an SAS screening system that can be used easily at home. The autonomic nervous function of a patient changes during apnea. Since changes in the autonomic nervous function affect fluctuation of the R-R interval (RRI) of an electrocardiogram (ECG), called heart rate variability (HRV), SAS can be detected through monitoring HRV. The present work proposes a new HRV-based SAS screening algorithm by utilizing support vector machine (SVM), which is a well-known pattern recognition method. In the proposed algorithm, various HRV features are derived from RRI data in both apnea and normal respiration periods of patients and healthy people, and an apnea/normal respiration (A/N) discriminant model is built from the derived HRV features by SVM. The result of applying the proposed SAS screening algorithm to clinical data demonstrates that it can discriminate between patients with sleep apnea and healthy people appropriately. The sensitivity and the specificity of the proposed algorithm were 100% and 86%, respectively.
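
    The sketch below illustrates, under stated assumptions, the core of such an HRV-based A/N discriminant: a few common HRV features (mean RRI, SDNN, RMSSD) are derived from R-R interval segments and fed to an SVM. The feature set and the synthetic training segments are illustrative assumptions, not the authors' exact features or data.

```python
# Hedged sketch of an HRV-based apnea/normal (A/N) discriminant model using an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def hrv_features(rri_ms):
    """Derive a few common HRV features from an R-R interval sequence (ms)."""
    rri = np.asarray(rri_ms, dtype=float)
    diff = np.diff(rri)
    return np.array([
        rri.mean(),                      # mean RRI
        rri.std(ddof=1),                 # SDNN
        np.sqrt(np.mean(diff ** 2)),     # RMSSD
    ])

rng = np.random.default_rng(1)
# Hypothetical RRI segments: label 1 = apnea period, 0 = normal respiration.
segments, labels = [], []
for _ in range(40):
    segments.append(rng.normal(850, 30, 120))   # normal: steadier RRI
    labels.append(0)
    segments.append(rng.normal(900, 80, 120))   # apnea: larger RRI fluctuation
    labels.append(1)

X = np.array([hrv_features(s) for s in segments])
y = np.array(labels)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```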

  9. Development and application of an algorithm for detecting Phaeocystis globosa blooms in the Case 2 Southern North Sea waters

    PubMed Central

    Astoreca, Rosa; Rousseau, Véronique; Ruddick, Kevin; Knechciak, Cécile; Van Mol, Barbara; Parent, Jean-Yves; Lancelot, Christiane

    2009-01-01

    While mapping algal blooms from space is now well-established, mapping undesirable algal blooms in eutrophicated coastal waters raises the further challenge of detecting individual phytoplankton species. In this paper, an algorithm is developed and tested for detecting Phaeocystis globosa blooms in the Southern North Sea. For this purpose, we first measured the light absorption properties of two phytoplankton groups, P. globosa and diatoms, in laboratory-controlled experiments. The main spectral difference between the two groups was observed at 467 nm due to the absorption of the pigment chlorophyll c3, which is present only in P. globosa, suggesting that the absorption at 467 nm can be used to detect this alga in the field. A Phaeocystis-detection algorithm is proposed to retrieve chlorophyll c3 using either total absorption or water-leaving reflectance field data. Application of this algorithm to absorption and reflectance data from Phaeocystis-dominated natural communities shows positive results. Comparison with pigment concentrations and cell counts suggests that the algorithm can flag the presence of P. globosa and provide quantitative information above a chlorophyll c3 threshold of 0.3 mg m^-3, equivalent to a P. globosa cell density of 3 × 10^6 cells L^-1. Finally, the possibility of extrapolating this information to remote sensing reflectance data in these turbid waters is evaluated. PMID:19461860
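
    A minimal sketch of the flagging step reported above: once a chlorophyll c3 concentration has been retrieved, pixels above the 0.3 mg m^-3 threshold are marked as P. globosa dominated. The retrieval itself is not reproduced here; only the threshold and its cell-density equivalent come from the abstract.

```python
# Illustrative sketch of flagging Phaeocystis globosa blooms from a retrieved
# chlorophyll c3 field. The example map values are placeholders.
import numpy as np

CHL_C3_THRESHOLD = 0.3          # mg m^-3, threshold reported in the abstract
CELLS_PER_THRESHOLD = 3e6       # cells L^-1 equivalent, from the abstract

def flag_phaeocystis(chl_c3):
    """Return a boolean mask of pixels flagged as P. globosa blooms."""
    chl_c3 = np.asarray(chl_c3, dtype=float)
    return chl_c3 >= CHL_C3_THRESHOLD

# Hypothetical chlorophyll c3 map (mg m^-3) derived from absorption at 467 nm.
chl_c3_map = np.array([[0.05, 0.20, 0.45],
                       [0.80, 0.10, 0.31]])
print(flag_phaeocystis(chl_c3_map))
```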

  10. The development of a line-scan imaging algorithm for the detection of fecal contamination on leafy greens

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung

    2013-05-01

    This paper reports the development of a multispectral algorithm, using a line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, by use of the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the lowly diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the highly diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with a line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
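
    The two-band rule described above can be sketched as follows: the ratio of fluorescence intensities at 666 nm and 688 nm is compared against a cutoff to flag contaminated pixels. The wavebands follow the abstract, but the cutoff value and the sample intensities are assumptions for illustration.

```python
# Minimal sketch of a two-band fluorescence ratio test (666 nm / 688 nm).
# The threshold value and the example intensities are placeholder assumptions.
import numpy as np

def detect_fecal_pixels(f666, f688, ratio_threshold=1.05):
    """Flag pixels whose 666/688 nm fluorescence ratio exceeds the threshold."""
    f666 = np.asarray(f666, dtype=float)
    f688 = np.asarray(f688, dtype=float)
    ratio = np.divide(f666, f688, out=np.zeros_like(f666), where=f688 > 0)
    return ratio > ratio_threshold

# Hypothetical fluorescence intensities from one scan line.
f666 = np.array([100.0, 95.0, 160.0, 90.0])
f688 = np.array([110.0, 100.0, 120.0, 95.0])
print(detect_fecal_pixels(f666, f688))
```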

  11. Problem Solving Techniques for the Design of Algorithms.

    ERIC Educational Resources Information Center

    Kant, Elaine; Newell, Allen

    1984-01-01

    Presents model of algorithm design (activity in software development) based on analysis of protocols of two subjects designing three convex hull algorithms. Automation methods, methods for studying algorithm design, role of discovery in problem solving, and comparison of different designs of case study according to model are highlighted.…

  12. Small Fire Detection Algorithm Development using VIIRS 375m Imagery: Application to Agricultural Fires in Eastern China

    NASA Astrophysics Data System (ADS)

    Zhang, Tianran; Wooster, Martin

    2016-04-01

    Until recently, crop residues have been the second largest industrial waste product produced in China, and field-based burning of crop residues is considered to remain extremely widespread, with impacts on air quality and potential negative effects on health and public transportation. However, due to the small size and perhaps short-lived nature of the individual burns, the extent of the activity and its spatial variability remain somewhat unclear. Satellite EO data have been used to gauge the timing and magnitude of Chinese crop burning, but current approaches very likely miss significant amounts of the activity because the individual burned areas are too small to detect with frequently acquired moderate-spatial-resolution data such as MODIS. The Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi-NPP (National Polar-orbiting Partnership) satellite, launched in October 2011, has a set of multi-spectral channels providing full global coverage at 375 m nadir spatial resolution. It is expected that the 375 m spatial resolution "I-band" imagery provided by VIIRS will allow active fires to be detected that are ~ 10× smaller than those that can be detected by MODIS. In this study, the new small-fire detection algorithm is built on the VIIRS I-band global fire detection algorithm and the hot spot detection algorithm developed for the BIRD satellite mission. VIIRS I-band imagery will be used to identify agricultural fire activity across Eastern China. A 30 m spatial resolution global land cover map is used for false alarm masking. Ground-based validation is performed using images taken from a UAV. The fire detection results are compared with the active fire product from the long-standing MODIS sensors on board the Terra and Aqua satellites, which shows that small fires missed by the traditional MODIS fire product may account for over one third of the total fire energy in Eastern China.
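
    As a rough illustration of the kind of contextual active-fire test such algorithms build on, the sketch below flags pixels whose mid-infrared brightness temperature exceeds either an absolute threshold or the local background by a fixed margin. The window size, thresholds, and scene are assumptions; they are not the actual VIIRS I-band or BIRD detection criteria.

```python
# Heavily simplified contextual active-fire test on a mid-IR brightness
# temperature image (K). All thresholds and the scene are illustrative only.
import numpy as np

def detect_fires(bt_mwir_k, abs_threshold=330.0, contrast_k=10.0, half_win=2):
    """Return a boolean fire mask from a mid-IR brightness-temperature image (K)."""
    bt = np.asarray(bt_mwir_k, dtype=float)
    mask = np.zeros(bt.shape, dtype=bool)
    rows, cols = bt.shape
    for i in range(rows):
        for j in range(cols):
            r0, r1 = max(0, i - half_win), min(rows, i + half_win + 1)
            c0, c1 = max(0, j - half_win), min(cols, j + half_win + 1)
            window = bt[r0:r1, c0:c1]
            background = (window.sum() - bt[i, j]) / (window.size - 1)
            # Flag if absolutely hot, or much hotter than the local background.
            mask[i, j] = bt[i, j] > abs_threshold or bt[i, j] - background > contrast_k
    return mask

# Hypothetical 6x6 scene with one small agricultural burn.
scene = np.full((6, 6), 300.0)
scene[2, 3] = 345.0
print(np.argwhere(detect_fires(scene)))
```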

  13. Investigating Baseline, Alternative and Copula-based Algorithm for combining Airborne Active and Passive Microwave Observations in the SMAP Context

    NASA Astrophysics Data System (ADS)

    Montzka, C.; Lorenz, C.; Jagdhuber, T.; Laux, P.; Hajnsek, I.; Kunstmann, H.; Entekhabi, D.; Vereecken, H.

    2015-12-01

    The objective of the NASA Soil Moisture Active & Passive (SMAP) mission is to provide global measurements of soil moisture and freeze/thaw states. SMAP integrates L-band radar and radiometer instruments as a single observation system combining the respective strengths of active and passive remote sensing for enhanced soil moisture mapping. Airborne instruments will be a key part of the SMAP validation program. Here, we present an airborne campaign in the Rur catchment, Germany, in which the passive L-band system Polarimetric L-band Multi-beam Radiometer (PLMR2) and the active L-band system F-SAR of DLR were flown simultaneously on the same platform on six dates in 2013. The flights covered the full heterogeneity of the area under investigation, i.e. all types of land cover and experimental monitoring sites with in situ sensors. We used the obtained data sets as a test-bed for the analysis of three active-passive fusion techniques: A) the SMAP baseline algorithm: disaggregation of passive microwave brightness temperature by active microwave backscatter and subsequent inversion to soil moisture; B) the SMAP alternative algorithm: estimation of soil moisture from passive sensor data and subsequent disaggregation by active sensor backscatter; and C) Copula-based combination of active and passive microwave data. For method C, empirical Copulas were generated and theoretical Copulas were fitted both at the level of the raw products (brightness temperature and backscatter) and at the level of the two soil moisture products. Results indicate that the regression parameters for methods A and B depend on the radar vegetation index (RVI). Similarly, for method C the best performance was obtained by generating separate Copulas for individual land use classes. For more in-depth analyses, longer time series than can be obtained by airborne campaigns are necessary; therefore, the methods will be applied to SMAP data.

  14. Development of Novel Adenosine Monophosphate-Activated Protein Kinase Activators

    PubMed Central

    Guh, Jih-Hwa; Chang, Wei-Ling; Yang, Jian; Lee, Su-Lin; Wei, Shuo; Wang, Dasheng; Kulp, Samuel K.; Chen, Ching-Shih

    2010-01-01

    In light of the unique ability of thiazolidinediones to mediate peroxisome proliferator-activated receptor (PPAR)γ-independent activation of adenosine monophosphate-activated protein kinase (AMPK) and suppression of interleukin (IL)-6 production, we conducted a screening of an in-house, thiazolidinedione-based focused compound library to identify novel agents with these dual pharmacological activities. Cell-based assays pertinent to the activation status of AMPK and mammalian homolog of target of rapamycin (i.e., phosphorylation of AMPK and p70 ribosomal protein S6 kinase, respectively), and IL-6/IL-6 receptor signaling (i.e., IL-6 production and signal transducer and activator of transcription 3 phosphorylation, respectively) in lipopolysaccharide (LPS)-stimulated THP-1 human macrophages were used to screen this compound library, which led to the identification of compound 53 (N-{4-[3-(1-Methylcyclohexylmethyl)-2,4-dioxo-thiazolidin-5-ylidene-methyl]-phenyl}-4-nitro-3-trifluoromethyl-benzenesulfonamide) as the lead agent. Evidence indicates that this drug-induced suppression of LPS-stimulated IL-6 production was attributable to AMPK activation. Furthermore, compound 53-mediated AMPK activation was demonstrated in C-26 colon adenocarcinoma cells, indicating that it is not a cell line-specific event. PMID:20170185

  15. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Conboy, B. (Technical Monitor)

    1999-01-01

    Significant accomplishments made during the present reporting period include: 1) Installed spectral optimization algorithm in the SeaDas image processing environment and successfully processed SeaWiFS imagery. The results were superior to the standard SeaWiFS algorithm (the MODIS prototype) in a turbid atmosphere off the US East Coast, but similar in a clear (typical) oceanic atmosphere; 2) Inverted ACE-2 LIDAR measurements coupled with sun photometer-derived aerosol optical thickness to obtain the vertical profile of aerosol optical thickness. The profile was validated with simultaneous aircraft measurements; and 3) Obtained LIDAR and CIMEL measurements of typical maritime and mineral dust-dominated marine atmospheres in the U.S. Virgin Islands. Contemporaneous SeaWiFS imagery was also acquired.

  16. Development of an anthropomorphic breast software phantom based on region growing algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Cuiping; Bakic, Predrag R.; Maidment, Andrew D. A.

    2008-03-01

    Software breast phantoms offer greater flexibility in generating synthetic breast images compared to physical phantoms. The realism of such generated synthetic images depends on the method for simulating the three-dimensional breast anatomical structures. We present here a novel algorithm for computer simulation of breast anatomy. The algorithm simulates the skin, regions of predominantly adipose tissue and fibro-glandular tissue, and the matrix of adipose tissue compartments and Cooper's ligaments. The simulation approach is based upon a region-growing procedure; adipose compartments are grown from a selected set of seed points with different orientations and growth rates. The simulated adipose compartments vary in shape and size similarly to the anatomical breast variation, resulting in much improved phantom realism compared to our previous simulation based on geometric primitives. The proposed simulation also has improved control over the breast size and glandularity. Our software breast phantom has been used in a number of applications, including breast tomosynthesis and texture analysis optimization.
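
    A toy sketch of the region-growing idea described above: compartments are grown from seed points on a grid, each with its own growth rate, and growth stops where compartments meet. The grid size, seed locations, and growth rates are arbitrary illustrative assumptions, not the phantom's actual parameters.

```python
# Toy 2-D region growing from seeds with per-seed growth rates; compartments
# stop where they meet. Parameters are illustrative assumptions only.
import numpy as np
from collections import deque

def grow_compartments(shape, seeds, rates):
    """Label a 2-D grid by growing regions from seeds with per-seed growth rates."""
    labels = np.zeros(shape, dtype=int)
    queues = {k: deque([seed]) for k, seed in enumerate(seeds, start=1)}
    for k, seed in enumerate(seeds, start=1):
        labels[seed] = k
    while any(queues.values()):
        for k, q in queues.items():
            # Higher growth rate -> more frontier voxels claimed per round.
            for _ in range(rates[k - 1]):
                if not q:
                    break
                i, j = q.popleft()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < shape[0] and 0 <= nj < shape[1] and labels[ni, nj] == 0:
                        labels[ni, nj] = k
                        q.append((ni, nj))
    return labels

labels = grow_compartments((30, 30), seeds=[(5, 5), (20, 20), (5, 25)], rates=[1, 3, 2])
print("voxels per compartment:", np.bincount(labels.ravel())[1:])
```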

  17. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    NASA Astrophysics Data System (ADS)

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; Holmes, Aimee; Luke, Edward

    2016-06-01

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.
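
    A minimal sketch of the Bayesian classification idea, assuming a Gaussian naive Bayes classifier over two sensor-derived features: the classifier returns per-class probabilities, which is what provides the uncertainty estimate discussed above. The features, class set, and training values are illustrative assumptions only.

```python
# Gaussian naive Bayes as a stand-in for the basic Bayesian phase classifier.
# Features (reflectivity, spectrum width) and training values are placeholders.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
classes = ["liquid", "ice", "mixed", "snow"]

# Hypothetical training data of known phase: columns are radar reflectivity (dBZ)
# and Doppler spectrum width (m/s).
X_train = np.vstack([
    rng.normal([-20.0, 0.15], [5.0, 0.05], size=(50, 2)),   # liquid
    rng.normal([  0.0, 0.30], [5.0, 0.10], size=(50, 2)),   # ice
    rng.normal([ -5.0, 0.50], [5.0, 0.15], size=(50, 2)),   # mixed
    rng.normal([ 10.0, 0.40], [5.0, 0.10], size=(50, 2)),   # snow
])
y_train = np.repeat(classes, 50)

clf = GaussianNB().fit(X_train, y_train)

# Posterior class probabilities for one observed range gate: the probabilities,
# not just the winning class, carry the uncertainty information.
probs = clf.predict_proba([[-3.0, 0.45]])[0]
for name, p in zip(clf.classes_, probs):
    print(f"{name}: {p:.2f}")
```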

  18. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    DOE PAGES

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; Holmes, Aimee; Luke, Edward

    2016-06-10

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.

  19. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1996-01-01

    An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm is nearly complete. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. Simple algorithms such as subtracting the reflectance at 1380 nm from the visible and near infrared bands can significantly reduce the error; however, only if the diffuse transmittance of the aerosol layer is taken into account. The atmospheric correction code has been modified for use with absorbing aerosols. Tests of the code showed that, in contrast to non-absorbing aerosols, the retrievals were strongly influenced by the vertical structure of the aerosol, even when the candidate aerosol set was restricted to a set appropriate to the absorbing aerosol. This will further complicate the problem of atmospheric correction in an atmosphere with strongly absorbing aerosols. Our whitecap radiometer system and solar aureole camera were both tested at sea and performed well. Investigation of a technique to remove the effects of residual instrument polarization sensitivity was initiated and applied to an instrument possessing (approx.) 3-4 times the polarization sensitivity expected for MODIS. Preliminary results suggest that for such an instrument, elimination of the polarization effect is possible at the required level of accuracy by estimating the polarization of the top-of-atmosphere radiance to be that expected for a pure Rayleigh scattering atmosphere. This may be of significance for design of a follow-on MODIS instrument. W.M. Balch participated in two month-long cruises to the Arabian Sea, measuring coccolithophore abundance, production, and optical properties. A thorough understanding of the relationship between calcite abundance and light scatter, in situ, will provide the basis for a generic suspended calcite algorithm.

  20. Selection and collection of multi parameter physiological data for cardiac rhythm diagnostic algorithm development

    NASA Astrophysics Data System (ADS)

    Bostock, J.; Weller, P.; Cooklin, M.

    2010-07-01

    Automated diagnostic algorithms are used in implantable cardioverter-defibrillators (ICDs) to detect abnormal heart rhythms. These algorithms can misdiagnose, and improved specificity is needed to prevent inappropriate therapy. Knowledge engineering (KE) and artificial intelligence (AI) could improve this. A pilot study of KE was performed with an artificial neural network (ANN) as the AI system. A case-note review analysed arrhythmic events stored in patients' ICD memory; 13.2% of patients received inappropriate therapy. The best ICD algorithm had sensitivity 1.00 and specificity 0.69 (p<0.001 versus the gold standard). A subset of data was used to train and test an ANN. A feed-forward, back-propagation network with 7 inputs, a 4-node hidden layer and 1 output had sensitivity 1.00 and specificity 0.71 (p<0.001). A prospective study was performed using KE to list arrhythmias, factors and indicators, for which measurable parameters were evaluated and the results reviewed by a domain expert. Waveforms from electrodes in the heart and thoracic bio-impedance, temperature and motion data were collected from 65 patients during cardiac electrophysiological studies. Five datasets were incomplete due to technical failures. We concluded that KE successfully guided the selection of parameters, that the ANN produced a usable system, and that complex data collection carries a greater risk of technical failure, leading to data loss.
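
    The sketch below reproduces only the reported network topology (7 inputs, one 4-node hidden layer, 1 output) for discriminating arrhythmic episodes; the seven input parameters and the synthetic training data are placeholders.

```python
# Feed-forward network with the 7-4-1 topology mentioned above, using a generic
# MLP implementation. Input features and labels are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)

# Hypothetical feature vectors (7 measured parameters per arrhythmic episode)
# and labels: 1 = therapy appropriate, 0 = therapy inappropriate.
X = rng.normal(size=(200, 7))
y = (X[:, 0] + 0.5 * X[:, 3] - 0.3 * X[:, 6] > 0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(4,), activation="logistic",
                    solver="adam", max_iter=2000, random_state=0)
net.fit(X[:150], y[:150])
print("held-out accuracy:", net.score(X[150:], y[150:]))
```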

  1. Development of a Computer-Aided-Design-Based Geometry and Mesh Movement Algorithm for Three-Dimensional Aerodynamic Shape Optimization

    NASA Astrophysics Data System (ADS)

    Truong, Anh Hoang

    This thesis focuses on the development of a Computer-Aided-Design (CAD)-based geometry parameterization method and a corresponding surface mesh movement algorithm suitable for three-dimensional aerodynamic shape optimization. The geometry parameterization method includes a geometry control tool to aid in the construction and manipulation of a CAD geometry through a vendor-neutral application interface, CAPRI. It automates the tedious part of the construction phase involving data entry and provides intuitive and effective design variables that allow for both the flexibility and the precision required to control the movement of the geometry. The surface mesh movement algorithm, on the other hand, transforms an initial structured surface mesh to fit the new geometry using a discrete representation of the new CAD surface provided by CAPRI. Using a unique mapping procedure, the algorithm not only preserves the characteristics of the original surface mesh, but also guarantees that the new mesh points are on the CAD geometry. The new surface mesh is then smoothed in the parametric space before it is transformed back into three-dimensional space. The procedure is efficient in that all the processing is done in the parametric space, incurring minimal computational cost. The geometry parameterization and mesh movement tools are integrated into a three-dimensional shape optimization framework, with a linear-elasticity volume-mesh movement algorithm, a Newton-Krylov flow solver for the Euler equations, and a gradient-based optimizer. The validity and accuracy of the CAD-based optimization algorithm are demonstrated through a number of verification and optimization cases.

  2. Development of Potent Adenosine Monophosphate Activated Protein Kinase (AMPK) Activators.

    PubMed

    Dokla, Eman M E; Fang, Chun-Sheng; Lai, Po-Ting; Kulp, Samuel K; Serya, Rabah A T; Ismail, Nasser S M; Abouzid, Khaled A M; Chen, Ching-Shih

    2015-11-01

    Previously, we reported the identification of a thiazolidinedione-based adenosine monophosphate activated protein kinase (AMPK) activator, compound 1 (N-[4-({3-[(1-methylcyclohexyl)methyl]-2,4-dioxothiazolidin-5-ylidene}methyl)phenyl]-4-nitro-3-(trifluoromethyl)benzenesulfonamide), which provided a proof of concept to delineate the intricate role of AMPK in regulating oncogenic signaling pathways associated with cell proliferation and epithelial-mesenchymal transition (EMT) in cancer cells. In this study, we used 1 as a scaffold to conduct lead optimization, which generated a series of derivatives. Analysis of the antiproliferative and AMPK-activating activities of individual derivatives revealed a distinct structure-activity relationship and identified 59 (N-(3-nitrophenyl)-N'-{4-[(3-{[3,5-bis(trifluoromethyl)phenyl]methyl}-2,4-dioxothiazolidin-5-ylidene)methyl]phenyl}urea) as the optimal agent. Relative to 1, compound 59 exhibits multifold higher potency in upregulating AMPK phosphorylation in various cell lines irrespective of their liver kinase B1 (LKB1) functional status, accompanied by parallel changes in the phosphorylation/expression levels of p70S6K, Akt, Foxo3a, and EMT-associated markers. Consistent with its predicted activity against tumors with activated Akt status, orally administered 59 was efficacious in suppressing the growth of phosphatase and tensin homologue (PTEN)-null PC-3 xenograft tumors in nude mice. Together, these findings suggest that 59 has clinical value in therapeutic strategies for PTEN-negative cancer and warrants continued investigation in this regard.

  3. Development and validation of an automated operational modal analysis algorithm for vibration-based monitoring and tensile load estimation

    NASA Astrophysics Data System (ADS)

    Rainieri, Carlo; Fabbrocino, Giovanni

    2015-08-01

    In the last few decades large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure lifespan. However, the lack of automated modal identification and tracking procedures has long been a relevant drawback to the extensive application of the above-mentioned techniques in the engineering practice. An increasing number of field applications of modal-based structural health and performance assessment are appearing after the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational efforts and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm based on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous

  4. Development and validation of a novel pedometer algorithm to quantify extended characteristics of the locomotor behavior of dairy cows.

    PubMed

    Alsaaod, M; Niederhauser, J J; Beer, G; Zehner, N; Schuepbach-Regula, G; Steiner, A

    2015-09-01

    Behavior is one of the most important indicators for assessing cattle health and well-being. The objective of this study was to develop and validate a novel algorithm to monitor locomotor behavior of loose-housed dairy cows based on the output of the RumiWatch pedometer (ITIN+HOCH GmbH, Fütterungstechnik, Liestal, Switzerland). Data on locomotion were acquired by simultaneous pedometer measurements at a sampling rate of 10 Hz and video recordings for later manual observation. The study consisted of 3 independent experiments. Experiment 1 was carried out to develop and validate the algorithm for lying behavior, experiment 2 for walking and standing behavior, and experiment 3 for stride duration and stride length. The final version was validated using raw data collected from cows not included in the development of the algorithm. Spearman correlation coefficients were calculated between accelerometer variables and respective data derived from the video recordings (gold standard). Dichotomous data were expressed as the proportion of correctly detected events, and the overall difference for continuous data was expressed as the relative measurement error. The proportions of correctly detected events or bouts were 1 for stand ups, lie downs, standing bouts, and lying bouts and 0.99 for walking bouts. The relative measurement error and Spearman correlation coefficient for lying time were 0.09% and 1; for standing time, 4.7% and 0.96; for walking time, 17.12% and 0.96; for number of strides, 6.23% and 0.98; for stride duration, 6.65% and 0.75; and for stride length, 11.92% and 0.81, respectively. The strong to very high correlations of the variables between visual observation and converted pedometer data indicate that the novel RumiWatch algorithm may markedly improve automated livestock management systems for efficient health monitoring of dairy cows. PMID:26142842

  5. Development and validation of a segmentation-free polyenergetic algorithm for dynamic perfusion computed tomography.

    PubMed

    Lin, Yuan; Samei, Ehsan

    2016-07-01

    Dynamic perfusion imaging can provide the morphologic details of the scanned organs as well as the dynamic information of blood perfusion. However, due to the polyenergetic property of the x-ray spectra, beam hardening effect results in undesirable artifacts and inaccurate CT values. To address this problem, this study proposes a segmentation-free polyenergetic dynamic perfusion imaging algorithm (pDP) to provide superior perfusion imaging. Dynamic perfusion usually is composed of two phases, i.e., a precontrast phase and a postcontrast phase. In the precontrast phase, the attenuation properties of diverse base materials (e.g., in a thorax perfusion exam, base materials can include lung, fat, breast, soft tissue, bone, and metal implants) can be incorporated to reconstruct artifact-free precontrast images. If patient motions are negligible or can be corrected by registration, the precontrast images can then be employed as a priori information to derive linearized iodine projections from the postcontrast images. With the linearized iodine projections, iodine perfusion maps can be reconstructed directly without the influence of various influential factors, such as iodine location, patient size, x-ray spectrum, and background tissue type. A series of simulations were conducted on a dynamic iodine calibration phantom and a dynamic anthropomorphic thorax phantom to validate the proposed algorithm. The simulations with the dynamic iodine calibration phantom showed that the proposed algorithm could effectively eliminate the beam hardening effect and enable quantitative iodine map reconstruction across various influential factors. The error range of the iodine concentration factors ([Formula: see text]) was reduced from [Formula: see text] for filtered back-projection (FBP) to [Formula: see text] for pDP. The quantitative results of the simulations with the dynamic anthropomorphic thorax phantom indicated that the maximum error of iodine concentrations can be reduced from

  6. Optimal placement of active braces by using PSO algorithm in near- and far-field earthquakes

    NASA Astrophysics Data System (ADS)

    Mastali, M.; Kheyroddin, A.; Samali, B.; Vahdani, R.

    2016-03-01

    One of the most important issues in tall buildings is lateral resistance of the load-bearing systems against applied loads such as earthquake, wind and blast. Dual systems comprising core wall systems (single or multi-cell core) and moment-resisting frames are used as resistance systems in tall buildings. In addition to adequate stiffness provided by the dual system, most tall buildings may have to rely on various control systems to reduce the level of unwanted motions stemming from severe dynamic loads. One of the main challenges to effectively control the motion of a structure is the limitation in distributing the required control along the structure height optimally. In this paper, concrete shear walls are used as a secondary resistance system at three different heights, with actuators installed in the braces. The optimal actuator positions are found by using an optimized PSO algorithm as well as by arbitrary placement. The control performance of buildings equipped and controlled using the PSO-based placement is assessed and compared with that of arbitrary placement of the controllers, using both near- and far-field ground motions from the Kobe and Chi-Chi earthquakes.
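
    A compact, generic PSO sketch of the kind of search used above for actuator placement. The design variables here are continuous and the objective is a placeholder; encoding brace/actuator locations and the structural response measure is an application-specific detail not reproduced here.

```python
# Generic particle swarm optimization (PSO) over a continuous design space.
# The objective below is a placeholder standing in for a structural response metric.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))        # positions
    v = np.zeros_like(x)                                    # velocities
    pbest = x.copy()                                        # personal bests
    pbest_val = np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

best_x, best_val = pso(lambda z: float(np.sum(z ** 2)), dim=3)
print(best_x, best_val)
```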

  7. Validation of Algorithms for Basal Insulin Rate Reductions in Type 1 Diabetic Patients Practising Physical Activity

    ClinicalTrials.gov

    2013-04-19

    Type 1 Diabetes With a Subcutaneous Insulin Pump; Adjustment of the Recommended Basal Insulin Flow Rate in the Event of Physical Activity; Adjustment of the Recommended Prandial Insulin in the Event of Physical Activity

  8. Drowsiness/alertness algorithm development and validation using synchronized EEG and cognitive performance to individualize a generalized model

    PubMed Central

    Johnson, Robin R.; Popovic, Djordje P.; Olmstead, Richard E.; Stikic, Maja; Levendowski, Daniel J.; Berka, Chris

    2011-01-01

    A great deal of research over the last century has focused on drowsiness/alertness detection, as fatigue-related physical and cognitive impairments pose a serious risk to public health and safety. Available drowsiness/alertness detection solutions are unsatisfactory for a number of reasons: 1) lack of generalizability, 2) failure to address individual variability in generalized models, and/or 3) lack of a portable, untethered application. The current study aimed to address these issues, and determine if an individualized electroencephalography (EEG) based algorithm could be defined to track performance decrements associated with sleep loss, as this is the first step in developing a field deployable drowsiness/alertness detection system. The results indicated that an EEG-based algorithm, individualized using a series of brief "identification" tasks, was able to effectively track performance decrements associated with sleep deprivation. Future development will address the need for the algorithm to predict performance decrements due to sleep loss, and provide field applicability. PMID:21419826

  9. Development and comparative assessment of Raman spectroscopic classification algorithms for lesion discrimination in stereotactic breast biopsies with microcalcifications

    PubMed Central

    Dingari, Narahara Chari; Barman, Ishan; Saha, Anushree; McGee, Sasha; Galindo, Luis H.; Liu, Wendy; Plecha, Donna; Klein, Nina; Dasari, Ramachandra Rao; Fitzmaurice, Maryann

    2014-01-01

    Microcalcifications are an early mammographic sign of breast cancer and a target for stereotactic breast needle biopsy. Here, we develop and compare different approaches for developing Raman classification algorithms to diagnose invasive and in situ breast cancer, fibrocystic change and fibroadenoma that can be associated with microcalcifications. In this study, Raman spectra were acquired from tissue cores obtained from fresh breast biopsies and analyzed using a constituent-based breast model. Diagnostic algorithms based on the breast model fit coefficients were devised using logistic regression, C4.5 decision tree classification, k-nearest neighbor (k-NN) and support vector machine (SVM) analysis, and subjected to leave-one-out cross validation. The best performing algorithm was based on SVM analysis (with radial basis function), which yielded a positive predictive value of 100% and negative predictive value of 96% for cancer diagnosis. Importantly, these results demonstrate that Raman spectroscopy provides adequate diagnostic information for lesion discrimination even in the presence of microcalcifications, which to the best of our knowledge has not been previously reported. Raman spectroscopy and multivariate classification provide accurate discrimination among lesions in stereotactic breast biopsies, irrespective of microcalcification status. PMID:22815240
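
    The comparative-assessment step can be sketched as below: several classifiers (logistic regression, a decision tree standing in for C4.5, k-NN, and an RBF-kernel SVM) are scored with leave-one-out cross-validation. The synthetic features stand in for the breast-model fit coefficients.

```python
# Compare several classifiers with leave-one-out cross-validation, mirroring the
# comparative-assessment step described above. Features and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 5))                      # hypothetical model-fit coefficients
y = (X[:, 0] - 0.8 * X[:, 2] > 0).astype(int)     # hypothetical lesion labels

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM (RBF)": SVC(kernel="rbf", C=1.0, gamma="scale"),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
    print(f"{name}: LOOCV accuracy = {acc:.2f}")
```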

  10. Development of a remote sensing algorithm for cyanobacterial phycocyanin pigment in the Baltic Sea using neural network approach

    NASA Astrophysics Data System (ADS)

    Riha, Stefan; Krawczyk, Harald

    2011-11-01

    Water quality monitoring in the Baltic Sea is of high ecological importance for all its neighbouring countries, which are highly interested in regular monitoring of water quality parameters in their regional zones. Special attention is paid to the occurrence and spread of algae blooms. Among these blooms, the potentially toxic or harmful cyanobacteria are a special case for investigation due to their specific optical properties and their negative influence on the ecological state of the aquatic system. Satellite remote sensing, with its high temporal and spatial resolution, allows frequent observation of large areas of the Baltic Sea with special focus on its two seasonal algae blooms. For better monitoring of the cyanobacteria-dominated summer blooms, adapted algorithms are needed which take into account the special optical properties of blue-green algae. Standard chlorophyll-a algorithms typically fail to correctly recognize these occurrences. To significantly improve the observation of cyanobacteria blooms and their propagation, the Marine Remote Sensing group of DLR has started the development of a model-based inversion algorithm that includes a four-component bio-optical water model for Case 2 waters, which extends the commonly calculated parameter set of chlorophyll, suspended matter and CDOM with an additional parameter for the estimation of phycocyanin absorption. It was necessary to carry out detailed optical laboratory measurements with different cyanobacteria cultures occurring in the Baltic Sea for the generation of a specific bio-optical model. The inversion of satellite remote sensing data is based on an artificial neural network technique, a model-based multivariate non-linear inversion approach. The specifically designed neural network is trained with a comprehensive dataset of simulated reflectance values taking into account the laboratory obtained specific optical

  11. Early Childhood: Developing Sense-activities.

    ERIC Educational Resources Information Center

    Shirah, Sue; Dorman, Mildred M.

    1989-01-01

    Described are science activities in which students concentrate on their senses and make discoveries with their eyes, ears, noses, mouths, and hands. Suggested experiments include activities involving cooking, tasting, observing, floating and sinking objects, making rain, and stringed musical instruments. (RT)

  12. Watershed model calibration framework developed using an influence coefficient algorithm and a genetic algorithm and analysis of pollutant discharge characteristics and load reduction in a TMDL planning area.

    PubMed

    Cho, Jae Heon; Lee, Jong Ho

    2015-11-01

    Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to manually calibrate the parameters individually and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity but shortcomings regarding understanding indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and genetic algorithm (WMCIG) to automatically calibrate the distributed models. The optimization problem used to minimize the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals of all iterations. The WMCIG was applied to a Gomakwoncheon watershed located in an area that presents a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall that occurs during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality in this area during the frequent heavy rainfall that occurs during the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction
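
    A stripped-down sketch of the calibration loop described above: a genetic algorithm searches the parameter space to minimize the sum of squared normalized residuals between observed and simulated values. The toy "model" is a stand-in for HSPF, and the GA settings are illustrative assumptions.

```python
# Toy genetic-algorithm calibration minimizing the sum of squared normalized
# residuals. The two-parameter "simulate" function stands in for a watershed model.
import numpy as np

rng = np.random.default_rng(6)

observed = np.array([2.0, 3.5, 5.0, 4.2, 3.1])          # hypothetical observations

def simulate(params):
    """Placeholder model: two parameters scale and shift a fixed pattern."""
    base = np.array([1.0, 2.0, 3.0, 2.5, 1.8])
    return params[0] * base + params[1]

def objective(params):
    residuals = (observed - simulate(params)) / observed  # normalized residuals
    return float(np.sum(residuals ** 2))

def genetic_algorithm(pop_size=40, generations=100, bounds=(0.0, 5.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(generations):
        fitness = np.array([objective(p) for p in pop])
        order = np.argsort(fitness)
        parents = pop[order[: pop_size // 2]]             # selection of the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(2) < 0.5, a, b)   # uniform crossover
            child += rng.normal(0, 0.1, size=2)           # mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    fitness = np.array([objective(p) for p in pop])
    return pop[np.argmin(fitness)], fitness.min()

print(genetic_algorithm())
```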

  14. Study report on interfacing major physiological subsystem models: An approach for developing a whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Fitzjerrell, D. G.; Grounds, D. J.; Leonard, J. I.

    1975-01-01

    Using a whole body algorithm simulation model, a wide variety and large number of stresses as well as different stress levels were simulated including environmental disturbances, metabolic changes, and special experimental situations. Simulation of short term stresses resulted in simultaneous and integrated responses from the cardiovascular, respiratory, and thermoregulatory subsystems and the accuracy of a large number of responding variables was verified. The capability of simulating significantly longer responses was demonstrated by validating a four week bed rest study. In this case, the long term subsystem model was found to reproduce many experimentally observed changes in circulatory dynamics, body fluid-electrolyte regulation, and renal function. The value of systems analysis and the selected design approach for developing a whole body algorithm was demonstrated.

  15. Use of a Stochastic Joint Inversion Modeling Algorithm to Develop a Hydrothermal Flow Model at a Geothermal Prospect

    NASA Astrophysics Data System (ADS)

    Tompson, A. F. B.; Mellors, R. J.; Dyer, K.; Yang, X.; Chen, M.; Trainor Guitton, W.; Wagoner, J. L.; Ramirez, A. L.

    2014-12-01

    A stochastic joint inverse algorithm is used to analyze diverse geophysical and hydrologic data associated with a geothermal prospect. The approach uses a Markov Chain Monte Carlo (MCMC) global search algorithm to develop an ensemble of hydrothermal groundwater flow models that are most consistent with the observations. The algorithm utilizes an initial conceptual model descriptive of structural (geology), parametric (permeability) and hydrothermal (saturation, temperature) characteristics of the geologic system. Initial (a-priori) estimates of uncertainty in these characteristics are used to drive simulations of hydrothermal fluid flow and related geophysical processes in a large number of random realizations of the conceptual geothermal system spanning these uncertainties. The process seeks to improve the conceptual model by developing a ranked subset of model realizations that best match all available data within a specified norm or tolerance. Statistical (posterior) characteristics of these solutions reflect reductions in the a-priori uncertainties. The algorithm has been tested on a geothermal prospect located at Superstition Mountain, California and has been successful in creating a suite of models compatible with available temperature, surface resistivity, and magnetotelluric (MT) data. Although the MCMC method is highly flexible and capable of accommodating multiple and diverse datasets, a typical inversion may require the evaluation of thousands of possible model runs whose sophistication and complexity may evolve with the magnitude of data considered. As a result, we are testing the use of sensitivity analyses to better identify critical uncertain variables, lower order surrogate models to streamline computational costs, and value of information analyses to better assess optimal use of related data. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL
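
    A bare-bones Metropolis (MCMC) sketch of the global search described above: parameters are proposed, a forward model predicts the observables, and proposals are accepted with a probability based on the data misfit. The forward model, data, and proposal width are placeholder assumptions; the real workflow couples hydrothermal flow and geophysical forward models.

```python
# Minimal Metropolis MCMC sampler matching observations within an assumed
# Gaussian misfit. Forward model, data, and step size are placeholders.
import numpy as np

rng = np.random.default_rng(7)

observed = np.array([1.2, 2.9, 5.1])                  # hypothetical observations
sigma = 0.3                                           # assumed data uncertainty

def forward_model(theta):
    """Placeholder forward model mapping two parameters to three observables."""
    return np.array([theta[0], theta[0] + theta[1], theta[0] + 2.0 * theta[1]])

def log_likelihood(theta):
    misfit = observed - forward_model(theta)
    return -0.5 * float(np.sum((misfit / sigma) ** 2))

def metropolis(n_samples=5000, step=0.2):
    theta = np.array([1.0, 1.0])
    logp = log_likelihood(theta)
    samples = []
    for _ in range(n_samples):
        proposal = theta + rng.normal(0.0, step, size=2)
        logp_new = log_likelihood(proposal)
        if np.log(rng.random()) < logp_new - logp:    # accept/reject step
            theta, logp = proposal, logp_new
        samples.append(theta.copy())
    return np.array(samples)

chain = metropolis()
print("posterior mean:", chain[1000:].mean(axis=0))   # discard burn-in
```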

  16. Use of an Improved Matching Algorithm to Select Scaffolds for Enzyme Design Based on a Complex Active Site Model

    PubMed Central

    Huang, Xiaoqiang; Xue, Jing; Lin, Min; Zhu, Yushan

    2016-01-01

    Active site preorganization helps native enzymes electrostatically stabilize the transition state better than the ground state for their primary substrates and achieve significant rate enhancement. In this report, we hypothesize that a complex active site model for active site preorganization modeling should help to create preorganized active site design and afford higher starting activities towards target reactions. Our matching algorithm ProdaMatch was improved by invoking effective pruning strategies and the native active sites for ten scaffolds in a benchmark test set were reproduced. The root-mean squared deviations between the matched transition states and those in the crystal structures were < 1.0 Å for the ten scaffolds, and the repacking calculation results showed that 91% of the hydrogen bonds within the active sites are recovered, indicating that the active sites can be preorganized based on the predicted positions of transition states. The application of the complex active site model for de novo enzyme design was evaluated by scaffold selection using a classic catalytic triad motif for the hydrolysis of p-nitrophenyl acetate. Eighty scaffolds were identified from a scaffold library with 1,491 proteins and four scaffolds were native esterase. Furthermore, enzyme design for complicated substrates was investigated for the hydrolysis of cephalexin using scaffold selection based on two different catalytic motifs. Only three scaffolds were identified from the scaffold library by virtue of the classic catalytic triad-based motif. In contrast, 40 scaffolds were identified using a more flexible, but still preorganized catalytic motif, where one scaffold corresponded to the α-amino acid ester hydrolase that catalyzes the hydrolysis and synthesis of cephalexin. Thus, the complex active site modeling approach for de novo enzyme design with the aid of the improved ProdaMatch program is a promising approach for the creation of active sites with high catalytic

  17. Use of an Improved Matching Algorithm to Select Scaffolds for Enzyme Design Based on a Complex Active Site Model.

    PubMed

    Huang, Xiaoqiang; Xue, Jing; Lin, Min; Zhu, Yushan

    2016-01-01

    Active site preorganization helps native enzymes electrostatically stabilize the transition state better than the ground state for their primary substrates and achieve significant rate enhancement. In this report, we hypothesize that a complex active site model for active site preorganization modeling should help to create preorganized active site design and afford higher starting activities towards target reactions. Our matching algorithm ProdaMatch was improved by invoking effective pruning strategies and the native active sites for ten scaffolds in a benchmark test set were reproduced. The root-mean squared deviations between the matched transition states and those in the crystal structures were < 1.0 Å for the ten scaffolds, and the repacking calculation results showed that 91% of the hydrogen bonds within the active sites are recovered, indicating that the active sites can be preorganized based on the predicted positions of transition states. The application of the complex active site model for de novo enzyme design was evaluated by scaffold selection using a classic catalytic triad motif for the hydrolysis of p-nitrophenyl acetate. Eighty scaffolds were identified from a scaffold library with 1,491 proteins and four scaffolds were native esterase. Furthermore, enzyme design for complicated substrates was investigated for the hydrolysis of cephalexin using scaffold selection based on two different catalytic motifs. Only three scaffolds were identified from the scaffold library by virtue of the classic catalytic triad-based motif. In contrast, 40 scaffolds were identified using a more flexible, but still preorganized catalytic motif, where one scaffold corresponded to the α-amino acid ester hydrolase that catalyzes the hydrolysis and synthesis of cephalexin. Thus, the complex active site modeling approach for de novo enzyme design with the aid of the improved ProdaMatch program is a promising approach for the creation of active sites with high catalytic

  18. Development of an apnea detection algorithm based on temporal analysis of thoracic respiratory effort signal

    NASA Astrophysics Data System (ADS)

    Dell'Aquila, C. R.; Cañadas, G. E.; Correa, L. S.; Laciar, E.

    2016-04-01

    This work describes the design of an algorithm for detecting apnea episodes, based on analysis of the thoracic respiratory effort signal. Inspiration and expiration times and the amplitude range of the respiratory cycle were evaluated. For the range analysis, the standard deviation was computed over temporal windows of the respiratory signal. Its performance was validated on 8 records of the Apnea-ECG database, which has annotations of apnea episodes. The results are: sensitivity (Se) 73%, specificity (Sp) 83%. These values could be improved by eliminating artifacts from the signal records.
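
    A simple sketch of the windowed standard-deviation idea described above: the thoracic effort signal is split into short windows, and windows whose standard deviation falls below a threshold (strongly reduced respiratory amplitude) are marked as candidate apnea episodes. The sampling rate, window length, and threshold are assumptions.

```python
# Windowed standard-deviation screening of a thoracic respiratory effort signal.
# fs, window length, and the threshold are illustrative assumptions.
import numpy as np

def detect_apnea_windows(effort, fs=10.0, win_s=10.0, std_threshold=0.1):
    """Flag windows of a thoracic effort signal with abnormally low amplitude."""
    win = int(win_s * fs)
    n_windows = len(effort) // win
    flags = []
    for k in range(n_windows):
        segment = np.asarray(effort[k * win:(k + 1) * win], dtype=float)
        flags.append(segment.std() < std_threshold)
    return np.array(flags)

# Hypothetical signal: normal breathing, then a 20 s apnea, then normal again.
rng = np.random.default_rng(8)
fs = 10.0
t = np.arange(0, 60, 1 / fs)
effort = np.sin(2 * np.pi * 0.25 * t) + rng.normal(0, 0.02, t.size)
effort[200:400] = rng.normal(0, 0.02, 200)            # flattened effort = apnea
print(detect_apnea_windows(effort, fs=fs))
```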

  19. Robust integration schemes for generalized viscoplasticity with internal-state variables. Part 2: Algorithmic developments and implementation

    NASA Technical Reports Server (NTRS)

    Li, Wei; Saleeb, Atef F.

    1995-01-01

    This two-part report is concerned with the development of a general framework for the implicit time-stepping integrators for the flow and evolution equations in generalized viscoplastic models. The primary goal is to present a complete theoretical formulation, and to address in detail the algorithmic and numerical analysis aspects involved in its finite element implementation, as well as to critically assess the numerical performance of the developed schemes in a comprehensive set of test cases. On the theoretical side, the general framework is developed on the basis of the unconditionally-stable, backward-Euler difference scheme as a starting point. Its mathematical structure is of sufficient generality to allow a unified treatment of different classes of viscoplastic models with internal variables. In particular, two specific models of this type, which are representative of the present state of the art in metal viscoplasticity, are considered in applications reported here; i.e., fully associative (GVIPS) and non-associative (NAV) models. The matrix forms developed for both these models are directly applicable for both initially isotropic and anisotropic materials, in general (three-dimensional) situations as well as subspace applications (i.e., plane stress/strain, axisymmetric, generalized plane stress in shells). On the computational side, issues related to efficiency and robustness are emphasized in developing the (local) iterative algorithm. In particular, closed-form expressions for residual vectors and (consistent) material tangent stiffness arrays are given explicitly for both GVIPS and NAV models, with their maximum sizes 'optimized' to depend only on the number of independent stress components (but independent of the number of viscoplastic internal state parameters). Significant robustness of the local iterative solution is provided by complementing the basic Newton-Raphson scheme with a line-search strategy for convergence. In the present second part of

  20. SeaWiFS Technical Report Series. Volume 42; Satellite Primary Productivity Data and Algorithm Development: A Science Plan for Mission to Planet Earth

    NASA Technical Reports Server (NTRS)

    Falkowski, Paul G.; Behrenfeld, Michael J.; Esaias, Wayne E.; Balch, William; Campbell, Janet W.; Iverson, Richard L.; Kiefer, Dale A.; Morel, Andre; Yoder, James A.; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor)

    1998-01-01

    Two issues regarding primary productivity, as it pertains to the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Program and the National Aeronautics and Space Administration (NASA) Mission to Planet Earth (MTPE) are presented in this volume. Chapter 1 describes the development of a science plan for deriving primary production for the world ocean using satellite measurements, by the Ocean Primary Productivity Working Group (OPPWG). Chapter 2 presents discussions by the same group, of algorithm classification, algorithm parameterization and data availability, algorithm testing and validation, and the benefits of a consensus primary productivity algorithm.

  1. Active diffraction gratings: Development and tests

    SciTech Connect

    Bonora, S.; Frassetto, F.; Poletto, L.; Zanchetta, E.; Della Giustina, G.; Brusatin, G.

    2012-12-15

    We present the realization and characterization of an active spherical diffraction grating with variable radius of curvature to be used in grazing-incidence monochromators. The device consists of a bimorph deformable mirror on the top of which a diffraction grating with laminar profile is realized by UV lithography. The experimental results show that the active grating can optimize the beam focalization of visible wavelengths through its rotation and focus accommodation.

  2. Active diffraction gratings: development and tests.

    PubMed

    Bonora, S; Frassetto, F; Zanchetta, E; Della Giustina, G; Brusatin, G; Poletto, L

    2012-12-01

    We present the realization and characterization of an active spherical diffraction grating with variable radius of curvature to be used in grazing-incidence monochromators. The device consists of a bimorph deformable mirror on the top of which a diffraction grating with laminar profile is realized by UV lithography. The experimental results show that the active grating can optimize the beam focalization of visible wavelengths through its rotation and focus accommodation.

  3. Stream-reach Identification for New Run-of-River Hydropower Development through a Merit Matrix Based Geospatial Algorithm

    SciTech Connect

    Pasha, M. Fayzul K.; Yeasmin, Dilruba; Kao, Shih-Chieh; Hadjerioua, Boualem; Wei, Yaxing; Smith, Brennan T

    2014-01-01

    Even after a century of development, the total hydropower potential from undeveloped rivers is still considered to be abundant in the United States. However, unlike evaluating hydropower potential at existing hydropower plants or non-powered dams, locating a feasible new hydropower plant involves many unknowns, and hence the total undeveloped potential is harder to quantify. In light of the rapid development of multiple national geospatial datasets for topography, hydrology, and environmental characteristics, a merit matrix based geospatial algorithm is proposed to help identify possible hydropower stream-reaches for future development. These hydropower stream-reaches, sections of natural streams with suitable head, flow, and slope for possible future development, are identified and compared using three different scenarios. A case study was conducted in the Alabama-Coosa-Tallapoosa (ACT) and Apalachicola-Chattahoochee-Flint (ACF) hydrologic subregions. It was found that a merit matrix based algorithm, which is based on the product of hydraulic head, annual mean flow, and average channel slope, can help effectively identify stream-reaches with high power density and small surface inundation. The identified stream-reaches can then be efficiently evaluated for their potential environmental impact, land development cost, and other competing water usage in detailed feasibility studies. Given that the selected datasets are available nationally (at least within the conterminous US), the proposed methodology will have wide applicability across the country.
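
    A minimal sketch of the merit product described above (hydraulic head × annual mean flow × average channel slope) used to rank candidate stream-reaches; the field names, reach values, and scoring below are hypothetical, not taken from the study's datasets.

```python
# Rank candidate stream-reaches by the merit product of head, mean flow, and slope.
reaches = [
    {"id": "R1", "head_m": 12.0, "mean_flow_cms": 35.0, "slope": 0.004},
    {"id": "R2", "head_m": 8.0,  "mean_flow_cms": 90.0, "slope": 0.002},
    {"id": "R3", "head_m": 20.0, "mean_flow_cms": 15.0, "slope": 0.006},
]

for r in reaches:
    r["merit"] = r["head_m"] * r["mean_flow_cms"] * r["slope"]   # merit product

# Highest merit first; high power density with small inundation is favored.
for r in sorted(reaches, key=lambda r: r["merit"], reverse=True):
    print(r["id"], round(r["merit"], 3))
```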

  4. Development and Integration of Hardware and Software for Active-Sensors in Structural Monitoring

    SciTech Connect

    Overly, Timothy G.S.

    2007-01-01

    Structural Health Monitoring (SHM) promises to deliver great benefits to many industries. Primary among them is the potential for large cost savings in the maintenance of complex structures such as aircraft and civil infrastructure. However, several large obstacles remain before widespread use on structures can be accomplished. The development of three components would address many of these obstacles: a robust sensor validation procedure, low-cost active-sensing hardware, and an integrated software package for transition to field deployment. The research performed in this thesis directly addresses these three needs and facilitates the adoption of SHM on a larger scale, particularly in the realm of SHM based on piezoelectric (PZT) materials. The first obstacle addressed in this thesis is the validation of the SHM sensor network. PZT materials are used as sensors/actuators because of their unique properties, but their functionality also needs to be validated for meaningful measurements to be recorded. To allow for a robust sensor validation algorithm, the effect of temperature change on sensor diagnostics and the effect of sensor failure on SHM measurements were classified. This classification allowed for the development of a sensor diagnostic algorithm that is temperature invariant and can indicate the amount and type of sensor failure. Secondly, the absence of a suitable commercially-available active-sensing measurement node is addressed in this thesis. A node is a small, compact measurement device used in a complete system. Many measurement nodes exist for conventional passive sensing, which does not actively excite the structure, but there are no measurement nodes available that both meet the active-sensing requirements and are usable outside the laboratory. This thesis develops hardware that is low-power, active-sensing, and field-deployable. This node uses the impedance method for SHM measurements, and can run the sensor diagnostic algorithm also developed here

  5. A practical fan-beam design and reconstruction algorithm for Active and Passive Computed Tomography of radioactive waste barrels

    NASA Astrophysics Data System (ADS)

    Roy, Tushar; More, M. R.; Ratheesh, Jilju; Sinha, Amar

    2015-09-01

    Active and Passive CT (A&PCT) of waste barrels is mostly carried out in a parallel-beam configuration because of its relative ease of implementation. This necessitates either using a single detector-source pair and translating the barrel or using multiple detector-source pairs to increase the scanning speed. Additionally, because the use of bulky HPGe detectors may limit the number of detectors used in both active and passive modes, we propose to use 1″×1″ LaBr3(Ce) scintillators. This paper describes a practical fan-beam reconstruction algorithm for A&PCT imaging of waste barrels. A fan-beam system model was computed analytically, and reconstruction was performed using the MLEM algorithm. The results are compared with analytical reconstruction.
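
    A generic sketch of the multiplicative MLEM update mentioned above; the small random system matrix stands in for the paper's analytically computed fan-beam model, and the sizes and iteration count are arbitrary.

```python
# Generic MLEM iteration for emission tomography (illustrative system matrix).
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((60, 16))          # system matrix: 60 measurements x 16 voxels
x_true = rng.random(16)
y = A @ x_true                    # noise-free projections for illustration

x = np.ones(16)                   # uniform initial image
sens = A.sum(axis=0)              # sensitivity image (column sums)
for _ in range(100):
    ratio = y / np.maximum(A @ x, 1e-12)     # measured / estimated projections
    x *= (A.T @ ratio) / sens                # multiplicative MLEM update

print(np.round(x - x_true, 3))
```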

  6. Wavelet-based algorithm for auto-detection of daily living activities of older adults captured by multiple inertial measurement units (IMUs).

    PubMed

    Ayachi, Fouaz S; Nguyen, Hung P; Lavigne-Pelletier, Catherine; Goubault, Etienne; Boissy, Patrick; Duval, Christian

    2016-03-01

    A recent trend in human motion capture is the use of inertial measurement units (IMUs) for monitoring and performance evaluation of mobility in the natural living environment. Although the use of such systems has grown significantly, the development of methods and algorithms to process IMU data for clinical purposes is still limited. The aim of this work is to develop algorithms based on wavelet transform and discrete-time detection of events for the automatic segmentation of tasks related to activities of daily living (ADL) from body-worn IMUs. Seven healthy older adults (73 ± 4 years old) performed 10 ADL tasks in a simulated apartment during trials of different durations (3, 4, and 5 min). They wore a suit (Synertial UK Ltd IGS-180) comprised of 17 IMUs positioned strategically on body segments to capture full-body motion. The proposed method automatically detected the number of template waveforms (representing each movement separately) using discrete wavelet transform (DWT) and discrete-time detection of events based on angular velocity, linear acceleration, and 3D orientation data of pertinent IMUs. The sensitivity (Se.) and specificity (Sp.) of detection for the proposed method were established using time stamps of the 10 tasks obtained from visual segmentation of each trial using the video records and the avatar provided by the system's software. At first, we identified six pertinent sensors that were strongly associated with different activities (at most two sensors/task) and that allowed detection of tasks with high accuracy. The proposed algorithm exhibited significant global accuracy (N events = 1999, Se. = 97.5%, Sp. = 94%), despite the variation in the occurrences of the performed tasks (free living). The Se. varied from 94% to 100% for all the detected ADL tasks and Sp. ranged from 90% to 100%, with the worst Sp. = 85% and 87% for Release_mid (reaching for object held just beyond reach at chest height) and Turning
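
    An illustrative sketch only, not the authors' template method: it flags candidate movement events by thresholding detail coefficients of a discrete wavelet transform of a synthetic angular-velocity channel. The sample rate, wavelet, level, and threshold rule are assumptions; the PyWavelets package is required.

```python
# DWT-based event flagging on a synthetic gyroscope signal (illustrative).
import numpy as np
import pywt

fs = 60.0                                    # assumed IMU sample rate (Hz)
t = np.arange(0, 20, 1 / fs)
gyro = 0.05 * np.random.default_rng(1).standard_normal(t.size)
gyro[300:360] += np.hanning(60) * 2.0        # synthetic "movement" burst

coeffs = pywt.wavedec(gyro, "db4", level=4)
detail = coeffs[1]                            # coarsest detail band
threshold = 4 * np.median(np.abs(detail)) / 0.6745   # robust noise estimate
events = np.nonzero(np.abs(detail) > threshold)[0]
print("candidate event indices (detail-coefficient time base):", events[:10])
```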

  7. Education for National Development: World Bank Activities.

    ERIC Educational Resources Information Center

    Habte, Aklilu; Heyneman, Stephen

    1983-01-01

    The goals of education in developing nations are changing from interest in simply acquiring knowledge and fostering economic development to a greater understanding of the complexity of the relationship between schooling and larger national goals. The World Bank's role in these educational changes is covered. (IS)

  8. United Nations geothermal activities in developing countries

    SciTech Connect

    Beredjick, N.

    1987-07-01

    The United Nations implements technical cooperation projects in developing countries through its Department of Technical Cooperation for Development (DTCD). The DTCD is mandated to explore for and develop natural resources (water, minerals, and relevant infrastructure) and energy - both conventional and new and renewable energy sources. To date, the United Nations has been involved in over 30 geothermal exploration projects (completed or underway) in 20 developing countries: 8 in Africa (Djibouti, Ethiopia, Kenya, Madagascar); 8 in Asia (China, India, Jordan, Philippines, Thailand); 9 in Latin America (Bolivia, Chile, El Salvador, Honduras, Mexico, Nicaragua, Panama) and 6 in Europe (Greece, Romania, Turkey, Yugoslavia). Today, the DTCD has seven UNDP geothermal projects in 6 developing countries. Four of these (Bolivia, China, Honduras, and Kenya) are major exploration projects whose formulation and execution have been possible thanks to generous contributions under cost-sharing arrangements from the government of Italy. These four projects are summarized.

  9. Development of Interpretation Algorithm for Optical Fiber Bragg Grating Sensors for Composite Structures

    NASA Astrophysics Data System (ADS)

    Peters, Kara

    2002-12-01

    Increasingly, optical fiber sensors, and in particular Bragg grating sensors, are being used in aerospace structures due to their immunity to electrical noise and the ability to multiplex hundreds of sensors into a single optical fiber. This significantly reduces the cost per sensor as the number of fiber connections and demodulation systems required is also reduced. The primary objective of this project is to study the effects of mounting issues such as adhesion, surface roughness, and high strain gradients on the interpretation of the measured strain. This is performed through comparison with electrical strain gage benchmark data. The long-term goal is to integrate such optical fiber Bragg grating sensors into a structural integrity monitoring system for the 2nd Generation Reusable Launch Vehicle. Previously, researchers at NASA Langley instrumented a composite wingbox with both optical fiber Bragg grating sensors and electrical strain gages during laboratory load-to-failure testing. A considerable amount of data was collected during these tests. For this project, data from two of the sensing optical fibers (each containing 800 Bragg grating sensors) were analyzed in detail. The first fiber studied was mounted in a straight line on the upper surface of the wingbox far from any structural irregularities. The results from these sensors showed a relatively large amount of noise compared to the electrical strain gages, but measured the same averaged strain curve. It was shown that the noise could be varied through the choice of input parameters in the data interpretation algorithm. Based upon the assumption that the strain remains constant along the gage length (a valid assumption for this fiber as confirmed by the measured grating spectra) this noise was significantly reduced. The second fiber was mounted on the lower surface of the wingbox in a pattern that circled surface cutouts and ran close to sites of impact damage, induced before the loading tests. As
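
    As background to interpreting Bragg-grating data of the kind described above, the sketch below uses the standard textbook relation between relative Bragg wavelength shift and axial strain through the photoelastic coefficient (p_e ≈ 0.22 is a typical value for silica fiber); it is not the project's specific interpretation algorithm, and the numbers are illustrative.

```python
# Convert a Bragg wavelength shift to axial strain (standard relation, illustrative).
def strain_from_shift(delta_lambda_nm, lambda0_nm=1550.0, p_e=0.22):
    """Return axial strain in microstrain from a Bragg wavelength shift."""
    return (delta_lambda_nm / lambda0_nm) / (1.0 - p_e) * 1e6

print(round(strain_from_shift(1.2), 1), "microstrain for a 1.2 nm shift at 1550 nm")
```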

  10. MUlti-Dimensional Spline-Based Estimator (MUSE) for motion estimation: algorithm development and initial results.

    PubMed

    Viola, Francesco; Coe, Ryan L; Owen, Kevin; Guenther, Drake A; Walker, William F

    2008-12-01

    Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of its central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minima of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE) that allows accurate and precise estimation of multi-dimensional displacements/strain components from multi-dimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits maximum bias of 2.6 × 10^-4 samples in range and 2.2 × 10^-3 samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 × 10^-3 samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz. While our validation of the algorithm was performed using ultrasound data, MUSE is
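
    A simplified one-dimensional illustration of the spline-based sub-sample delay idea (MUSE itself operates on 2D/3D data): a cubic spline provides a continuous reference, and a sum-of-squared-differences cost is minimized over a fractional delay. The signal, delay, and minimizer settings are assumptions for the sketch.

```python
# 1D spline-based sub-sample time-delay estimation (illustrative).
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

n = np.arange(64)
true_delay = 0.37                              # samples
ref = np.sin(2 * np.pi * 0.08 * n)
delayed = np.sin(2 * np.pi * 0.08 * (n - true_delay))

spline = CubicSpline(n, ref)

def cost(d):
    # Compare the delayed signal with the spline-evaluated, shifted reference.
    idx = n[4:-4]                              # stay inside the interpolation range
    return np.sum((delayed[idx] - spline(idx - d)) ** 2)

res = minimize_scalar(cost, bounds=(-1.0, 1.0), method="bounded")
print("estimated delay (samples):", round(res.x, 4))
```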

  11. Javascript Library for Developing Interactive Micro-Level Animations for Teaching and Learning Algorithms on One-Dimensional Arrays

    ERIC Educational Resources Information Center

    Végh, Ladislav

    2016-01-01

    The first data structure that first-year undergraduate students learn during the programming and algorithms courses is the one-dimensional array. For novice programmers, it might be hard to understand different algorithms on arrays (e.g. searching, mirroring, sorting algorithms), because the algorithms dynamically change the values of elements. In…

  12. Using Hybrid Modeling to Develop Innovative Activities

    ERIC Educational Resources Information Center

    Lichtman, Brenda; Avans, Diana

    2005-01-01

    This article describes a hybrid activities model that physical educators can use with students in grades four and above to create a virtually limitless array of novel games. A brief introduction to the basic theory is followed by descriptions of some hybrid games. Hybrid games are typically the result of merging two traditional sports or other…

  13. Physical Activity and Adolescent Female Psychological Development.

    ERIC Educational Resources Information Center

    Covey, Linda A.; Feltz, Deborah L.

    1991-01-01

    Relationships between self-reported past and present physical activity levels and self-image, sense of mastery, gender role identity, self-perceived physical ability, and self-perceived attractiveness were studied for 149 female high school sophomores, juniors, and seniors. Results are discussed in terms of adolescent emotional health. (SLD)

  14. Developing Metacognition: A Basis for Active Learning

    ERIC Educational Resources Information Center

    Vos, Henk; de Graaff, E.

    2004-01-01

    The reasons to introduce formats of active learning in engineering (ALE) such as project work, problem-based learning, use of cases, etc. are mostly based on practical experience, and sometimes from applied research on teaching and learning. Such research shows that students learn more and different abilities than in traditional formats of…

  15. Development of a Genetic Algorithm to Automate Clustering of a Dependency Structure Matrix

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Korte, John J.; Bilardo, Vincent J.

    2006-01-01

    Much technology assessment and organization design data exists in Microsoft Excel spreadsheets. Tools are needed to put this data into a form that can be used by design managers to make design decisions. One need is to cluster data that is highly coupled. Tools such as the Dependency Structure Matrix (DSM) and a Genetic Algorithm (GA) can be of great benefit. However, no tool currently combines the DSM and a GA to solve the clustering problem. This paper describes a new software tool that interfaces a GA written as an Excel macro with a DSM in spreadsheet format. The results of several test cases are included to demonstrate how well this new tool works.
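
    A minimal sketch of the general approach described above: a genetic algorithm assigns each DSM element to a cluster, and the fitness penalizes dependencies that cross cluster boundaries. The small matrix, weights, and GA operators are illustrative assumptions, not the Excel-macro tool's implementation.

```python
# Toy GA that clusters a small dependency structure matrix (illustrative).
import numpy as np

rng = np.random.default_rng(2)
D = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])           # two obvious 3-element clusters
n, k = D.shape[0], 3                          # number of elements, max clusters

def fitness(chrom):
    same = chrom[:, None] == chrom[None, :]
    inter = np.sum(D * ~same)                 # dependencies crossing clusters
    sizes = np.bincount(chrom, minlength=k)
    return -(inter + 0.1 * np.sum(sizes ** 2))   # also discourage huge clusters

pop = rng.integers(0, k, size=(40, n))
for _ in range(200):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-20:]]                  # truncation selection
    pick = rng.integers(0, 20, size=(20, n))                 # gene-wise recombination
    children = parents[pick, np.arange(n)]
    mutate = rng.random(children.shape) < 0.1
    children[mutate] = rng.integers(0, k, size=mutate.sum())
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("cluster assignment:", best)
```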

  16. Development of a Multiview Time Domain Imaging Algorithm (MTDI) with a Fermat Correction

    SciTech Connect

    Fisher, K A; Lehman, S K; Chambers, D H

    2004-09-22

    An imaging algorithm is presented based on the standard assumption that the total scattered field can be separated into an elastic component with monopole-like dependence and an inertial component with dipole-like dependence. The resulting inversion generates two separate image maps corresponding to the monopole and dipole terms of the forward model. The complexity of imaging flaws and defects in layered elastic media is further compounded by the existence of high-contrast gradients in sound speed and/or density from layer to layer. To compensate for these gradients, we have incorporated Fermat's method of least time into our forward model to determine the appropriate delays between individual source-receiver pairs. Preliminary numerical and experimental results are in good agreement with each other.
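
    A sketch of the Fermat least-time idea used for the source-receiver delays: find the interface crossing point that minimizes travel time through two layers with different sound speeds. The geometry, layer speeds, and thicknesses below are illustrative, not the paper's configuration.

```python
# Fermat least-time delay through two layers (illustrative geometry).
import numpy as np
from scipy.optimize import minimize_scalar

c1, c2 = 1500.0, 3000.0          # sound speeds in the two layers (m/s)
h1, h2 = 10.0, 15.0              # layer thicknesses traversed (m)
offset = 20.0                    # horizontal source-receiver offset (m)

def travel_time(x):
    # x = horizontal position where the ray crosses the layer interface
    leg1 = np.hypot(x, h1) / c1
    leg2 = np.hypot(offset - x, h2) / c2
    return leg1 + leg2

res = minimize_scalar(travel_time, bounds=(0.0, offset), method="bounded")
print("crossing point (m):", round(res.x, 3), " delay (ms):", round(1e3 * res.fun, 3))
```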

  17. Development of an Interval Management Algorithm Using Ground Speed Feedback for Delayed Traffic

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Swieringa, Kurt A.; Underwood, Matthew C.; Abbott, Terence; Leonard, Robert D.

    2016-01-01

    One of the goals of NextGen is to enable frequent use of Optimized Profile Descents (OPD) for aircraft, even during periods of peak traffic demand. NASA is currently testing three new technologies that enable air traffic controllers to use speed adjustments to space aircraft during arrival and approach operations. This will allow aircraft to remain close to their OPDs. During the integration of these technologies, it was discovered that, due to a lack of accurate trajectory information for the leading aircraft, Interval Management aircraft were exhibiting poor behavior. NASA's Interval Management algorithm was modified to address the impact of inaccurate trajectory information, and a series of studies was performed to assess the impact of this modification. These studies show that the modification provided some improvement when the Interval Management system lacked accurate trajectory information for the leading aircraft.

  18. A novel sensing method and sensing algorithm development for a ubiquitous network.

    PubMed

    Jabbar, Hamid; Lee, Sungju; Choi, Seunghwan; Baek, Seunghyun; Yu, Sungwook; Jeong, Taikyeong

    2010-01-01

    This paper proposes a novel technique which provides energy-efficient circuit design for sensor networks. The overall system presented requires a minimum number of independently communicating sensors and sub-circuits, which enables it to reduce power consumption by setting unused sensors to idle. This technique reduces hardware requirements, time, and interconnection problems with a supervisory control. Our proposed algorithm, which hands over control to two software managers for the sensing and moving subsystems, can greatly improve the overall system performance. Based on the experimental results, we observed that with our system, which uses the sensing and moving managers, the four sensors required only 3.4 mW of power when the robot arm was moved a total distance of 17 cm. This system is designed for robot applications but could be implemented in many other human environments such as "ubiquitous cities", "smart homes", etc.

  19. Development of a SiPM-based PET detector using a digital positioning algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Jin Hyung; Lee, Seung-Jae; An, Su Jung; Kim, Hyun-Il; Chung, Yong Hyun

    2016-05-01

    A readout method with a reduced number of channels is investigated here to provide precise pixel information for small-animal positron emission tomography (PET). The small-animal PET system consists of eight modules, each composed of a 3 × 3 array of 2 mm × 2 mm × 20 mm lutetium yttrium orthosilicate (LYSO) crystals optically coupled to a 2 × 2 array of 3 mm × 3 mm silicon photomultipliers (SiPMs). The number of readout channels is reduced to one-quarter of that of the conventional method by applying a simplified pixel-determination algorithm. The performance of the PET system and detector module was evaluated with experimental verification. All pixels of the 3 × 3 LYSO array were decoded well, and the performance of the PET detector module was measured.
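
    An illustrative sketch of one way pixel identification from a 2 × 2 SiPM array reading a 3 × 3 crystal block can work: an Anger-style centroid of the four signals is mapped to the nearest crystal position. This is not the paper's specific simplified algorithm; the positions, scaling, and signal values are assumptions.

```python
# Map four SiPM signals to one of nine crystal pixels via a centroid (illustrative).
import numpy as np

crystal_xy = [(-1, 1), (0, 1), (1, 1),
              (-1, 0), (0, 0), (1, 0),
              (-1, -1), (0, -1), (1, -1)]     # normalized 3x3 crystal centers

def identify_pixel(s):
    """s = [top-left, top-right, bottom-left, bottom-right] SiPM signals."""
    tl, tr, bl, br = s
    total = tl + tr + bl + br
    x = ((tr + br) - (tl + bl)) / total       # centroid along x
    y = ((tl + tr) - (bl + br)) / total       # centroid along y
    # Nearest crystal center wins; a measured lookup table would be used in practice.
    d = [np.hypot(x - cx * 0.5, y - cy * 0.5) for cx, cy in crystal_xy]
    return int(np.argmin(d))

print(identify_pixel([120.0, 115.0, 40.0, 38.0]))   # lands in the top row
```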

  20. System design and algorithmic development for computational steering in distributed environments

    SciTech Connect

    Wu, Qishi; Zhu, Mengxia; Gu, Yi; Rao, Nageswara S

    2010-03-01

    Supporting visualization pipelines over wide-area networks is critical to enabling large-scale scientific applications that require visual feedback to interactively steer online computations. We propose a remote computational steering system that employs analytical models to estimate the cost of computing and communication components and optimizes the overall system performance in distributed environments with heterogeneous resources. We formulate and categorize the visualization pipeline configuration problems for maximum frame rate into three classes according to the constraints on node reuse or resource sharing, namely no, contiguous, and arbitrary reuse. We prove all three problems to be NP-complete and present heuristic approaches based on a dynamic programming strategy. The superior performance of the proposed solution is demonstrated with extensive simulation results in comparison with existing algorithms and is further evidenced by experimental results collected on a prototype implementation deployed over the Internet.
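
    A compact sketch of a dynamic-programming heuristic for the contiguous-mapping case described above: pipeline modules are partitioned, in order, over a chain of nodes, and the bottleneck stage time (which bounds the achievable frame rate) is minimized. Communication costs are omitted, and the workloads and node speeds are illustrative, not the paper's model.

```python
# DP over contiguous assignments of pipeline modules to nodes (illustrative).
import math

work = [4.0, 1.0, 3.0, 2.0, 6.0, 2.0]     # per-module computation demand
speed = [2.0, 1.0, 3.0]                   # processing speed of each node

m, n = len(work), len(speed)
INF = math.inf
# dp[j][i] = best bottleneck using the first j nodes for the first i modules
dp = [[INF] * (m + 1) for _ in range(n + 1)]
dp[0][0] = 0.0
for j in range(1, n + 1):
    for i in range(0, m + 1):
        for t in range(0, i + 1):          # modules t..i-1 go on node j-1
            group = sum(work[t:i]) / speed[j - 1]
            dp[j][i] = min(dp[j][i], max(dp[j - 1][t], group))

print("best bottleneck time:", dp[n][m], "-> frame rate ~", round(1.0 / dp[n][m], 3))
```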

  1. Development of a prototype algorithm for the operational retrieval of height-resolved products from GOME

    NASA Technical Reports Server (NTRS)

    Spurr, Robert J. D.

    1997-01-01

    Global ozone monitoring experiment (GOME) level 2 products of total ozone column amounts have been generated on a routine operational basis since July 1996. These products and the level 1 radiance products are the major outputs from the ERS-2 ground segment GOME data processor (GDP) at DLR in Germany. Off-line scientific work has already shown the feasibility of ozone profile retrieval from GOME. It is demonstrated how the retrievals can be performed in an operational context. Height-resolved retrieval is based on the optimal estimation technique, and cloud-contaminated scenes are treated in an equivalent reflecting surface approximation. The prototype must be able to handle GOME measurements routinely on a global basis. Requirements for the major components of the algorithm are described: this incorporates an overall strategy for operational height-resolved retrieval from GOME.
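
    A minimal sketch of the linear optimal-estimation (maximum a posteriori) update that underlies height-resolved retrieval: an a priori profile is combined with measurements through the weighting-function (Jacobian) matrix K. The matrices below are synthetic placeholders, not GOME quantities.

```python
# Linear optimal-estimation retrieval update (synthetic matrices, illustrative).
import numpy as np

rng = np.random.default_rng(3)
n_levels, n_meas = 5, 12
K = rng.random((n_meas, n_levels))            # Jacobian of the forward model
x_a = np.full(n_levels, 1.0)                  # a priori state (e.g., layer amounts)
S_a = np.eye(n_levels) * 0.25                 # a priori covariance
S_e = np.eye(n_meas) * 0.01                   # measurement-noise covariance

x_true = x_a + rng.normal(0, 0.3, n_levels)
y = K @ x_true + rng.normal(0, 0.1, n_meas)   # simulated measurement

# x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K x_a)
A = K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a)
x_hat = x_a + np.linalg.solve(A, K.T @ np.linalg.inv(S_e) @ (y - K @ x_a))
print(np.round(x_hat - x_true, 3))
```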

  2. Developing an Algorithm for Finding Deep-Sea Corals on Seamounts Using Bathymetry and Photographic Data

    NASA Astrophysics Data System (ADS)

    Fernandez, D. P.; Adkins, J. F.; Scheirer, D. P.

    2006-12-01

    Over the last three years we have conducted several cruises on seamounts in the North Atlantic to sample and characterize the distribution of deep-sea corals in space and time. Using the deep submergence vehicle Alvin and the ROV Hercules we have spent over 80 hours on the seafloor. With the autonomous vehicle ABE and a towed camera sled, we collected over 10,000 bottom photographs and over 60 hours of micro-bathymetry over 120 km of seafloor. While there are very few living scleractinia (Desmophyllum dianthus, Solenosmilia sp., and Lophelia sp.), we recovered over 5,000 fossil D. dianthus and over 60 kg of fossil Solenosmilia sp. The large numbers of fossil corals mean that a perceived lack of material does not have to limit the use of this new archive of the deep ocean. However, we need a better strategy for finding and returning samples to the lab. Corals clearly prefer to grow on steep slopes and at the tops of scarps of all scales. They are preferentially found along ridges and on small knolls flanking a larger edifice. There is also a clear preference for D. dianthus to recruit onto carbonate substrate. Overall, our sample collection, bathymetry, and bottom photographs allow us to create an algorithm for finding corals based only on knowledge of the seafloor topography. We can test this algorithm against known sampling locations and visual surveys of the seafloor. Similar to the way seismic data are used to locate ideal coring locations, we propose that high-resolution bathymetry can be used to predict the most likely locations for finding fossil deep-sea corals.
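
    An illustrative sketch, not the authors' algorithm: it computes local slope from a gridded bathymetry array and flags steep cells near local topographic highs, the settings where fossil corals were preferentially recovered. The grid, spacing, and thresholds are arbitrary assumptions.

```python
# Flag steep, locally shallow bathymetry cells as candidate coral sites (illustrative).
import numpy as np

rng = np.random.default_rng(4)
depth = -2000 + 50 * rng.standard_normal((50, 50))    # hypothetical depth grid (m)
depth[20:30, 20:30] += 400                            # a small knoll

dzdy, dzdx = np.gradient(depth, 100.0)                # assumed 100 m grid spacing
slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

steep = slope_deg > 15.0                              # steep scarps and ridge flanks
high = depth > np.percentile(depth, 90)               # locally shallow (near summits)
candidate = steep & high
print("candidate cells:", int(candidate.sum()))
```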

  3. Development of active-transport membrane devices

    SciTech Connect

    Laciak, D.V.

    1994-07-01

    This report introduces the concept of Air Products' AT membranes for the separation of NH{sub 3} and CO{sub 2} from process gas streams and presents results from the first-year fabrication and concept-development studies.

  4. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 4: Preliminary nonscanner models and count conversion algorithms

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Choi, Sang H.

    1987-01-01

    Two count conversion algorithms and the associated dynamic sensor model for the M/WFOV nonscanner radiometers are defined. The sensor model provides and updates the constants necessary for the conversion algorithms, though the frequency with which these updates were needed was uncertain. This analysis therefore develops mathematical models for the conversion of irradiance at the sensor field of view (FOV) limiter into data counts, derives from this model two algorithms for the conversion of data counts to irradiance at the sensor FOV aperture and develops measurement models which account for a specific target source together with a sensor. The resulting algorithms are of the gain/offset and Kalman filter types. The gain/offset algorithm was chosen since it provided sufficient accuracy using simpler computations.
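
    A minimal sketch of a gain/offset count conversion of the type described: counts are mapped back to irradiance at the sensor aperture with a gain and an offset that the sensor model would periodically update. The values are illustrative, not ERBE calibration constants.

```python
# Invert a linear counts = gain * irradiance + offset relation (illustrative values).
def counts_to_irradiance(counts, gain=0.12, offset=-3.5):
    """Return irradiance (assumed W/m^2) from raw data counts."""
    return (counts - offset) / gain

for c in (100, 500, 1200):
    print(c, "->", round(counts_to_irradiance(c), 2), "W/m^2")
```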

  5. Do You Read Me? Service Supplement: Reading Development Activities Guide.

    ERIC Educational Resources Information Center

    Kendall, Elizabeth L.; Chenoweth, Roberta

    This activity guide is one of four supplements to be used with "Do You Read Me? Prevocational-Vocational Reading Development Activities" (ED 210 454). Each supplement deals with a different occupational category. Games, puzzles, and other activities are offered to aid in developing the word recognition, vocabulary, and comprehension skills of…

  6. Development of a Pedestrian Indoor Navigation System Based on Multi-Sensor Fusion and Fuzzy Logic Estimation Algorithms

    NASA Astrophysics Data System (ADS)

    Lai, Y. C.; Chang, C. C.; Tsai, C. M.; Lin, S. Y.; Huang, S. C.

    2015-05-01

    This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead-reckoning system, meaning that no outside signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its sensors are low-cost inertial sensors, an accelerometer and a gyroscope, based on micro-electro-mechanical-system (MEMS) technology. There are two types of IMU modules, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors due to manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on the scalar calibration and least-squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. The estimated step count and per-step strength are then fed into the proposed fuzzy logic estimation algorithm to estimate the user's step lengths. Since the walking length and direction are both required for dead-reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted to a smartphone via Bluetooth to perform the dead-reckoning navigation, which runs in a self-developed app. Because the errors of dead-reckoning navigation accumulate, a particle filter and a pre-loaded map of the indoor environment have been applied in the app of the proposed navigation system to extend its
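
    A sketch of the basic step-based dead-reckoning update used in systems like this: each detected step advances the position by the estimated step length along the integrated heading. The step lengths and headings below are made up for illustration.

```python
# Step-based dead reckoning: accumulate (length, heading) pairs into a position.
import math

steps = [(0.65, 0.0), (0.70, 0.1), (0.68, 0.3), (0.66, 1.2)]   # (length m, heading rad)

x = y = 0.0
for length, heading in steps:
    x += length * math.sin(heading)    # east
    y += length * math.cos(heading)    # north
print("position estimate (m):", round(x, 2), round(y, 2))
```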

  7. Development of an algorithm as an implementation model for a wound management formulary across a UK health economy.

    PubMed

    Stephen-Haynes, J

    2013-12-01

    This article outlines a strategic process for the evaluation of wound management products and the development of an algorithm as an implementation model for wound management. Wound management is an increasingly complex process given the variety of interactive dressings and other devices available. This article discusses the procurement process, access to wound management dressings and the use of wound management formularies within the UK. We conclude that the current commissioners of tissue viability within healthcare organisations need to adopt a proactive approach to ensure appropriate formulary evaluation and product selection, in order to achieve the most beneficial clinical and financial outcomes.

  8. Development and investigation of a semi-active polar planar haptic interface using the digital resistance map concept

    NASA Astrophysics Data System (ADS)

    Asadi, Ehsan; Arzanpour, Siamak

    2014-05-01

    The growing demand for haptic technologies in recent years has motivated novel approaches to developing haptic interfaces and control algorithms. Semi-active haptic interfaces, in general, have the advantage of addressing the safety concerns which adversely affect their active counterparts. This paper presents the development of a planar semi-active haptic interface using magnetorheological (MR) dampers. The ability of MR dampers to produce controllable resistance forces is the key reason for their utilization in the proposed haptic interface. The proposed planar semi-active haptic interface consists of linear and rotary MR dampers. Each of the MR dampers is modeled experimentally using the Bouc-Wen model. A haptic rendering algorithm called the digital resistance map (DRM) is also developed to control the MR dampers. DRM is a high-fidelity haptic rendering algorithm and has proved effective in creating comprehensive force feedback for operators. MATLAB/Simulink® is used to implement several DRM scenarios for generating haptic-enabled virtual environments. The experimental results demonstrate the effectiveness of the proposed haptic interface and rendering algorithm.
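
    A minimal sketch of the Bouc-Wen hysteresis equation commonly used to model MR dampers, integrated with explicit Euler for a sinusoidal displacement input. The parameters are illustrative, not the identified values from the paper's experimental characterization.

```python
# Bouc-Wen hysteresis model of an MR-damper-like element (illustrative parameters).
import math

A_bw, beta, gamma, n = 1.0, 0.5, 0.5, 2        # Bouc-Wen shape parameters
c0, alpha = 50.0, 400.0                         # viscous and hysteretic force weights

def simulate(dt=1e-3, t_end=2.0, freq=1.0, amp=0.01):
    z, t, force = 0.0, 0.0, []
    while t < t_end:
        x_dot = 2 * math.pi * freq * amp * math.cos(2 * math.pi * freq * t)
        z_dot = A_bw * x_dot - beta * abs(x_dot) * abs(z) ** (n - 1) * z \
                - gamma * x_dot * abs(z) ** n
        z += z_dot * dt                          # explicit Euler update
        force.append(c0 * x_dot + alpha * z)     # damper resistance force
        t += dt
    return force

print("peak force (N):", round(max(simulate()), 2))
```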

  9. Active and intelligent inhaler device development.

    PubMed

    Tobyn, Mike; Staniforth, John N; Morton, David; Harmer, Quentin; Newton, Mike E

    2004-06-11

    The dry powder inhaler (DPI), which has traditionally relied on the patient's inspiratory force to deaggregate and deliver the active agent to the target region of the lung, has been a successful delivery device for locally active agents used to treat conditions such as asthma and chronic obstructive pulmonary disease (COPD). However, such devices can suffer from poor delivery characteristics and/or poor reproducibility. More recently, drugs for systemic delivery and higher-value compounds have been put into DPI devices. Regulatory, dosing, manufacturing, and economic concerns have demanded that a more efficient and reproducible performance be achieved by these devices. Recently, strategies have been put in place to produce a more efficient DPI device/formulation combination. Using one novel device as an example, the paper examines which features are important in such a device and some of the strategies required to implement these features. All of these technological advances are invisible, and may be irrelevant, to the patient. However, patients' inability to use an inhaler device properly has significant implications for their therapy. Use of active device mechanisms, which reduce the dependence on patient inspiratory flow, and sensible industrial design, which gives the patient the right cues for use, are important determinants of performance here.

  10. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    SciTech Connect

    Cheung, Howard; Braun, James E.

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the