Science.gov

Sample records for algorithm originally developed

  1. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  2. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.

  3. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

  4. [Algorithm for percutaneous origin of irreversible icterus].

    PubMed

    Marković, Z; Milićević, M; Masulović, D; Saranović, Dj; Stojanović, V; Marković, B; Kovacević, S

    2007-01-01

    This is a retrospective analysis of all types of percutaneous biliary drainage used in 600 patients with obstructive icterus over the last 10 years. The procedural technique is analyzed. It gave a positive therapeutic result in about 75% of cases. The most frequent complications are presented, and the most appropriate algorithm for percutaneous derivation is discussed. As the initial method, the use of external-internal derivation is suggested which, depending on the procedure, is continued with internal derivation by catheter endoprosthesis or metallic stent. The use of covered metallic stents is suggested as the method of choice when a metallic endoprosthesis is applied.

  5. [Origin and language development].

    PubMed

    Segovia de Arana, José María

    2010-01-01

    Since its beginnings, human language was spoken in order to communicate with those in the immediate environment. When writing appeared there was a great advance, since ideas could be conveyed at a distance, which made possible the organization of communities, cities, empires, and so on, as well as the development of literature, science, and the arts. The progress of humanity became more apparent with the invention of printing and the distribution of books, in which the written ideas and words of different authors were preserved and could be known through reading. Another major advance came with the possibility of hearing the human voice, spoken language, not only in the immediate vicinity of the speaker but also remotely, over the telephone, radio, or television; it is even possible to hear the words of the dead. The latest and most extraordinary step in the spread of language is being provided by computer technology and the Internet, in which the possibilities for information and for the collection of written ideas are virtually endless. In this situation, which has only recently begun, one cannot help thinking about the danger to the book as the depository and jealous guardian of culture, art, science, and history, and about what each human being has invested personally in his books or in their absence. Since the fundamental discoveries of Broca and Wernicke, progress has been made in understanding how the brain processes language. Knowledge and measurement of brain activity in normal subjects have advanced thanks to the incorporation of modern diagnostic imaging methods: PET, functional magnetic resonance imaging (MRI), and magnetoencephalography (MEG). Much further progress is expected from the application of these techniques to experimental models of various neurological diseases and to more sophisticated linguistic analysis.

  6. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA), an O(Λ^κ(ℓ^2 + κ)) sample complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.

  7. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
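
    To make the wavelet fusion idea concrete, the sketch below applies one common rule (approximation coefficients from the lower-resolution spectral band, larger-magnitude detail coefficients from either input) using the PyWavelets package. This rule and the random stand-in images are assumptions for illustration, not the fusion rule implemented in the report.

      import numpy as np
      import pywt  # PyWavelets

      def fuse_wavelet(spectral_band, pan_band, wavelet="haar"):
          """Fuse two co-registered images of identical (even-sized) shape."""
          def pick(a, b):
              # Max-magnitude detail rule: keep whichever coefficient carries more structure.
              return np.where(np.abs(a) >= np.abs(b), a, b)
          cA1, (cH1, cV1, cD1) = pywt.dwt2(spectral_band.astype(float), wavelet)
          cA2, (cH2, cV2, cD2) = pywt.dwt2(pan_band.astype(float), wavelet)
          # Keep the approximation of the spectral band to preserve its radiometry.
          details = (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2))
          return pywt.idwt2((cA1, details), wavelet)

      rng = np.random.default_rng(0)
      ms = rng.random((128, 128))    # stand-in for a multispectral band
      pan = rng.random((128, 128))   # stand-in for a panchromatic band
      print(fuse_wavelet(ms, pan).shape)  # (128, 128)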

  8. Developing Scoring Algorithms

    Cancer.gov

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  9. Algorithms for Software Development

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1985-01-01

    This management aid makes changes obvious. One key element in the scheme for software development control is check summing. If the check sum for a given line in a source file differs from that of the previous version, it is evident that a change has been made. Subsequent editing of the file creates new lines, deletes old ones, modifies characters, moves lines, or copies (reuses) existing lines. A combination of three elements of the line code permits all of these transactions to be detected.
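
    As an illustration of the check-summing idea (a sketch, not the tool described above), the following compares per-line checksums of two versions of a source file to flag added or changed lines and deleted lines.

      import hashlib

      def line_checksums(lines):
          return [hashlib.md5(line.encode("utf-8")).hexdigest() for line in lines]

      def diff_by_checksum(old_lines, new_lines):
          old_sums, new_sums = line_checksums(old_lines), line_checksums(new_lines)
          old_set, new_set = set(old_sums), set(new_sums)
          changed_or_added = [l for l, s in zip(new_lines, new_sums) if s not in old_set]
          deleted = [l for l, s in zip(old_lines, old_sums) if s not in new_set]
          return changed_or_added, deleted

      old = ["int x = 0;", "x = x + 1;", "return x;"]
      new = ["int x = 0;", "x = x + 2;", "return x;"]
      print(diff_by_checksum(old, new))  # (['x = x + 2;'], ['x = x + 1;'])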

  10. Historical development of origins research.

    PubMed

    Lazcano, Antonio

    2010-11-01

    Following the publication of the Origin of Species in 1859, many naturalists adopted the idea that living organisms were the historical outcome of gradual transformation of lifeless matter. These views soon merged with the developments of biochemistry and cell biology and led to proposals in which the origin of protoplasm was equated with the origin of life. The heterotrophic origin of life proposed by Oparin and Haldane in the 1920s was part of this tradition, which Oparin enriched by transforming the discussion of the emergence of the first cells into a workable multidisciplinary research program. On the other hand, the scientific trend toward understanding biological phenomena at the molecular level led authors like Troland, Muller, and others to propose that single molecules or viruses represented primordial living systems. The contrast between these opposing views on the origin of life represents not only contrasting views of the nature of life itself, but also major ideological discussions that reached a surprising intensity in the years following Stanley Miller's seminal result which showed the ease with which organic compounds of biochemical significance could be synthesized under putative primitive conditions. In fact, during the years following the Miller experiment, attempts to understand the origin of life were strongly influenced by research on DNA replication and protein biosynthesis, and, in socio-political terms, by the atmosphere created by Cold War tensions. The catalytic versatility of RNA molecules clearly merits a critical reappraisal of Muller's viewpoint. However, the discovery of ribozymes does not imply that autocatalytic nucleic acid molecules ready to be used as primordial genes were floating in the primitive oceans, or that the RNA world emerged completely assembled from simple precursors present in the prebiotic soup. The evidence supporting the presence of a wide range of organic molecules on the primitive Earth, including membrane

  11. Historical Development of Origins Research

    PubMed Central

    Lazcano, Antonio

    2010-01-01

    Following the publication of the Origin of Species in 1859, many naturalists adopted the idea that living organisms were the historical outcome of gradual transformation of lifeless matter. These views soon merged with the developments of biochemistry and cell biology and led to proposals in which the origin of protoplasm was equated with the origin of life. The heterotrophic origin of life proposed by Oparin and Haldane in the 1920s was part of this tradition, which Oparin enriched by transforming the discussion of the emergence of the first cells into a workable multidisciplinary research program. On the other hand, the scientific trend toward understanding biological phenomena at the molecular level led authors like Troland, Muller, and others to propose that single molecules or viruses represented primordial living systems. The contrast between these opposing views on the origin of life represents not only contrasting views of the nature of life itself, but also major ideological discussions that reached a surprising intensity in the years following Stanley Miller’s seminal result which showed the ease with which organic compounds of biochemical significance could be synthesized under putative primitive conditions. In fact, during the years following the Miller experiment, attempts to understand the origin of life were strongly influenced by research on DNA replication and protein biosynthesis, and, in socio-political terms, by the atmosphere created by Cold War tensions. The catalytic versatility of RNA molecules clearly merits a critical reappraisal of Muller’s viewpoint. However, the discovery of ribozymes does not imply that autocatalytic nucleic acid molecules ready to be used as primordial genes were floating in the primitive oceans, or that the RNA world emerged completely assembled from simple precursors present in the prebiotic soup. The evidence supporting the presence of a wide range of organic molecules on the primitive Earth, including membrane

  12. ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.

    USGS Publications Warehouse

    Claire, Robert W.

    1984-01-01

    An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.
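
    A minimal sketch of what such generic operators can look like when the basic geometric element is an axis-aligned bounding box (illustrative only; not the USGS implementation):

      from dataclasses import dataclass

      @dataclass
      class Box:
          xmin: float
          ymin: float
          xmax: float
          ymax: float

      def overlap(a: Box, b: Box) -> bool:
          """True if the two boxes share any area."""
          return a.xmin < b.xmax and b.xmin < a.xmax and a.ymin < b.ymax and b.ymin < a.ymax

      def intersection(a: Box, b: Box):
          """Intersection box, or None if the boxes are separated."""
          if not overlap(a, b):
              return None
          return Box(max(a.xmin, b.xmin), max(a.ymin, b.ymin),
                     min(a.xmax, b.xmax), min(a.ymax, b.ymax))

      def separation(a: Box, b: Box) -> float:
          """Minimum distance between the boxes (0 if they overlap or touch)."""
          dx = max(b.xmin - a.xmax, a.xmin - b.xmax, 0.0)
          dy = max(b.ymin - a.ymax, a.ymin - b.ymax, 0.0)
          return (dx * dx + dy * dy) ** 0.5

      a, b = Box(0, 0, 2, 2), Box(1, 1, 3, 3)
      print(overlap(a, b), intersection(a, b), separation(a, Box(5, 5, 6, 6)))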

  13. On the origin of synthetic life: attribution of output to a particular algorithm

    NASA Astrophysics Data System (ADS)

    Yampolskiy, Roman V.

    2017-01-01

    With unprecedented advances in genetic engineering we are starting to see progressively more original examples of synthetic life. As such organisms become more common it is desirable to gain an ability to distinguish between natural and artificial life forms. In this paper, we address this challenge as a generalized version of Darwin’s original problem, which he so brilliantly described in On the Origin of Species. After formalizing the problem of determining the samples’ origin, we demonstrate that the problem is in fact unsolvable. In the general case, if computational resources of considered originator algorithms have not been limited and priors for such algorithms are known to be equal, both explanations are equally likely. Our results should attract the attention of astrobiologists and scientists interested in developing a more complete theory of life, as well as of AI-Safety researchers.

  14. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  15. Algorithm development for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Rosario, Dalton S.

    2008-10-01

    This dissertation proposes and evaluates a novel anomaly detection algorithm suite for ground-to-ground, or air-to-ground, applications requiring automatic target detection using hyperspectral (HS) data. Targets are manmade objects in natural background clutter under unknown illumination and atmospheric conditions. The use of statistical models herein is purely for motivation of particular formulas for calculating anomaly output surfaces. In particular, formulas from semiparametrics are utilized to obtain novel forms for output surfaces, and alternative scoring algorithms are proposed to calculate output surfaces that are comparable to those of semiparametrics. Evaluation uses both simulated data and real HS data from a joint data collection effort between the Army Research Laboratory and the Army Armament Research Development & Engineering Center. A data transformation method is presented for use by the two-sample data structure univariate semiparametric and nonparametric scoring algorithms, such that the two-sample data are mapped from their original multivariate space to a univariate domain, where the statistical power of the univariate scoring algorithms is shown to be improved relative to existing multivariate scoring algorithms testing the same two-sample data. An exhaustive simulation experimental study is conducted to assess the performance of different HS anomaly detection techniques, where the null and alternative hypotheses are completely specified, including all parameters, using multivariate normal and mixtures of multivariate normal distributions. Finally, for ground-to-ground anomaly detection applications, where the unknown scales of targets add to the problem complexity, a novel global anomaly detection algorithm suite is introduced, featuring autonomous partial random sampling (PRS) of the data cube. The PRS method is proposed to automatically sample the unknown background clutter in the test HS imagery, and by repeating multiple times this
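
    For orientation, a standard global Mahalanobis-distance (RX-style) anomaly score is sketched below as the kind of baseline such detectors are compared against; it is not the dissertation's semiparametric or partial-random-sampling algorithm, and the synthetic cube is a stand-in.

      import numpy as np

      def rx_anomaly_scores(cube):
          """cube: (rows, cols, bands) array; returns (rows, cols) anomaly scores."""
          rows, cols, bands = cube.shape
          pixels = cube.reshape(-1, bands).astype(float)
          mean = pixels.mean(axis=0)
          cov = np.cov(pixels, rowvar=False)
          cov_inv = np.linalg.pinv(cov)             # pseudo-inverse for numerical stability
          centered = pixels - mean
          # Quadratic form (x - mu)^T C^-1 (x - mu) for every pixel.
          scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
          return scores.reshape(rows, cols)

      rng = np.random.default_rng(1)
      cube = rng.normal(size=(50, 50, 20))
      cube[25, 25] += 5.0                           # implant one anomalous pixel
      scores = rx_anomaly_scores(cube)
      print(np.unravel_index(scores.argmax(), scores.shape))  # expect (25, 25)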

  16. Principles for Developing Algorithmic Instruction.

    DTIC Science & Technology

    1978-12-01

    information-processing theories to test their applicability with instruction directed by learning algorithms. A version of a logical, or familiar, and a... intent of our research was to borrow from information-processing theory factors which are known to affect learning in a predictable manner and to apply... learning studies where processing theories are tested by minute performance or latency differences. It is not surprising that differences are seldom found

  17. Passive microwave algorithm development and evaluation

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1995-01-01

    The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.

  18. Developing Scoring Algorithms (Earlier Methods)

    Cancer.gov

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  19. Infrared algorithm development for ocean observations

    NASA Technical Reports Server (NTRS)

    Brown, Otis B.

    1995-01-01

    Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared retrievals. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, and participation in MODIS (project) related activities. Efforts in this contract period have focused on radiative transfer modeling, evaluation of atmospheric correction methodologies, involvement in field studies, production and evaluation of new computer networking strategies, and objective analysis approaches.

  20. JPSS Cryosphere Algorithms: Integration and Testing in Algorithm Development Library (ADL)

    NASA Astrophysics Data System (ADS)

    Tsidulko, M.; Mahoney, R. L.; Meade, P.; Baldwin, D.; Tschudi, M. A.; Das, B.; Mikles, V. J.; Chen, W.; Tang, Y.; Sprietzer, K.; Zhao, Y.; Wolf, W.; Key, J.

    2014-12-01

    JPSS is a next-generation satellite system planned for launch in 2017. The satellites will carry a suite of sensors that are already on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The NOAA/NESDIS/STAR Algorithm Integration Team (AIT) works within the Algorithm Development Library (ADL) framework, which mimics the operational JPSS Interface Data Processing Segment (IDPS). The AIT contributes to the development, integration, and testing of the scientific algorithms employed in the IDPS. This presentation discusses cryosphere-related activities performed in the ADL. The addition of a new ancillary data set, the NOAA Global Multisensor Automated Snow/Ice data (GMASI), together with the corresponding ADL code modifications, is described. The preliminary GMASI impact on the gridded Snow/Ice product is estimated. Several modifications to the Ice Age algorithm, which mis-classifies ice type for certain areas and time periods, are tested in the ADL. Sensitivity runs for daytime, nighttime, and the terminator zone are performed and presented. Comparisons between the original and modified versions of the Ice Age algorithm are also presented.

  1. CDRD and PNPR satellite passive microwave precipitation retrieval algorithms: EuroTRMM/EURAINSAT origins and H-SAF operations

    NASA Astrophysics Data System (ADS)

    Mugnai, A.; Smith, E. A.; Tripoli, G. J.; Bizzarri, B.; Casella, D.; Dietrich, S.; Di Paola, F.; Panegrossi, G.; Sanò, P.

    2013-04-01

    including a few examples of their performance. This aspect of the development of the two algorithms is placed in the context of what we refer to as the TRMM era, which is the era denoting the active and ongoing period of the Tropical Rainfall Measuring Mission (TRMM) that helped inspire their original development. In 2015, the ISAC-Rome precipitation algorithms will undergo a transformation beginning with the upcoming Global Precipitation Measurement (GPM) mission, particularly the GPM Core Satellite technologies. A few years afterward, the first pair of imaging and sounding Meteosat Third Generation (MTG) satellites will be launched, providing additional technological advances. Various of the opportunities presented by the GPM Core and MTG satellites for improving the current CDRD and PNPR precipitation retrieval algorithms, as well as extending their product capability, are discussed.

  2. Origin and development of muscle cramps.

    PubMed

    Minetto, Marco Alessandro; Holobar, Aleš; Botter, Alberto; Farina, Dario

    2013-01-01

    Cramps are sudden, involuntary, painful muscle contractions. Their pathophysiology remains poorly understood. One hypothesis is that cramps result from changes in motor neuron excitability (central origin). Another hypothesis is that they result from spontaneous discharges of the motor nerves (peripheral origin). The central origin hypothesis has been supported by recent experimental findings, whose implications for understanding cramp contractions are discussed.

  3. Multi-spectral image enhancement algorithm based on keeping original gray level

    NASA Astrophysics Data System (ADS)

    Wang, Tian; Xu, Linli; Yang, Weiping

    2016-11-01

    The characteristics of a multi-spectral imaging system and of image enhancement algorithms are introduced. Because histogram equalization and some other enhancement methods change the original gray level, a new image enhancement algorithm is proposed that maintains the gray level. Six narrow-band multi-spectral images were chosen for comparison, and the experimental results show that the proposed method performs better on multi-spectral images than histogram equalization and the other algorithms. It also ensures that the histogram information contained in the original features is preserved and guarantees that data class information is used. Moreover, by combining subjective and objective sharpness evaluation, details of the images are enhanced and noise is weakened.

  4. Computed Tomography Image Origin Identification based on Original Sensor Pattern Noise and 3D Image Reconstruction Algorithm Footprints.

    PubMed

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2016-06-08

    In this paper, we focus on the "blind" identification of the Computed Tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image acquisition chain which can be used as a CT-scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT scanner based on an Original Sensor Pattern Noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its 3D image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train an SVM-based classifier so as to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94% and that it achieves better performance than the Sensor Pattern Noise (SPN) based strategy proposed for general-public camera devices.
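
    The overall pipeline can be sketched as follows, with assumed stand-in features (a median-filter noise residual summarized by simple statistics) and synthetic data rather than the authors' actual OSPN features, and scikit-learn's SVC as the classifier.

      import numpy as np
      from scipy.ndimage import median_filter
      from sklearn.svm import SVC

      def noise_features(image, size=3):
          # Noise residual: image minus a denoised (median-filtered) version of itself.
          residual = image.astype(float) - median_filter(image.astype(float), size=size)
          return np.array([residual.mean(), residual.std(),
                           np.abs(residual).mean(), np.percentile(np.abs(residual), 95)])

      rng = np.random.default_rng(0)
      # Synthetic stand-in data: two "scanner models" with different noise levels.
      images = [rng.normal(scale=s, size=(64, 64)) for s in (1.0,) * 30 + (1.5,) * 30]
      labels = [0] * 30 + [1] * 30
      X = np.stack([noise_features(im) for im in images])

      clf = SVC(kernel="rbf", gamma="scale").fit(X[::2], labels[::2])   # train on half
      print("held-out accuracy:", clf.score(X[1::2], labels[1::2]))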

  5. Cosmic Origins (COR) Technology Development Program Overview

    NASA Astrophysics Data System (ADS)

    Werneth, Russell; Pham, B.; Clampin, M.

    2014-01-01

    The Cosmic Origins (COR) Program Office was established in FY11 and resides at the NASA Goddard Space Flight Center (GSFC). The office serves as the implementation arm for the Astrophysics Division at NASA Headquarters for COR Program related matters. We present an overview of the Program’s technology management activities and the Program’s technology development portfolio. We discuss the process for addressing community-provided technology needs and the Technology Management Board (TMB)-vetted prioritization and investment recommendations. This process improves the transparency and relevance of technology investments, provides the community a voice in the process, and leverages the technology investments of external organizations by defining a need and a customer. Goals for the COR Program envisioned by the National Research Council’s (NRC) “New Worlds, New Horizons in Astronomy and Astrophysics” (NWNH) Decadal Survey report include a 4m-class UV/optical telescope that would conduct imaging and spectroscopy as a post-Hubble observatory with significantly improved sensitivity and capability, a near-term investigation of NASA participation in the Japanese Aerospace Exploration Agency/Institute of Space and Astronautical Science (JAXA/ISAS) Space Infrared Telescope for Cosmology and Astrophysics (SPICA) mission, and future Explorers.

  6. Algorithm Development Library for Environmental Satellite Missions

    NASA Astrophysics Data System (ADS)

    Smith, D. C.; Grant, K. D.; Miller, S. W.; Jamilkowski, M. L.

    2012-12-01

    science will need to migrate into the operational system. In addition, as new techniques are found to improve, supplement, or replace existing products, these changes will also require implementation into the operational system. In the past, operationalizing science algorithms and integrating them into active systems often required months of work. In order to significantly shorten the time and effort required for this activity, Raytheon has developed the Algorithm Development Library (ADL). The ADL enables scientists and researchers to develop algorithms on their own platforms, and provide these to Raytheon in a form that can be rapidly integrated directly into the operational baseline. As the JPSS CGS is a multi-mission ground system, algorithms are not restricted to Suomi NPP or JPSS missions. The ADL provides a development environment that any environmental remote sensing mission scientist can use to create algorithms that will plug into a JPSS CGS instantiation. This paper describes the ADL and how scientists and researchers can use it in their own environments.

  7. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS

    DTIC Science & Technology

    2015-09-30

    Development of Improved Algorithms and Multiscale... a wide range of scales through use of accurate numerical methods and high-performance computational algorithms. The tool will be applied to study... dissipation. OBJECTIVES: The primary objective is to enhance the capabilities of the SUNTANS model through development of algorithms to study

  8. Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz

    2009-01-01

    This slide presentation reviews the approach to the development of the Global Precipitation Measurement algorithm. This presentation includes information about the responsibilities for the development of the algorithm, and the calibration. Also included is information about the orbit, and the sun angle. The test of the algorithm code will be done with synthetic data generated from the Precipitation Processing System (PPS).

  9. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
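
    The third-order polynomial gain can be illustrated with a small sketch (coefficients and limits here are assumptions, not the report's tuned values): small inputs pass through nearly unchanged while the largest expected input is mapped onto the motion-system limit.

      import numpy as np

      def cubic_gain(x, x_max, y_max, small_signal_gain=1.0):
          """Odd cubic y = g1*x + g3*x**3 with y(x_max) = y_max and y'(0) = small_signal_gain.
          For the curve to stay monotone (and hence within +/-y_max on [-x_max, x_max]),
          y_max must be at least 2/3 * small_signal_gain * x_max."""
          g1 = small_signal_gain
          g3 = (y_max - g1 * x_max) / x_max**3       # fixes the endpoint at the limit
          return g1 * x + g3 * x**3

      x = np.linspace(-10.0, 10.0, 5)                # notional aircraft specific-force command
      print(cubic_gain(x, x_max=10.0, y_max=8.0))    # +/-10 maps onto +/-8; small inputs barely change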

  10. Assembled sequence contigs by SOAPdenovo and Velvet algorithms from metagenomic short reads of a new bacterial isolate of gut origin

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Assembled sequence contigs by SOAPdenovo and Velvet algorithms from metagenomic short reads of a new bacterial isolate of gut origin. This study included 2 submissions with a total of 9.8 million bp of assembled contigs....

  11. [Academic origin, development and characteristic of Xujiang acupuncture school].

    PubMed

    Xie, Yufeng; Yang, Zongbao; Chen, Yun; Wang, Ling; Wang, Shuhui; Yang, Lixia

    2016-03-01

    The origin time, representative physicians, and medical works of the Xujiang acupuncture school were traced so as to explore its academic origin and development and to summarize its academic characteristics, which should allow a better inheritance of its academic essence and promote the innovation and development of the Xujiang acupuncture school.

  12. A kinetic model-based algorithm to classify NGS short reads by their allele origin.

    PubMed

    Marinoni, Andrea; Rizzo, Ettore; Limongelli, Ivan; Gamba, Paolo; Bellazzi, Riccardo

    2015-02-01

    Genotyping Next Generation Sequencing (NGS) data of a diploid genome aims to assign the zygosity of identified variants through comparison with a reference genome. Current methods typically employ probabilistic models that rely on the pileup of bases at each locus and on a priori knowledge. We present a new algorithm, called Kimimila (KInetic Modeling based on InforMation theory to Infer Labels of Alleles), which is able to assign reads to alleles by using a distance geometry approach and to infer the variant genotypes accurately, without any kind of assumption. The performance of the model has been assessed on simulated and real data of the 1000 Genomes Project and the results have been compared with several commonly used genotyping methods, i.e., GATK, Samtools, VarScan, FreeBayes and Atlas2. Although our algorithm does not make use of a priori knowledge, the percentage of correctly genotyped variants is comparable to that of these algorithms. Furthermore, our method allows the user to split the reads pool depending on the inferred allele origin.

  13. Event-by-event PET image reconstruction using list-mode origin ensembles algorithm

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy

    2016-03-01

    There is a great demand for real-time or event-by-event (EBE) image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it should be used in the image reconstruction software. This would greatly speed up image reconstruction, since most of the data would be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that reconstruction of the image does not start until all the data for the current image frame have been acquired. Implementing EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward, as multiple (computationally expensive) updates to the image estimate are required. In this work an alternative Origin Ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode and investigated as a viable alternative for real-time image reconstruction. In the OE algorithm, all acquired events are seen as points located somewhere along their corresponding lines of response (LORs), together forming a point cloud. Iteratively, through a multitude of quasi-random shifts following the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with a degree of accuracy similar to MLEM. New data can be naturally added into the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, demonstrating the feasibility of the proposed approach.

  14. Origin of texture development in orthorhombic uranium

    DOE PAGES

    Zecevic, Miroslav; Knezevic, Marko; Beyerlein, Irene Jane; ...

    2016-04-09

    We study texture evolution of alpha-uranium (α-U) during plane strain compression and uniaxial compression to high strains at different temperatures. We combine a multiscale polycrystal constitutive model and detailed analysis of texture data to uncover the slip and twinning modes responsible for the formation of individual texture components. The analysis indicates that during plane strain compression, floor slip (001)[100] results in the formation of two pronounced {001}{001} texture peaks tilted 10–15° away from the normal toward the rolling direction. During both high-temperature (573 K) through-thickness compression and plane strain compression, the active slip modes are floor slip (001)[100] and chimney slip 1/2{110} <11¯0> with slightly different ratios. {130} <31¯0> deformation twinning is profuse during rolling and in-plane compression and decreases with increasing temperature, but is not as active for through-thickness compression. Lastly, we comment on some similarities between rolling textures of α-U, which has a c/a ratio of 1.734, and those that develop in hexagonal close packed metals with similarly high c/a ratios like Zn (1.856) and Cd (1.885) and are dominated by basal slip.

  15. Origin of texture development in orthorhombic uranium

    SciTech Connect

    Zecevic, Miroslav; Knezevic, Marko; Beyerlein, Irene Jane; McCabe, Rodney James

    2016-04-09

    We study texture evolution of alpha-uranium (α-U) during plane strain compression and uniaxial compression to high strains at different temperatures. We combine a multiscale polycrystal constitutive model and detailed analysis of texture data to uncover the slip and twinning modes responsible for the formation of individual texture components. The analysis indicates that during plane strain compression, floor slip (001)[100] results in the formation of two pronounced {001}{001} texture peaks tilted 10–15° away from the normal toward the rolling direction. During both high-temperature (573 K) through-thickness compression and plane strain compression, the active slip modes are floor slip (001)[100] and chimney slip 1/2{110} <11¯0> with slightly different ratios. {130} <31¯0> deformation twinning is profuse during rolling and in-plane compression and decreases with increasing temperature, but is not as active for through-thickness compression. Lastly, we comment on some similarities between rolling textures of α-U, which has a c/a ratio of 1.734, and those that develop in hexagonal close packed metals with similarly high c/a ratios like Zn (1.856) and Cd (1.885) and are dominated by basal slip.

  16. A Two-Stage Algorithm for Origin-Destination Matrices Estimation Considering Dynamic Dispersion Parameter for Route Choice

    PubMed Central

    Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henrickson, Kristian C.; Xu, Maozeng; Wang, Yinhai

    2016-01-01

    This paper proposes a two-stage algorithm to simultaneously estimate origin-destination (OD) matrix, link choice proportion, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until the convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers’ route choice behavior. PMID:26761209

  17. A Two-Stage Algorithm for Origin-Destination Matrices Estimation Considering Dynamic Dispersion Parameter for Route Choice.

    PubMed

    Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henrickson, Kristian C; Xu, Maozeng; Wang, Yinhai

    2016-01-01

    This paper proposes a two-stage algorithm to simultaneously estimate origin-destination (OD) matrix, link choice proportion, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until the convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers' route choice behavior.
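
    A heavily simplified sketch of the two-stage idea is given below: a least-squares demand estimate given the current link-choice proportions, alternated with a logit route split and a grid search over the dispersion parameter. The toy network, fixed route costs, and identity GLS weights are assumptions for illustration only; a real SUE assignment would also include congestion feedback.

      import numpy as np

      # Toy network: 4 links, 2 OD pairs, 2 fixed-cost routes per OD pair.
      routes = {0: [([0], 1.0), ([1], 1.5)],          # OD 0: route -> (links used, cost)
                1: [([0, 2], 1.2), ([3], 1.0)]}       # OD 1

      def choice_matrix(theta):
          """Link-choice proportion matrix P (links x OD pairs) from a logit route split."""
          P = np.zeros((4, 2))
          for od, rts in routes.items():
              w = np.exp(-theta * np.array([c for _, c in rts]))
              probs = w / w.sum()
              for (links, _), p in zip(rts, probs):
                  for l in links:
                      P[l, od] += p
          return P

      # Synthetic "observed" link counts from a ground-truth demand and dispersion.
      q_true, theta_true = np.array([100.0, 200.0]), 0.5
      counts = choice_matrix(theta_true) @ q_true

      theta = 1.0                                     # initial guess
      for _ in range(20):
          P = choice_matrix(theta)
          q, *_ = np.linalg.lstsq(P, counts, rcond=None)   # stage 1: GLS with identity weights
          # Stage 2: pick the dispersion parameter that best reproduces the link counts.
          grid = np.linspace(0.1, 2.0, 39)
          theta = grid[np.argmin([np.linalg.norm(choice_matrix(t) @ q - counts) for t in grid])]

      print("estimated demand:", q, "estimated theta:", theta)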

  18. Modified multiscale sample entropy computation of laser speckle contrast images and comparison with the original multiscale entropy algorithm

    NASA Astrophysics Data System (ADS)

    Humeau-Heurtier, Anne; Mahé, Guillaume; Abraham, Pierre

    2015-12-01

    Laser speckle contrast imaging (LSCI) enables a noninvasive monitoring of microvascular perfusion. Some studies have proposed to extract information from LSCI data through their multiscale entropy (MSE). However, for reaching a large range of scales, the original MSE algorithm may require long recordings for reliability. Recently, a novel approach to compute MSE with shorter data sets has been proposed: the short-time MSE (sMSE). Our goal is to apply, for the first time, the sMSE algorithm to LSCI data and to compare results with those given by the original MSE. Moreover, we apply the original MSE algorithm on data of different lengths and compare results with those given by longer recordings. For this purpose, synthetic signals and 192 LSCI regions of interest (ROIs) of different sizes are processed. Our results show that the sMSE algorithm is valid for computing the MSE of LSCI data. Moreover, with time series shorter than those initially proposed, the sMSE and original MSE algorithms give results with no statistical difference from those of the original MSE algorithm with longer data sets. The minimal acceptable length depends on the ROI size. Comparisons of MSE from healthy and pathological subjects can be performed with shorter data sets than those proposed until now.
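
    For reference, the original multiscale entropy recipe (coarse-grain the series, then compute sample entropy at each scale) can be sketched as below with commonly used parameters (m = 2, r = 0.15 times the standard deviation of the original series); these settings and the synthetic signal are assumptions, and this is not the authors' short-time (sMSE) variant.

      import numpy as np

      def sample_entropy(x, m, r_abs):
          """Sample entropy with embedding dimension m and absolute tolerance r_abs."""
          x = np.asarray(x, dtype=float)
          N = len(x)
          def count_matches(mm):
              templates = np.array([x[i:i + mm] for i in range(N - m)])
              d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
              return (np.sum(d <= r_abs) - len(templates)) / 2   # exclude self-matches
          B, A = count_matches(m), count_matches(m + 1)
          return -np.log(A / B) if A > 0 and B > 0 else np.inf

      def multiscale_entropy(x, max_scale=5, m=2, r=0.15):
          x = np.asarray(x, dtype=float)
          r_abs = r * x.std()                        # tolerance fixed on the original series
          out = []
          for tau in range(1, max_scale + 1):
              n = (len(x) // tau) * tau
              coarse = x[:n].reshape(-1, tau).mean(axis=1)   # non-overlapping averages
              out.append(sample_entropy(coarse, m, r_abs))
          return out

      rng = np.random.default_rng(0)
      print(multiscale_entropy(rng.normal(size=1200), max_scale=4))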

  19. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  20. SSME structural computer program development: BOPACE theoretical manual, addendum. [algorithms

    NASA Technical Reports Server (NTRS)

    1975-01-01

    An algorithm developed and incorporated into BOPACE for improving the convergence and accuracy of the inelastic stress-strain calculations is discussed. The implementation of separation of strains in the residual-force iterative procedure is defined. The elastic-plastic quantities used in the strain-space algorithm are defined and compared with previous quantities.

  1. Development and application of multispectral algorithms for defect apple inspection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This research developed and evaluated a multispectral algorithm derived from a hyperspectral line-scan imaging system, equipped with an electron-multiplying charge-coupled-device camera and an imaging spectrograph, for the detection of defective Red Delicious apples. The algorithm utilized the fluo...

  2. Using the erroneous data clustering to improve the feature extraction weights of original image algorithms

    NASA Astrophysics Data System (ADS)

    Wu, Tin-Yu; Chang, Tse; Chu, Teng-Hao

    2017-02-01

    Many data mining systems adopt the form of an Artificial Neural Network (ANN) to solve problems, and training an ANN raises many issues, such as the number of labeled samples, the time and performance of training, the number of hidden layers, and the transfer function. If the compared results are not as expected, it cannot be known clearly which dimension causes the deviation. The main reason is that an ANN trains toward the compared results by modifying weights; it is not a kind of training that improves the original feature extraction algorithm for the image, but tends to obtain the correct value by weighting the result. To address these problems, this paper puts forward a method to assist the image data analysis of an ANN. Normally, a parameter is set as the value used to extract a feature vector when processing an image, and we treat this as a weight. The experiment uses the values extracted from feature points of Speeded Up Robust Features (SURF) images as the basis for training; SURF itself can extract different feature points according to the extracted values. We perform an initial semi-supervised clustering on these values and use Modified K-Nearest Neighbors (MFKNN) for training and classification. The matching mode for unknown images is not a one-to-one complete comparison but compares only group centroids, mainly to save effort and speed up matching, and the retrieved results are then observed and analyzed. The method mainly uses the nature of image feature points for clustering and classification, assigns values to groups with a high error rate to produce new feature points, and feeds them into the input layer of the ANN for training; finally, a comparative analysis is made with a Back-Propagation Neural Network (BPN) of a Genetic Algorithm-Artificial Neural Network

  3. Translanguaging: Origins and Development from School to Street and beyond

    ERIC Educational Resources Information Center

    Lewis, Gwyn; Jones, Bryn; Baker, Colin

    2012-01-01

    The article traces the Welsh origins of "translanguaging" from the 1980s to the recent global use, analysing the development and extension of the term. It suggests that the growing popularity of the term relates to a change in the way bilingualism and multilingualism have ideologically developed not only among academics but also amid…

  4. Novel non-invasive algorithm to identify the origins of re-entry and ectopic foci in the atria from 64-lead ECGs: A computational study

    PubMed Central

    Langley, Philip

    2017-01-01

    Atrial tachy-arrhythmias, such as atrial fibrillation (AF), are characterised by irregular electrical activity in the atria, generally associated with erratic excitation underlain by re-entrant scroll waves, fibrillatory conduction of multiple wavelets or rapid focal activity. Epidemiological studies have shown an increase in AF prevalence in the developed world associated with an ageing society, highlighting the need for effective treatment options. Catheter ablation therapy, commonly used in the treatment of AF, requires spatial information on atrial electrical excitation. The standard 12-lead electrocardiogram (ECG) provides a method for non-invasive identification of the presence of arrhythmia, due to irregularity in the ECG signal associated with atrial activation compared to sinus rhythm, but has limitations in providing specific spatial information. There is therefore a pressing need to develop novel methods to identify and locate the origin of arrhythmic excitation. Invasive methods provide direct information on atrial activity, but may induce clinical complications. Non-invasive methods avoid such complications, but their development presents a greater challenge due to the non-direct nature of monitoring. Algorithms based on the ECG signals in multiple leads (e.g. a 64-lead vest) may provide a viable approach. In this study, we used a biophysically detailed model of the human atria and torso to investigate the correlation between the morphology of the ECG signals from a 64-lead vest and the location of the origin of rapid atrial excitation arising from rapid focal activity and/or re-entrant scroll waves. A focus-location algorithm was then constructed from this correlation. The algorithm had success rates of 93% and 76% for correctly identifying the origin of focal and re-entrant excitation with a spatial resolution of 40 mm, respectively. The general approach allows its application to any multi-lead ECG system. This represents a significant extension to

  5. Origin, Development, and Homeostasis of Tissue-resident Macrophages

    PubMed Central

    Haldar, Malay; Murphy, Kenneth M.

    2014-01-01

    Summary Macrophages are versatile cells of the hematopoietic system that display remarkable functional diversity encompassing innate immune responses, tissue development, and tissue homeostasis. Macrophages are present in almost all tissues of the body and display distinct location-specific phenotypes and gene expression profiles. Recent studies also demonstrate distinct origins of tissue-resident macrophages. This emerging picture of ontological, functional, and phenotypic heterogeneity within tissue macrophages has altered our understanding of these cells, which play important roles in many human diseases. In this review, we discuss the different origins of tissue macrophages, the transcription factors regulating their development, and the mechanisms underlying their homeostasis at steady state. PMID:25319325

  6. Developer Tools for Evaluating Multi-Objective Algorithms

    NASA Technical Reports Server (NTRS)

    Giuliano, Mark E.; Johnston, Mark D.

    2011-01-01

    Multi-objective algorithms for scheduling offer many advantages over the more conventional single objective approach. By keeping user objectives separate instead of combined, more information is available to the end user to make trade-offs between competing objectives. Unlike single objective algorithms, which produce a single solution, multi-objective algorithms produce a set of solutions, called a Pareto surface, where no solution is strictly dominated by another solution for all objectives. From the end-user perspective a Pareto-surface provides a tool for reasoning about trade-offs between competing objectives. From the perspective of a software developer multi-objective algorithms provide an additional challenge. How can you tell if one multi-objective algorithm is better than another? This paper presents formal and visual tools for evaluating multi-objective algorithms and shows how the developer process of selecting an algorithm parallels the end-user process of selecting a solution for execution out of the Pareto-Surface.
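
    A small utility of the kind such developer tooling relies on is extraction of the non-dominated (Pareto) set from a list of candidate solutions; the sketch below assumes every objective is to be minimized and uses hypothetical objective values, not data from the paper.

      from typing import List, Sequence

      def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
          """a dominates b if it is no worse in every objective and better in at least one."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def pareto_front(solutions: List[Sequence[float]]) -> List[Sequence[float]]:
          return [s for s in solutions
                  if not any(dominates(other, s) for other in solutions if other is not s)]

      # Two objectives, e.g. (unscheduled observations, total slew time), both minimized.
      candidates = [(3, 10.0), (2, 12.0), (3, 11.0), (4, 9.0), (2, 15.0)]
      print(pareto_front(candidates))   # [(3, 10.0), (2, 12.0), (4, 9.0)]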

  7. Algorithm development for Maxwell's equations for computational electromagnetism

    NASA Technical Reports Server (NTRS)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.
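
    The report's scheme is implicit (alternating direction implicit) on curvilinear finite volumes; as a simpler, related illustration of a central-difference time-domain solution of Maxwell's curl equations, a one-dimensional explicit Yee-style update in normalized units is sketched below (an assumption for illustration, not the report's algorithm).

      import numpy as np

      n_cells, n_steps = 200, 400
      Ez = np.zeros(n_cells)
      Hy = np.zeros(n_cells - 1)
      courant = 0.5                                   # dt * c / dx, kept below 1 for stability

      for step in range(n_steps):
          Hy += courant * np.diff(Ez)                 # update H from the spatial difference of E
          Ez[1:-1] += courant * np.diff(Hy)           # update E from the spatial difference of H
          Ez[n_cells // 4] += np.exp(-((step - 30) / 10.0) ** 2)   # soft Gaussian source

      print("peak |Ez| after propagation:", np.abs(Ez).max())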

  8. Development of multigrid algorithms for problems from fluid dynamics

    NASA Astrophysics Data System (ADS)

    Becker, K.; Trottenberg, U.

    Multigrid algorithms are developed to demonstrate multigrid technique efficiency for complicated fluid dynamics problems regarding error reduction and discretization accuracy. Subsonic potential 2-D flow around a profile is studied as well as rotation-symmetric flow in a slot between two rotating spheres and the flow in the combustion chamber of Otto engines. The study of the 2-D subsonic potential flow around a profile with the multigrid algorithm is discussed.
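
    The core multigrid ingredients (smoothing, restriction of the residual, a coarse-grid correction, prolongation) can be illustrated with a two-grid cycle for a one-dimensional Poisson problem; the model problem and parameters below are assumptions for illustration, not those of the report.

      import numpy as np

      def jacobi(u, f, h, sweeps, omega=2/3):
          """Weighted-Jacobi sweeps for -u'' = f with homogeneous Dirichlet boundaries."""
          for _ in range(sweeps):
              u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
          return u

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
          return r

      def coarse_solve(f_c, h_c):
          """Direct solve of -e'' = f_c on the coarse grid (zero boundary values)."""
          m = len(f_c) - 2
          A = (np.diag(np.full(m, 2.0)) - np.diag(np.ones(m - 1), 1)
               - np.diag(np.ones(m - 1), -1)) / (h_c * h_c)
          e = np.zeros_like(f_c)
          e[1:-1] = np.linalg.solve(A, f_c[1:-1])
          return e

      def two_grid(u, f, h, pre=3, post=3):
          u = jacobi(u, f, h, pre)                    # pre-smoothing
          r_c = residual(u, f, h)[::2].copy()         # restriction by injection
          e_c = coarse_solve(r_c, 2 * h)              # coarse-grid correction
          e = np.zeros_like(u)
          e[::2] = e_c
          e[1::2] = 0.5 * (e_c[:-1] + e_c[1:])        # linear interpolation to the fine grid
          return jacobi(u + e, f, h, post)            # post-smoothing

      n = 129
      x = np.linspace(0.0, 1.0, n)
      h = x[1] - x[0]
      f = np.sin(np.pi * x)                           # -u'' = f, u(0) = u(1) = 0
      u = np.zeros(n)
      for _ in range(10):
          u = two_grid(u, f, h)
      print("max error vs. exact solution:", np.max(np.abs(u - np.sin(np.pi * x) / np.pi**2)))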

  9. Algorithmic Basics of Search Engine Development

    NASA Astrophysics Data System (ADS)

    Tregubov, A. A.; Kononova, T. S.

    The basics of search engine development are reviewed in this report. A structure for a search engine as part of an electronic library is proposed. Methods of smart search for relevant information, based on multi-agent systems and document processing methods, are reviewed. An analysis of the major problems of processing, indexing, and relevance evaluation is carried out. Statistical indexing, algebraic relevance evaluation, and linguistic automaton construction for effective document processing and understanding are considered.
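
    A toy illustration of statistical indexing and relevance scoring, using an inverted index with TF-IDF weights over a few made-up documents (not the engine described in the report):

      import math
      from collections import Counter, defaultdict

      docs = {
          "d1": "multigrid algorithms for fluid dynamics problems",
          "d2": "search engine indexing and relevance evaluation",
          "d3": "statistical indexing of documents in an electronic library",
      }

      index = defaultdict(dict)                      # term -> {doc_id: term frequency}
      for doc_id, text in docs.items():
          for term, tf in Counter(text.lower().split()).items():
              index[term][doc_id] = tf

      def search(query, k=3):
          scores = Counter()
          for term in query.lower().split():
              postings = index.get(term, {})
              if not postings:
                  continue
              idf = math.log(len(docs) / len(postings))   # rarer terms weigh more
              for doc_id, tf in postings.items():
                  scores[doc_id] += tf * idf
          return scores.most_common(k)

      print(search("indexing relevance"))   # d2 ranks first, then d3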

  10. Infrared Algorithm Development for Ocean Observations with EOS/MODIS

    NASA Technical Reports Server (NTRS)

    Brown, Otis B.

    1997-01-01

    Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared measurements. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, development of experimental instrumentation, and participation in MODIS (project) related activities. Activities in this contract period have focused on radiative transfer modeling, evaluation of atmospheric correction methodologies, field campaigns, analysis of field data, and participation in MODIS meetings.

  11. System development of the Screwworm Eradication Data System (SEDS) algorithm

    NASA Technical Reports Server (NTRS)

    Arp, G.; Forsberg, F.; Giddings, L.; Phinney, D.

    1976-01-01

    The use of remotely sensed data in the eradication of the screwworm and in the study of the role of weather in the activity and development of the screwworm fly is reported. As a result, the Screwworm Eradication Data System (SEDS) algorithm was developed.

  12. Development and Testing of Data Mining Algorithms for Earth Observation

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    The new algorithms developed under this project included a principled procedure for classification of objects, events or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables (called the Markov Blanket) sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD style algorithms to the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: Short time series of geographically proximate climate variables predicting agricultural effects in California, and longer duration climate measurements of temperature teleconnections.

  13. Tactical weapons algorithm development for unitary and fused systems

    NASA Astrophysics Data System (ADS)

    Talele, Sunjay E.; Watson, John S.; Williams, Bradford D.; Amphay, Sengvieng A.

    1996-06-01

    A much needed capability in today's tactical Air Force is weapon systems capable of precision guidance in all weather conditions against targets in high clutter backgrounds. To achieve this capability, the Armament Directorate of Wright Laboratory, WL/MN, has been exploring various seeker technologies, including multi-sensor fusion, that may yield cost effective systems capable of operating under these conditions. A critical component of these seeker systems is their autonomous acquisition and tracking algorithms. It is these algorithms which will enable the autonomous operation of the weapons systems in the battlefield. In the past, a majority of the tactical weapon algorithms were developed in a manner which resulted in codes that were not releasable to the community, either because they were considered company proprietary or competition sensitive. As a result, the knowledge gained from these efforts was not transitioning through the technical community, thereby inhibiting the evolution of their development. In order to overcome this limitation, WL/MN has embarked upon a program to develop non-proprietary multi-sensor acquisition and tracking algorithms. To facilitate this development, a testbed has been constructed consisting of the Irma signature prediction model, data analysis workstations, and the modular algorithm concept evaluation tool (MACET) algorithm. All three of these components have been enhanced to accommodate both multi-spectral sensor fusion systems and the three-dimensional signal processing techniques characteristic of ladar. MACET is a graphical interface driven system for rapid prototyping and evaluation of both unitary and fused sensor algorithms. This paper describes the MACET system and specifically elaborates on the three-dimensional capabilities recently incorporated into it.

  14. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).

  15. On the origins of novelty in development and evolution.

    PubMed

    Moczek, Armin P

    2008-05-01

    The origin of novel traits is what draws many to evolutionary biology, yet our understanding of the mechanisms that underlie the genesis of novelty remains limited. Here I review definitions of novelty including its relationship to homology. I then discuss how ontogenetic perspectives may allow us to move beyond current roadblocks in our understanding of the mechanics of innovation. Specifically, I explore the roles of canalization, plasticity and threshold responses during development in generating a reservoir of cryptic genetic variation free to drift and accumulate in natural populations. Environmental or genetic perturbations that exceed the buffering capacity of development can then release this variation, and, through evolution by genetic accommodation, result in rapid diversification, recurrence of lost phenotypes as well as the origins of novel features. I conclude that, in our quest to understand the nature of innovation, the nature of development deserves to take center stage.

  16. Infrared algorithm development for ocean observations with EOS/MODIS

    NASA Technical Reports Server (NTRS)

    Brown, Otis B.

    1994-01-01

    Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared retrievals. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, and participation in MODIS (project) related activities. Efforts in this contract period have focused on radiative transfer modeling and evaluation of atmospheric path radiance effects on SST estimation, exploration of involvement in ongoing field studies, evaluation of new computer networking strategies, and objective analysis approaches.

  17. On the development of protein pKa calculation algorithms

    SciTech Connect

    Carstensen, Tommy; Farrell, Damien; Huang, Yong; Baker, Nathan A.; Nielsen, Jens E.

    2011-12-01

    Protein pKa calculation algorithms are typically developed to reproduce experimental pKa values and provide us with a better understanding of the fundamental importance of electrostatics for protein structure and function. However, the approximations and adjustable parameters employed in almost all pKa calculation methods mean that there is a risk that pKa calculation algorithms are 'over-fitted' to the available datasets, and that these methods therefore do not model protein physics realistically. We employ simulations of the protein pKa calculation algorithm development process to show that careful optimization procedures and non-biased experimental datasets must be applied to ensure a realistic description of the underlying physical terms. We furthermore investigate the effect of experimental noise and find a significant effect on the pKa calculation algorithm optimization landscape. Finally, we comment on strategies for ensuring the physical realism of protein pKa calculation algorithms and we assess the overall state of the field with a view to predicting future directions of development.

  18. [Origin and development of umbilical therapy in traditional Chinese medicine].

    PubMed

    Zhang, Xue-Wei; Jia, Hong-Ling

    2014-06-01

    The origin and development of umbilical therapy in traditional Chinese medicine are explored through the related historical literature. The Shang period is regarded as the initial period of umbilical therapy, while the periods from the Han Dynasty, Jin Dynasty and Southern-Northern Dynasties to the Sui and Tang Dynasties can be taken as the stage of primary development. The time from the Song, Jin and Yuan Dynasties to the Ming and Qing Dynasties is regarded as the mature stage. The manipulation, application principles, indications and contraindications of umbilical therapy are also explained, and a brief overview of the modern development of umbilical therapy is given.

  19. Development and Application of a Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.
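
    The abstract does not detail the sensor data qualification algorithms used in the demonstration. As a hedged illustration of the kind of check such a system might apply to a sampled signal, the Python sketch below implements generic range and stuck-value tests; the class name, thresholds, and window length are hypothetical and are not taken from the PHALT System.

```python
# Minimal sketch of generic sensor data qualification checks (range test and
# stuck-value test). Thresholds and window lengths are illustrative only and
# are not the PHALT System's actual parameters.
from collections import deque

class SensorQualifier:
    def __init__(self, low, high, stuck_window=20, stuck_tol=1e-6):
        self.low, self.high = low, high
        self.stuck_tol = stuck_tol
        self.history = deque(maxlen=stuck_window)

    def qualify(self, value):
        """Return (is_valid, reason) for a single sample."""
        self.history.append(value)
        if not (self.low <= value <= self.high):
            return False, "out of range"
        if (len(self.history) == self.history.maxlen and
                max(self.history) - min(self.history) < self.stuck_tol):
            return False, "stuck value"
        return True, "ok"

# Example: a sensor frozen at 4.999 V is flagged once the window fills
qualifier = SensorQualifier(low=0.0, high=5.0)
for sample in [4.999] * 25:
    status, reason = qualifier.qualify(sample)
print(status, reason)   # False, "stuck value"
```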

  20. Using Hypertext To Develop an Algorithmic Approach to Teaching Statistics.

    ERIC Educational Resources Information Center

    Halavin, James; Sommer, Charles

    Hypertext and its more advanced form Hypermedia represent a powerful authoring tool with great potential for allowing statistics teachers to develop documents to assist students in an algorithmic fashion. An introduction to the use of Hypertext is presented, with an example of its use. Hypertext is an approach to information management in which…

  1. Development, Comparisons and Evaluation of Aerosol Retrieval Algorithms

    NASA Astrophysics Data System (ADS)

    de Leeuw, G.; Holzer-Popp, T.; Aerosol-cci Team

    2011-12-01

    The Climate Change Initiative (cci) of the European Space Agency (ESA) has brought together a team of European aerosol retrieval groups working on the development and improvement of aerosol retrieval algorithms. The goal of this cooperation is the development of methods to provide the best possible information on climate and climate change based on satellite observations. To achieve this, algorithms are characterized in detail as regards the retrieval approaches, the aerosol models used in each algorithm, cloud detection and surface treatment. A round-robin intercomparison of results from the various participating algorithms serves to identify the best modules or combinations of modules for each sensor. Annual global datasets including their uncertainties will then be produced and validated. The project builds on 9 existing algorithms to produce spectral aerosol optical depth (AOD and Ångström exponent) as well as other aerosol information; two instruments are included to provide the absorbing aerosol index (AAI) and stratospheric aerosol information. The algorithms included are:
    - 3 for ATSR (ORAC developed by RAL / Oxford University, ADV developed by FMI, and the SU algorithm developed by Swansea University)
    - 2 for MERIS (BAER by Bremen University and the ESA standard handled by HYGEOS)
    - 1 for POLDER over ocean (LOA)
    - 1 for synergetic retrieval (SYNAER by DLR)
    - 1 for OMI retrieval of the absorbing aerosol index with averaging kernel information (KNMI)
    - 1 for GOMOS stratospheric extinction profile retrieval (BIRA)
    The first seven algorithms aim at the retrieval of the AOD. However, each of the algorithms differs in its approach, even for algorithms working with the same instrument such as ATSR or MERIS. To analyse the strengths and weaknesses of each algorithm, several tests are made. The starting point for comparison and measurement of improvements is a retrieval run for 1 month, September 2008. The data from the same month are subsequently used for

  2. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    NASA Astrophysics Data System (ADS)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    The Algorithm Development Library (ADL) is a framework that mimics the operational IDPS (Interface Data Processing Segment) system currently used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists that include fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results by qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  3. A new algorithm to diagnose atrial ectopic origin from multi lead ECG systems--insights from 3D virtual human atria and torso.

    PubMed

    Alday, Erick A Perez; Colman, Michael A; Langley, Philip; Butters, Timothy D; Higham, Jonathan; Workman, Antony J; Hancox, Jules C; Zhang, Henggui

    2015-01-01

    Rapid atrial arrhythmias such as atrial fibrillation (AF) predispose to ventricular arrhythmias, sudden cardiac death and stroke. Identifying the origin of atrial ectopic activity from the electrocardiogram (ECG) can help to diagnose the early onset of AF in a cost-effective manner. The complex and rapid atrial electrical activity during AF makes it difficult to obtain detailed information on atrial activation using the standard 12-lead ECG alone. Compared to conventional 12-lead ECG, more detailed ECG lead configurations may provide further information about spatio-temporal dynamics of the body surface potential (BSP) during atrial excitation. We apply a recently developed 3D human atrial model to simulate electrical activity during normal sinus rhythm and ectopic pacing. The atrial model is placed into a newly developed torso model which considers the presence of the lungs, liver and spinal cord. A boundary element method is used to compute the BSP resulting from atrial excitation. Elements of the torso mesh corresponding to the locations of the placement of the electrodes in the standard 12-lead and a more detailed 64-lead ECG configuration were selected. The ectopic focal activity was simulated at various origins across all the different regions of the atria. Simulated BSP maps during normal atrial excitation (i.e. sinoatrial node excitation) were compared to those observed experimentally (obtained from the 64-lead ECG system), showing a strong agreement between the evolution in time of the simulated and experimental data in the P-wave morphology of the ECG and dipole evolution. An algorithm to obtain the location of the stimulus from a 64-lead ECG system was developed. The algorithm presented had a success rate of 93%, meaning that it correctly identified the origin of atrial focus in 75/80 simulations, and involved a general approach relevant to any multi-lead ECG system. This represents a significant improvement over previously developed algorithms.
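
    The abstract does not give the mathematical form of the localization rule. One simple approach consistent with the description, matching an observed body surface potential pattern against a library of simulated maps for known ectopic foci, is nearest-template matching by correlation; the sketch below is an assumption-laden illustration of that idea, not the authors' algorithm, and all arrays and labels are synthetic.

```python
import numpy as np

def locate_focus(observed_bsp, template_library):
    """Return the ectopic-focus label whose simulated body-surface-potential
    map has the highest correlation with the observed multi-lead map.

    observed_bsp     : 1-D array, one value per lead (e.g. 64 leads)
    template_library : dict mapping focus label -> simulated 1-D map
    """
    best_label, best_r = None, -np.inf
    for label, template in template_library.items():
        r = np.corrcoef(observed_bsp, template)[0, 1]
        if r > best_r:
            best_label, best_r = label, r
    return best_label, best_r

# Toy usage with random 64-lead maps (purely illustrative)
rng = np.random.default_rng(0)
library = {f"site_{i}": rng.normal(size=64) for i in range(10)}
observed = library["site_3"] + 0.1 * rng.normal(size=64)
print(locate_focus(observed, library))   # expected: ('site_3', r close to 1)
```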

  4. A New Algorithm to Diagnose Atrial Ectopic Origin from Multi Lead ECG Systems - Insights from 3D Virtual Human Atria and Torso

    PubMed Central

    Alday, Erick A. Perez; Colman, Michael A.; Langley, Philip; Butters, Timothy D.; Higham, Jonathan; Workman, Antony J.; Hancox, Jules C.; Zhang, Henggui

    2015-01-01

    Rapid atrial arrhythmias such as atrial fibrillation (AF) predispose to ventricular arrhythmias, sudden cardiac death and stroke. Identifying the origin of atrial ectopic activity from the electrocardiogram (ECG) can help to diagnose the early onset of AF in a cost-effective manner. The complex and rapid atrial electrical activity during AF makes it difficult to obtain detailed information on atrial activation using the standard 12-lead ECG alone. Compared to conventional 12-lead ECG, more detailed ECG lead configurations may provide further information about spatio-temporal dynamics of the body surface potential (BSP) during atrial excitation. We apply a recently developed 3D human atrial model to simulate electrical activity during normal sinus rhythm and ectopic pacing. The atrial model is placed into a newly developed torso model which considers the presence of the lungs, liver and spinal cord. A boundary element method is used to compute the BSP resulting from atrial excitation. Elements of the torso mesh corresponding to the locations of the placement of the electrodes in the standard 12-lead and a more detailed 64-lead ECG configuration were selected. The ectopic focal activity was simulated at various origins across all the different regions of the atria. Simulated BSP maps during normal atrial excitation (i.e. sinoatrial node excitation) were compared to those observed experimentally (obtained from the 64-lead ECG system), showing a strong agreement between the evolution in time of the simulated and experimental data in the P-wave morphology of the ECG and dipole evolution. An algorithm to obtain the location of the stimulus from a 64-lead ECG system was developed. The algorithm presented had a success rate of 93%, meaning that it correctly identified the origin of atrial focus in 75/80 simulations, and involved a general approach relevant to any multi-lead ECG system. This represents a significant improvement over previously developed algorithms. PMID

  5. Mechanical origins of rightward torsion in early chick brain development

    NASA Astrophysics Data System (ADS)

    Chen, Zi; Guo, Qiaohang; Dai, Eric; Taber, Larry

    2015-03-01

    During early development, the neural tube of the chick embryo undergoes a combination of progressive ventral bending and rightward torsion. This torsional deformation is one of the major organ-level left-right asymmetry events in development. Previous studies suggested that bending is mainly due to differential growth; however, the mechanism for torsion remains poorly understood. Since the heart almost always loops rightward, in the same direction that the brain twists, researchers have speculated that heart looping affects the direction of brain torsion. However, direct evidence is lacking, nor is the mechanical origin of such torsion understood. In our study, experimental perturbations show that the bending and torsional deformations in the brain are coupled and that the vitelline membrane applies an external load necessary for torsion to occur. Moreover, the asymmetry of the looping heart gives rise to the chirality of the twisted brain. A computational model and a 3D printed physical model are employed to help interpret these findings. Our work clarifies the mechanical origins of brain torsion and the associated left-right asymmetry, and further reveals that the asymmetric development in one organ can induce the asymmetry of another developing organ through mechanics, reminiscent of D'Arcy Thompson's view of biological form as a "diagram of forces". Z.C. is supported by the Society in Science - Branco Weiss fellowship, administered by ETH Zurich. L.A.T. acknowledges support from NIH Grants R01 GM075200 and R01 NS070918.

  6. Datasets for radiation network algorithm development and testing

    SciTech Connect

    Rao, Nageswara S; Sen, Satyabrata; Berry, M. L..; Wu, Qishi; Grieme, M.; Brooks, Richard R; Cordone, G.

    2016-01-01

    The Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) program supported the development of networks of commercial-off-the-shelf (COTS) radiation counters for detecting, localizing, and identifying low-level radiation sources. Under this program, a series of indoor and outdoor tests was conducted with multiple source strengths and types, different background profiles, and various types of source and detector movements. Following the tests, network algorithms were replayed in various reconstructed scenarios using sub-networks. These measurements and algorithm traces together provide a rich collection of highly valuable datasets for testing the current and next generation of radiation network algorithms, including the ones (to be) developed by broader R&D communities such as distributed detection, information fusion, and sensor networks. From this multi-terabyte IRSS database, we distilled out and packaged the first batch of canonical datasets for public release. They include measurements from ten indoor and two outdoor tests which represent increasingly challenging baseline scenarios for robustly testing radiation network algorithms.

  7. Developing and Implementing the Data Mining Algorithms in RAVEN

    SciTech Connect

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian

    2015-09-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data, and post-processing and analyzing such data might, in some cases, take longer than the initial software runtime. Data mining algorithms and methods help in recognizing and understanding patterns in the data, and thus in discovering knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code models the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand the data, i.e., to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and on the application of these algorithms to different databases.
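
    RAVEN exposes its data mining capabilities through its own post-processor interfaces, which are not reproduced here. As a loose sketch of the underlying idea of wrapping a clustering algorithm behind a uniform entry point for post-processing sampled code runs, the example below delegates to scikit-learn's KMeans; the class and method names are hypothetical and are not RAVEN's actual API.

```python
# Hypothetical illustration of a uniform "run" interface around a clustering
# backend; this is NOT the actual RAVEN API, only a sketch of exposing data
# mining algorithms behind a common post-processing entry point.
import numpy as np
from sklearn.cluster import KMeans

class ClusteringPostProcessor:
    def __init__(self, n_clusters=3, random_state=0):
        self.model = KMeans(n_clusters=n_clusters, n_init=10,
                            random_state=random_state)

    def run(self, samples):
        """samples: (n_runs, n_outputs) array of sampled code outputs.
        Returns one cluster label per sampled run."""
        return self.model.fit_predict(samples)

# Example: group 200 sampled runs by two output figures of merit
rng = np.random.default_rng(1)
outputs = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
labels = ClusteringPostProcessor(n_clusters=2).run(outputs)
print(np.bincount(labels))   # roughly 100 runs per cluster
```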

  8. The integumentary skeleton of tetrapods: origin, evolution, and development

    PubMed Central

    Vickaryous, Matthew K; Sire, Jean-Yves

    2009-01-01

    Although often overlooked, the integument of many tetrapods is reinforced by a morphologically and structurally diverse assemblage of skeletal elements. These elements are widely understood to be derivatives of the once all-encompassing dermal skeleton of stem-gnathostomes but most details of their evolution and development remain confused and uncertain. Herein we re-evaluate the tetrapod integumentary skeleton by integrating comparative developmental and tissue structure data. Three types of tetrapod integumentary elements are recognized: (1) osteoderms, common to representatives of most major taxonomic lineages; (2) dermal scales, unique to gymnophionans; and (3) the lamina calcarea, an enigmatic tissue found only in some anurans. As presently understood, all are derivatives of the ancestral cosmoid scale and all originate from scleroblastic neural crest cells. Osteoderms are plesiomorphic for tetrapods but demonstrate considerable lineage-specific variability in size, shape, and tissue structure and composition. While metaplastic ossification often plays a role in osteoderm development, it is not the exclusive mode of skeletogenesis. All osteoderms share a common origin within the dermis (at or adjacent to the stratum superficiale) and are composed primarily (but not exclusively) of osseous tissue. These data support the notion that all osteoderms are derivatives of a neural crest-derived osteogenic cell population (with possible matrix contributions from the overlying epidermis) and share a deep homology associated with the skeletogenic competence of the dermis. Gymnophionan dermal scales are structurally similar to the elasmoid scales of most teleosts and are not comparable with osteoderms. Whereas details of development are lacking, it is hypothesized that dermal scales are derivatives of an odontogenic neural crest cell population and that skeletogenesis is comparable with the formation of elasmoid scales. Little is known about the lamina calcarea. It is

  9. Development of microwave rainfall retrieval algorithm for climate applications

    NASA Astrophysics Data System (ADS)

    KIM, J. H.; Shin, D. B.

    2014-12-01

    With satellite datasets accumulated over decades, satellite-based data can contribute to sustained climate applications. Level-3 products from microwave sensors for climate applications can be obtained from several algorithms. For example, the Microwave Emission brightness Temperature Histogram (METH) algorithm produces level-3 rainfall directly, whereas the Goddard profiling (GPROF) algorithm first generates instantaneous rainfall and then a temporal and spatial averaging process leads to level-3 products. The rainfall algorithm developed in this study follows a similar approach of averaging instantaneous rainfall. However, the algorithm is designed to produce instantaneous rainfall at an optimal resolution showing reduced non-linearity in the brightness temperature (TB)-rain rate (R) relation. It is found that this resolution tends to effectively utilize emission channels, whose footprints are relatively larger than those of scattering channels. This algorithm is mainly composed of a-priori databases (DBs) and a Bayesian inversion module. The DB contains massive pairs of simulated microwave TBs and rain rates, obtained by WRF (version 3.4) and RTTOV (version 11.1) simulations. To improve the accuracy and efficiency of the retrieval process, a data mining technique is additionally considered. The entire DB is classified into eight types based on the Köppen climate classification criteria using reanalysis data. Among these sub-DBs, the one sub-DB that presents the most similar physical characteristics is selected by considering the thermodynamics of the input data. When the Bayesian inversion is applied to the selected DB, instantaneous rain rates at 6-hour intervals are retrieved. The retrieved monthly mean rainfalls are statistically compared with CMAP and GPCP, respectively.
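
    The abstract describes a Bayesian inversion against an a priori database of simulated brightness temperature and rain rate pairs. A minimal numpy sketch of such a posterior-mean estimate, assuming independent Gaussian observation and modelling errors per channel, is shown below; the two-channel database and the error standard deviation are synthetic stand-ins, not the WRF/RTTOV database used in the study.

```python
import numpy as np

def bayesian_rain_rate(tb_obs, tb_db, rain_db, obs_error_std):
    """Posterior-mean rain rate from a database of simulated
    (brightness temperature, rain rate) pairs, assuming independent
    Gaussian observation/modelling errors per channel.

    tb_obs  : (n_channels,) observed brightness temperatures
    tb_db   : (n_entries, n_channels) simulated brightness temperatures
    rain_db : (n_entries,) simulated rain rates
    """
    resid = (tb_db - tb_obs) / obs_error_std          # normalised residuals
    logw = -0.5 * np.sum(resid**2, axis=1)            # Gaussian log-weights
    w = np.exp(logw - logw.max())                     # avoid underflow
    return np.sum(w * rain_db) / np.sum(w)

# Toy example with a synthetic 2-channel database (illustrative only)
rng = np.random.default_rng(0)
rain_db = rng.uniform(0.0, 20.0, 5000)
tb_db = np.column_stack([260.0 - 2.0 * rain_db, 240.0 + 1.5 * rain_db])
tb_obs = np.array([260.0 - 2.0 * 8.0, 240.0 + 1.5 * 8.0])   # truth: 8 mm/h
print(bayesian_rain_rate(tb_obs, tb_db, rain_db, obs_error_std=2.0))
```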

  10. Oscillation Detection Algorithm Development Summary Report and Test Plan

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang

    2009-10-03

    Small signal stability problems are one of the major threats to grid stability and reliability in California and the western U.S. power grid. An unstable oscillatory mode can cause large-amplitude oscillations and may result in system breakup and large-scale blackouts. There have been several incidents of system-wide oscillations. Of them, the most notable is the August 10, 1996 western system breakup produced as a result of undamped system-wide oscillations. There is a great need for real-time monitoring of small-signal oscillations in the system. In power systems, a small-signal oscillation is the result of poor electromechanical damping. Considerable understanding and literature have been developed on the small-signal stability problem over the past 50+ years. These studies have been mainly based on a linearized system model and eigenvalue analysis of its characteristic matrix. However, its practical feasibility is greatly limited as power system models have been found inadequate in describing real-time operating conditions. Significant efforts have been devoted to monitoring system oscillatory behaviors from real-time measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision time-synchronized data needed for estimating oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to estimate system oscillation modes and their damping. Low damping indicates potential system stability issues. Oscillation alarms can be issued when the power system is lightly damped. A good oscillation alarm tool can provide time for operators to take remedial action and reduce the probability of a system breakup as a result of a light damping condition. Real-time oscillation monitoring requires ModeMeter algorithms to have the capability to work with various kinds of measurements: disturbance data (ringdown signals), noise probing data, and ambient data. Several measurement
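
    ModeMeter-class estimators handle ringdown, probing, and ambient data with considerably more sophistication than can be shown here. As a hedged, single-mode illustration of the basic idea for a ringdown record, the sketch below estimates the dominant oscillation frequency from the FFT peak and the damping ratio from the decay of the analytic-signal envelope; the sampling rate, mode parameters, and fitting choices are illustrative only.

```python
import numpy as np
from scipy.signal import hilbert

def ringdown_mode(signal, fs):
    """Crude single-mode estimate of oscillation frequency (Hz) and damping
    ratio from a ringdown record; not the ModeMeter algorithm itself."""
    x = signal - np.mean(signal)
    # Frequency: location of the FFT magnitude peak (DC bin excluded)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    f0 = freqs[np.argmax(spectrum[1:]) + 1]
    # Damping: linear fit to the log of the analytic-signal envelope
    envelope = np.abs(hilbert(x))
    t = np.arange(len(x)) / fs
    sigma = -np.polyfit(t, np.log(envelope + 1e-12), 1)[0]   # decay rate, 1/s
    zeta = sigma / np.sqrt(sigma**2 + (2.0 * np.pi * f0) ** 2)
    return f0, zeta

# Synthetic 0.3 Hz inter-area-like mode with 5% damping, sampled at 30 Hz
fs = 30.0
t = np.arange(0.0, 60.0, 1.0 / fs)
zeta_true, f_true = 0.05, 0.3
wn = 2.0 * np.pi * f_true / np.sqrt(1.0 - zeta_true**2)
x = np.exp(-zeta_true * wn * t) * np.cos(2.0 * np.pi * f_true * t)
print(ringdown_mode(x, fs))   # approximately (0.3, 0.05)
```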

  11. Data inversion algorithm development for the halogen occultation experiment

    NASA Technical Reports Server (NTRS)

    Gordley, Larry L.; Mlynczak, Martin G.

    1986-01-01

    The successful retrieval of atmospheric parameters from radiometric measurements requires not only the ability to do ideal radiometric calculations, but also a detailed understanding of instrument characteristics. Therefore, a considerable amount of time was spent on instrument characterization in the form of test data analysis and mathematical formulation. Analyses of solar-to-reference interference (electrical cross-talk), detector nonuniformity, instrument balance error, electronic filter time-constants and noise character were conducted. A second area of effort was the development of techniques for the ideal radiometric calculations required for the Halogen Occultation Experiment (HALOE) data reduction. The computer code for these calculations is necessarily complex and must be fast. A scheme for meeting these requirements was defined, and the algorithms needed for implementation are currently under development. A third area of work included consulting on the implementation of the Emissivity Growth Approximation (EGA) method of absorption calculation into a HALOE broadband radiometer channel retrieval algorithm.

  12. Development of antibiotic regimens using graph based evolutionary algorithms.

    PubMed

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

    This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimes. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems.
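
    The fitness model combining animal growth and gut bacteria dynamics is not reproduced here. The sketch below only illustrates the structural idea named in the abstract, restricting recombination to neighbours on a graph (a ring in this toy case) so that solutions spread slowly and diversity is preserved; bit strings stand in for dosing schedules and the objective function is a placeholder.

```python
import random

def graph_based_ea(fitness, n=30, length=16, generations=200, seed=0):
    """Toy graph-based EA: individuals sit on a ring graph and may only
    recombine with an immediate neighbour, which slows the spread of any
    single solution and preserves diversity. The real antibiotic-regimen
    fitness model from the paper is not shown."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(n)]
    for _ in range(generations):
        i = rng.randrange(n)                       # random vertex
        j = (i + rng.choice([-1, 1])) % n          # one of its ring neighbours
        cut = rng.randrange(1, length)
        child = pop[i][:cut] + pop[j][cut:]        # one-point crossover
        k = rng.randrange(length)                  # single-bit mutation
        child[k] ^= 1
        # Child replaces the weaker parent (local, graph-restricted update)
        weaker = i if fitness(pop[i]) < fitness(pop[j]) else j
        if fitness(child) >= fitness(pop[weaker]):
            pop[weaker] = child
    return max(pop, key=fitness)

# Placeholder objective: maximise the number of "off" dosing intervals
best = graph_based_ea(lambda genome: genome.count(0))
print(best, best.count(0))
```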

  13. Development of a biomimetic robotic fish and its control algorithm.

    PubMed

    Yu, Junzhi; Tan, Min; Wang, Shuo; Chen, Erkui

    2004-08-01

    This paper is concerned with the design of a robotic fish and its motion control algorithms. A radio-controlled, four-link biomimetic robotic fish is developed using a flexible posterior body and an oscillating foil as a propeller. The swimming speed of the robotic fish is adjusted by modulating the joints' oscillation frequency, and its orientation is tuned by different joint deflections. Since the motion control of a robotic fish involves both the hydrodynamics of the fluid environment and the dynamics of the robot, it is very difficult to establish a precise mathematical model employing purely analytical methods. Therefore, the fish's motion control task is decomposed into two control systems. The online speed control implements a hybrid control strategy and a proportional-integral-derivative (PID) control algorithm. The orientation control system is based on a fuzzy logic controller. In our experiments, a point-to-point (PTP) control algorithm is implemented and an overhead vision system is adopted to provide real-time visual feedback. The experimental results confirm the effectiveness of the proposed algorithms.
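
    The abstract states that the online speed controller uses a PID algorithm; the discrete-time PID sketch below illustrates that component in isolation. The gains, time step, and first-order "swimming speed" plant are invented for the example and are not the robotic fish's identified dynamics.

```python
class PID:
    """Minimal discrete-time PID controller; gains are illustrative only."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative use: drive a toy first-order "swimming speed" model to 0.3 m/s
dt = 0.05
pid = PID(kp=4.0, ki=1.5, kd=0.1, dt=dt)
speed = 0.0
for _ in range(400):
    freq_cmd = pid.update(0.3, speed)          # commanded oscillation frequency
    speed += dt * (0.5 * freq_cmd - speed)     # toy plant dynamics
print(round(speed, 3))                         # approaches 0.3
```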

  14. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  15. SMMR Simulator radiative transfer calibration model. 2: Algorithm development

    NASA Technical Reports Server (NTRS)

    Link, S.; Calhoon, C.; Krupp, B.

    1980-01-01

    Passive microwave measurements performed from Earth orbit can be used to provide global data on a wide range of geophysical and meteorological phenomena. A Scanning Multichannel Microwave Radiometer (SMMR) is being flown on the Nimbus-G satellite. The SMMR Simulator duplicates the frequency bands utilized in the spacecraft instruments through an amalgam of radiometer systems. The algorithm developed utilizes data from the fall 1978 NASA CV-990 Nimbus-G underflight test series and subsequent laboratory testing.

  16. Development of an Inverse Algorithm for Resonance Inspection

    SciTech Connect

    Lai, Canhai; Xu, Wei; Sun, Xin

    2012-10-01

    Resonance inspection (RI), which employs the shift in natural frequency spectra between the good and the anomalous part populations to detect defects, is a non-destructive evaluation (NDE) technique with many advantages such as low inspection cost, high testing speed, and broad applicability to structures with complex geometry compared to other contemporary NDE methods. It has already been widely used in the automobile industry for quality inspections of safety critical parts. Unlike some conventionally used NDE methods, the current RI technology is unable to provide details, i.e., the location, dimensions, or type of the flaws in the discrepant parts. Such a limitation severely hinders its widespread application and further development. In this study, an inverse RI algorithm based on a maximum correlation function is proposed to quantify the location and size of flaws in a discrepant part. Dog-bone shaped stainless steel samples with and without controlled flaws are used for algorithm development and validation. The results show that multiple flaws can be accurately pinpointed using the algorithms developed, and that the prediction accuracy decreases with increasing flaw numbers and decreasing distance between flaws.

  17. Determination of origin and sugars of citrus fruits using genetic algorithm, correspondence analysis and partial least square combined with fiber optic NIR spectroscopy

    NASA Astrophysics Data System (ADS)

    Tewari, Jagdish C.; Dixit, Vivechana; Cho, Byoung-Kwan; Malik, Kamal A.

    2008-12-01

    The capacity to confirm the variety or origin of citrus fruits and to estimate their sucrose, glucose and fructose content is of major interest to the citrus juice industry. A rapid classification and quantification technique was developed and validated for simultaneously and nondestructively quantifying the sugar constituents' concentrations and identifying the origin of citrus fruits using Fourier Transform Near-Infrared (FT-NIR) spectroscopy in conjunction with an Artificial Neural Network (ANN) using a genetic algorithm, chemometrics and Correspondence Analysis (CA). To acquire good classification accuracy and to cover a wide range of sucrose, glucose and fructose concentrations, we collected 22 different varieties of citrus fruits from the market during the entire citrus season. FT-NIR spectra were recorded in the NIR region from 1100 to 2500 nm using a fiber optic probe, and three types of data analysis were performed. Chemometric analysis using Partial Least Squares (PLS) was performed in order to determine the concentration of individual sugars. Artificial Neural Network analysis was performed for classification, origin or variety identification of citrus fruits using the genetic algorithm. Correspondence analysis was performed in order to visualize the relationship between the citrus fruits. To compute a PLS model based upon the reference values and to validate the developed method, high performance liquid chromatography (HPLC) was performed. The spectral range and the number of PLS factors were optimized for the lowest standard error of calibration (SEC) and prediction (SEP) and the highest correlation coefficient (R2). The calibration model developed was able to assess the sucrose, glucose and fructose contents in unknown citrus fruit up to an R2 value of 0.996-0.998. Numbers of factors from F1 to F10 were optimized for correspondence analysis for relationship visualization of citrus fruits based on the output values of the genetic algorithm. ANN and CA analysis showed excellent classification
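
    The PLS calibration step can be illustrated with scikit-learn on synthetic data; the spectra, sugar values, noise level, and number of PLS factors below are stand-ins, not the paper's FT-NIR measurements or HPLC reference values.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for FT-NIR spectra (1100-2500 nm) and HPLC sugar values;
# a real calibration would use measured spectra and reference concentrations.
rng = np.random.default_rng(0)
n_samples, n_wavelengths = 120, 700
sucrose = rng.uniform(20, 80, n_samples)                     # g/L, synthetic
loadings = rng.normal(size=n_wavelengths)
spectra = np.outer(sucrose, loadings) + rng.normal(0, 5.0, (n_samples, n_wavelengths))

X_train, X_test, y_train, y_test = train_test_split(
    spectra, sucrose, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=5)          # number of PLS factors to optimise
pls.fit(X_train, y_train)
print("R^2 on held-out samples:", round(pls.score(X_test, y_test), 3))
```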

  18. Development of clustering algorithms for Compressed Baryonic Matter experiment

    NASA Astrophysics Data System (ADS)

    Kozlov, G. E.; Ivanov, V. V.; Lebedev, A. A.; Vassiliev, Yu. O.

    2015-05-01

    A clustering problem for the coordinate detectors in the Compressed Baryonic Matter (CBM) experiment is discussed. Because of the high interaction rate and the huge datasets to be dealt with, clustering algorithms are required to be fast and efficient and capable of processing events with high track multiplicity. At present there are two different approaches to the problem. In the first one, each fired pad bears information about its charge, while in the second one a pad is simply either fired or not, which makes the separation of overlapping clusters a difficult task. To deal with the latter, two different clustering algorithms were developed, integrated into the CBMROOT software environment, and tested with various types of simulated events. Both of them are found to be highly efficient and accurate.
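
    The two CBMROOT clustering algorithms themselves are not described in enough detail here to reproduce. For the binary (fired / not fired) readout case, a natural baseline, sketched below, is simply to group adjacent fired pads into connected components; it ignores pad charge and does not attempt to separate overlapping clusters.

```python
from collections import deque

def cluster_fired_pads(fired):
    """Group adjacent fired pads (4-connectivity) into clusters.
    `fired` is a set of (row, col) pad indices that registered a hit.
    A baseline sketch only; the CBMROOT algorithms also handle pad charge
    and overlapping-cluster separation, which are not addressed here."""
    remaining, clusters = set(fired), []
    while remaining:
        seed = remaining.pop()
        cluster, queue = [seed], deque([seed])
        while queue:
            r, c = queue.popleft()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in remaining:
                    remaining.remove(nb)
                    cluster.append(nb)
                    queue.append(nb)
        clusters.append(cluster)
    return clusters

hits = {(0, 0), (0, 1), (1, 1), (5, 5), (5, 6)}
print(len(cluster_fired_pads(hits)))   # 2 clusters
```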

  19. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.

  20. Development of algorithm for single axis sun tracking system

    NASA Astrophysics Data System (ADS)

    Yi, Lim Zi; Singh, Balbir Singh Mahinder; Ching, Dennis Ling Chuan; Jin, Calvin Low Eu

    2016-11-01

    The output power from a solar panel depends on the amount of sunlight that is intercepted by the photovoltaic (PV) solar panel. The value of solar irradiance varies due to the changing position of the sun and the local meteorological conditions. This causes the output power of a PV based solar electricity generating system (SEGS) to fluctuate as well. In this paper, the focus is on the integration of a solar tracking system with a performance analyzer system through the development of an algorithm for optimizing the performance of the SEGS. The proposed algorithm displays real-time processed data that would enable users to understand the trend of the SEGS output for maintenance prediction and optimization purposes.
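
    The paper's algorithm couples tracking with real-time performance analysis, which is not modelled here. As a hedged sketch of the geometric core of single-axis tracking, the function below simply follows the solar hour angle (15 degrees per hour from solar noon) and clips the command to mechanical limits; the limit value is illustrative.

```python
def hour_angle_deg(solar_time_hours):
    """Solar hour angle: 0 at solar noon, 15 degrees per hour,
    negative before noon."""
    return 15.0 * (solar_time_hours - 12.0)

def tracker_angle_deg(solar_time_hours, limit=60.0):
    """Rotation command for an idealised horizontal single-axis tracker:
    follow the hour angle, clipped to the mechanical limits. A simplified
    rule; the paper's algorithm also folds in measured SEGS output for
    maintenance prediction, which is not modelled here."""
    omega = hour_angle_deg(solar_time_hours)
    return max(-limit, min(limit, omega))

# Example: rotation commands across a day, in solar time
for hour in (6, 9, 12, 15, 18):
    print(hour, tracker_angle_deg(hour))
```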

  1. The development of a whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Kay, F. J.

    1973-01-01

    The whole-body algorithm is envisioned as a mathematical model that utilizes human physiology to simulate the behavior of vital body systems. The objective of this model is to determine the response of selected body parameters within these systems to various input perturbations, or stresses. Perturbations of interest are exercise, chemical imbalances, gravitational changes and other abnormal environmental conditions. This model provides for a study of man's physiological response in various space applications, underwater applications, normal and abnormal workloads and environments, and the functioning of the system with physical impairments or decay of functioning components. Many methods or approaches to the development of a whole-body algorithm are considered. Of foremost concern is the determination of the subsystems to be included, the level of detail of each subsystem, and the interactions between the subsystems.

  2. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  3. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    PubMed Central

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-01-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented. PMID:23814604

  4. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels.

    PubMed

    Kolstein, M; De Lorenzo, G; Mikhaylova, E; Chmeissani, M; Ariño, G; Calderón, Y; Ozsahin, I; Uzun, D

    2013-04-29

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  5. The Big Bang Model: Its Origin and Development

    NASA Astrophysics Data System (ADS)

    Alpher, Ralph A.

    The current Big Bang Model had its origin in Einstein's attempt to model a static cosmos, based on his general theory of relativity. Friedmann and Lemaitre, as well as de Sitter, further developed the model to cover other options, including nonstatic behavior. Lemaitre in the 1930s and, particularly, Gamow in 1946 first put physics into the nonstatic model. By 1946 there had been significant developments in the mathematics of the model due to Robertson, Walker, Tolman and many others. The Hubble law had given an essential observational basis for the Big Bang, as did the attribution of cosmic significance to element abundances by Goldschmidt. Following early suggestions by George Gamow, the first attempt to explain nucleosynthesis in a hot, dense, early universe was done by Alpher, Bethe and Gamow in 1948, a paper whose principal importance was that it suggested that the early universe was in fact hot and dense, and that hydrogen and helium and perhaps other light elements were primeval. In that same year Alpher and Herman first predicted a cosmic background radiation at 5 kelvin as an essential feature of the model. The Hubble expansion rate, the primordial and stellar abundances of the elements, and the cosmic microwave background are major pillars today for the Big Bang model.

  6. Development of a new metal artifact reduction algorithm by using an edge preserving method for CBCT imaging

    NASA Astrophysics Data System (ADS)

    Kim, Juhye; Nam, Haewon; Lee, Rena

    2015-07-01

    In CT (computed tomography) images, metal materials such as tooth supplements or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm by using an edge preserving filter and the MATLAB program (Mathworks, version R2012a). The proposed algorithm consists of 6 steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge preserving smoothing filter, and new image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added into the Shepp-Logan phantom to produce metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. After these data had been acquired, the proposed algorithm was applied, and the results were compared with the original image (with metal artifacts, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were done. Compared with the original image with metal artifacts and with the image corrected by using linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifacts. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
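
    The six listed steps can be sketched end to end with scikit-image's Radon transform utilities. The code below is a simplified stand-in, not the authors' MATLAB implementation: the metal threshold is arbitrary, simple linear interpolation is used across the metal trace, and a Gaussian blur stands in for the edge preserving filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

def simple_mar(sinogram, theta, metal_threshold):
    """Sketch of the six MAR steps described above: reconstruct, segment
    metal, forward-project the metal mask, interpolate across the metal
    trace, smooth (stand-in for the edge preserving filter), reconstruct."""
    recon = iradon(sinogram, theta=theta)                             # step 1
    metal_mask = recon > metal_threshold                              # step 2
    metal_trace = radon(metal_mask.astype(float), theta=theta) > 0    # step 3
    corrected = sinogram.copy()
    rows = np.arange(sinogram.shape[0])
    for j in range(sinogram.shape[1]):                                # step 4
        bad = metal_trace[:, j]
        if bad.any() and not bad.all():
            corrected[bad, j] = np.interp(rows[bad], rows[~bad],
                                          sinogram[~bad, j])
    corrected = gaussian_filter(corrected, sigma=1.0)                 # step 5
    return iradon(corrected, theta=theta)                             # step 6

# Toy usage: a Shepp-Logan phantom with an artificial bright "metal" insert
phantom = shepp_logan_phantom()
phantom[180:190, 180:190] = 5.0
theta = np.linspace(0.0, 180.0, 360, endpoint=False)
sinogram = radon(phantom, theta=theta)
mar_image = simple_mar(sinogram, theta, metal_threshold=2.0)
print(mar_image.shape)
```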

  7. Development of a validation algorithm for 'present on admission' flagging

    PubMed Central

    2009-01-01

    Background The use of routine hospital data for understanding patterns of adverse outcomes has been limited in the past by the fact that pre-existing and post-admission conditions have been indistinguishable. The use of a 'Present on Admission' (or POA) indicator to distinguish pre-existing or co-morbid conditions from those arising during the episode of care has been advocated in the US for many years as a tool to support quality assurance activities and improve the accuracy of risk adjustment methodologies. The USA, Australia and Canada now all assign a flag to indicate the timing of onset of diagnoses. For quality improvement purposes, it is the 'not-POA' diagnoses (that is, those acquired in hospital) that are of interest. Methods Our objective was to develop an algorithm for assessing the validity of assignment of 'not-POA' flags. We undertook expert review of the International Classification of Diseases, 10th Revision, Australian Modification (ICD-10-AM) to identify conditions that could not be plausibly hospital-acquired. The resulting computer algorithm was tested against all diagnoses flagged as complications in the Victorian (Australia) Admitted Episodes Dataset, 2005/06. Measures reported include rates of appropriate assignment of the new Australian 'Condition Onset' flag by ICD chapter, and patterns of invalid flagging. Results Of 18,418 diagnosis codes reviewed, 93.4% (n = 17,195) reflected agreement on status for flagging by at least 2 of 3 reviewers (including 64.4% unanimous agreement; Fleiss' Kappa: 0.61). In tests of the new algorithm, 96.14% of all hospital-acquired diagnosis codes flagged were found to be valid in the Victorian records analysed. A lower proportion of individual codes was judged to be acceptably flagged (76.2%), but this reflected a high proportion of codes used <5 times in the data set (789/1035 invalid codes). Conclusion An indicator variable about the timing of occurrence of diagnoses can greatly expand the use of routinely

  8. Comparison of Performance Effectiveness of Linear Control Algorithms Developed for a Simplified Ground Vehicle Suspension System

    DTIC Science & Technology

    2011-04-01

    Comparison of Performance Effectiveness of Linear Control Algorithms Developed for a Simplified Ground Vehicle Suspension System, by Ross Brown, Motile Robotics, Inc., research contractor at U.S. ...

  9. Origins and development of the Cauchy problem in general relativity

    NASA Astrophysics Data System (ADS)

    Ringström, Hans

    2015-06-01

    The seminal work of Yvonne Choquet-Bruhat published in 1952 demonstrates that it is possible to formulate Einstein's equations as an initial value problem. The purpose of this article is to describe the background to and impact of this achievement, as well as the result itself. In some respects, the idea of viewing the field equations of general relativity as a system of evolution equations goes back to Einstein himself; in an argument justifying that gravitational waves propagate at the speed of light, Einstein used a special choice of coordinates to derive a system of wave equations for the linear perturbations on a Minkowski background. Over the following decades, Hilbert, de Donder, Lanczos, Darmois and many others worked to put Einstein's ideas on a more solid footing. In fact, the issue of local uniqueness (giving a rigorous justification for the statement that the speed of propagation of the gravitational field is bounded by that of light) was already settled in the 1930s by the work of Stellmacher. However, the first person to demonstrate both local existence and uniqueness in a setting in which the notion of finite speed of propagation makes sense was Yvonne Choquet-Bruhat. In this sense, her work lays the foundation for the formulation of Einstein's equations as an initial value problem. Following a description of the results of Choquet-Bruhat, we discuss the development of three research topics that have their origin in her work. The first one is local existence. One reason for addressing it is that it is at the heart of the original paper. Moreover, it is still an active and important research field, connected to the problem of characterizing the asymptotic behaviour of solutions that blow up in finite time. As a second topic, we turn to the questions of global uniqueness and strong cosmic censorship. These questions are of fundamental importance to anyone interested in justifying that the Cauchy problem makes sense globally. They are also closely

  10. Development of a generally applicable morphokinetic algorithm capable of predicting the implantation potential of embryos transferred on Day 3

    PubMed Central

    Petersen, Bjørn Molt; Boel, Mikkel; Montag, Markus; Gardner, David K.

    2016-01-01

    STUDY QUESTION: Can a generally applicable morphokinetic algorithm suitable for Day 3 transfers of time-lapse monitored embryos originating from different culture conditions and fertilization methods be developed for the purpose of supporting the embryologist's decision on which embryo to transfer back to the patient in assisted reproduction?
    SUMMARY ANSWER: The algorithm presented here can be used independently of culture conditions and fertilization method and provides predictive power not surpassed by other published algorithms for ranking embryos according to their blastocyst formation potential.
    WHAT IS KNOWN ALREADY: Generally applicable algorithms have so far been developed only for predicting blastocyst formation. A number of clinics have reported validated implantation prediction algorithms, which have been developed based on clinic-specific culture conditions and clinical environment. However, a generally applicable embryo evaluation algorithm based on actual implantation outcome has not yet been reported.
    STUDY DESIGN, SIZE, DURATION: Retrospective evaluation of data extracted from a database of known implantation data (KID) originating from 3275 embryos transferred on Day 3 conducted in 24 clinics between 2009 and 2014. The data represented different culture conditions (reduced and ambient oxygen with various culture medium strategies) and fertilization methods (IVF, ICSI). The capability to predict blastocyst formation was evaluated on an independent set of morphokinetic data from 11 218 embryos which had been cultured to Day 5.
    PARTICIPANTS/MATERIALS, SETTING, METHODS: The algorithm was developed by applying automated recursive partitioning to a large number of annotation types and derived equations, progressing to a five-fold cross-validation test of the complete data set and a validation test of different incubation conditions and fertilization methods. The results were expressed as receiver operating characteristics curves using the area under the

  11. Chronic wrist pain: diagnosis and management. Development and use of a new algorithm

    PubMed Central

    van Vugt, R. M; Bijlsma, J.; van Vugt, A. C

    1999-01-01

    OBJECTIVE—Chronic wrist pain can be difficult to manage and the differential diagnosis is extensive. To provide guidelines for assessment of the painful wrist an algorithm was developed to encourage a structured approach to the diagnosis and management of these patients.
METHODS—A review of the literature on causes of chronic wrist pain was undertaken; history taking, physical examination and imaging studies were evaluated systematically to determine which of the many potential conditions was the cause of the wrist pain. Chronic wrist pain was subdivided into pain of probable intra-articular or extra-articular origin. By means of this classification a clinical algorithm was developed to establish a diagnosis and its clinical usefulness was tested in a prospective study of 84 patients presenting to our outpatient clinic.
RESULTS—A definite diagnosis could be established in 59% (49 of 84) of the cases by careful history taking, extensive physical examination, plain radiographs, ultrasound examination and bone scintigraphy. In 19% of the cases (16 of 84) a probable diagnosis was made, resulting in a total figure of 78% (65 of 84). Additional imaging studies (arthrography, magnetic resonance imaging and computed tomography) increased the definite diagnoses to 70% (59 of 84).
CONCLUSION—The algorithm proved easy to use and by the use of careful history taking, thorough physical examination and simple imaging techniques (ultrasonography and scintigraphy) a diagnosis was made in 78% of cases.

 PMID:10531069

  12. Avian skin development and the evolutionary origin of feathers.

    PubMed

    Sawyer, Roger H; Knapp, Loren W

    2003-08-15

    The discovery of several dinosaurs with filamentous integumentary appendages of different morphologies has stimulated models for the evolutionary origin of feathers. In order to understand these models, knowledge of the development of the avian integument must be put into an evolutionary context. Thus, we present a review of avian scale and feather development, which summarizes the morphogenetic events involved, as well as the expression of the beta (β) keratin multigene family that characterizes the epidermal appendages of reptiles and birds. First we review information on the evolution of the ectodermal epidermis and its beta (β) keratins. Then we examine the morphogenesis of scutate scales and feathers including studies in which the extraembryonic ectoderm of the chorion is used to examine dermal induction. We also present studies on the scaleless (sc) mutant, and, because of the recent discovery of "four-winged" dinosaurs, we review earlier studies of a chicken strain, Silkie, that expresses ptilopody (pti), "feathered feet." We conclude that the ability of the ectodermal epidermis to generate discrete cell populations capable of forming functional structural elements consisting of specific members of the beta keratin multigene family was a plesiomorphic feature of the archosaurian ancestor of crocodilians and birds. Evidence suggests that the discrete epidermal lineages that make up the embryonic feather filament of extant birds are homologous with similar embryonic lineages of the developing scutate scales of birds and the scales of alligators. We believe that the early expression of conserved signaling modules in the embryonic skin of the avian ancestor led to the early morphogenesis of the embryonic feather filament, with its periderm, sheath, and barb ridge lineages forming the first protofeather. Invagination of the epidermis of the protofeather led to formation of the follicle providing for feather renewal and diversification. The observations that

  13. A direct phasing method based on the origin-free modulus sum function and the FFT algorithm. XII.

    PubMed

    Rius, Jordi; Crespi, Anna; Torrelles, Xavier

    2007-03-01

    An alternative way of refining phases with the origin-free modulus sum function S is shown that, instead of applying the tangent formula in sequential mode [Rius (1993). Acta Cryst. A49, 406-409], applies it in parallel mode with the help of the fast Fourier transform (FFT) algorithm. The test calculations performed on intensity data of small crystal structures at atomic resolution prove the convergence and hence the viability of the procedure. This new procedure called S-FFT is valid for all space groups and especially competitive for low-symmetry ones. It works well when the charge-density peaks in the crystal structure have the same sign, i.e. either positive or negative.

  14. Development of Topological Correction Algorithms for ADCP Multibeam Bathymetry Measurements

    NASA Astrophysics Data System (ADS)

    Yang, Sung-Kee; Kim, Dong-Su; Kim, Soo-Jeong; Jung, Woo-Yul

    2013-04-01

    Acoustic Doppler Current Profilers (ADCPs) are increasingly popular in the river research and management communities, being primarily used for estimation of stream flows. ADCPs' capabilities, however, entail additional features that are not fully explored, such as morphologic representation of the river or reservoir bed based upon multi-beam depth measurements. In addition to flow velocity, ADCP measurements include river bathymetry information through the depth measurements acquired by the individual 4 or 5 beams at a given oblique angle. Such sounding capability indicates that multi-beam ADCPs can be utilized as efficient depth sounders, more capable than conventional single-beam echo-sounders. The paper introduces the post-processing algorithms required to deal with raw ADCP bathymetry measurements, including the following aspects: a) correcting the individual beam depths for tilt (pitch and roll); b) filtering outliers using SMART filters; c) transforming the corrected depths into geographical coordinates by UTM conversion; d) tagging the beam detection locations with the concurrent GPS information; and e) spatial representation in a GIS package. The developed algorithms are applied to the ADCP bathymetric dataset acquired from Han-Cheon on Jeju Island to validate their applicability.
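
    As an illustration of step a) above, the sketch below projects a single beam's slant-range depth onto the vertical using the instrument's pitch and roll. The beam geometry, rotation convention, and function name are assumptions made for this example, not the paper's implementation:

```python
import numpy as np

def beam_depth_corrected(slant_range, beam_azimuth_deg, beam_angle_deg,
                         pitch_deg, roll_deg):
    """Project one ADCP beam's slant-range depth onto the vertical,
    accounting for instrument pitch and roll (all angles in degrees).
    Assumed geometry: the beam leaves the transducer at a fixed angle
    from the instrument's vertical axis, at a given azimuth in the
    instrument frame (z pointing down)."""
    az, ba = np.radians(beam_azimuth_deg), np.radians(beam_angle_deg)
    p, r = np.radians(pitch_deg), np.radians(roll_deg)

    # Unit vector of the beam in the instrument frame.
    b = np.array([np.sin(ba) * np.cos(az),
                  np.sin(ba) * np.sin(az),
                  np.cos(ba)])

    # Rotate instrument frame to earth frame: roll about x, pitch about y.
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    b_earth = Ry @ Rx @ b

    # The vertical component of the slant range is the tilt-corrected depth.
    return slant_range * b_earth[2]

# Example: a 25-degree beam, 2 m slant range, 5 degrees of pitch.
print(round(beam_depth_corrected(2.0, 0.0, 25.0, 5.0, 0.0), 3))
```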

  15. Further development of an improved altimeter wind speed algorithm

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Wentz, Frank J.

    1986-01-01

    A previous altimeter wind speed retrieval algorithm was developed on the basis of wind speeds in the limited range from about 4 to 14 m/s. In this paper, a new approach which gives a wind speed model function applicable over the range 0 to 21 m/s is used. The method is based on comparing 50 km along-track averages of the altimeter normalized radar cross section measurements with neighboring off-nadir scatterometer wind speed measurements. The scatterometer winds are constructed from 100 km binned measurements of radar cross section and are located approximately 200 km from the satellite subtrack. The new model function agrees very well with earlier versions up to wind speeds of 14 m/s, but differs significantly at higher wind speeds. The relevance of these results to the Geosat altimeter launched in March 1985 is discussed.

  16. Development of computer algorithms for radiation treatment planning.

    PubMed

    Cunningham, J R

    1989-06-01

    As a result of an analysis of data relating tissue response to radiation absorbed dose, the ICRU has recommended a target accuracy of +/- 5% for dose delivery in radiation therapy. This is a difficult overall objective to achieve because of the many steps that make up a course of radiotherapy. The calculation of absorbed dose is only one of the steps and so to achieve an overall accuracy of better than +/- 5% the accuracy in dose calculation must be better yet. The physics behind the problem is sufficiently complicated so that no exact method of calculation has been found and consequently approximate solutions must be used. The development of computer algorithms for this task involves the search for better and better approximate solutions. To achieve the desired target of accuracy a fairly sophisticated calculation procedure must be used. Only when this is done can we hope to further improve our knowledge of the way in which tissues respond to radiation treatments.

  17. QAP collaborates in development of the sick child algorithm.

    PubMed

    1994-01-01

    Algorithms which specify procedures for proper diagnosis and treatment of common diseases have been available to primary health care services in less developed countries for the past decade. Whereas each algorithm has usually been limited to a single ailment, children often present with the need for more comprehensive assessment and treatment. Treating just one illness in these children leads to incomplete treatment or missed opportunities for preventive services. To address this problem, the World Health Organization has recently developed a Sick Child Algorithm (SCA) for children aged 2 months-5 years. In addition to specifying case management procedures for acute respiratory illness, diarrhea/dehydration, fever, otitis, and malnutrition, the SCA prompts a check of the child's immunization status. The specificity and sensitivity of this SCA were field-tested in Kenya and the Gambia. In Kenya, the Malaria Branch of the US Centers for Disease Control and Prevention tested the SCA under typical conditions in Siaya District. The Quality Assurance Project of the Center for Human Services carried out a parallel facility-based systems analysis at the request of the Malaria Branch. The assessment, which took place in September-October 1993, took the form of observations of provider/patient interactions, provider interviews, and verification of supplies and equipment in 19 rural health facilities to determine how current practices compare to actions prescribed by the SCA. This will reveal the type and amount of technical support needed to achieve conformity to the SCA's clinical practice recommendations. The data will allow officials to devise the proper training programs and will predict quality improvements likely to be achieved through adoption of the SCA in terms of effective case treatment and fewer missed immunization opportunities. Preliminary analysis indicates that the primary health care delivery in Siaya deviates in several significant respects from performance

  18. Developing JSequitur to Study the Hierarchical Structure of Biological Sequences in a Grammatical Inference Framework of String Compression Algorithms.

    PubMed

    Galbadrakh, Bulgan; Lee, Kyung-Eun; Park, Hyun-Seok

    2012-12-01

    Grammatical inference methods are expected to find grammatical structures hidden in biological sequences. One hopes that studies of grammar serve as an appropriate tool for theory formation. Thus, we have developed JSequitur for automatically generating the grammatical structure of biological sequences in an inference framework of string compression algorithms. Our original motivation was to find any grammatical traits of several cancer genes that can be detected by string compression algorithms. Through this research, we could not find any meaningful unique traits of the cancer genes yet, but we could observe some interesting traits with regard to the relationship among gene length, similarity of sequences, the patterns of the generated grammar, and compression rate.
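
    The following toy sketch illustrates the general idea of grammar inference by string compression (repeated replacement of frequent digrams with non-terminals, in the spirit of Sequitur/Re-Pair); it is not the JSequitur implementation itself:

```python
from collections import Counter

def infer_grammar(sequence, min_count=2):
    """Greedy digram-replacement sketch: repeatedly replace the most
    frequent adjacent pair of symbols with a new non-terminal and record
    the production, until no pair occurs often enough."""
    seq = list(sequence)
    rules, next_id = {}, 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < min_count:
            break
        nonterminal = f"R{next_id}"
        next_id += 1
        rules[nonterminal] = pair
        # Rewrite the sequence left to right, replacing non-overlapping pairs.
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nonterminal)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

compressed, grammar = infer_grammar("ACGTACGTACGA")
print(compressed)   # remaining top-level sequence
print(grammar)      # inferred productions
```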

  19. Origin and Development of Multilingual Education in Eritrea

    ERIC Educational Resources Information Center

    Asfaha, Yonas Mesfun

    2015-01-01

    In an attempt to describe the historical origins of multilingual education in Eritrea, Horn of Africa, this paper looks at how missionaries, European colonisers, successive Ethiopian rules in Eritrea and the independence movements that fought Ethiopia defined ethnic, religious and linguistic differences of communities in the country. These…

  20. Origins, Form, and Development of the Son Jarocho: Veracruz, Mexico.

    ERIC Educational Resources Information Center

    Loza, Steven J.

    1982-01-01

    Son Jarocho (specifically from Veracruz) is a song-and-dance form originating in Spain and implanted in Mexico during 17th- and 18th-century colonization. The jarocho style of music today is one of Latin America's most unique forms, using one to four instruments and characterized by its distinctive rhythm. (LC)

  1. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This was an exploratory study to enhance our understanding of problems involved in developing large scale applications in a heterogeneous distributed environment. It is likely that the large scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data are in different unit systems, and the algorithms for integrating in time are different. In addition, the code for each application is likely to have been developed on a different architecture and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.

  2. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees-of-freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm, which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account as opposed to the off-set of the centroid of the cockpit relative to the center of rotation alone. Results provided in this report suggest improved performance of the motion cueing algorithm.

  3. Understanding disordered systems through numerical simulation and algorithm development

    NASA Astrophysics Data System (ADS)

    Sweeney, Sean Michael

    Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising

  4. Mars Entry Atmospheric Data System Modelling and Algorithm Development

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; OKeefe, Stephen A.; Siemers, Paul; White, Brady; Engelund, Walter C.; Munk, Michelle M.

    2009-01-01

    The Mars Entry Atmospheric Data System (MEADS) is being developed as part of the Mars Science Laboratory (MSL), Entry, Descent, and Landing Instrumentation (MEDLI) project. The MEADS project involves installing an array of seven pressure transducers linked to ports on the MSL forebody to record the surface pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the total pressure, dynamic pressure, Mach number, angle of attack, and angle of sideslip. Secondary objectives are to estimate atmospheric winds by coupling the pressure measurements with the on-board Inertial Measurement Unit (IMU) data. This paper provides details of the algorithm development, MEADS system performance based on calibration, and uncertainty analysis for the aerodynamic and atmospheric quantities of interest. The work presented here is part of the MEDLI performance pre-flight validation and will culminate with processing flight data after Mars entry in 2012.

  5. Algorithm development for Prognostics and Health Management (PHM).

    SciTech Connect

    Swiler, Laura Painton; Campbell, James E.; Doser, Adele Beatrice; Lowder, Kelly S.

    2003-10-01

    This report summarizes the results of a three-year LDRD project on prognostics and health management (PHM). 'Prognostics' refers to the capability to predict the probability of system failure over some future time interval (an alternative definition is the capability to predict the remaining useful life of a system). Prognostics are integrated with health monitoring (through inspections, sensors, etc.) to provide an overall PHM capability that optimizes maintenance actions and results in higher availability at a lower cost. Our goal in this research was to develop PHM tools that could be applied to a wide variety of equipment (repairable, non-repairable, manufacturing, weapons, battlefield equipment, etc.) and require minimal customization to move from one system to the next. Thus, our approach was to develop a toolkit of reusable software objects/components and architecture for their use. We have developed two software tools: an Evidence Engine and a Consequence Engine. The Evidence Engine integrates information from a variety of sources in order to take into account all the evidence that impacts a prognosis for system health. The Evidence Engine has the capability for feature extraction, trend detection, information fusion through Bayesian Belief Networks (BBN), and estimation of remaining useful life. The Consequence Engine involves algorithms to analyze the consequences of various maintenance actions. The Consequence Engine takes as input a maintenance and use schedule, spares information, and time-to-failure data on components, then generates maintenance and failure events, and evaluates performance measures such as equipment availability, mission capable rate, time to failure, and cost. This report summarizes the capabilities we have developed, describes the approach and architecture of the two engines, and provides examples of their use.
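
    As a minimal illustration of the "trend detection plus remaining-useful-life" idea attributed to the Evidence Engine, the sketch below fits a linear degradation trend to a monitored health indicator and extrapolates it to a failure threshold; the function, data, and threshold are invented for the example and are not the Evidence Engine's actual method:

```python
import numpy as np

def remaining_useful_life(times, health_indicator, failure_threshold):
    """Toy remaining-useful-life estimate: fit a linear degradation trend
    to a health indicator and extrapolate to a failure threshold."""
    slope, intercept = np.polyfit(times, health_indicator, 1)
    if slope >= 0:              # no degradation trend detected
        return np.inf
    t_fail = (failure_threshold - intercept) / slope
    return max(t_fail - times[-1], 0.0)

# Example: an indicator degrading from 1.0 toward a threshold of 0.2.
t = np.array([0, 10, 20, 30, 40], dtype=float)
h = np.array([1.00, 0.93, 0.85, 0.78, 0.70])
print(round(remaining_useful_life(t, h, 0.2), 1), "remaining (toy time units)")
```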

  6. Quantification of distention in CT colonography: development and validation of three computer algorithms.

    PubMed

    Hung, Peter W; Paik, David S; Napel, Sandy; Yee, Judy; Jeffrey, R Brooke; Steinauer-Gebauer, Andreas; Min, Juno; Jathavedam, Ashwin; Beaulieu, Christopher F

    2002-02-01

    Three bowel distention-measuring algorithms for use at computed tomographic (CT) colonography were developed, validated in phantoms, and applied to a human CT colonographic data set. The three algorithms are the cross-sectional area method, the moving spheres method, and the segmental volume method. Each algorithm effectively quantified distention, but accuracy varied between methods. Clinical feasibility was demonstrated. Depending on the desired spatial resolution and accuracy, each algorithm can quantitatively depict colonic diameter in CT colonography.
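
    A minimal sketch of the cross-sectional area idea is given below: it computes lumen area and an equal-area ("effective") diameter per slice of a binary segmentation. The published method works in planes perpendicular to the colon centerline, so slicing along a single axis here is a simplification, and all names are illustrative:

```python
import numpy as np

def distention_profile(lumen_mask, pixel_spacing_mm):
    """For each slice of a binary colon-lumen segmentation, compute the
    lumen area (mm^2) and the diameter of a circle with the same area."""
    px_area = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    areas = lumen_mask.reshape(lumen_mask.shape[0], -1).sum(axis=1) * px_area
    effective_diameters = 2.0 * np.sqrt(areas / np.pi)
    return areas, effective_diameters

# Example with a synthetic 3-slice mask (1 = lumen voxel).
mask = np.zeros((3, 64, 64), dtype=np.uint8)
mask[:, 20:44, 20:44] = 1
areas, diams = distention_profile(mask, pixel_spacing_mm=(0.7, 0.7))
print(areas, diams.round(1))
```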

  7. Toward Developing Genetic Algorithms to Aid in Critical Infrastructure Modeling

    SciTech Connect

    Not Available

    2007-05-01

    Today’s society relies upon an array of complex national and international infrastructure networks such as transportation, telecommunication, financial and energy. Understanding these interdependencies is necessary in order to protect our critical infrastructure. The Critical Infrastructure Modeling System, CIMS©, examines the interrelationships between infrastructure networks. CIMS© development is sponsored by the National Security Division at the Idaho National Laboratory (INL) in its ongoing mission for providing critical infrastructure protection and preparedness. A genetic algorithm (GA) is an optimization technique based on Darwin’s theory of evolution. A GA can be coupled with CIMS© to search for optimum ways to protect infrastructure assets. This includes identifying optimum assets to enforce or protect, testing the addition of or change to infrastructure before implementation, or finding the optimum response to an emergency for response planning. This paper describes the addition of a GA to infrastructure modeling for infrastructure planning. It first introduces the CIMS© infrastructure modeling software used as the modeling engine to support the GA. Next, the GA techniques and parameters are defined. Then a test scenario illustrates the integration with CIMS© and the preliminary results.
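
    A bare-bones GA of the kind described, with tournament selection, one-point crossover, and bit-flip mutation, might look like the sketch below; the toy asset-protection fitness stands in for a call into the CIMS© simulation engine and all parameters are illustrative:

```python
import random

def genetic_algorithm(fitness, genome_length, pop_size=40, generations=100,
                      crossover_rate=0.8, mutation_rate=0.02, seed=0):
    """Plain generational GA over binary genomes (e.g. which assets to protect)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_length)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, genome_length)      # one-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [1 - g if rng.random() < mutation_rate else g for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Toy fitness: protect the most valuable assets without exceeding a budget.
values, costs, budget = [5, 3, 8, 2, 7], [2, 1, 4, 1, 3], 6
def fitness(genome):
    cost = sum(c for g, c in zip(genome, costs) if g)
    return sum(v for g, v in zip(genome, values) if g) if cost <= budget else 0

print(genetic_algorithm(fitness, genome_length=5))
```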

  8. Evolutionary origin of gastrulation: insights from sponge development

    PubMed Central

    2014-01-01

    Background The evolutionary origin of gastrulation—defined as a morphogenetic event that leads to the establishment of germ layers—remains a vexing question. Central to this debate is the evolutionary relationship between the cell layers of sponges (poriferans) and eumetazoan germ layers. Despite considerable attention, it remains unclear whether sponge cell layers undergo progressive fate determination akin to eumetazoan primary germ layer formation during gastrulation. Results Here we show by cell-labelling experiments in the demosponge Amphimedon queenslandica that the cell layers established during embryogenesis have no relationship to the cell layers of the juvenile. In addition, juvenile epithelial cells can transdifferentiate into a range of cell types and move between cell layers. Despite the apparent lack of cell layer and fate determination and stability in this sponge, the transcription factor GATA, a highly conserved eumetazoan endomesodermal marker, is expressed consistently in the inner layer of A. queenslandica larvae and juveniles. Conclusions Our results are compatible with sponge cell layers not undergoing progressive fate determination and thus not being homologous to eumetazoan germ layers. Nonetheless, the expression of GATA in the sponge inner cell layer suggests a shared ancestry with the eumetazoan endomesoderm, and that the ancestral role of GATA in specifying internalised cells may antedate the origin of germ layers. Together, these results support germ layers and gastrulation evolving early in eumetazoan evolution from pre-existing developmental programs used for the simple patterning of cells in the first multicellular animals. PMID:24678663

  9. Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Tanis, Fred J.

    1984-01-01

    A series of experiments have been conducted in the Great Lakes designed to evaluate the application of the NIMBUS-7 Coastal Zone Color Scanner (CZCS). Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric algorithm developed compared favorably with existing algorithms and was the only algorithm found to adequately predict the radiance variations in the 670 nm band. The atmospheric correction algorithm developed was designed to extract needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.

  10. The Origin and Development of the African Evaluation Guidelines

    ERIC Educational Resources Information Center

    Rouge, Jean-Charles

    2004-01-01

    In May 1990, the first evaluation seminar in Africa took place in Cote d'Ivoire. It was the first in a series of regional seminars on evaluation planned by the Development Assistance Committee (DAC) of the Organization for Economic Cooperation and Development (OECD). The seminar was jointly presented by the DAC and African Development Bank (ADB).…

  11. New developments in astrodynamics algorithms for autonomous rendezvous

    NASA Technical Reports Server (NTRS)

    Klumpp, Allan R.

    1991-01-01

    At the core of any autonomous rendezvous guidance system must be two algorithms for solving Lambert's and Kepler's problems, the two fundamental problems in classical astrodynamics. Lambert's problem is to determine the trajectory connecting specified initial and terminal position vectors in a specified transfer time. The solution is the initial and terminal velocity vectors. Kepler's problem is to determine the trajectory that stems from a given initial state (position and velocity). The solution is the state at an earlier or later specified time. To be suitable for flight software, astrodynamics algorithms must be totally reliable, compact, and fast. Although solving Lambert's and Kepler's problems has challenged some of the world's finest minds for over two centuries, only in the last year have algorithms appeared that satisfy all three requirements just stated. This paper presents an evaluation of the most highly regarded Lambert and Kepler algorithms.
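
    At the heart of any Kepler propagator is the solution of Kepler's equation; a minimal Newton iteration for the elliptic case is sketched below. Flight-quality algorithms of the kind the paper evaluates use more robust universal-variable formulations, so this is illustrative only:

```python
import math

def eccentric_anomaly(mean_anomaly, eccentricity, tol=1e-12, max_iter=50):
    """Newton iteration for Kepler's equation M = E - e*sin(E) (elliptic case)."""
    E = mean_anomaly if eccentricity < 0.8 else math.pi   # common initial guess
    for _ in range(max_iter):
        f = E - eccentricity * math.sin(E) - mean_anomaly
        E -= f / (1.0 - eccentricity * math.cos(E))
        if abs(f) < tol:
            break
    return E

# Example: M = 1.0 rad, e = 0.3; the second printed value should recover M.
E = eccentric_anomaly(1.0, 0.3)
print(E, E - 0.3 * math.sin(E))
```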

  12. Development of Great Lakes algorithms for the Nimbus-G coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Tanis, F. J.; Lyzenga, D. R.

    1981-01-01

    A series of experiments in the Great Lakes designed to evaluate the application of the Nimbus G satellite Coastal Zone Color Scanner (CZCS) were conducted. Absorption and scattering measurement data were reduced to obtain a preliminary optical model for the Great Lakes. Available optical models were used in turn to calculate subsurface reflectances for expected concentrations of chlorophyll-a pigment and suspended minerals. Multiple nonlinear regression techniques were used to derive CZCS water quality prediction equations from Great Lakes simulation data. An existing atmospheric model was combined with a water model to provide the necessary simulation data for evaluation of the preliminary CZCS algorithms. A CZCS scanner model was developed which accounts for image distorting scanner and satellite motions. This model was used in turn to generate mapping polynomials that define the transformation from the original image to one configured in a polyconic projection. Four computer programs (FORTRAN IV) for image transformation are presented.

  13. Developing a modified SEBAL algorithm that is responsive to advection by using limited weather data

    NASA Astrophysics Data System (ADS)

    Mkhwanazi, Mcebisi

    The use of Remote Sensing ET algorithms in water management, especially for agricultural purposes is increasing, and there are more models being introduced. The Surface Energy Balance Algorithm for Land (SEBAL) and its variant, Mapping Evapotranspiration with Internalized Calibration (METRIC) are some of the models that are being widely used. While SEBAL has several advantages over other RS models, including that it does not require prior knowledge of soil, crop and other ground details, it has the downside of underestimating evapotranspiration (ET) on days when there is advection, which may be in most cases in arid and semi-arid areas. METRIC, however has been modified to be able to account for advection, but in doing so it requires hourly weather data. In most developing countries, while accurate estimates of ET are required, the weather data necessary to use METRIC may not be available. This research therefore was meant to develop a modified version of SEBAL that would require minimal weather data that may be available in these areas, and still estimate ET accurately. The data that were used to develop this model were minimum and maximum temperatures, wind data, preferably the run of wind in the afternoon, and wet bulb temperature. These were used to quantify the advected energy that would increase ET in the field. This was a two-step process; the first was developing the model for standard conditions, which was described as a healthy cover of alfalfa, 40-60 cm tall and not short of water. Under standard conditions, when estimated ET using modified SEBAL was compared with lysimeter-measured ET, the modified SEBAL model had a Mean Bias Error (MBE) of 2.2 % compared to -17.1 % from the original SEBAL. The Root Mean Square Error (RMSE) was lower for the modified SEBAL model at 10.9 % compared to 25.1 % for the original SEBAL. The modified SEBAL model, developed on an alfalfa field in Rocky Ford, was then tested on other crops; beans and wheat. It was also tested on
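
    The two error statistics quoted above can be computed as in the short sketch below; the exact definitions used in the study (here assumed to be normalized by the mean lysimeter-measured ET) are an assumption made for illustration:

```python
import numpy as np

def mbe_rmse_percent(estimated, measured):
    """Mean Bias Error and Root Mean Square Error as a percentage of the
    mean measured value."""
    est, obs = np.asarray(estimated, float), np.asarray(measured, float)
    mbe = np.mean(est - obs) / np.mean(obs) * 100.0
    rmse = np.sqrt(np.mean((est - obs) ** 2)) / np.mean(obs) * 100.0
    return mbe, rmse

# Example: daily ET estimates vs. lysimeter measurements (mm/day), made-up numbers.
print(mbe_rmse_percent([6.1, 7.0, 5.4], [6.0, 7.4, 5.9]))
```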

  14. Litter-of-origin trait effects on gilt development

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The preweaning litter environment of gilts can affect subsequent development. In a recent experiment designed to test the effects of diet on gilt development, individual birth weights, immunocrits, sow parity, number weaned, and individual weaning weights were collected for gilts during the preweani...

  15. Development of adaptive noise reduction filter algorithm for pediatric body images in a multi-detector CT

    NASA Astrophysics Data System (ADS)

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Okita, Izumi; Ninomiya, Yuuji; Tomoshige, Yukihiro; Kurokawa, Takehiro; Ono, Yutaka; Nakamura, Yuko; Suzuki, Masayuki

    2008-03-01

    Recently, several kinds of post-processing image filters which reduce the noise of computed tomography (CT) images have been proposed. However, these image filters are mostly for adults. Because these are not very effective in small (< 20 cm) display fields of view (FOV), we cannot use them for pediatric body images (e.g., premature babies and infant children). We have developed a new noise reduction filter algorithm for pediatric body CT images. This algorithm is based on a 3D post-processing in which the output pixel values are calculated by nonlinear interpolation in the z-direction on the original volumetric data sets. This algorithm does not need in-plane (axial plane) processing, so the spatial resolution does not change. From the phantom studies, our algorithm could reduce the noise SD by up to 40% without affecting the spatial resolution of the x-y plane and z-axis, and improved the CNR by up to 30%. This newly developed filter algorithm will be useful for the diagnosis and radiation dose reduction of pediatric body CT images.
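
    The sketch below shows one simple way to filter only along the slice (z) direction so that in-plane resolution is untouched; a z-only median filter is used here as a stand-in for the paper's nonlinear z-interpolation, which is not reproduced:

```python
import numpy as np
from scipy.ndimage import median_filter

def z_only_noise_filter(volume, kernel_z=3):
    """Nonlinear noise reduction that operates only along the slice (z)
    direction, leaving the axial (x-y) plane untouched so in-plane
    spatial resolution is preserved."""
    return median_filter(volume, size=(kernel_z, 1, 1))

# Example on a noisy synthetic volume ordered (z, y, x).
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 20.0, size=(16, 64, 64)) + 100.0
filtered = z_only_noise_filter(vol)
print(vol.std(), filtered.std())   # the filtered volume should be less noisy
```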

  16. IVF and embryo transfer: historical origin and development.

    PubMed

    Biggers, John D

    2012-08-01

    IVF and embryo transfer for the treatment of human infertility has now resulted in the birth of over 4 million babies. The technique did not arise as a quantum event but was built on the efforts of many earlier workers in the fields of reproductive endocrinology and development. One should remember the famous saying of Isaac Newton: 'If I have seen further than most, it is because I have stood on the shoulder's of giants'. Ethical and moral issues have always arisen when investigators study early mammalian development, particularly human development. This paper documents these earlier studies and also draws attention to the ethical and moral arguments that inevitably arose.

  17. Algorithm development for predicting biodiversity based on phytoplankton absorption

    NASA Astrophysics Data System (ADS)

    Moisan, Tiffany A. H.; Moisan, John R.; Linkswiler, Matthew A.; Steinhardt, Rachel A.

    2013-03-01

    Ocean color remote sensing has provided the scientific community with unprecedented global coverage of chlorophyll a, an indicator of phytoplankton biomass. Together, satellite-derived chlorophyll a and knowledge of Phytoplankton Functional Types (PFTs) will improve our limited understanding of marine ecosystem responses to physiochemical climate drivers involved in carbon cycle dynamics and linkages. Using cruise data from the Gulf of Maine and the Middle Atlantic Bight (N=269 pairs of HPLC and phytoplankton absorption samples), two modeling approaches were utilized to predict phytoplankton absorption and pigments. Algorithm I predicts the chlorophyll-specific absorption coefficient (aph* (m2 (mg chl a)-1)) using inputs of temperature, light, and chlorophyll a. Modeled r2 values (400-700 nm) ranged from 0.79 to 0.99 when compared to in situ observations, with ~25% lower r2 values in the UV region. Algorithm II-a utilizes matrix inversion analysis to predict a(m-1, 400-700 nm) and r2 values ranged from 0.89 to 0.99. The prediction of phytoplankton pigments with Algorithm II-b produced r2 values that ranged from 0.40 to 0.93. When used in combination, Algorithm I and Algorithm II-a are able to use satellite products of SST, PAR, and chlorophyll a (Algorithm I) to predict pigment concentrations and ratios to describe the phytoplankton community. The results of this study demonstrate that the spatial variation in modeled pigment ratios differs significantly from the 10-year SeaWiFS average chlorophyll a data set. Contiguous observations of chlorophyll a and phytoplankton biodiversity will elucidate ecosystem responses with unprecedented complexity.
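
    The matrix-inversion step of Algorithm II can be pictured as a least-squares decomposition of a measured absorption spectrum into pigment-specific basis spectra, as in the sketch below; the basis spectra and any constraints used in the paper are invented placeholders here:

```python
import numpy as np

def invert_pigments(a_ph, basis_spectra):
    """Express a measured phytoplankton absorption spectrum a_ph(lambda) as a
    least-squares combination of pigment-specific basis spectra; the fitted
    coefficients play the role of pigment concentrations."""
    coeffs, *_ = np.linalg.lstsq(basis_spectra, a_ph, rcond=None)
    return coeffs

# Toy example: two hypothetical pigment basis spectra over 5 wavelengths.
basis = np.array([[1.0, 0.2],
                  [0.8, 0.4],
                  [0.5, 0.7],
                  [0.3, 0.9],
                  [0.1, 0.6]])
true_conc = np.array([2.0, 0.5])
spectrum = basis @ true_conc
print(invert_pigments(spectrum, basis))   # should recover ~[2.0, 0.5]
```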

  18. Extended precision data types for the development of the original computer aided engineering applications

    NASA Astrophysics Data System (ADS)

    Pescaru, A.; Oanta, E.; Axinte, T.; Dascalescu, A.-D.

    2015-11-01

    Computer aided engineering is based on models of the phenomena which are expressed as algorithms. The implementations of the algorithms are usually software applications which process a large volume of numerical data, regardless of the size of the input data. In this way, the finite element method applications used to have an input data generator which was creating the entire volume of geometrical data, starting from the initial geometrical information and the parameters stored in the input data file. Moreover, there were several data processing stages, such as: renumbering of the nodes meant to minimize the size of the band length of the system of equations to be solved, computation of the equivalent nodal forces, computation of the element stiffness matrix, assembly of the system of equations, solving the system of equations, computation of the secondary variables. Modern software applications use pre-processing and post-processing programs to easily handle the information. Besides this example, CAE applications use various stages of complex computation, with the accuracy of the final results being of particular interest. Over time, the development of CAE applications was a constant concern of the authors and the accuracy of the results was a very important target. The paper presents the various computing techniques which were imagined and implemented in the resulting applications: finite element method programs, finite difference element method programs, applied general numerical methods applications, data generators, graphical applications, experimental data reduction programs. In this context, the use of the extended precision data types was one of the solutions, the limitations being imposed by the size of the memory which may be allocated. To avoid memory-related problems, the data were stored in files. To minimize the execution time, part of the file was accessed using the dynamic memory allocation facilities. One of the most important consequences of the
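
    For a sense of what extended precision data types buy, the short sketch below compares double and extended (long double) precision; the original applications were written in compiled languages, so NumPy is used here purely for illustration:

```python
import numpy as np

# Machine epsilon of double vs. extended precision (platform dependent:
# np.longdouble is 80- or 128-bit on most systems, plain double on a few).
print(np.finfo(np.float64).eps)       # ~2.2e-16
print(np.finfo(np.longdouble).eps)    # smaller wherever extended precision exists

# A difference that is lost to rounding in double precision can survive in
# extended precision: add half of the double-precision epsilon to 1.
tiny = np.finfo(np.float64).eps / 2
print(np.float64(1) + np.float64(tiny) - np.float64(1))            # 0.0
print(np.longdouble(1) + np.longdouble(tiny) - np.longdouble(1))   # non-zero if extended
```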

  19. Developing a Learning Algorithm-Generated Empirical Relaxer

    SciTech Connect

    Mitchell, Wayne; Kallman, Josh; Toreja, Allen; Gallagher, Brian; Jiang, Ming; Laney, Dan

    2016-03-30

    One of the main difficulties when running Arbitrary Lagrangian-Eulerian (ALE) simulations is determining how much to relax the mesh during the Eulerian step. This determination is currently made by the user on a simulation-by-simulation basis. We present a Learning Algorithm-Generated Empirical Relaxer (LAGER) which uses a random forest regression algorithm to automate this decision process. We also demonstrate that LAGER successfully relaxes a variety of test problems, maintains simulation accuracy, and has the potential to significantly decrease both the person-hours and computational hours needed to run a successful ALE simulation.
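
    A hypothetical sketch of the LAGER idea follows, using scikit-learn's random forest regressor to learn a mapping from per-zone features to a relaxation amount; the feature names and training data are invented placeholders, not the actual LAGER inputs:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic training set: features might be, e.g., cell distortion,
# velocity gradient, and zone volume ratio; target is "how much to relax".
rng = np.random.default_rng(1)
X = rng.random((500, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.05 * rng.normal(size=500)

# Fit the regression forest and predict a relaxation amount for a new zone.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict([[0.8, 0.2, 0.5]]))
```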

  20. Online Library Tutorials at Antelope Valley College: Origin and Development

    ERIC Educational Resources Information Center

    Burrell, Carolyn; Lee, Scott

    2005-01-01

    Antelope Valley College Library has provided online library instruction tutorials for students since 2000. These tutorials were developed in three phases during the last five years. This article describes the evolution of the tutorials including how to create a web based tutorial and accompanying interactive quiz. Hardware and software…

  1. Craniofacial development in marsupial mammals: developmental origins of evolutionary change.

    PubMed

    Smith, Kathleen K

    2006-05-01

    Biologists have long studied the evolutionary consequences of the differences in reproductive and life history strategies of marsupial and eutherian mammals. Over the past few decades, the impact of these strategies on the development of the marsupial embryo and neonate has received attention. In this review, the differences in development in the craniofacial region in marsupial and eutherian mammals will be discussed. The review will highlight differences at the organogenic and cellular levels, and discuss hypotheses for shifts in the expression of important regulatory genes. The major difference in the organogenic period is a whole-scale shift in the relative timing of central nervous system structures, in particular those of the forebrain, which are delayed in marsupials, relative to the structures of the oral-facial apparatus. Correlated with the delay in development of nervous system structures, the ossification of the bones of the neurocranium are delayed, while those of the face are accelerated. This study will also review work showing that the neural crest, which provides much of the cellular material to the facial skeleton and may also carry important patterning information, is notably accelerated in its development in marsupials. Potential consequences of these observations for hypotheses on constraint, evolutionary integration, and the existence of developmental modules is discussed. Finally, the implications of these results for hypotheses on the genetic modulation of craniofacial patterning are presented.

  2. Community Psychology in South Africa: Origins, Developments, and Manifestations

    ERIC Educational Resources Information Center

    Seedat, Mohamed; Lazarus, Sandy

    2011-01-01

    This article represents a South African contribution to the growing international body of knowledge on histories of community psychology. We trace the early antecedents of social-community psychology interventions and describe the social forces and academic influences that provided the impetus for the emergence and development of community…

  3. Development and Evaluation of the GCOM-W1 AMSR2 Snow Depth and Snow Water Equivalent Algorithm

    NASA Astrophysics Data System (ADS)

    Kelly, R. E. J.; Saberi, N.; Li, Q.

    2015-12-01

    An evaluation is presented of snow depth (SD) and snow water equivalent (SWE) estimates from recent developments to the standard snow product algorithm for the Advanced Microwave Scanning Radiometer - 2 (AMSR2) aboard the Global Change Observation Mission - Water. AMSR2 is designed as a follow-on from the successful Advanced Microwave Scanning Radiometer - EOS that ceased formal operations in 2011. The standard SD product for AMSR2 has been updated in two ways. First, the detection algorithm identifies various observable geophysical targets that can confound SD / SWE estimation (water bodies [including freeze/thaw state], rainfall, high altitude plateau regions [e.g. Tibetan plateau]) before detecting moderate and shallow snow. Second, the implementation of the Dense Media Radiative Transfer model (DMRT) originally developed by Tsang et al. (2000) and more recently adapted by Picard et al. (2011) is used to estimate SWE and SD. The implementation combines snow grain size and density parameterizations originally developed by Kelly et al. (2003). Snow grain size is estimated from the tracking of estimated air temperatures that are used to drive an empirical grain growth model. Snow density is estimated from the Sturm et al. (2010) scheme. Efforts have been made to keep the approach tractable while reducing uncertainty in these input variables. Results are presented from the recent winter seasons since 2012 to illustrate the performance of the new approach in comparison with the initial AMSR2 algorithm.

  4. Artificial lightning data as proxy data for the algorithm development for the geostationary lightning imager

    NASA Astrophysics Data System (ADS)

    Finke, U.

    2009-12-01

    The geostationary Meteosat Third Generation (MTG) will carry the Lightning Imager (LI) for the detection and location of total lightning by optical means. The Lightning Imager will continuously observe the full visible disk and provide lightning data with high uniformity over land and ocean during day and night. Its main operational applications are the nowcasting of severe storms and the warning of lightning strike threat. For the development of the data processor prototype a proxy data set is necessary as a reference data set in order to prove the function of the algorithms under the expected observation conditions. Additionally, a set of proxy data simulating the optical pulses originating from lightning can be used to optimize the performance of the detecting instrument. This contribution presents the methodology and the results of the generation of artificial lightning data. The artificial data set is created by random number generators which produce data obeying the same statistical distribution characteristics as real data. The generator is based on the empirical distribution density functions of the lightning characteristics which were derived from optical lightning observations by low orbit satellites (LIS) and ground based observations of lightning. The resulting artificial data represent optical lightning pulses as seen on the upper cloud surface. They are characterized by their distribution on three scales: the distribution of photons in a single lightning pulse, the distribution of lightning flashes in a single storm and the distribution of storms on the globe. The artificial data are used as input for the data processing and product generating algorithms. The elementary products of the lightning imager are the detected lightning pulses with their time, location and optical energy. These data are the basis for the generation of the various meteorological products such as lightning densities in geographical areas, storm cells with their motion
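
    A minimal sketch of such a generator, using inverse-transform sampling on an empirical distribution so that synthetic draws obey the same statistical distribution as previously observed data, is given below; the observed values are invented and do not reproduce the LIS-derived distributions:

```python
import numpy as np

def empirical_sampler(observed_values, seed=None):
    """Build a generator that draws new samples matching the empirical
    distribution of observed values (inverse-transform sampling on the
    empirical CDF)."""
    sorted_vals = np.sort(np.asarray(observed_values, dtype=float))
    probs = (np.arange(len(sorted_vals)) + 0.5) / len(sorted_vals)
    rng = np.random.default_rng(seed)

    def draw(n):
        u = rng.random(n)
        return np.interp(u, probs, sorted_vals)   # interpolate the inverse CDF
    return draw

# Example: observed optical pulse energies (arbitrary units) -> synthetic pulses.
observed = [1.2, 0.4, 3.1, 0.9, 2.2, 0.7, 1.8]
sample_pulse_energy = empirical_sampler(observed, seed=42)
print(sample_pulse_energy(5))
```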

  5. Origins and Hallmarks of Macrophages: Development, Homeostasis, and Disease

    PubMed Central

    Wynn, Thomas A.; Chawla, Ajay; Pollard, Jeffrey W.

    2013-01-01

    Preface Macrophages, the most plastic cells of the hematopoietic system, are found in all tissues and exhibit great functional diversity. They have roles in development, homeostasis, tissue repair, and immunity. While anatomically distinct resident tissue macrophages exhibit different transcriptional profiles and functional capabilities, they are all required for the maintenance of homeostasis. However, these reparative and homeostatic functions can be subverted by chronic insults, resulting in a causal association of macrophages with disease states. In this review, we discuss how macrophages regulate normal physiology and development and provide several examples of their pathophysiologic roles in disease. We define the “hallmarks” of macrophages performing particular functions, taking into account novel insights into the diversity of their lineages, identity, and regulation. This diversity is essential to understand because macrophages have emerged as important therapeutic targets in many important human diseases. PMID:23619691

  6. Multilocular hydrocephalus: ultrasound studies of origin and development.

    PubMed

    Prats, J M; López-Heredia, J; Gener, B; Freijo, M M; Garaizar, C

    2001-02-01

    Multilocular hydrocephalus is a complication of neonatal hydrocephalus. Its main feature is the presence of multiple cysts inside the ventricles, which requires a specific therapeutic approach. The case of a preterm infant with intracranial hemorrhage grade II-III and central nervous system infection is reported. The cysts developed at the subependymal layer in the posterior area of the patient's thalamus. Their growth and development were charted by ultrasound imaging for several weeks. These types of cysts may grow to occupy the totality of the lateral ventricles, isolating the temporal horns. Of all the reviewed pathogenic mechanisms, we support the hypothesis of an inflammatory vasculitis at the subependymal level, with the subsequent infarct giving rise to the cysts. The osmotic pressure within the cavities, rather than intraventricular fluid, would account for the enlargement of the cysts.

  7. Evolutionary Processes in the Development of Errors in Subtraction Algorithms

    ERIC Educational Resources Information Center

    Fernandez, Ricardo Lopez; Garcia, Ana B. Sanchez

    2008-01-01

    The study of errors made in subtraction is a research subject approached from different theoretical premises that affect different components of the algorithmic process as triggers of their generation. In the following research an attempt has been made to investigate the typology and nature of errors which occur in subtractions and their evolution…

  8. Item Selection for the Development of Short Forms of Scales Using an Ant Colony Optimization Algorithm

    ERIC Educational Resources Information Center

    Leite, Walter L.; Huang, I-Chan; Marcoulides, George A.

    2008-01-01

    This article presents the use of an ant colony optimization (ACO) algorithm for the development of short forms of scales. An example 22-item short form is developed for the Diabetes-39 scale, a quality-of-life scale for diabetes patients, using a sample of 265 diabetes patients. A simulation study comparing the performance of the ACO algorithm and…
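
    A bare-bones ACO item-selection loop of the kind described might look like the following sketch; the fitness function here is a toy stand-in for whatever model-fit criterion a real short-form study would use, and all parameter values are illustrative:

```python
import random

def aco_select_items(n_items, k, fitness, n_ants=20, n_iter=50,
                     evaporation=0.1, seed=0):
    """Each 'ant' samples k items with probability proportional to pheromone,
    the best subset found deposits pheromone, and pheromone evaporates."""
    rng = random.Random(seed)
    pheromone = [1.0] * n_items
    best_subset, best_fit = None, float("-inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            weights = pheromone[:]
            subset = []
            for _ in range(k):                       # sample k distinct items
                total = sum(weights)
                r, acc = rng.random() * total, 0.0
                for i, w in enumerate(weights):
                    acc += w
                    if acc >= r:
                        subset.append(i)
                        weights[i] = 0.0             # without replacement
                        break
            fit = fitness(subset)
            if fit > best_fit:
                best_subset, best_fit = subset, fit
        pheromone = [(1 - evaporation) * p for p in pheromone]
        for i in best_subset:
            pheromone[i] += best_fit                 # reinforce the best subset
    return sorted(best_subset), best_fit

# Toy fitness: prefer items with higher (made-up) item-total correlations.
item_quality = [0.2, 0.9, 0.5, 0.8, 0.3, 0.7, 0.1, 0.6]
print(aco_select_items(8, 3, lambda s: sum(item_quality[i] for i in s)))
```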

  9. Notes on quantitative structure-property relationships (QSPR), part 3: density functions origin shift as a source of quantum QSPR algorithms in molecular spaces.

    PubMed

    Carbó-Dorca, Ramon

    2013-04-05

    A general algorithm implementing a useful variant of quantum quantitative structure-property relationships (QQSPR) theory is described. Based on the quantum similarity framework and previous theoretical developments on the subject, the present QQSPR procedure relies on the possibility of performing geometrical origin shifts over molecular density function sets. In this way, molecular collections attached to known properties can be easily used over other quantum mechanically well-described molecular structures for the estimation of their unknown property values. The proposed procedure takes the quantum mechanical expectation value as the provider of the causal relation background and overcomes the dimensionality paradox, which haunts classical descriptor space QSPR. Also, contrary to classical procedures, which are tied to heavy statistical machinery, the present QQSPR approach might use a geometrical assessment only, just some simple statistical outline, or both. From an applied point of view, several easily reachable computational levels can be set up. A Fortran 95 program, QQSPR-n, is described with two versions, which might be downloaded from a dedicated web site. Various practical examples are provided, yielding excellent results. Finally, it is also shown that an equivalent molecular space classical QSPR formalism can be easily developed.

  10. Origins and development of adult education innovations in Tanzania

    NASA Astrophysics Data System (ADS)

    Mushi, Philemon A. K.

    1991-09-01

    A number of adult education innovations were introduced in Tanzania in the late 1960s and early 1970s. This article analyzes the context of three innovations, namely functional literacy, workers' education and the programme of the Folk Development Colleges. The analysis reveals that these innovations had firm roots within the socio-economic conditions prevailing in the country in the 1960s and 1970s, Nyerere's influence as President and Party leader, Tanzania's ideology of development, the policy of popular participation, the roots of educational policy in a humanistic philosophy of education, and indigenous education. Some of the factors which affected their implementation included lack of trained educators, inadequate financial resources, ineffective evaluation mechanisms, and a mis-match between participants' needs and actual programmes. It is suggested that there is a need to introduce economic innovations alongside educational innovations, to involve participants in determining their training needs, and to train and retain adult educators with a view to improving adult education initiatives in the country.

  11. Origins and early development of human body knowledge.

    PubMed

    Slaughter, Virginia; Heron, Michelle

    2004-01-01

    As a knowable object, the human body is highly complex. Evidence from several converging lines of research, including psychological studies, neuroimaging and clinical neuropsychology, indicates that human body knowledge is widely distributed in the adult brain, and is instantiated in at least three partially independent levels of representation. Sensorimotor body knowledge is responsible for on-line control and movement of one's own body and may also contribute to the perception of others' moving bodies; visuo-spatial body knowledge specifies detailed structural descriptions of the spatial attributes of the human body; and lexical-semantic body knowledge contains language-based knowledge about the human body. In the first chapter of this Monograph, we outline the evidence for these three hypothesized levels of human body knowledge, then review relevant literature on infants' and young children's human body knowledge in terms of the three-level framework. In Chapters II and III, we report two complementary series of studies that specifically investigate the emergence of visuo-spatial body knowledge in infancy. Our technique is to compare infants' responses to typical and scrambled human bodies, in order to evaluate when and how infants acquire knowledge about the canonical spatial layout of the human body. Data from a series of visual habituation studies indicate that infants first discriminate scrambled from typical human body pictures at 15 to 18 months of age. Data from object examination studies similarly indicate that infants are sensitive to violations of three-dimensional human body stimuli starting at 15-18 months of age. The overall pattern of data supports several conclusions about the early development of human body knowledge: (a) detailed visuo-spatial knowledge about the human body is first evident in the second year of life, (b) visuo-spatial knowledge of human faces and human bodies are at least partially independent in infancy and (c) infants' initial

  12. An organismal perspective on C. intestinalis development, origins and diversification

    PubMed Central

    Kourakis, Matthew J; Smith, William C

    2015-01-01

    The ascidian Ciona intestinalis, commonly known as a ‘sea squirt’, has become an important model for embryological studies, offering a simple blueprint for chordate development. As a model organism, it offers the following: a small, compact genome; a free swimming larva with only about 2600 cells; and an embryogenesis that unfolds according to a predictable program of cell division. Moreover, recent phylogenies reveal that C. intestinalis occupies a privileged branch in the tree of life: it is our nearest invertebrate relative. Here, we provide an organismal perspective of C. intestinalis, highlighting aspects of its life history and habitat—from its brief journey as a larva to its radical metamorphosis into adult form—and relate these features to its utility as a laboratory model. DOI: http://dx.doi.org/10.7554/eLife.06024.001 PMID:25807088

  13. Origin and development of the germ line in sea stars

    PubMed Central

    Wessel, Gary M.; Fresques, Tara; Kiyomoto, Masato; Yajima, Mamiko; Zazueta, Vanesa

    2014-01-01

    This review summarizes and integrates our current understanding of how sea stars make gametes. Although little is known of the mechanism of germ line formation in these animals, recent results point to specific cells and to cohorts of molecules in the embryos and larvae that may lay the groundwork for future research efforts. A coelomic outpocketing forms in the posterior of the gut in larvae, referred to as the posterior enterocoel (PE), that, when removed, significantly reduces the number of germ cells later in larval growth. This same PE structure also selectively accumulates several germ-line associated factors – vasa, nanos, piwi – and excludes factors involved in somatic cell fate. Since its formation is relatively late in development, these germ cells may form by inductive mechanisms. When integrated into the morphological observations of germ cells and gonad development in larvae, juveniles, and adults, the field of germ line determination appears to have a good model system to study inductive germ line determination to complement the recent work on the molecular mechanisms in mice. We hope this review will also guide investigators interested in germ line determination and regulation of the germ line in how these animals can help in this research field. The review is not intended to be comprehensive – sea star reproduction has been studied for over 100 years and many reviews are comprehensive in their coverage of, for example, seasonal growth of the gonads in response to light, nutrient, and temperature. Rather, the intent of this review is to help the reader focus on new experimental results attached to the historical underpinnings of how the germ cell functions in sea stars, with particular emphasis on clarifying the important areas of priority for future research. PMID:24648114

  14. [ICNP- International Classification of Nursing Practice: origin, structure and development].

    PubMed

    Marucci, Anna Rita; De Caro, Walter; Petrucci, Cristina; Lancia, Loreto; Sansoni, Julita

    2015-01-01

    ICNP is a standardized nursing terminology included among the terminologies acknowledged by WHO, and it is a relevant aspect of ICN programs and strategies. This paper aims to describe the structure and characteristics of the ICNP terminology and to highlight how this tool can be useful both in practice and in terms of nursing professional development. The current version is structured as a pyramid with seven axes describing different areas of nursing and related interventions, enriched by two special axes for pre-coordinated Diagnoses/Outcomes (DC) and Interventions (IC) which facilitate daily use in practice. To clarify how this tool can actually be used in daily nursing practice, some examples are provided showing how Diagnoses/Outcomes and Interventions can be built with the current version of the ICNP terminology (2015 release). The ICNP Italian Centre is committed to introducing it to Italian nurses as a tool for sharing and disseminating the terminology in Italy, with the ultimate aim of achieving the professional visibility objectives promoted in different ways by the International Council of Nurses.

  15. The development and evaluation of numerical algorithms for MIMD computers

    NASA Technical Reports Server (NTRS)

    Voigt, Robert G.

    1990-01-01

    Two activities were pursued under this grant. The first was a visitor program to conduct research on numerical algorithms for MIMD computers. The program is summarized in the following attachments: Attachment A - List of Researchers Supported; Attachment B - List of Reports Completed; and Attachment C - Reports. The second activity was a workshop on the Control of Fluid Dynamic Systems held on March 28 to 29, 1989. The workshop is summarized in the following attachments: Attachment D - Workshop Summary; and Attachment E - List of Workshop Participants.

  16. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm" and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm", is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real-time requirement without degrading the quality of the motion cues.

  17. Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and also descriptions and flowcharts for subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms have a nonlinear gain algorithm implemented that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree-of-freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.
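
    The report itself gives no code, but the nonlinear gain idea described above can be illustrated with a short sketch: each aircraft degree-of-freedom input is scaled by a third-order polynomial whose coefficients are tuned per axis. The coefficient values, axis names and function below are illustrative assumptions, not values from the VMS or CMF implementations.

```python
# Illustrative sketch of a per-degree-of-freedom third-order polynomial gain,
# in the spirit of the nonlinear gain algorithm described above.
# The coefficients are made-up placeholders, not tuned simulator values.

DOF_COEFFS = {
    # dof: (c1, c2, c3) used as y = c1*x + c2*x*|x| + c3*x**3
    "surge": (0.8, 0.05, 0.01),
    "sway":  (0.8, 0.05, 0.01),
    "heave": (0.6, 0.10, 0.02),
    "roll":  (0.9, 0.02, 0.005),
    "pitch": (0.9, 0.02, 0.005),
    "yaw":   (0.9, 0.02, 0.005),
}

def nonlinear_gain(dof: str, x: float) -> float:
    """Scale one degree-of-freedom input with a third-order polynomial.

    Small inputs pass through nearly linearly; large inputs are shaped by the
    quadratic and cubic terms, which is what per-axis tuning adjusts.
    """
    c1, c2, c3 = DOF_COEFFS[dof]
    return c1 * x + c2 * x * abs(x) + c3 * x ** 3

if __name__ == "__main__":
    for x in (-1.0, -0.2, 0.0, 0.2, 1.0):
        print(x, round(nonlinear_gain("heave", x), 4))
```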

  18. Swedish Upper Secondary Students' Views of the Origin and Development of the Universe

    ERIC Educational Resources Information Center

    Hansson, Lena; Redfors, Andreas

    2006-01-01

    The article addresses how students reason about the origin and development of the universe. Students' own views as well as their descriptions of physical models are analysed. Data consist of written surveys and interviews with a subset of the students. Most of the students relate to the Big Bang model when describing the origin of the…

  19. Design requirements and development of an airborne descent path definition algorithm for time navigation

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering functions are described.

  20. Update on Development of Mesh Generation Algorithms in MeshKit

    SciTech Connect

    Jain, Rajeev; Vanderzee, Evan; Mahadevan, Vijay

    2015-09-30

    MeshKit uses a graph-based design for coding all its meshing algorithms, which includes the Reactor Geometry (and mesh) Generation (RGG) algorithms. This report highlights the developmental updates of all the algorithms, results and future work. Parallel versions of algorithms, documentation and performance results are reported. RGG GUI design was updated to incorporate new features requested by the users; boundary layer generation and parallel RGG support were added to the GUI. Key contributions to the release, upgrade and maintenance of other SIGMA1 libraries (CGM and MOAB) were made. Several fundamental meshing algorithms for creating a robust parallel meshing pipeline in MeshKit are under development. Results and current status of automated, open-source and high quality nuclear reactor assembly mesh generation algorithms such as trimesher, quadmesher, interval matching and multi-sweeper are reported.

  1. Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.

    1997-01-01

    The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Proposed future developments by the authors in cueing algorithms are revealed. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.

  2. Development of Online Cognitive and Algorithm Tests as Assessment Tools in Introductory Computer Science Courses

    ERIC Educational Resources Information Center

    Avancena, Aimee Theresa; Nishihara, Akinori; Vergara, John Paul

    2012-01-01

    This paper presents the online cognitive and algorithm tests, which were developed in order to determine if certain cognitive factors and fundamental algorithms correlate with the performance of students in their introductory computer science course. The tests were implemented among Management Information Systems majors from the Philippines and…

  3. Collaborative Research Developing, Testing and Validating Brain Alignment Algorithm using Geometric Analysis

    DTIC Science & Technology

    2013-11-13

    This is the final report by the University of Southern California on an AFOSR grant, part of a joint program with Harvard University (PI, Shing-Tung...the algorithm was the task assigned to Harvard University). Finally, we were to test and validate the algorithm once it had been developed.

  4. Algorithm Development and Application of High Order Numerical Methods for Shocked and Rapid Changing Solutions

    DTIC Science & Technology

    2007-12-06

    The problems studied in this project involve numerically solving partial differential equations with either discontinuous or rapidly changing solutions. High order numerical methods, such as discontinuous Galerkin finite element methods, are developed and applied to these problems.

  5. Clustering algorithm evaluation and the development of a replacement for procedure 1. [for crop inventories

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Johnson, J. K.

    1979-01-01

    An efficient procedure is developed that clusters data using a completely unsupervised clustering algorithm and then uses labeled pixels either to label the resulting clusters or to perform a stratified estimate using the clusters as strata. Three clustering algorithms, CLASSY, AMOEBA, and ISOCLS, are compared for efficiency. Three stratified estimation schemes and three labeling schemes are also considered and compared.
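
    As a rough illustration of the procedure's two stages (unsupervised clustering followed by cluster labeling or stratified estimation from a small set of labeled pixels), the sketch below uses k-means as a stand-in for CLASSY/AMOEBA/ISOCLS; the synthetic data and the majority-vote labeling rule are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Unlabeled pixel spectra (n_pixels x n_bands) -- synthetic stand-in data.
X = rng.normal(size=(1000, 4))

# Stage 1: completely unsupervised clustering (k-means stands in for CLASSY/AMOEBA/ISOCLS).
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

# A small set of labeled pixels (indices into X plus their crop labels).
labeled_idx = rng.choice(len(X), size=50, replace=False)
labels = rng.integers(0, 2, size=50)  # e.g. 0 = "other", 1 = "wheat"

# Stage 2a: label each cluster by majority vote of the labeled pixels it contains.
cluster_label = {}
for c in range(km.n_clusters):
    in_cluster = labels[km.labels_[labeled_idx] == c]
    cluster_label[c] = int(np.round(in_cluster.mean())) if len(in_cluster) else 0
print("cluster labels:", cluster_label)

# Stage 2b (alternative): stratified estimate of the class proportion, clusters as strata.
weights = np.bincount(km.labels_, minlength=km.n_clusters) / len(X)
strata_means = [labels[km.labels_[labeled_idx] == c].mean()
                if (km.labels_[labeled_idx] == c).any() else 0.0
                for c in range(km.n_clusters)]
print("stratified wheat proportion estimate:", float(np.dot(weights, strata_means)))
```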

  6. Evaluating Knowledge Structure-Based Adaptive Testing Algorithms and System Development

    ERIC Educational Resources Information Center

    Wu, Huey-Min; Kuo, Bor-Chen; Yang, Jinn-Min

    2012-01-01

    In recent years, many computerized test systems have been developed for diagnosing students' learning profiles. Nevertheless, it remains a challenging issue to find an adaptive testing algorithm to both shorten testing time and precisely diagnose the knowledge status of students. In order to find a suitable algorithm, four adaptive testing…

  7. Development of a Behavioural Algorithm for Autonomous Spacecraft

    NASA Astrophysics Data System (ADS)

    Radice, G.

    manner with the environment through the use of sensors and actuators. As such, there is little computational effort required to implement such an approach, which is clearly of great benefit for limited micro-satellites. Rather than using complex world models, which have to be updated, the agent is allowed to exploit the dynamics of its environment for cues as to appropriate actions to take to achieve mission goals. The particular artificial agent implementation used here has been borrowed from studies of biological systems, where it has been used successfully to provide models of motivation and opportunistic behaviour. The so-called "cue-deficit" action selection algorithm considers the micro-spacecraft to be a nonlinear dynamical system with a number of observable states. Using optimal control theory, rules are derived that determine which of a finite repertoire of behaviours the satellite should select and perform. It will also be shown that, in the event of hardware failures, the algorithm will resequence the spacecraft actions to ensure survival while still meeting the mission goals, albeit in a degraded manner.
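
    The abstract gives no equations, but the flavour of a cue-deficit action-selection rule can be sketched as follows: the spacecraft tracks a deficit for each monitored state and selects, at each step, the behaviour whose predicted deficit reduction is greatest. The states, behaviours and rates below are invented for illustration and do not come from the paper.

```python
# Hypothetical sketch of cue-deficit action selection: states, behaviours and
# per-step effects are invented; only the selection idea (greedily reduce the
# total weighted deficit) reflects the approach described above.

DEFICIT_TARGETS = {"battery": 1.0, "data_downlinked": 1.0, "pointing": 1.0}

# Expected per-step effect of each behaviour on each normalized state (0..1).
BEHAVIOUR_EFFECTS = {
    "sun_point_and_charge": {"battery": +0.15, "data_downlinked": 0.0, "pointing": -0.05},
    "downlink_data":        {"battery": -0.10, "data_downlinked": +0.20, "pointing": 0.0},
    "fine_pointing":        {"battery": -0.05, "data_downlinked": 0.0, "pointing": +0.20},
}

def select_behaviour(state: dict) -> str:
    """Pick the behaviour that most reduces the total squared deficit."""
    def total_deficit(s):
        return sum((DEFICIT_TARGETS[k] - v) ** 2 for k, v in s.items())
    best, best_score = None, float("inf")
    for name, effect in BEHAVIOUR_EFFECTS.items():
        predicted = {k: min(1.0, max(0.0, state[k] + effect.get(k, 0.0))) for k in state}
        score = total_deficit(predicted)
        if score < best_score:
            best, best_score = name, score
    return best

if __name__ == "__main__":
    state = {"battery": 0.4, "data_downlinked": 0.7, "pointing": 0.9}
    print(select_behaviour(state))  # expected: "sun_point_and_charge"
```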

  8. Algorithm and simulation development in support of response strategies for contamination events in air and water systems.

    SciTech Connect

    Waanders, Bart Van Bloemen

    2006-01-01

    Chemical/Biological/Radiological (CBR) contamination events pose a considerable threat to our nation's infrastructure, especially in large internal facilities, external flows, and water distribution systems. Because physical security can only be enforced to a limited degree, deployment of early warning systems is being considered. However, to achieve reliable and efficient functionality, several complex questions must be answered: (1) where should sensors be placed, (2) how can sparse sensor information be efficiently used to determine the location of the original intrusion, (3) what are the model and data uncertainties, (4) how should these uncertainties be handled, and (5) how can our algorithms and forward simulations be sufficiently improved to achieve real-time performance? This report presents the results of a three-year algorithm and application development effort to support the identification, mitigation, and risk assessment of CBR contamination events. The main thrust of this investigation was to develop (1) computationally efficient algorithms for strategically placing sensors, (2) an identification process for contamination events that uses sparse observations, (3) characterization of uncertainty through accurate demand forecasts and investigation of uncertain simulation model parameters, (4) risk assessment capabilities, and (5) reduced-order modeling methods. The development effort was focused on water distribution systems, large internal facilities, and outdoor areas.
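
    Question (1) above, where to place sensors, is often attacked with greedy set-cover style heuristics; the sketch below places sensors to maximize the number of contamination scenarios detected, using an invented random detection matrix as a stand-in for transport-simulation output (this is not Sandia's formulation, only an illustration of the idea).

```python
import numpy as np

rng = np.random.default_rng(5)

# detects[s, j] = True if a sensor at candidate location j would detect scenario s.
# This random matrix stands in for the output of transport simulations.
n_scenarios, n_locations = 200, 30
detects = rng.random((n_scenarios, n_locations)) < 0.08

def greedy_placement(detects, budget):
    """Greedily pick sensor locations that cover the most undetected scenarios."""
    chosen, covered = [], np.zeros(detects.shape[0], dtype=bool)
    for _ in range(budget):
        gains = (detects & ~covered[:, None]).sum(axis=0)
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break
        chosen.append(best)
        covered |= detects[:, best]
    return chosen, covered.mean()

locations, coverage = greedy_placement(detects, budget=5)
print("sensor locations:", locations, "fraction of scenarios detected:", round(coverage, 2))
```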

  9. METHOD DEVELOPMENT FOR THE DETERMINATION OF FORMALDEHYDE IN SAMPLES OF ENVIRONMENTAL ORIGIN

    EPA Science Inventory

    An analytical method was developed for the determination of formaldehyde in samples of environmental origin. After a review of the current literature, five candidate methods involving chemical derivatization were chosen for evaluation. The five derivatization reagents studied wer...

  10. A model-based parallel origin and orientation refinement algorithm for cryoTEM and its application to the study of virus structures

    PubMed Central

    Ji, Yongchang; Marinescu, Dan C.; Zhang, Wei; Zhang, Xing; Yan, Xiaodong; Baker, Timothy S.

    2014-01-01

    We present a model-based parallel algorithm for origin and orientation refinement for 3D reconstruction in cryoTEM. The algorithm is based upon the Projection Theorem of the Fourier Transform. Rather than projecting the current 3D model and searching for the best match between an experimental view and the calculated projections, the algorithm computes the Discrete Fourier Transform (DFT) of each projection and searches for the central section (“cut”) of the 3D DFT that best matches the DFT of the projection. Factors that affect the efficiency of a parallel program are first reviewed and then the performance and limitations of the proposed algorithm are discussed. The parallel program that implements this algorithm, called PO2R, has been used for the refinement of several virus structures, including those of the 500 Å diameter dengue virus (to 9.5 Å resolution), the 850 Å mammalian reovirus (to better than 7 Å), and the 1800 Å paramecium bursaria chlorella virus (to 15 Å). PMID:16459100
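
    A minimal numerical check of the Projection Theorem the algorithm relies on (the DFT of a projection equals a central section of the 3D DFT) can be written in a few lines; the random test volume and the axis-aligned projection below are simplifications, since PO2R itself searches over arbitrary orientations with interpolation.

```python
import numpy as np

# Projection Theorem sanity check: the 2D DFT of a projection of a 3D volume
# along one axis equals the central section (plane through the origin) of the
# volume's 3D DFT perpendicular to that axis. The algorithm above exploits this
# to compare an experimental view with candidate central sections instead of
# re-projecting the model.

rng = np.random.default_rng(1)
vol = rng.random((32, 32, 32))

projection = vol.sum(axis=0)              # project along the first axis
dft_projection = np.fft.fftn(projection)  # 2D DFT of the projection

dft_volume = np.fft.fftn(vol)             # 3D DFT of the volume
central_section = dft_volume[0, :, :]     # k = 0 plane: the central section

print(np.allclose(dft_projection, central_section))  # True
```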

  11. Developing a paradigm of drug innovation: an evaluation algorithm.

    PubMed

    Caprino, Luciano; Russo, Pierluigi

    2006-11-01

    Assessment of drug innovation is a burning issue because it involves so many different perspectives, mainly those of patients, decision- and policy-makers, regulatory authorities and pharmaceutical companies. Moreover, the innovative value of a new medicine is usually an intrinsic property of the compound, but it also depends on the specific context in which the medicine is introduced and the availability of other medicines for treating the same clinical condition. Thus, a model designed to assess drug innovation should be able to capture the intrinsic properties of a compound (which usually emerge during R&D) and/or modification of its innovative value with time. Here we describe the innovation assessment algorithm (IAA), a simulation model for assessing drug innovation. IAA provides a score of drug innovation by assessing information generated during both the pre-marketing and the post-marketing authorization phase.

  12. Ice classification algorithm development and verification for the Alaska SAR Facility using aircraft imagery

    NASA Technical Reports Server (NTRS)

    Holt, Benjamin; Kwok, Ronald; Rignot, Eric

    1989-01-01

    The Alaska SAR Facility (ASF) at the University of Alaska, Fairbanks is a NASA program designed to receive, process, and archive SAR data from ERS-1 and to support investigations that will use this regional data. As part of ASF, specialized subsystems and algorithms to produce certain geophysical products from the SAR data are under development. Of particular interest are ice motion, ice classification, and ice concentration. This work focuses on the algorithm under development for ice classification, and the verification of the algorithm using C-band aircraft SAR imagery recently acquired over the Alaskan arctic.

  13. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    NASA Astrophysics Data System (ADS)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each static LUT file in such a way that the correct set of LUTs required by each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  14. Development of an automatic identification algorithm for antibiogram analysis.

    PubMed

    Costa, Luan F R; da Silva, Eduardo S; Noronha, Victor T; Vaz-Moreira, Ivone; Nunes, Olga C; Andrade, Marcelino M de

    2015-12-01

    Routinely, diagnostic and microbiology laboratories perform antibiogram analysis, which can present some difficulties leading to misreadings and intra- and inter-reader deviations. An Automatic Identification Algorithm (AIA) has been proposed as a solution to overcome some issues associated with the disc diffusion method, which is the main goal of this work. AIA allows automatic scanning of the inhibition zones obtained in antibiograms. More than 60 environmental isolates were tested using susceptibility tests performed for 12 different antibiotics, for a total of 756 readings. Plate images were acquired and classified as standard or oddity. The inhibition zones were measured using the AIA and the results were compared with the reference method (human reading), using the weighted kappa index and statistical analysis to evaluate, respectively, inter-reader agreement and the correlation between AIA-based and human-based readings. Agreement was observed in 88% of cases, and 89% of the tests showed no difference or a <4 mm difference between AIA and human analysis, exhibiting a correlation index of 0.85 for all images, 0.90 for standards and 0.80 for oddities, with no significant difference between the automatic and manual methods. AIA resolved some reading problems such as overlapping inhibition zones, imperfect microorganism seeding, non-homogeneity of the circumference, partial action of the antimicrobial, and formation of a second halo of inhibition. Furthermore, AIA proved to overcome some of the limitations observed in other automatic methods. Therefore, AIA may be a practical tool for automated reading of antibiograms in diagnostic and microbiology laboratories.
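
    The inter-reader agreement statistic used above (weighted kappa) is straightforward to reproduce; the toy reading vectors and category bins below are invented, and scikit-learn's cohen_kappa_score is used as one common implementation.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Invented example: inhibition-zone diameters (mm) read by the automatic
# algorithm and by a human, binned into ordinal categories before computing
# a linearly weighted kappa, as is typical for measurements on an ordered scale.
auto_mm  = np.array([22, 18, 0, 30, 15, 25, 10, 28])
human_mm = np.array([21, 19, 0, 29, 17, 25, 12, 27])

bins = [0, 1, 10, 20, 30, 50]                 # category edges in mm
auto_cat  = np.digitize(auto_mm, bins)
human_cat = np.digitize(human_mm, bins)

kappa = cohen_kappa_score(auto_cat, human_cat, weights="linear")
print(f"linearly weighted kappa: {kappa:.2f}")
```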

  15. Developing a computer algorithm to identify epilepsy cases in managed care organizations.

    PubMed

    Holden, E Wayne; Grossman, Elizabeth; Nguyen, Hoang Thanh; Gunter, Margaret J; Grebosky, Becky; Von Worley, Ann; Nelson, Leila; Robinson, Scott; Thurman, David J

    2005-02-01

    The goal of this study was to develop an algorithm for detecting epilepsy cases in managed care organizations (MCOs). A data set of potential epilepsy cases was constructed from an MCO's administrative data system for all health plan members continuously enrolled in the MCO for at least 1 year within the study period of July 1, 1996 through June 30, 1998. Epilepsy status was determined using medical record review for a sample of 617 cases. The best algorithm for detecting epilepsy cases was developed by examining combinations of diagnosis, diagnostic procedures, and medication use. The best algorithm derived in the exploratory phase was then applied to a new set of data from the same MCO covering the period of July 1, 1998 through June 30, 2000. A stratified sample based on ethnicity and age was drawn from the preliminary algorithm-identified epilepsy cases and non-cases. Medical record review was completed for 644 cases to determine the accuracy of the algorithm. Data from both phases were combined to permit refinement of logistic regression models and to provide more stable estimates of the parameters. The best model used diagnoses and antiepileptic drugs as predictors and had a positive predictive value of 84% (sensitivity 82%, specificity 94%). The best model correctly classified 90% of the cases. A stable algorithm that can be used to identify epilepsy patients within MCOs was developed. Implications for use of the algorithm in other health care settings are discussed.
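
    The final model described above combines diagnosis codes and antiepileptic-drug use in a logistic regression; the sketch below shows that structure on synthetic administrative data (all feature definitions, coefficients and sample sizes are assumptions, not the study's data).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(2)
n = 1261  # roughly the size of the two chart-review samples combined

# Synthetic predictors: count of epilepsy diagnosis codes and an any-AED flag.
dx_count = rng.poisson(0.5, n)
aed_use = rng.integers(0, 2, n)
X = np.column_stack([dx_count, aed_use])

# Synthetic "true" epilepsy status, loosely driven by the same predictors.
logit = -3.0 + 1.5 * dx_count + 2.0 * aed_use
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
pred = model.predict(X)
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("PPV:", tp / (tp + fp), "sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```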

  16. Generic architecture for real-time multisensor fusion tracking algorithm development and evaluation

    NASA Astrophysics Data System (ADS)

    Queeney, Tom; Woods, Edward

    1994-10-01

    Westinghouse has developed and demonstrated a system for the rapid prototyping of Sensor Fusion Tracking (SFT) algorithms. The system provides an object-oriented envelope with three sets of generic software objects to aid in the development and evaluation of SFT algorithms. The first is a generic tracker model that encapsulates the idea of a tracker being a series of SFT algorithms along with the data manipulated by those algorithms and is capable of simultaneously supporting multiple, independent trackers. The second is a set of flexible, easily extensible sensor and target models which allows many types of sensors and targets to be used. Live, recorded and simulated sensors and combinations thereof can be utilized as sources for the trackers. The sensor models also provide an easily extensible interface to the generic tracker model so that all sensors provide input to the SFT algorithms in the same fashion. The third is a highly versatile display and user interface that allows easy access to many of the performance measures for sensors and trackers for easy evaluation and debugging of the SFT algorithms. The system is an object-oriented design programmed in C++. This system with several of the SFT algorithms developed for it has been used with live sensors as a real-time tracking system. This paper outlines the salient features of the sensor fusion architecture and programming environment.

  17. Analysis and Classification of Stride Patterns Associated with Children Development Using Gait Signal Dynamics Parameters and Ensemble Learning Algorithms

    PubMed Central

    Wu, Meihong; Liao, Lifang; Luo, Xin; Ye, Xiaoquan; Yao, Yuchen; Chen, Pinnan; Shi, Lei; Huang, Hui

    2016-01-01

    Measuring stride variability and dynamics in children is useful for the quantitative study of gait maturation and neuromotor development in childhood and adolescence. In this paper, we computed the sample entropy (SampEn) and average stride interval (ASI) parameters to quantify the stride series of 50 gender-matched child participants in three age groups. We also normalized the SampEn and ASI values by leg length and body mass for each participant, respectively. Results show that the original and normalized SampEn values decrease significantly (Mann-Whitney U test, p < 0.01) across children aged 3–14 years, which indicates that stride irregularity is significantly ameliorated with body growth. The original and normalized ASI values also change significantly when comparing any two of the young (aged 3–5 years), middle (aged 6–8 years), and elder (aged 10–14 years) groups. Such results suggest that healthy children may better modulate their gait cadence rhythm with the development of their musculoskeletal and neurological systems. In addition, the AdaBoost.M2 and Bagging algorithms were used to effectively distinguish the children's gait patterns. These ensemble learning algorithms both provided excellent gait classification results in terms of overall accuracy (≥90%), recall (≥0.8), and precision (≥0.8077). PMID:27034952
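
    For readers who want to reproduce the stride-interval analysis, a compact sample-entropy implementation is sketched below; the parameter choices (m = 2, r = 0.2·std) are common defaults rather than necessarily those used in the paper, and the stride series is synthetic.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) of a 1-D series: -ln(A/B), where B counts template matches
    of length m and A counts matches of length m + 1 (self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def count_matches(mm):
        # Use the first n - m templates so lengths m and m + 1 share one index range.
        templates = np.array([x[i:i + mm] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(dist <= r) - 1  # exclude the self-match
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Synthetic stride-interval series (seconds); ASI is simply its mean.
rng = np.random.default_rng(3)
strides = 1.1 + 0.05 * rng.standard_normal(300)
print("SampEn:", round(sample_entropy(strides), 3), "ASI:", round(strides.mean(), 3))
```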

  18. The development of an algebraic multigrid algorithm for symmetric positive definite linear systems

    SciTech Connect

    Vanek, P.; Mandel, J.; Brezina, M.

    1996-12-31

    An algebraic multigrid algorithm for symmetric, positive definite linear systems is developed based on the concept of prolongation by smoothed aggregation. Coarse levels are generated automatically. We present a set of requirements motivated heuristically by a convergence theory. The algorithm then attempts to satisfy the requirements. Inputs to the method are the coefficient matrix and the zero energy modes, which are determined from nodal coordinates and knowledge of the differential equation. Efficiency of the resulting algorithm is demonstrated by computational results on real-world problems from solid elasticity, plate bending, and shells.
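
    A bare-bones sketch of prolongation by smoothed aggregation, the core idea above, is shown below for a 1-D Laplacian; the pairwise aggregation and the damping parameter are simplifications chosen for illustration, not the aggregation strategy of the paper.

```python
import numpy as np
import scipy.sparse as sp

# 1-D Poisson matrix (symmetric positive definite) as a toy fine-level operator.
n = 16
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")

# Step 1: aggregate nodes (here: simple neighbouring pairs) and build the
# tentative prolongator P_tent, piecewise constant over aggregates (the
# zero-energy mode of the Laplacian is the constant vector).
n_agg = n // 2
rows = np.arange(n)
cols = rows // 2
P_tent = sp.csr_matrix((np.ones(n), (rows, cols)), shape=(n, n_agg))

# Step 2: smooth the tentative prolongator, P = (I - omega * D^{-1} A) P_tent.
omega = 2.0 / 3.0
Dinv = sp.diags(1.0 / A.diagonal())
P = (sp.eye(n) - omega * Dinv @ A) @ P_tent

# Step 3: Galerkin coarse-level operator.
A_coarse = P.T @ A @ P
print("fine:", A.shape, "coarse:", A_coarse.shape)
```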

  19. Development of a Bayesian recursive algorithm to find free-spaces for an intelligent wheelchair.

    PubMed

    Nguyen, Anh V; Su, Steven; Nguyen, Hung T

    2011-01-01

    This paper introduces a new shared control strategy for an intelligent wheelchair using a Bayesian recursive algorithm. Using the local environment information gathered by a laser range finder sensor and commands acquired through a user interface, a Bayesian recursive algorithm has been developed to find the most appropriate free-space, which corresponds to the highest posterior probability value. Then, an autonomous navigation algorithm assists in manoeuvring the wheelchair into the chosen free-space. Experimental results demonstrate that the new method provides excellent performance with great flexibility and fast response.
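
    The abstract does not give the update equations, but a generic recursive Bayes filter over a discretized set of candidate free-space directions, fusing a laser-derived likelihood with a user-command cue, might look like the sketch below; all sensor models and numbers are invented for illustration.

```python
import numpy as np

# Hypothetical sketch: candidate free-space directions (degrees, robot frame).
directions = np.array([-60, -30, 0, 30, 60], dtype=float)
belief = np.full(len(directions), 1.0 / len(directions))  # uniform prior

def laser_likelihood(clearances_m):
    """More clearance (from the laser range finder) -> more likely free-space."""
    lik = np.clip(clearances_m, 0.0, 3.0)
    return lik / lik.sum()

def command_likelihood(joystick_deg, sigma=25.0):
    """The user command acts as a soft cue centred on the commanded direction."""
    lik = np.exp(-0.5 * ((directions - joystick_deg) / sigma) ** 2)
    return lik / lik.sum()

def bayes_update(belief, *likelihoods):
    post = belief.copy()
    for lik in likelihoods:
        post *= lik
    return post / post.sum()

# One cycle: the laser sees the most clearance ahead-left, the user pushes slightly left.
belief = bayes_update(belief,
                      laser_likelihood(np.array([0.5, 2.5, 2.0, 1.0, 0.4])),
                      command_likelihood(-20.0))
print("chosen free-space direction:", directions[np.argmax(belief)], "deg")
```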

  20. The Origin, Goals, and Development of a Clinical Pharmacy Emphasis in Pharmacy Education and Practice.

    ERIC Educational Resources Information Center

    Smith, Harry A.; Swintosky, Joseph V.

    1983-01-01

    The origin, goals, and development of a clinical emphasis are reviewed, beginning with some fundamental developments in pharmacy practice and education brought about by economic, political, social, scientific, and technological forces. The challenge of fitting the desirable curriculum element into a limited program length is discussed. (MSE)

  1. MODIS algorithm development and data visualization using ACTS

    NASA Technical Reports Server (NTRS)

    Abbott, Mark R.

    1992-01-01

    The study of the Earth as a system will require the merger of scientific and data resources on a much larger scale than has been done in the past. New methods of scientific research, particularly in the development of geographically dispersed, interdisciplinary teams, are necessary if we are to understand the complexity of the Earth system. Even the planned satellite missions themselves, such as the Earth Observing System, will require much more interaction between researchers and engineers if they are to produce scientifically useful data products. A key component in these activities is the development of flexible, high bandwidth data networks that can be used to move large amounts of data as well as allow researchers to communicate in new ways, such as through video. The capabilities of the Advanced Communications Technology Satellite (ACTS) will allow the development of such networks. The Pathfinder global AVHRR data set and the upcoming SeaWiFS Earthprobe mission would serve as a testbed in which to develop the tools to share data and information among geographically distributed researchers. Our goal is to develop a 'Distributed Research Environment' that can be used as a model for scientific collaboration in the EOS era. The challenge is to unite the advances in telecommunications with the parallel advances in computing and networking.

  2. Deciphering the Minimal Algorithm for Development and Information-genesis

    NASA Astrophysics Data System (ADS)

    Li, Zhiyuan; Tang, Chao; Li, Hao

    During development, cells with identical genomes acquire different fates in a highly organized manner. In order to decipher the principles underlying development, we used C. elegans as the model organism. Based on a large set of microscopy images, we first constructed a "standard worm" in silico: from the single zygotic cell to about the 500-cell stage, the lineage, position, cell-cell contact and gene expression dynamics were quantified for each cell in order to investigate the principles underlying these extensive data. Next, we reverse-engineered the possible gene-gene/cell-cell interaction rules that are capable of running a dynamic model recapitulating the early fate decisions during C. elegans development. We further formalized C. elegans embryogenesis in the language of information genesis. Analysis of the data and model uncovered the global landscape of development in the cell fate space, suggested possible gene regulatory architectures and cell signaling processes, revealed diversity and robustness as the essential trade-offs in development, and demonstrated general strategies in building multicellular organisms.

  3. The Development of FPGA-Based Pseudo-Iterative Clustering Algorithms

    NASA Astrophysics Data System (ADS)

    Drueke, Elizabeth; Fisher, Wade; Plucinski, Pawel

    2016-03-01

    The Large Hadron Collider (LHC) in Geneva, Switzerland, is set to undergo major upgrades in 2025 in the form of the High-Luminosity Large Hadron Collider (HL-LHC). In particular, several hardware upgrades are proposed to the ATLAS detector, one of the two general purpose detectors. These hardware upgrades include, but are not limited to, a new hardware-level clustering algorithm, to be performed by a field programmable gate array, or FPGA. In this study, we develop that clustering algorithm and compare the output to a Python-implemented topoclustering algorithm developed at the University of Oregon. Here, we present the agreement between the FPGA output and expected output, with particular attention to the time required by the FPGA to complete the algorithm and other limitations set by the FPGA itself.

  4. Development and validation of an automated operational modal analysis algorithm for vibration-based monitoring and tensile load estimation

    NASA Astrophysics Data System (ADS)

    Rainieri, Carlo; Fabbrocino, Giovanni

    2015-08-01

    In the last few decades large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure lifespan. However, the lack of automated modal identification and tracking procedures has been for long a relevant drawback to the extensive application of the above-mentioned techniques in the engineering practice. An increasing number of field applications of modal-based structural health and performance assessment are appearing after the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational efforts and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm based on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous

  5. Development of Algorithms for Nonlinear Physics on Type-II Quantum Computers

    DTIC Science & Technology

    2007-07-01

    Quantum Lattice Algorithms for Nonlinear Physics: Optical Solitons and Bose-Einstein ... macroscopic nonlinear derivatives are represented by local moments. Chapman-Enskog asymptotics will then, on projecting back into physical space, yield these nonlinear ... the Entropic Lattice Boltzmann Model will be strongly pursued in future proposals.

  6. Spectral-Based Volume Sensor Prototype, Post-VS4 Test Series Algorithm Development

    DTIC Science & Technology

    2009-04-30

    Å (NIR), solar-blind UV (UV), and 4.3 μm (IR)) and five EVENT algorithms (EVENT, PDSMOKE, FIRE, FIRE_FOV, and WELDING) generating alarm events for... detector are not currently used by any algorithm and, where present, are recorded only for future research and development. The UV units (upper unit...in Figure 2-1) are designed around a standard UV-only OFD (Vibrometer, Inc.). The OmniGuard 860 Optical Flame Detector (Vibrometer, Inc.) used in

  7. Applications of feature selection. [development of classification algorithms for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1976-01-01

    The use of satellite-acquired (LANDSAT) multispectral scanner (MSS) data to conduct an inventory of some crop of economic interest such as wheat over a large geographical area is considered in relation to the development of accurate and efficient algorithms for data classification. The dimension of the measurement space and the computational load for a classification algorithm are increased by the use of multitemporal measurements. Feature selection/combination techniques used to reduce the dimensionality of the problem are described.
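
    Feature selection to tame the dimensionality of multitemporal MSS measurements can be illustrated with a small scikit-learn example; the synthetic band values and the univariate F-score criterion below are stand-ins for the techniques surveyed in the report, not the report's own method.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(4)

# Synthetic multitemporal measurements: 3 acquisition dates x 4 MSS bands = 12 features.
n_pixels, n_features = 500, 12
X = rng.normal(size=(n_pixels, n_features))
y = rng.integers(0, 2, size=n_pixels)          # wheat / not-wheat labels
X[y == 1, 3] += 1.0                            # make one feature informative
X[y == 1, 7] += 0.5                            # and another weakly informative

# Keep the 4 most discriminative features to reduce the classifier's load.
selector = SelectKBest(score_func=f_classif, k=4).fit(X, y)
print("selected feature indices:", np.sort(selector.get_support(indices=True)))
```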

  8. Millimeter-wave imaging radiometer data processing and development of water vapor retrieval algorithms

    NASA Technical Reports Server (NTRS)

    Chang, L. Aron

    1995-01-01

    This document describes the current status of Millimeter-wave Imaging Radiometer (MIR) data processing and the technical development of the first version of a water vapor retrieval algorithm. The algorithm is being used by the NASA/GSFC Microwave Sensors Branch, Laboratory for Hydrospheric Processes. It is capable of three-dimensional mapping of moisture fields using microwave data from the airborne MIR sensor and the spaceborne Special Sensor Microwave/T-2 (SSM/T-2) instrument.

  9. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  10. Development and application of unified algorithms for problems in computational science

    NASA Technical Reports Server (NTRS)

    Shankar, Vijaya; Chakravarthy, Sukumar

    1987-01-01

    A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.

  11. Unified framework for development, deployment and robust testing of neuroimaging algorithms.

    PubMed

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H; Papademetris, Xenophon

    2011-03-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software--BioImage Suite (bioimagesuite.org).

  12. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1995-01-01

    An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm was carried out. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. The development of a multi-layer Monte Carlo radiative transfer code that includes polarization by molecular and aerosol scattering and wind-induced sea surface roughness has been completed. Comparison tests with an existing two-layer successive order of scattering code suggest that both codes are capable of producing top-of-atmosphere radiances with errors usually less than 0.1 percent. An initial set of simulations to study the effects of ignoring the polarization of the ocean-atmosphere light field, in both the development of the atmospheric correction algorithm and the generation of the lookup tables used for operation of the algorithm, has been completed. An algorithm was developed that can be used to invert the radiance exiting the top and bottom of the atmosphere to yield the columnar optical properties of the atmospheric aerosol under clear sky conditions over the ocean, for aerosol optical thicknesses as large as 2. The algorithm is capable of retrievals with such large optical thicknesses because all significant orders of multiple scattering are included.

  13. Development of Fast Algorithms Using Recursion, Nesting and Iterations for Computational Electromagnetics

    NASA Technical Reports Server (NTRS)

    Chew, W. C.; Song, J. M.; Lu, C. C.; Weedon, W. H.

    1995-01-01

    In the first phase of our work, we have concentrated on laying the foundation to develop fast algorithms, including the use of recursive structure like the recursive aggregate interaction matrix algorithm (RAIMA), the nested equivalence principle algorithm (NEPAL), the ray-propagation fast multipole algorithm (RPFMA), and the multi-level fast multipole algorithm (MLFMA). We have also investigated the use of curvilinear patches to build a basic method of moments code where these acceleration techniques can be used later. In the second phase, which is mainly reported on here, we have concentrated on implementing three-dimensional NEPAL on a massively parallel machine, the Connection Machine CM-5, and have been able to obtain some 3D scattering results. In order to understand the parallelization of codes on the Connection Machine, we have also studied the parallelization of 3D finite-difference time-domain (FDTD) code with PML material absorbing boundary condition (ABC). We found that simple algorithms like the FDTD with material ABC can be parallelized very well allowing us to solve within a minute a problem of over a million nodes. In addition, we have studied the use of the fast multipole method and the ray-propagation fast multipole algorithm to expedite matrix-vector multiplication in a conjugate-gradient solution to integral equations of scattering. We find that these methods are faster than LU decomposition for one incident angle, but are slower than LU decomposition when many incident angles are needed as in the monostatic RCS calculations.

  14. Development of a stereo analysis algorithm for generating topographic maps using interactive techniques of the MPP

    NASA Technical Reports Server (NTRS)

    Strong, James P.

    1987-01-01

    A local area matching algorithm was developed on the Massively Parallel Processor (MPP). It is an iterative technique that first matches coarse or low resolution areas and at each iteration performs matches of higher resolution. Results so far show that when good matches are possible in the two images, the MPP algorithm matches corresponding areas as well as a human observer. To aid in developing this algorithm, a control or shell program was developed for the MPP that allows interactive experimentation with various parameters and procedures to be used in the matching process. (This would not be possible without the high speed of the MPP). With the system, optimal techniques can be developed for different types of matching problems.

  15. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  16. Advancements in the Development of an Operational Lightning Jump Algorithm for GOES-R GLM

    NASA Technical Reports Server (NTRS)

    Shultz, Chris; Petersen, Walter; Carey, Lawrence

    2011-01-01

    Rapid increases in total lightning have been shown to precede the manifestation of severe weather at the surface. These rapid increases have been termed lightning jumps, and are the current focus of algorithm development for the GOES-R Geostationary Lightning Mapper (GLM). Recent lightning jump algorithm work has focused on evaluation of the algorithms in three additional regions of the country, as well as markedly increasing the number of thunderstorms in order to evaluate each algorithm's performance on a larger population of storms. Lightning characteristics of just over 600 thunderstorms have been studied over the past four years. The 2σ lightning jump algorithm continues to show the most promise for an operational lightning jump algorithm, with a probability of detection of 82%, a false alarm rate of 35%, a critical success index of 57%, and a Heidke Skill Score of 0.73 on the entire population of thunderstorms. Average lead time for the 2σ algorithm on all severe weather is 21.15 minutes, with a standard deviation of +/- 14.68 minutes. Looking at tornadoes alone, the average lead time is 18.71 minutes, with a standard deviation of +/- 14.88 minutes. Moreover, removing the 2σ lightning jumps that occur after a jump has already been detected, and before severe weather is detected at the ground, the 2σ lightning jump algorithm's false alarm rate drops from 35% to 21%. Cold-season, low-topped, and tropical environments cause problems for the 2σ lightning jump algorithm, due to their relative dearth of lightning as compared to a supercellular or summertime airmass thunderstorm environment.
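
    The 2σ test itself is simple to state: the time rate of change of the total flash rate (DFRDT) is compared against twice the standard deviation of its recent history, subject to a minimum flash-rate activation threshold. The sketch below encodes that logic on a synthetic flash-rate series; the window length, sampling interval and activation threshold are illustrative assumptions, not the operational configuration.

```python
import numpy as np

def two_sigma_jumps(flash_rate, dt_min=2.0, history=5, min_rate=10.0):
    """Flag lightning jumps when DFRDT exceeds 2 sigma of its recent history.

    flash_rate: total flash rate (flashes/min) sampled every dt_min minutes.
    history:    number of prior DFRDT values forming the 2-sigma threshold.
    min_rate:   minimum flash rate for the test to activate at all.
    """
    dfrdt = np.diff(flash_rate) / dt_min
    jumps = []
    for i in range(history, len(dfrdt)):
        sigma = np.std(dfrdt[i - history:i])
        if flash_rate[i + 1] >= min_rate and sigma > 0 and dfrdt[i] > 2.0 * sigma:
            jumps.append((i + 1) * dt_min)  # time (minutes) of the flagged jump
    return jumps

# Synthetic storm: slowly varying flash rate with one rapid increase.
rate = np.array([4, 5, 5, 6, 6, 7, 7, 8, 20, 26, 27, 25, 24], dtype=float)
print("jump times (min):", two_sigma_jumps(rate))
```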

  17. Development of a fire detection algorithm for the COMS (Communication Ocean and Meteorological Satellite)

    NASA Astrophysics Data System (ADS)

    Kim, Goo; Kim, Dae Sun; Lee, Yang-Won

    2013-10-01

    Forest fires cause much ecological and economic damage. South Korea is particularly liable to suffer from forest fires because mountainous terrain occupies more than half of its land. South Korea has recently launched COMS (Communication Ocean and Meteorological Satellite), which is a geostationary satellite. In this paper, we developed a forest fire detection algorithm using COMS data. Generally, forest fire detection algorithms use the characteristics of the 4 and 11 micrometer brightness temperatures. Our algorithm additionally uses LST (Land Surface Temperature). We confirmed the results of our fire detection algorithm using statistical data from the Korea Forest Service and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) images. We used data over South Korea on April 1 and 2, 2011, because there were both small and large forest fires at that time. The detection rate was 80% in terms of the frequency of the forest fires and 99% in terms of the damaged area. Considering the number of COMS's channels and its low resolution, this is a remarkable outcome. To provide users with the results of our algorithm, we developed a smartphone application using JSP (Java Server Pages). This application can work regardless of the smartphone's operating system. This study may not generalize to other areas and dates because we used just two days of data. To improve the accuracy of our algorithm, analysis using long-term data is needed as future work.
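
    A simplified threshold test in the spirit of the algorithm described above (4 μm brightness temperature, 4–11 μm difference, and an LST consistency check) is sketched below; every threshold value is an assumption for illustration, not the tuned COMS configuration.

```python
import numpy as np

def detect_fire(bt4_k, bt11_k, lst_k,
                bt4_min=320.0, diff_min=15.0, lst_margin=10.0):
    """Per-pixel fire mask from 4 um and 11 um brightness temperatures plus LST.

    A pixel is flagged when the 4 um channel is hot, the 4-11 um difference is
    large (fires are far brighter at 4 um), and the 4 um temperature clearly
    exceeds the background land surface temperature. Thresholds are illustrative.
    """
    bt4, bt11, lst = map(np.asarray, (bt4_k, bt11_k, lst_k))
    return (bt4 > bt4_min) & ((bt4 - bt11) > diff_min) & (bt4 > lst + lst_margin)

# Three example pixels: background, warm bare soil, active fire.
bt4 = np.array([295.0, 310.0, 340.0])
bt11 = np.array([293.0, 305.0, 300.0])
lst = np.array([294.0, 308.0, 301.0])
print(detect_fire(bt4, bt11, lst))  # [False False  True]
```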

  18. Development of a two wheeled self balancing robot with speech recognition and navigation algorithm

    NASA Astrophysics Data System (ADS)

    Rahman, Md. Muhaimin; Ashik-E-Rasul, Haq, Nowab. Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh

    2016-07-01

    This paper discusses the modeling, construction and development of the navigation algorithm of a two-wheeled self-balancing mobile robot in an enclosure. We discuss the design of two of the main controller algorithms, namely PID algorithms, on the robot model. Simulation is performed in the SIMULINK environment. The controller is developed primarily for self-balancing of the robot and also for its positioning. As for the navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot position. The navigation system needs to be calibrated before the navigation process starts. Almost all of the earlier template matching algorithms that can be found in the open literature can only trace the robot, but the proposed algorithm can also locate the position of other objects in the enclosure, such as furniture, tables, etc. This will enable the robot to know the exact location of every stationary object in the enclosure. Moreover, some additional features, such as speech recognition and object detection, are added. For object detection, the single-board computer Raspberry Pi is used. The system is programmed to analyze images captured via the camera, which are then processed through background subtraction, followed by active noise reduction.
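
    A minimal discrete PID loop of the kind used for the balancing controller is sketched below; the gains, sample time and the crude one-state tilt dynamics are invented for illustration and would be retuned on the actual robot.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy balancing loop: drive the tilt angle (degrees) back toward zero.
# Gains and the simplified plant below are illustrative only.
pid = PID(kp=18.0, ki=2.0, kd=0.8, dt=0.01)
tilt = 5.0
for _ in range(200):
    torque = pid.update(setpoint=0.0, measurement=tilt)
    tilt += (0.5 * tilt + 0.2 * torque) * pid.dt  # unstable plant plus motor effect
print(f"tilt after 2 s: {tilt:.3f} deg")
```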

  19. A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller

    SciTech Connect

    Tapp, P.A.

    1992-04-01

    A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms are discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs.

  1. Bobcat 2013: a hyperspectral data collection supporting the development and evaluation of spatial-spectral algorithms

    NASA Astrophysics Data System (ADS)

    Kaufman, Jason; Celenk, Mehmet; White, A. K.; Stocker, Alan D.

    2014-06-01

    The amount of hyperspectral imagery (HSI) data currently available is relatively small compared to other imaging modalities, and what is suitable for developing, testing, and evaluating spatial-spectral algorithms is virtually nonexistent. In this work, a significant amount of coincident airborne hyperspectral and high spatial resolution panchromatic imagery that supports the advancement of spatial-spectral feature extraction algorithms was collected to address this need. The imagery was collected in April 2013 for Ohio University by the Civil Air Patrol, with their Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) sensor. The target materials, shapes, and movements throughout the collection area were chosen such that evaluation of change detection algorithms, atmospheric compensation techniques, image fusion methods, and material detection and identification algorithms is possible. This paper describes the collection plan, data acquisition, and initial analysis of the collected imagery.

  2. TIGER: Development of Thermal Gradient Compensation Algorithms and Techniques

    NASA Technical Reports Server (NTRS)

    Hereford, James; Parker, Peter A.; Rhew, Ray D.

    2004-01-01

    In a wind tunnel facility, the direct measurement of forces and moments induced on the model are performed by a force measurement balance. The measurement balance is a precision-machined device that has strain gages at strategic locations to measure the strain (i.e., deformations) due to applied forces and moments. The strain gages convert the strain (and hence the applied force) to an electrical voltage that is measured by external instruments. To address the problem of thermal gradients on the force measurement balance NASA-LaRC has initiated a research program called TIGER - Thermally-Induced Gradients Effects Research. The ultimate goals of the TIGER program are to: (a) understand the physics of the thermally-induced strain and its subsequent impact on load measurements and (b) develop a robust thermal gradient compensation technique. This paper will discuss the impact of thermal gradients on force measurement balances, specific aspects of the TIGER program (the design of a special-purpose balance, data acquisition and data analysis challenges), and give an overall summary.

  3. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  4. Developing Fire Detection Algorithms by Geostationary Orbiting Platforms and Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Salvador, Pablo; Sanz, Julia; Garcia, Miguel; Casanova, Jose Luis

    2016-08-01

    Fires in general, and forest fires in particular, are a major concern in terms of economic and biological losses. Remote sensing research has focused on developing several algorithms, adapted to a wide range of sensors, platforms, and regions, in order to detect hotspots as quickly as possible. The aim of this study is to establish an automatic methodology for developing hotspot detection algorithms with the Spinning Enhanced Visible and Infrared Imager (SEVIRI) sensor on board the Meteosat Second Generation (MSG) platform, based on machine learning techniques that can be exported to other geostationary platforms and sensors and to any area of the Earth. The sensitivity (SE), specificity (SP), and accuracy (AC) parameters have been analyzed in order to develop the final machine learning algorithm, taking into account the preferences and final use of the predicted data.

  5. Innovative testbed for developing and assessing air-to-air noncooperative target identification algorithms

    NASA Astrophysics Data System (ADS)

    Knopow, Jeffrey P.

    1992-07-01

    The development and evaluation of multi-source, multi-spectral, all-aspect airborne target identification algorithms has proven to be cumbersome as well as disjointed. The algorithm development capability under this testbed concept encompasses model-based reasoning, information fusion, airborne target identification, and target/sensor phenomenology analysis. The evaluation capability assembles multiple sensor and target types coupled with all-aspect viewing in an operationally representative air-to-air environment. Developing better techniques for establishing positive target identification at beyond-visual ranges has increased in tactical importance as a result of the Persian Gulf War. In addition to supporting the evaluation of algorithms and associated sensors, this testbed will support ongoing R&D in the Air-To-Air Non-Cooperative Target Recognition (NCTR) arena.

  6. Migration and Community Development at Origin: The Case of Migrants in Bendel North, Nigeria.

    ERIC Educational Resources Information Center

    Odaman, Odion

    1990-01-01

    A survey of rural-to-urban migrants in Nigeria found that 58.2 percent contributed financially to community development projects in their areas of origin. Concludes that rural out-migration is encouraged by rural inhabitants to combat poverty and suggests government policies to encourage further migrant involvement. (FMW)

  7. Moving from the Inside Out: Further Explorations of the Family of Origin/Career Development Linkage

    ERIC Educational Resources Information Center

    Blustein, David L.

    2004-01-01

    This article provides a reaction to Whiston and Keller's major contribution on the relationships between family of origin and the career development process. Initially, some of the most noteworthy lessons conveyed in the Whiston and Keller article are highlighted, followed by a description of the next steps in research and theory construction…

  8. Correlation signatures of wet soils and snows. [algorithm development and computer programming

    NASA Technical Reports Server (NTRS)

    Phillips, M. R.

    1972-01-01

    Interpretation, analysis, and development of algorithms have provided the necessary computational programming tools for soil data processing, data handling, and analysis. Algorithms developed thus far are adequate and have proven successful for several preliminary and fundamental applications such as software interfacing capabilities, probability distributions, grey level print plotting, contour plotting, isometric data displays, joint probability distributions, boundary mapping, channel registration, and ground scene classification. A description of an Earth Resources Flight Data Processor (ERFDP), which handles and processes earth resources data under a user's control, is provided.

  9. Unified development of multiplicative algorithms for linear and quadratic nonnegative matrix factorization.

    PubMed

    Yang, Zhirong; Oja, Erkki

    2011-12-01

    Multiplicative updates have been widely used in approximative nonnegative matrix factorization (NMF) optimization because they are convenient to deploy. Their convergence proof is usually based on the minimization of an auxiliary upper-bounding function, the construction of which however remains specific and only available for limited types of dissimilarity measures. Here we make significant progress in developing convergent multiplicative algorithms for NMF. First, we propose a general approach to derive the auxiliary function for a wide variety of NMF problems, as long as the approximation objective can be expressed as a finite sum of monomials with real exponents. Multiplicative algorithms with theoretical guarantee of monotonically decreasing objective function sequence can thus be obtained. The solutions of NMF based on most commonly used dissimilarity measures such as α- and β-divergence as well as many other more comprehensive divergences can be derived by the new unified principle. Second, our method is extended to a nonseparable case that includes e.g., γ-divergence and Rényi divergence. Third, we develop multiplicative algorithms for NMF using second-order approximative factorizations, in which each factorizing matrix may appear twice. Preliminary numerical experiments demonstrate that the multiplicative algorithms developed using the proposed procedure can achieve satisfactory Karush-Kuhn-Tucker optimality. We also demonstrate NMF problems where algorithms by the conventional method fail to guarantee descent at each iteration but those by our principle are immune to such violation.
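
    The unified derivation in the paper covers a broad family of divergences. For orientation only, the sketch below shows the classic multiplicative updates for NMF under the Frobenius norm (the Lee-Seung rules), the simplest instance of the update style the paper generalizes; sizes, rank, and iteration count are arbitrary, and the paper's auxiliary-function construction is not reproduced.

    ```python
    # Minimal sketch: Lee-Seung multiplicative updates for NMF under the
    # Frobenius norm. Matrix sizes, rank, and iteration count are arbitrary.
    import numpy as np

    def nmf_multiplicative(V, rank, iters=200, eps=1e-9, seed=0):
        rng = np.random.default_rng(seed)
        n, m = V.shape
        W = rng.random((n, rank))
        H = rng.random((rank, m))
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)   # each update keeps the objective non-increasing
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    V = np.random.default_rng(1).random((50, 40))   # toy nonnegative data matrix
    W, H = nmf_multiplicative(V, rank=5)
    print(np.linalg.norm(V - W @ H))                # reconstruction error
    ```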

  10. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    PubMed Central

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstrated algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical

  11. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation.

    PubMed

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstrated algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
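
    As a toy illustration of the geometric partitioning idea used in the K-Means step of the K&K approach, the sketch below clusters grid-cell coordinates into a given number of subdomains and reports the load balance; the Kernighan-Lin refinement, communication-cost terms, and the actual NMM-dust grid are all omitted, so this is not the paper's algorithm.

    ```python
    # Toy illustration: K-Means partitioning of a rectangular grid into
    # subdomains, with a simple load-balance check. Grid size and node
    # count are arbitrary placeholders.
    import numpy as np
    from sklearn.cluster import KMeans

    nx, ny, k = 60, 40, 8                                   # toy grid and number of computing nodes
    cells = np.array([(i, j) for i in range(nx) for j in range(ny)], dtype=float)

    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(cells)
    loads = np.bincount(labels)                             # cells per subdomain (proxy for compute load)
    print("cells per subdomain:", loads, "imbalance:", loads.max() / loads.mean())
    ```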

  12. Authentication of the botanical origin of unifloral honey by infrared spectroscopy coupled with support vector machine algorithm

    NASA Astrophysics Data System (ADS)

    Lenhardt, L.; Zeković, I.; Dramićanin, T.; Tešić, Ž.; Milojković-Opsenica, D.; Dramićanin, M. D.

    2014-09-01

    In recent years, the potential of Fourier-transform infrared spectroscopy coupled with different chemometric tools in food analysis has been established. This technique is rapid, low cost, and reliable and requires little sample preparation. In this work, 130 Serbian unifloral honey samples (linden, acacia, and sunflower types) were analyzed using attenuated total reflectance infrared spectroscopy (ATR-IR). For each spectrum, 64 scans were recorded at wavenumbers between 4000 and 500 cm-1 and at a spectral resolution of 4 cm-1. These spectra were analyzed using principal component analysis (PCA), and the calculated principal components were then used for support vector machine (SVM) training. In this way, a pattern-recognition tool is obtained for building a classification model that determines the botanical origin of honey. The PCA was used to analyze the results and to see whether separation exists between groups of different honey types. Using the SVM, the classification model was built and classification errors were obtained. It has been observed that this technique is adequate for determining the botanical origin of honey, with a success rate of 98.6%. Based on these results, it can be concluded that this technique offers many possibilities for future rapid qualitative analysis of honey.
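
    The classification pipeline the abstract describes (PCA scores fed into an SVM) can be sketched roughly as below. The spectra, labels, number of retained components, and kernel choice are placeholders rather than values from the study; real ATR-IR spectra of the 130 honey samples would replace the random arrays.

    ```python
    # Minimal sketch of a PCA -> SVM classification pipeline on placeholder data.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((130, 876))           # placeholder spectra (4000-500 cm-1 at 4 cm-1 gives roughly 876 points)
    y = rng.integers(0, 3, size=130)     # placeholder labels: 0 = linden, 1 = acacia, 2 = sunflower

    # PCA scores feed the SVM, mirroring the PCA -> SVM pattern-recognition step
    model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
    print(cross_val_score(model, X, y, cv=5).mean())
    ```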

  13. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
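
    For readers new to the concepts the record introduces, a minimal genetic algorithm (tournament selection, one-point crossover, bitwise mutation) on a toy fitness function is sketched below; it illustrates the basic loop only and has no connection to the software tool described.

    ```python
    # Minimal genetic algorithm on a toy problem (maximize the number of 1 bits).
    import random

    def fitness(bits):
        return sum(bits)

    def evolve(pop_size=30, n_bits=20, generations=50, p_mut=0.02):
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            # tournament selection of parents
            parents = [max(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
            nxt = []
            for a, b in zip(parents[::2], parents[1::2]):
                cut = random.randint(1, n_bits - 1)         # one-point crossover
                for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                    # bitwise mutation with probability p_mut
                    nxt.append([1 - g if random.random() < p_mut else g for g in child])
            pop = nxt
        return max(pop, key=fitness)

    print(fitness(evolve()))
    ```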

  14. Utilization of Ancillary Data Sets for SMAP Algorithm Development and Product Generation

    NASA Technical Reports Server (NTRS)

    ONeill, P.; Podest, E.; Njoku, E.

    2011-01-01

    Algorithms being developed for the Soil Moisture Active Passive (SMAP) mission require a variety of both static and dynamic ancillary data. The selection of the most appropriate source for each ancillary data parameter is driven by a number of considerations, including accuracy, latency, availability, and consistency across all SMAP products and with SMOS (Soil Moisture Ocean Salinity). It is anticipated that initial selection of all ancillary datasets, which are needed for ongoing algorithm development activities on the SMAP algorithm testbed at JPL, will be completed within the year. These datasets will be updated as new or improved sources become available, and all selections and changes will be documented for the benefit of the user community. Wise choices in ancillary data will help to enable SMAP to provide new global measurements of soil moisture and freeze/thaw state at the targeted accuracy necessary to tackle hydrologically-relevant societal issues.

  15. Applications and development of new algorithms for displacement analysis using InSAR time series

    NASA Astrophysics Data System (ADS)

    Osmanoglu, Batuhan

    -dimensional (3-D) phase unwrapping. Chapter 4 focuses on the unwrapping path. Unwrapping algorithms can be divided into two groups, path-dependent and path-independent algorithms. Path-dependent algorithms use local unwrapping functions applied pixel-by-pixel to the dataset. In contrast, path-independent algorithms use global optimization methods such as least squares, and return a unique solution. However, when aliasing and noise are present, path-independent algorithms can underestimate the signal in some areas due to global fitting criteria. Path-dependent algorithms do not underestimate the signal, but, as the name implies, the unwrapping path can affect the result. Comparison between existing path algorithms and a newly developed algorithm based on Fisher information theory was conducted. Results indicate that Fisher information theory does indeed produce lower misfit results for most tested cases. Chapter 5 presents a new time series analysis method based on 3-D unwrapping of SAR data using extended Kalman filters. Existing methods for time series generation using InSAR data employ special filters to combine two-dimensional (2-D) spatial unwrapping with one-dimensional (1-D) temporal unwrapping results. The new method, however, combines observations in azimuth, range and time for repeat pass interferometry. Due to the pixel-by-pixel characteristic of the filter, the unwrapping path is selected based on a quality map. This unwrapping algorithm is the first application of extended Kalman filters to the 3-D unwrapping problem. Time series analyses of InSAR data are used in a variety of applications with different characteristics. Consequently, it is difficult to develop a single algorithm that can provide optimal results in all cases, given that different algorithms possess a unique set of strengths and weaknesses. Nonetheless, filter-based unwrapping algorithms such as the one presented in this dissertation have the capability of joining multiple observations into a uniform

  16. Father involvement in Mexican origin families: Preliminary development of culturally-informed measure

    PubMed Central

    Roubinov, Danielle S.; Luecken, Linda J.; Gonzales, Nancy A.; Crnic, Keith A.

    2015-01-01

    Objectives An increasing body of research has documented the significant influence of father involvement on children's development and overall well-being. However, extant research has predominantly focused on middle-class Caucasian samples with little examination of fathering in ethnic minority and low-income families, particularly during the infancy period. The present study evaluated measures of early father involvement (paternal engagement, accessibility, and responsibility) that were adapted to capture important cultural values relevant to the paternal role in Mexican origin families. Methods A sample of 180 Mexican origin mothers (M age = 28.3) and 83 Mexican origin fathers (M age = 31.5) were interviewed during the perinatal period. Results Descriptive analyses indicated that Mexican origin fathers are involved in meaningful levels of direct interaction with their infant. A two-factor model of paternal responsibility was supported by factor analyses, consisting of a behavioral responsibility factor aligned with previous literature and a culturally derived positive machismo factor. Qualities of the romantic relationship, cultural orientation, and maternal employment status were related to indices of father involvement. Conclusions These preliminary results contribute to understanding of the transition to fatherhood among low-income Mexican origin men and bring attention to the demographic, social, and cultural contexts in which varying levels of father involvement may emerge. PMID:26237543

  17. Development of a multi-objective optimization algorithm using surrogate models for coastal aquifer management

    NASA Astrophysics Data System (ADS)

    Kourakos, George; Mantoglou, Aristotelis

    2013-02-01

    Summary: The demand for fresh water in coastal areas and islands can be very high due to increased local needs and tourism. A multi-objective optimization methodology is developed, involving minimization of economic and environmental costs while satisfying water demand. The methodology considers desalinization of pumped water and injection of treated water into the aquifer. Variable density aquifer models are computationally intractable when integrated into optimization algorithms. In order to alleviate this problem, a multi-objective optimization algorithm is developed combining surrogate models based on Modular Neural Networks [MOSA(MNNs)]. The surrogate models are trained adaptively during optimization based on a genetic algorithm. In the crossover step, each pair of parents generates a pool of offspring which are evaluated using the fast surrogate model. Then, the most promising offspring are evaluated using the exact numerical model. This procedure eliminates errors in the Pareto solution due to imprecise predictions of the surrogate model. The method offers important advances over previous methods, such as precise evaluation of the Pareto set and alleviation of the propagation of errors due to surrogate model approximations. The method is applied to an aquifer on the Greek island of Santorini. The results show that the new MOSA(MNN) algorithm offers a significant reduction in computational time compared to previous methods (in the case study it requires only 5% of the time required by other methods). Further, the Pareto solution is better than the solution obtained by alternative algorithms.

  18. Development of administrative data algorithms to identify patients with critical limb ischemia.

    PubMed

    Bekwelem, Wobo; Bengtson, Lindsay G S; Oldenburg, Niki C; Winden, Tamara J; Keo, Hong H; Hirsch, Alan T; Duval, Sue

    2014-12-01

    Administrative data have been used to identify patients with various diseases, yet no prior study has determined the utility of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes to identify critical limb ischemia (CLI) patients. CLI cases (n=126), adjudicated by a vascular specialist, were carefully defined and enrolled in a hospital registry. Controls were frequency matched to cases on age, sex, and admission date in a 2:1 ratio. ICD-9-CM codes for all patients were extracted. Algorithms were developed using frequency distributions of these codes, risk factors, and procedures prevalent in CLI. The sensitivity of each algorithm was calculated, and the algorithms were applied within the hospital system to identify CLI patients not included in the registry. Sensitivity ranged from 0.29 to 0.92. An algorithm based on diagnosis and procedure codes exhibited the best overall performance (sensitivity of 0.92). Each algorithm had differing CLI identification characteristics based on patient location. Administrative data can be used to identify CLI patients within a health system. The algorithms, developed from these data, can serve as a tool to facilitate clinical care, research, quality improvement, and population surveillance.
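
    The study's central metric is sensitivity against adjudicated registry cases. A minimal sketch of that bookkeeping is given below; the diagnosis and procedure code sets and the combination rule are illustrative placeholders loosely modeled on peripheral artery disease coding, not the algorithms developed in the study.

    ```python
    # Minimal sketch: sensitivity of a code-based case-finding rule against
    # adjudicated cases. Code sets and the rule are hypothetical examples.
    DIAG = {"440.22", "440.23", "440.24"}     # example ICD-9-CM diagnosis codes (illustrative)
    PROC = {"39.50", "39.90"}                 # example ICD-9-CM procedure codes (illustrative)

    def example_rule(codes):
        """Flag a patient if any listed diagnosis code and any listed procedure code are present."""
        return bool(DIAG & codes) and bool(PROC & codes)

    def sensitivity(adjudicated_cases, rule):
        """Fraction of known (adjudicated) cases that the rule flags."""
        flagged = sum(1 for codes in adjudicated_cases if rule(codes))
        return flagged / len(adjudicated_cases)

    toy_cases = [{"440.23", "39.50"}, {"440.24"}, {"440.22", "39.90"}]   # toy registry
    print(round(sensitivity(toy_cases, example_rule), 2))                # 0.67
    ```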

  19. Testing and Development of the Onsite Earthquake Early Warning Algorithm to Reduce Event Uncertainties

    NASA Astrophysics Data System (ADS)

    Andrews, J. R.; Cochran, E. S.; Hauksson, E.; Felizardo, C.; Liu, T.; Ross, Z.; Heaton, T. H.

    2015-12-01

    Primary metrics for measuring earthquake early warning (EEW) system and algorithm performance are the rate of false alarms and the uncertainty in earthquake parameters. The Onsite algorithm, currently one of three EEW algorithms implemented in ShakeAlert, uses the ground-motion period parameter (τc) and peak initial displacement parameter (Pd) to estimate the magnitude and expected ground shaking of an ongoing earthquake. It is the only algorithm originally designed to issue single-station alerts, necessitating that results from individual stations be as reliable and accurate as possible. The ShakeAlert system has been undergoing testing on continuous real-time data in California for several years, and the latest version of the Onsite algorithm for several months. This permits analysis of the response to a range of signals, from environmental noise to hardware testing and maintenance procedures to moderate or large earthquake signals at varying distances from the networks. We find that our existing discriminator, relying only on τc and Pd, while performing well to exclude large teleseismic events, is less effective for moderate regional events and can also incorrectly exclude data from local events. Motivated by these experiences, we use a collection of waveforms from potentially problematic 'noise' events and real earthquakes to explore methods to discriminate real and false events, using the ground motion and period parameters available in Onsite's processing methodology. Once an event is correctly identified, a magnitude and location estimate is critical to determining the expected ground shaking. Scatter in the measured parameters translates to higher than desired uncertainty in Onsite's current calculations. We present an overview of alternative methods, including incorporation of polarization information, to improve parameter determination for a test suite including both large (M4 to M7) events and three years of small to moderate events across California.

  20. Recommendations for Technology Development and Validation Activities in Support of the Origins Program

    NASA Technical Reports Server (NTRS)

    Capps, Richard W. (Editor)

    1996-01-01

    The Office of Space Science (OSS) has initiated mission concept studies and associated technology roadmapping activities for future large space optical systems. The scientific motivation for these systems is the study of the origins of galaxies, stars, planetary systems and, ultimately, life. Collectively, these studies are part of the 'Astronomical Search for Origins and Planetary Systems Program' or 'Origins Program'. A series of at least three science missions and associated technology validation flights is currently envisioned in the time frame between the year 1999 and approximately 2020. These would be the Space Interferometry Mission (SIM), a 10-meter baseline Michelson stellar interferometer; the Next Generation Space Telescope (NGST), a space-based infrared optimized telescope with aperture diameter larger than four meters; and the Terrestrial Planet Finder (TPF), an 80-meter baseline-nulling Michelson interferometer described in the Exploration of Neighboring Planetary Systems (ExNPS) Study. While all of these missions include significant technological challenges, preliminary studies indicate that the technological requirements are achievable. However, immediate and aggressive technology development is needed. The Office of Space Access and Technology (OSAT) is the primary sponsor of NASA-unique technology for missions such as the Origins series. For some time, the OSAT Space Technology Program has been developing technologies for large space optical systems, including both interferometers and large-aperture telescopes. In addition, technology investments have been made by other NASA programs, including OSS; other government agencies, particularly the Department of Defense; and by the aerospace industrial community. This basis of prior technology investment provides much of the rationale for confidence in the feasibility of the advanced Origins missions. In response to the enhanced interest of both the user community and senior NASA management in large

  1. Developments in the Aerosol Layer Height Retrieval Algorithm for the Copernicus Sentinel-4/UVN Instrument

    NASA Astrophysics Data System (ADS)

    Nanda, Swadhin; Sanders, Abram; Veefkind, Pepijn

    2016-04-01

    The Sentinel-4 mission is a part of the European Commission's Copernicus programme, the goal of which is to provide geo-information to manage environmental assets, and to observe, understand and mitigate the effects of the changing climate. The Sentinel-4/UVN instrument design is motivated by the need to monitor trace gas concentrations and aerosols in the atmosphere from a geostationary orbit. The on-board instrument is a high resolution UV-VIS-NIR (UVN) spectrometer system that provides hourly radiance measurements over Europe and northern Africa with a spatial sampling of 8 km. The main application area of Sentinel-4/UVN is air quality. One of the data products being developed for Sentinel-4/UVN is the Aerosol Layer Height (ALH). The goal is to determine the height of aerosol plumes with a resolution of better than 0.5 - 1 km. The ALH product thus targets aerosol layers in the free troposphere, such as desert dust, volcanic ash, and biomass burning plumes. KNMI is tasked with the development of the ALH algorithm. Its heritage is the ALH algorithm developed by Sanders and De Haan (ATBD, 2016) for the TROPOMI instrument on board the Sentinel-5 Precursor mission that is to be launched in June or July 2016 (tentative date). The retrieval algorithm designed so far for the aerosol height product is based on the absorption characteristics of the oxygen-A band (759-770 nm). New aspects for Sentinel-4/UVN include the higher spectral resolution (0.116 nm compared to 0.4 nm for TROPOMI) and hourly observations from geostationary orbit. The algorithm uses optimal estimation to obtain a spectral fit of the reflectance across the absorption band, while assuming a single uniform layer with fixed width to represent the aerosol vertical distribution. The state vector includes amongst other elements the height of this layer and its aerosol optical

  2. Flower development of Meliosma (Sabiaceae): evidence for multiple origins of pentamery in the eudicots.

    PubMed

    Wanntorp, Livia; Ronse De Craene, Louis P

    2007-11-01

    Flower developmental studies are a complement to molecular phylogenetics and a tool to understand the evolution of the angiosperm flower. Buds and mature flowers of Meliosma veitchiorum, M. cuneifolia, and M. dilleniifolia (Sabiaceae) were investigated using scanning electron microscopy to clarify flower developmental patterns and morphology, to understand the origin of the perianth merism, and to discuss the two taxonomic positions proposed for Sabiaceae, among rosids or in the basal grade of eudicots. Flowers in Meliosma appear pentamerous with two of the five sepals and petals strongly reduced, three staminodes alternating with two fertile stamens opposite the small petals, and a two-carpellate gynoecium. The flower development in Meliosma is spiral without distinction between bracteoles and sepals. Because of this development, sepals, petals, and stamens are almost opposite and not alternating as expected in cyclical pentamerous flowers. In four-sepal flowers the direction of petal initiation is reversed. The symmetry of the flower appears to be transversally zygomorphic, although this is hidden by the almost equal size of the larger petals. Evidence points to a unique pentamerous origin of flowers in Meliosma, and not to a trimerous origin, as earlier suggested, and adds support to multiple origins of pentamery in the eudicots.

  3. Algorithms for Developing Test Questions from Sentences in Instructional Materials: an Extension of an Earlier Study

    DTIC Science & Technology

    1980-01-01

    ...age were developed using the following procedure: 1. The selected material was computer-analyzed to identify high-information words—those that an...frequencies (keyword and rare singletons), (4) the two foil types (writer's choice and algorithmic), and (5) the two test occasions (pretest and

  4. Long term analysis of PALS soil moisture campaign measurements for global soil moisture algorithm development

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An important component of satellite-based soil moisture algorithm development and validation is the comparison of coincident remote sensing and in situ observations that are typically provided by intensive field campaigns. The planned NASA Soil Moisture Active Passive (SMAP) mission has unique requi...

  5. Scheduling language and algorithm development study. Appendix: Study approach and activity summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.

  6. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    EPA Science Inventory

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, there is increased mobility, leading to a higher amount of traffic-related activity on a global scale. ...

  7. Ocean observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1998-01-01

    Significant accomplishments made during the present reporting period: (1) We expanded our "spectral-matching" algorithm (SMA), for identifying the presence of absorbing aerosols and simultaneously performing atmospheric correction and derivation of the ocean's bio-optical parameters, to the point where it could be added as a subroutine to the MODIS water-leaving radiance algorithm; (2) A modification to the SMA that does not require detailed aerosol models has been developed. This is important as the requirement for realistic aerosol models has been a weakness of the SMA; and (3) We successfully acquired micro pulse lidar data in a Saharan dust outbreak during ACE-2 in the Canary Islands.

  8. Experiences on developing digital down conversion algorithms using Xilinx system generator

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Yuan, Yuan; Zhao, Lizhi

    2013-07-01

    The Digital Down Conversion (DDC) algorithm is a classical signal processing method which is widely used in radar and communication systems. In this paper, the DDC function is implemented with the Xilinx System Generator tool on an FPGA. System Generator is an FPGA design tool provided by Xilinx Inc. and MathWorks Inc. It is very convenient for programmers to manipulate the design and debug the function, especially for complex algorithms. The development of the DDC function based on System Generator shows that System Generator is a very fast and efficient tool for FPGA design.
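
    The DDC chain itself is standard: mix the digitized signal with a numerically controlled oscillator, low-pass filter, and decimate. The NumPy/SciPy sketch below illustrates that chain in software under assumed sample rates and filter parameters; it says nothing about the System Generator blocks or fixed-point details used in the paper's FPGA design.

    ```python
    # Minimal software sketch of a digital down converter: NCO mixing,
    # low-pass filtering, and decimation. All parameters are assumed values.
    import numpy as np
    from scipy.signal import firwin, lfilter

    fs = 100e6                      # input sample rate (assumed)
    f_c = 21e6                      # carrier frequency to bring to baseband (assumed)
    decim = 10                      # decimation factor

    t = np.arange(20000) / fs
    x = np.cos(2 * np.pi * f_c * t)                  # stand-in for the digitized IF signal

    nco = np.exp(-2j * np.pi * f_c * t)              # numerically controlled oscillator
    baseband = x * nco                               # mix the carrier down to 0 Hz
    taps = firwin(129, cutoff=fs / (2 * decim), fs=fs)
    filtered = lfilter(taps, 1.0, baseband)          # anti-alias low-pass filter
    y = filtered[::decim]                            # decimate to fs / decim
    print(y.shape)
    ```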

  9. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1997-01-01

    The following accomplishments were made during the present reporting period: (1) We expanded our new method, for identifying the presence of absorbing aerosols and simultaneously performing atmospheric correction, to the point where it could be added as a subroutine to the MODIS water-leaving radiance algorithm; (2) We successfully acquired micro pulse lidar (MPL) data at sea during a cruise in February; (3) We developed a water-leaving radiance algorithm module for an approximate correction of the MODIS instrument polarization sensitivity; and (4) We participated in one cruise to the Gulf of Maine, a well known region for mesoscale coccolithophore blooms. We measured coccolithophore abundance, production and optical properties.

  10. Development and benefit analysis of a sector design algorithm for terminal dynamic airspace configuration

    NASA Astrophysics Data System (ADS)

    Sciandra, Vincent

    The National Airspace System (NAS) is the vast network of systems enabling safe and efficient air travel in the United States. It consists of a set of static sectors, each controlled by one or more air traffic controllers. Air traffic control is tasked with ensuring that all flights can depart and arrive on time and in a safe and efficient manner. However, skyrocketing demand will only increase the stress on an already inefficient system, causing massive delays. The current, static configuration of the NAS cannot possibly handle the future demand on the system safely and efficiently, especially since it is projected to triple by 2025. To overcome these issues, the Next Generation Air Transportation System (NextGen) is being enacted to increase the flexibility of the NAS. A major objective of NextGen is to implement Adaptable Dynamic Airspace Configuration (ADAC), which will dynamically allocate the sectors to best fit the traffic in the area. Dynamically allocating sectors will allow resources such as controllers to be better distributed to meet traffic demands. Currently, most DAC research has involved the en route airspace. This leaves the terminal airspace, which accounts for a large amount of the overall NAS complexity, in need of work. Using a combination of methods used in en route sectorization, this thesis has developed an algorithm for the dynamic allocation of sectors in the terminal airspace. This algorithm will be evaluated using metrics common in the evaluation of dynamic density, which is adapted for the unique challenges of the terminal airspace, and used to measure workload on air traffic controllers. These metrics give a better view of the controller workload than the number of aircraft alone. By comparing the test results with sectors currently used in the NAS using real traffic data, the algorithm-generated sectors can be quantitatively evaluated for improvement over the current sectorizations. This will be accomplished by testing the

  11. Development of a doubly weighted Gerchberg-Saxton algorithm for use in multibeam imaging applications.

    PubMed

    Poland, Simon P; Krstajić, Nikola; Knight, Robert D; Henderson, Robert K; Ameer-Beg, Simon M

    2014-04-15

    We report on the development of a doubly weighted Gerchberg-Saxton algorithm (DWGS) to enable generation of uniform beamlet arrays with a spatial light modulator (SLM) for use in multiphoton multifocal imaging applications. The algorithm incorporates the WGS algorithm as well as feedback of fluorescence signals from the sample measured with a single-photon avalanche diode (SPAD) detector array. This technique compensates for issues associated with nonuniform illumination onto the SLM, the effects of aberrations, and the variability in gain between detectors within the SPAD array, to generate a uniformly illuminated multiphoton fluorescence image. We demonstrate the use of the DWGS with a number of beamlet array patterns to image muscle fibers of a 5-day-old fixed zebrafish larva.

  12. Development of the Landsat Data Continuity Mission Cloud Cover Assessment Algorithms

    USGS Publications Warehouse

    Scaramuzza, Pat; Bouchard, M.A.; Dwyer, J.L.

    2012-01-01

    The upcoming launch of the Operational Land Imager (OLI) will start the next era of the Landsat program. However, the automated cloud-cover assessment (CCA) algorithm used on Landsat 7 (ACCA) requires a thermal band and is thus not suited for OLI. There will be a thermal instrument on the Landsat Data Continuity Mission (LDCM), the Thermal Infrared Sensor, which may not be available during all OLI collections. This illustrates a need for CCA for LDCM in the absence of thermal data. To research possibilities for full-resolution OLI cloud assessment, a global data set of 207 Landsat 7 scenes with manually generated cloud masks was created. It was used to evaluate the ACCA algorithm, showing that the algorithm correctly classified 79.9% of a standard test subset of 3.95 × 10⁹ pixels. The data set was also used to develop and validate two successor algorithms for use with OLI data: one derived from an off-the-shelf machine learning package and one based on ACCA but enhanced by a simple neural network. These comprehensive CCA algorithms were shown to correctly classify pixels as cloudy or clear 88.5% and 89.7% of the time, respectively.

  13. Development of a MELCOR self-initialization algorithm for boiling water reactors

    SciTech Connect

    Chien, C.S.; Wang, S.J.; Cheng, S.K.

    1996-01-01

    The MELCOR code, developed by Sandia National Laboratories, is suitable for calculating source terms and simulating severe accident phenomena of nuclear power plants. Prior to simulating a severe accident transient with MELCOR, the initial steady-state conditions must be generated in advance. The current MELCOR users' manuals do not provide a self-initialization procedure; this is the reason users have to adjust the initial conditions themselves through a trial-and-error approach. A MELCOR self-initialization algorithm for boiling water reactor plants has been developed, which eliminates the tedious trial-and-error procedures and improves the simulation accuracy. This algorithm automatically adjusts important plant variables such as the dome pressure, downcomer level, and core flow rate to the desired conditions. It is implemented through input with control functions provided in MELCOR. The reactor power and feedwater temperature are fed as input data. The initialization of full-power conditions at the Kuosheng nuclear power station is cited as an example. These initial conditions are generated successfully with the developed algorithm. The generated initial conditions can be stored in a restart file and used for transient analysis. The methodology in this study improves the accuracy and consistency of transient calculations. Meanwhile, the algorithm provides all MELCOR users with an easy and correct method for establishing initial conditions.

  14. SPHERES as Formation Flight Algorithm Development and Validation Testbed: Current Progress and Beyond

    NASA Technical Reports Server (NTRS)

    Kong, Edmund M.; Saenz-Otero, Alvar; Nolet, Simon; Berkovitz, Dustin S.; Miller, David W.; Sell, Steve W.

    2004-01-01

    The MIT-SSL SPHERES testbed provides a facility for the development of algorithms necessary for the success of Distributed Satellite Systems (DSS). The initial development contemplated formation flight and docking control algorithms; SPHERES now supports the study of metrology, control, autonomy, artificial intelligence, and communications algorithms and their effects on DSS projects. To support this wide range of topics, the SPHERES design contemplated the need to support multiple researchers, as reflected in both the hardware and software designs. The SPHERES operational plan further facilitates the development of algorithms by multiple researchers, while the operational locations incrementally increase the ability of the tests to operate in a representative environment. In this paper, an overview of the SPHERES testbed is first presented. The SPHERES testbed serves as a model of the design philosophies that allow for the various research efforts being carried out on such a facility. The implementation of these philosophies is further highlighted in the three different programs that are currently scheduled for testing onboard the International Space Station (ISS) and three that are proposed for a re-flight mission: Mass Property Identification, Autonomous Rendezvous and Docking, and TPF Multiple Spacecraft Formation Flight in the first flight, and Precision Optical Pointing, Tethered Formation Flight, and Mars Orbit Sample Retrieval for the re-flight mission.

  15. Ice surface temperature retrieval from AVHRR, ATSR, and passive microwave satellite data: Algorithm development and application

    NASA Technical Reports Server (NTRS)

    Key, Jeff; Maslanik, James; Steffen, Konrad

    1995-01-01

    During the second phase project year we have made progress in the development and refinement of surface temperature retrieval algorithms and in product generation. More specifically, we have accomplished the following: (1) acquired a new advanced very high resolution radiometer (AVHRR) data set for the Beaufort Sea area spanning an entire year; (2) acquired additional along-track scanning radiometer (ATSR) data for the Arctic and Antarctic now totalling over eight months; (3) refined our AVHRR Arctic and Antarctic ice surface temperature (IST) retrieval algorithm, including work specific to Greenland; (4) developed ATSR retrieval algorithms for the Arctic and Antarctic, including work specific to Greenland; (5) developed cloud masking procedures for both AVHRR and ATSR; (6) generated a two-week bi-polar global area coverage (GAC) set of composite images from which IST is being estimated; (7) investigated the effects of clouds and the atmosphere on passive microwave 'surface' temperature retrieval algorithms; and (8) generated surface temperatures for the Beaufort Sea data set, both from AVHRR and special sensor microwave imager (SSM/I).

  16. Millimeter-Wave Imaging Radiometer (MIR) Data Processing and Development of Water Vapor Retrieval Algorithms

    NASA Technical Reports Server (NTRS)

    Chang, L. Aron

    1998-01-01

    This document is the final report for the Millimeter-wave Imaging Radiometer (MIR) Data Processing and Development of Water Vapor Retrieval Algorithms effort. Volumes of radiometric data have been collected using airborne MIR measurements during a series of field experiments since May 1992. Calibrated brightness temperature data in MIR channels are now available for studies of various hydrological parameters of the atmosphere and Earth's surface. Water vapor retrieval algorithms using multichannel MIR data as input are developed for the profiling of atmospheric humidity. The retrieval algorithms are also extended to three-dimensional mapping of the moisture field using the continuous observations provided by the airborne MIR sensor or the spaceborne SSM/T-2 sensor. Validation studies for water vapor retrieval are carried out through the intercomparison of collocated and concurrent measurements using different instruments, including lidars and radiosondes. The developed MIR water vapor retrieval algorithm is capable of humidity profiling under meteorological conditions ranging from a clear column to a moderately cloudy sky. Simulated water vapor retrieval studies using extended microwave channels near the 183 and 557 GHz strong absorption lines indicate the feasibility of humidity profiling to layers in the upper troposphere and of improving the overall vertical resolution through the atmosphere.

  17. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    NASA Technical Reports Server (NTRS)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience only ran on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to access. The current code works in the Unity game engine, which does have cross-platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project was unable to be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.

  18. Development of potential methods for testing congestion control algorithm implemented in vehicle to vehicle communications.

    PubMed

    Hsu, Chung-Jen; Fikentscher, Joshua; Kreeb, Robert

    2017-03-21

    Objective: A channel congestion problem might occur when the traffic density increases, since the number of basic safety messages carried on the communication channel also increases in vehicle-to-vehicle communications. A remedy algorithm proposed in SAE J2945/1 is designed to address the channel congestion issue by decreasing transmission frequency and radiated power. This study develops potential test procedures for evaluating or validating the congestion control algorithm. Methods: Simulations of a reference unit transmitting at a higher frequency are implemented to emulate a number of Onboard Equipment (OBE) units transmitting at the normal interval of 100 milliseconds (10 Hz). When the transmitting interval is reduced to 1.25 milliseconds (800 Hz), the reference unit emulates 80 vehicles transmitting at 10 Hz. By increasing the number of reference units transmitting at 800 Hz in the simulations, the corresponding channel busy percentages are obtained. An algorithm for GPS data generation of virtual vehicles is developed to facilitate the validation of transmission intervals in the congestion control algorithm. Results: Channel busy percentage is the channel busy time over a specified period of time. Three or four reference units are needed to generate channel busy percentages between 50% and 80%, and five reference units can generate channel busy percentages above 80%. The proposed test procedures can verify the operation of the congestion control algorithm when channel busy percentages are between 50% and 80%, and above 80%. By using the GPS data generation algorithm, the test procedures can also verify the transmission intervals when traffic densities are 80 and 200 vehicles within a radius of 100 m. A suite of test tools with functional requirements is also proposed to facilitate the implementation of the test procedures. Conclusions: The potential test procedures for the congestion control algorithm are developed based on the simulation results of channel busy
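
    The relationship the Results describe (three to four reference units for 50-80% channel busy, five for more than 80%) follows directly from per-message air time at an 800 Hz transmission rate. The sketch below reproduces that arithmetic under an assumed air time of 250 microseconds per message; the real figure depends on message size and data rate and is not taken from the paper.

    ```python
    # Minimal sketch of the channel-busy-percentage arithmetic. The per-message
    # air time (250 us) is an assumed value, not a figure from the study.
    MSG_RATE_HZ = 800            # each reference unit at 800 Hz emulates 80 OBEs at 10 Hz
    AIR_TIME_S = 250e-6          # assumed time each message occupies the channel

    def channel_busy_percentage(n_ref_units, window_s=1.0):
        busy_time = n_ref_units * MSG_RATE_HZ * window_s * AIR_TIME_S
        return min(busy_time / window_s, 1.0) * 100.0

    for n in range(1, 6):
        print(f"{n} reference unit(s): {channel_busy_percentage(n):.0f}% busy")
    # With this assumed air time, 3-4 units land near the 50-80% band and
    # 5 units exceed 80%, roughly matching the pattern reported.
    ```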

  19. Developing a synergy algorithm for land surface temperature: the SEN4LST project

    NASA Astrophysics Data System (ADS)

    Sobrino, Jose A.; Jimenez, Juan C.; Ghent, Darren J.

    2013-04-01

    Land surface Temperature (LST) is one of the key parameters in the physics of land-surface processes on regional and global scales, combining the results of all surface-atmosphere interactions and energy fluxes between the surface and the atmosphere. An adequate characterization of the LST distribution and its temporal evolution requires measurements with detailed spatial and temporal frequencies. With the advent of the Sentinel 2 (S2) and 3 (S3) series of satellites, a unique opportunity exists to go beyond the current state of the art of single-instrument algorithms. The Synergistic Use of The Sentinel Missions For Estimating And Monitoring Land Surface Temperature (SEN4LST) project aims at developing techniques to fully utilize the synergy between S2 and S3 instruments in order to improve LST retrievals. In the framework of the SEN4LST project, three LST retrieval algorithms were proposed using the thermal infrared bands of the Sea and Land Surface Temperature Radiometer (SLSTR) instrument on board the S3 platform: split-window (SW), dual-angle (DA), and a combined algorithm using both split-window and dual-angle techniques (SW-DA). One of the objectives of the project is to select the best algorithm to generate LST products from the synergy between S2/S3 instruments. In this sense, validation is a critical step in the selection process for the best performing candidate algorithm. A unique match-up database constructed at the University of Leicester (UoL) of in situ observations from over twenty ground stations and corresponding brightness temperature (BT) and LST match-ups from multi-sensor overpasses is utilised for validating the candidate algorithms. Furthermore, their performance is also evaluated against the standard ESA LST product and the enhanced offline UoL LST product. In addition, a simulation dataset is constructed using 17 synthetic images of LST and the radiative transfer model MODTRAN run under 66 different atmospheric conditions. Each candidate LST
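
    For orientation, split-window algorithms of the family mentioned here estimate LST from two adjacent thermal brightness temperatures plus emissivity and water-vapour terms. The sketch below shows only the generic functional form; the coefficients are placeholders for illustration, not the SEN4LST candidate algorithm's values, which are fitted against radiative-transfer simulations.

    ```python
    # Generic split-window functional form (illustrative). The coefficient list c
    # holds placeholder values; operational coefficients are regression-fitted.
    def split_window_lst(t11, t12, emis_mean, emis_diff, w, c):
        """Estimate LST (K) from ~11 and ~12 um brightness temperatures (K),
        mean emissivity, emissivity difference, and column water vapour w."""
        dt = t11 - t12
        return (t11 + c[1] * dt + c[2] * dt ** 2 + c[0]
                + (c[3] + c[4] * w) * (1.0 - emis_mean)
                + (c[5] + c[6] * w) * emis_diff)

    c = [0.27, 1.40, 0.20, 55.0, -2.0, -130.0, 16.0]   # placeholder coefficients only
    print(split_window_lst(295.2, 293.8, emis_mean=0.98, emis_diff=0.005, w=2.0, c=c))
    ```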

  20. Reconstruction of an object from its Fourier modulus: development of the combination algorithm composed of the hybrid input-output algorithm and its converging part

    NASA Astrophysics Data System (ADS)

    Takajo, Hiroaki; Takahashi, Tohru; Itoh, Katsuhiko; Fujisaki, Toshiro

    2002-10-01

    The hybrid input-output algorithm (HIO) used for phase retrieval is in many cases combined with the error-reduction algorithm (ER) to attempt to stabilize the HIO. However, in our previous paper [J. Opt. Soc. Am. A 16, 2163 (1999)], it was demonstrated that this combination makes it more likely that the resultant algorithm will fall into a periodic state before reaching a solution because the values of the input object outside the support, which is imposed as the object-domain constraint, are set to be zero in the intervals in which the ER is implemented. This paper deals with this problem inherent in the combination algorithm. The converging part of the HIO (CPHIO), which is an algorithm we previously developed [J. Opt. Soc. Am. A 15, 2849 (1998)], can be thought of as an extension of the ER for the case in which the input object can have nonzero values outside the support. Keeping this in mind, the algorithm is then constructed by combining the HIO with the CPHIO instead of with the ER. The computer simulation results that demonstrate the effectiveness of the proposed algorithm are given.

  1. Reconstruction of an object from its Fourier modulus: development of the combination algorithm composed of the hybrid input-output algorithm and its converging part.

    PubMed

    Takajo, Hiroaki; Takahashi, Tohru; Itoh, Katsuhiko; Fujisaki, Toshiro

    2002-10-10

    The hybrid input-output algorithm (HIO) used for phase retrieval is in many cases combined with the error-reduction algorithm (ER) to attempt to stabilize the HIO. However, in our previous paper [J. Opt. Soc. Am. A 16, 2163 (1999)], it was demonstrated that this combination makes it more likely that the resultant algorithm will fall into a periodic state before reaching a solution because the values of the input object outside the support, which is imposed as the object-domain constraint, are set to be zero in the intervals in which the ER is implemented. This paper deals with this problem inherent in the combination algorithm. The converging part of the HIO (CPHIO), which is an algorithm we previously developed [J. Opt. Soc. Am. A 15, 2849 (1998)], can be thought of as an extension of the ER for the case in which the input object can have nonzero values outside the support. Keeping this in mind, the algorithm is then constructed by combining the HIO with the CPHIO instead of with the ER. The computer simulation results that demonstrate the effectiveness of the proposed algorithm are given.

  2. Development of sensor-based nitrogen recommendation algorithms for cereal crops

    NASA Astrophysics Data System (ADS)

    Asebedo, Antonio Ray

    Nitrogen (N) management is one of the most recognizable components of farming both within and outside the world of agriculture. Interest over the past decade has greatly increased in improving N management systems in corn (Zea mays) and winter wheat (Triticum aestivum) to achieve high nitrogen use efficiency (NUE), high yield, and environmental sustainability. Nine winter wheat experiments were conducted across seven locations from 2011 through 2013. The objectives of this study were to evaluate the impacts of fall-winter, Feekes 4, Feekes 7, and Feekes 9 N applications on winter wheat grain yield, grain protein, and total grain N uptake. Nitrogen treatments were applied as single or split applications in the fall-winter, and top-dressed in the spring at Feekes 4, Feekes 7, and Feekes 9 with applied N rates ranging from 0 to 134 kg ha-1. Results indicate that Feekes 7 and 9 N applications provide better combinations of grain yield, grain protein levels, and fertilizer N recovered in the grain when compared to comparable rates of N applied in the fall-winter or at Feekes 4. Winter wheat N management studies from 2006 through 2013 were utilized to develop sensor-based N recommendation algorithms for winter wheat in Kansas. Algorithm RosieKat v.2.6 was designed for multiple N application strategies and utilized N reference strips for establishing N response potential. Algorithm NRS v1.5 addressed single top-dress N applications and does not require an N reference strip. In 2013, field validations of both algorithms were conducted at eight locations across Kansas. Results show algorithm RK v2.6 consistently provided highly efficient N recommendations for improving NUE, while achieving high grain yield and grain protein. Without the use of the N reference strip, NRS v1.5 performed statistically equal to the KSU soil test N recommendation with regard to grain yield but with lower applied N rates. Six corn N fertigation experiments were conducted at KSU irrigated experiment fields from 2012

  3. Origin and development of the tergotrochanteral muscle in Chironomus (Diptera: Nematocera).

    PubMed

    Lebart-Pedebas, M C

    1992-01-01

    The origin and development of the tubular tergo-trochanteral muscle (TTD) were studied by light and electron microscopy in Chironomus (Diptera: Nematocera). Unlike the flight muscles, the TTD was found to develop from myoblasts located around a larval axon, without contribution from a larval muscle. The myoblasts fuse together to form myotubes. Innervation of the TTD arises from the larval axon. The myotubes send out sarcoplasmic extensions towards the axon branches arising from the larval axon. The first differentiated synapses are described. The TTD begins to grow later than the flight muscles. The implications of this developmental lag are discussed.

  4. Development of an IMU-based foot-ground contact detection (FGCD) algorithm.

    PubMed

    Kim, Myeongkyu; Lee, Donghun

    2017-03-01

    It is well known that, to locate humans in GPS-denied environments, a lower limb kinematic solution based on an Inertial Measurement Unit (IMU), a force plate, and pressure insoles is essential. The force plate and pressure insole are used to detect foot-ground contacts. However, the use of multiple sensors is not desirable in most cases. This paper documents the development of an IMU-based FGCD (foot-ground contact detection) algorithm considering the variations of both walking terrain and speed. All IMU outputs showing significant changes at the moments of foot-ground contact phases are fully identified through experiments in five walking terrains. For the experiment on each walking terrain, variations of walking speed are also examined to confirm the correlations between walking speed and the main parameters in the FGCD algorithm. As a result of the experiments, an FGCD algorithm that successfully detects four contact phases was developed, and its performance was validated. Practitioner Summary: In this research, it was demonstrated that the four contact phases of Heel strike (or Toe strike), Full contact, Heel off and Toe off can be detected independently of walking speed and walking terrain, based on detection criteria composed of the ranges and rates of change of the main parameters measured by the Inertial Measurement Unit sensors.
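
    As a toy illustration of the rule-based idea (thresholds on ranges and rates of change of IMU signals), the sketch below classifies a single sample from a foot-mounted IMU into one of the contact phases. The signal names and thresholds are assumptions for illustration and are not the published FGCD detection criteria.

      def classify_contact_phase(pitch_deg, gyro_y, acc_norm,
                                 still_rate=0.3, flat_angle=5.0, impact_g=1.8):
          """Toy rule-based contact-phase classifier for one IMU sample.

          pitch_deg : foot pitch angle (deg), 0 = foot flat, negative = heel down
          gyro_y    : sagittal-plane angular rate (rad/s)
          acc_norm  : acceleration magnitude (g), ~1 g when the foot is static
          """
          if abs(gyro_y) < still_rate and abs(pitch_deg) < flat_angle:
              return "full contact"        # foot flat and nearly stationary
          if acc_norm > impact_g and pitch_deg < -flat_angle:
              return "heel strike"         # impact spike with heel-down attitude
          if pitch_deg > flat_angle and gyro_y > 0:
              return "toe off"             # forefoot push-off, foot rotating forward
          if pitch_deg > 0:
              return "heel off"            # heel rising while the forefoot is loaded
          return "swing"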

  5. jClustering, an Open Framework for the Development of 4D Clustering Algorithms

    PubMed Central

    Mateos-Pérez, José María; García-Villalba, Carmen; Pascau, Javier; Desco, Manuel; Vaquero, Juan J.

    2013-01-01

    We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary. PMID:23990913
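
    As a rough idea of what a 4D clustering algorithm in this setting does, the sketch below runs plain k-means over voxel time-activity curves of a dynamic image using NumPy. It is a generic illustration of the task, written in Python rather than Java, and does not use or reproduce the jClustering plugin API.

      import numpy as np

      def kmeans_tacs(image_4d, k=4, n_iter=50, seed=0):
          """Cluster voxels of a dynamic image by their time-activity curves (TACs).

          image_4d : array of shape (t, z, y, x); returns a (z, y, x) label map.
          """
          t = image_4d.shape[0]
          tacs = image_4d.reshape(t, -1).T.astype(float)        # one TAC per voxel
          rng = np.random.default_rng(seed)
          centers = tacs[rng.choice(len(tacs), size=k, replace=False)]
          for _ in range(n_iter):
              d = ((tacs[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
              labels = d.argmin(axis=1)                          # nearest centre per voxel
              for j in range(k):
                  if np.any(labels == j):
                      centers[j] = tacs[labels == j].mean(axis=0)
          return labels.reshape(image_4d.shape[1:])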

  6. jClustering, an open framework for the development of 4D clustering algorithms.

    PubMed

    Mateos-Pérez, José María; García-Villalba, Carmen; Pascau, Javier; Desco, Manuel; Vaquero, Juan J

    2013-01-01

    We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary.

  7. Review and Analysis of Algorithmic Approaches Developed for Prognostics on CMAPSS Dataset

    NASA Technical Reports Server (NTRS)

    Ramasso, Emannuel; Saxena, Abhinav

    2014-01-01

    Benchmarking of prognostic algorithms has been challenging due to limited availability of common datasets suitable for prognostics. In an attempt to alleviate this problem several benchmarking datasets have been collected by NASA's prognostic center of excellence and made available to the Prognostics and Health Management (PHM) community to allow evaluation and comparison of prognostics algorithms. Among those datasets are five C-MAPSS datasets that have been extremely popular due to their unique characteristics making them suitable for prognostics. The C-MAPSS datasets pose several challenges that have been tackled by different methods in the PHM literature. In particular, management of high variability due to sensor noise, effects of operating conditions, and presence of multiple simultaneous fault modes are some factors that have great impact on the generalization capabilities of prognostics algorithms. More than 70 publications have used the C-MAPSS datasets for developing data-driven prognostic algorithms. The C-MAPSS datasets are also shown to be well-suited for development of new machine learning and pattern recognition tools for several key preprocessing steps such as feature extraction and selection, failure mode assessment, operating conditions assessment, health status estimation, uncertainty management, and prognostics performance evaluation. This paper summarizes a comprehensive literature review of publications using C-MAPSS datasets and provides guidelines and references to further usage of these datasets in a manner that allows clear and consistent comparison between different approaches.

  8. DEVELOPMENT OF PROCESSING ALGORITHMS FOR OUTLIERS AND MISSING VALUES IN CONSTANT OBSERVATION DATA OF TRAFFIC VOLUMES

    NASA Astrophysics Data System (ADS)

    Hashimoto, Hiroyoshi; Kawano, Tomohiko; Momma, Toshiyuki; Uesaka, Katsumi

    The Ministry of Land, Infrastructure, Transport and Tourism of Japan is going to make maximum use of vehicle detectors installed at national roads around the country and efficiently gather traffic volume data from wide areas by estimating traffic volumes within adjacent road sections based on the constant observation data obtained from the vehicle detectors. Efficient processing of outliers and missing values in constant observation data is needed in this process. Focusing on the processing of outliers and missing values, the authors have developed a series of algorithms to calculate hourly traffic volumes in which a required accuracy is secured based on measurement data obtained from vehicle detectors. The algorithms have been put to practical use. The main characteristic of these algorithms is that they use data accumulated in the past as well as data from constant observation devices in adjacent road sections. This paper describes the contents of the developed algorithms and clarifies their accuracy using actual observation data and by making comparisons with other methods.
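
    The abstract does not give the estimation formulas, but the core idea of combining past data with adjacent-section observations can be illustrated as below; the choice of a median ratio and the matching of "comparable hours" are assumptions, not the published method.

      import numpy as np

      def fill_missing_hourly_volume(target_history, neighbor_history, neighbor_now):
          """Estimate a missing hourly volume on a target section from the volume
          observed now on an adjacent section, scaled by the historical ratio
          between the two sections at comparable hours (e.g. same weekday and
          hour of day)."""
          target_history = np.asarray(target_history, dtype=float)
          neighbor_history = np.asarray(neighbor_history, dtype=float)
          valid = (neighbor_history > 0) & np.isfinite(target_history)
          ratio = np.median(target_history[valid] / neighbor_history[valid])
          return ratio * neighbor_now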

  9. Space-based Doppler lidar sampling strategies: Algorithm development and simulated observation experiments

    NASA Technical Reports Server (NTRS)

    Emmitt, G. D.; Wood, S. A.; Morris, M.

    1990-01-01

    Lidar Atmospheric Wind Sounder (LAWS) Simulation Models (LSM) were developed to evaluate the potential impact of global wind observations on the basic understanding of the Earth's atmosphere and on the predictive skills of current forecast models (GCM and regional scale). Fully integrated top to bottom LAWS Simulation Models for global and regional scale simulations were developed. The algorithm development incorporated the effects of aerosols, water vapor, clouds, terrain, and atmospheric turbulence into the models. Other additions include a new satellite orbiter, signal processor, line of sight uncertainty model, new Multi-Paired Algorithm and wind error analysis code. An atmospheric wind field library containing control fields, meteorological fields, phenomena fields, and new European Center for Medium Range Weather Forecasting (ECMWF) data was also added. The LSM was used to address some key LAWS issues and trades such as accuracy and interpretation of LAWS information, data density, signal strength, cloud obscuration, and temporal data resolution.

  10. Application of custom tools and algorithms to the development of terrain and target models

    NASA Astrophysics Data System (ADS)

    Wilkosz, Aaron; Williams, Bryan L.; Motz, Steve

    2003-09-01

    In this paper we give a high-level discussion outlining methodologies and techniques employed in generating high fidelity terrain and target models. We present the current state of our IR signature development efforts, cover custom tools and algorithms, and discuss future plans. We outline the steps required to derive IR terrain and target signature models, and provide some details about algorithms developed to classify aerial imagery. In addition, we discuss our tool used to apply IR signature data to tactical vehicle models. We discuss how we process the empirical IR data of target vehicles, apply it to target models, and generate target signature models that correlate with the measured calibrated IR data. The developed characterization databases and target models are used in digital simulations by various customers within the US Army Aviation and Missile Command (AMCOM).

  11. Testing the Fetal Origins Hypothesis in a developing country: evidence from the 1918 Influenza Pandemic.

    PubMed

    Nelson, Richard E

    2010-10-01

    The 1918 Influenza Pandemic is used as a natural experiment to test the Fetal Origins Hypothesis. This hypothesis states that individual health as well as socioeconomic outcomes, such as educational attainment, employment status, and wages, are affected by the health of that individual while in utero. Repeated cross sections from the Pesquisa Mensal de Emprego (PME), a labor market survey from Brazil, are used to test this hypothesis. I find evidence to support the Fetal Origins Hypothesis. In particular, compared to individuals born in the few years surrounding the Influenza Pandemic, those who were in utero during the pandemic are less likely to be college educated, be employed, have formal employment, or know how to read and have fewer years of schooling and a lower hourly wage. These results underscore the importance of fetal health especially in developing countries.

  12. Development of Outlier detection Algorithm Applicable to a Korean Surge-Gauge

    NASA Astrophysics Data System (ADS)

    Lee, Jun-Whan; Park, Sun-Cheon; Lee, Won-Jin; Lee, Duk Kee

    2016-04-01

    The Korea Meteorological Administration (KMA) operates a surge gauge (aerial ultrasonic type) at Ulleung-do to monitor tsunamis, and the National Institute of Meteorological Sciences (NIMS), KMA, is developing a tsunami detection and observation system using this surge gauge. Outliers resulting from transmission problems and from extreme events that change the water level temporarily are among the most common problems in tsunami detection. Unlike a spike, multipoint outliers are difficult to detect clearly. Most of the previous studies used statistical values or signal-processing methods such as wavelet transforms and filters to detect the multipoint outliers, and used a continuous dataset. However, as the focus moved to near real-time operation with a dataset that contains gaps, these methods are no longer tenable. In this study, we developed an outlier detection algorithm applicable to the Ulleung-do surge gauge, where both multipoint outliers and missing data exist. Although only 9-point data and two arithmetic operations (plus and minus) are used, because of the newly developed keeping method, the algorithm is not only simple and fast but also effective on a non-continuous dataset. We calibrated 17 thresholds and conducted performance tests using three months of data from the Ulleung-do surge gauge. The results show that the newly developed despiking algorithm performs reliably in alleviating the outlier detection problem.
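
    The published detection criteria are not given in the abstract; the sketch below only illustrates the general idea of flagging samples that differ too much from the last accepted ("kept") value while tolerating gaps, using additions and subtractions only. The threshold and the rule for accepting a persistent level change are assumptions.

      def despike(series, thresh=0.5, max_reject=5):
          """Flag outliers in a water-level series that may contain gaps (None).

          A sample is rejected when it differs from the last accepted value by
          more than thresh; a run of more than max_reject rejections is taken
          as a real level change and accepted."""
          out = list(series)
          kept, rejected = None, 0
          for i, v in enumerate(series):
              if v is None:
                  continue                      # gaps are skipped, reference survives
              if kept is None or abs(v - kept) <= thresh or rejected >= max_reject:
                  kept, rejected = v, 0         # accept sample, reset rejection count
              else:
                  out[i] = None                 # flag as outlier
                  rejected += 1
          return out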

  13. Development of a novel algorithm to determine adherence to chronic pain treatment guidelines using administrative claims

    PubMed Central

    Margolis, Jay M; Princic, Nicole; Smith, David M; Abraham, Lucy; Cappelleri, Joseph C; Shah, Sonali N; Park, Peter W

    2017-01-01

    Objective To develop a claims-based algorithm for identifying patients who are adherent versus nonadherent to published guidelines for chronic pain management. Methods Using medical and pharmacy health care claims from the MarketScan® Commercial and Medicare Supplemental Databases, patients were selected from July 1, 2010, to June 30, 2012, with the following chronic pain conditions: osteoarthritis (OA), gout (GT), painful diabetic peripheral neuropathy (pDPN), post-herpetic neuralgia (PHN), and fibromyalgia (FM). Newly diagnosed patients with 12 months of continuous medical and pharmacy benefits both before and after the initial diagnosis (index date) were categorized as adherent, nonadherent, or unsure according to the guidelines-based algorithm using disease-specific pain medication classes grouped as first-line, later-line, or not recommended. Descriptive and multivariate analyses compared patient outcomes with algorithm-derived categorization endpoints. Results A total of 441,465 OA patients, 76,361 GT patients, 10,645 pDPN patients, 4,010 PHN patients, and 150,321 FM patients were included in the development of the algorithm. Patients found adherent to guidelines included 51.1% for OA, 25% for GT, 59.5% for pDPN, 54.9% for PHN, and 33.5% for FM. The majority (~90%) of patients adherent to the guidelines initiated therapy with prescriptions for first-line pain medications written for a minimum of 30 days. Patients found nonadherent to guidelines included 30.7% for OA, 6.8% for GT, 34.9% for pDPN, 23.1% for PHN, and 34.7% for FM. Conclusion This novel algorithm used real-world pharmacotherapy treatment patterns to evaluate adherence to pain management guidelines in five chronic pain conditions. Findings suggest that one-third to one-half of patients are managed according to guidelines. This method may have valuable applications for health care payers and providers analyzing treatment guideline adherence. PMID:28223842
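
    A heavily simplified version of the categorization step might look like the following; the real algorithm is condition-specific and works on grouped medication classes and claim dates, so the rule and the 30-day cut-off below are only illustrative of the adherent/nonadherent/unsure split described above.

      def classify_adherence(first_rx_line, days_supply, min_days=30):
          """Toy guideline-adherence categorization from the first pain
          prescription after the index date.

          first_rx_line : 'first-line', 'later-line', or 'not recommended'
          days_supply   : days supplied on that first prescription
          """
          if first_rx_line == "first-line" and days_supply >= min_days:
              return "adherent"
          if first_rx_line == "not recommended":
              return "nonadherent"
          return "unsure"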

  14. Development of an Aircraft Approach and Departure Atmospheric Profile Generation Algorithm

    NASA Technical Reports Server (NTRS)

    Buck, Bill K.; Velotas, Steven G.; Rutishauser, David K. (Technical Monitor)

    2004-01-01

    In support of the NASA Virtual Airspace Modeling and Simulation (VAMS) project, an effort was initiated to develop and test techniques for extracting meteorological data from landing and departing aircraft, and for building altitude-based profiles for key meteorological parameters from these data. The generated atmospheric profiles will be used as inputs to NASA's Aircraft Vortex Spacing System (AVOSS) Prediction Algorithm (APA) for benefits and trade analysis. A Wake Vortex Advisory System (WakeVAS) is being developed to apply weather and wake prediction and sensing technologies with procedures to reduce current wake separation criteria when safe and appropriate to increase airport operational efficiency. The purpose of this report is to document the initial theory and design of the Aircraft Approach and Departure Atmospheric Profile Generation Algorithm.

  15. Developments of a force image algorithm for micromachined optical bend loss sensor

    NASA Astrophysics Data System (ADS)

    Huang, Chu-Yu; Liu, Chao-Shih; Panergo, Reynold; Huang, Cheng-Sheng; Wang, Wei-Chih

    2005-05-01

    A flexible high-resolution sensor capable of measuring the distribution of both shear and pressure at the plantar interface is needed to study the actual distribution of these stresses during daily activities, and the role that shear plays in causing plantar ulceration. We have previously developed a novel means of transducing plantar shear and pressure stress via a new microfabricated optical system. However, a force image algorithm is needed to handle the complexity of constructing two-dimensional planar pressure and shear images. Here we have developed a force image algorithm for a micromachined optical bend loss sensor. A neural network is introduced to help identify different load shapes. According to the experimental results, we can conclude that once the neural network has been well trained, it can correctly identify the loading shape. With the neural network, our micromachined optical bend loss sensor is able to construct two-dimensional planar force images.

  16. Development of Cloud and Precipitation Property Retrieval Algorithms and Measurement Simulators from ASR Data

    SciTech Connect

    Mace, Gerald G.

    2016-02-10

    What has made the ASR program unique is the amount of information that is available. The suite of recently deployed instruments significantly expands the scope of the program (Mather and Voyles, 2013). The breadth of this information allows us to pose sophisticated process-level questions. Our ASR project, now entering its third year, has been about developing algorithms that use this information in ways that fully exploit the new capacity of the ARM data streams. Using optimal estimation (OE) and Markov Chain Monte Carlo (MCMC) inversion techniques, we have developed methodologies that allow us to use multiple radar frequency Doppler spectra along with lidar and passive constraints where data streams can be added or subtracted efficiently and algorithms can be reformulated for various combinations of hydrometeors by exchanging sets of empirical coefficients. These methodologies have been applied to boundary layer clouds, mixed phase snow cloud systems, and cirrus.
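
    For context, the optimal-estimation step referred to above is typically a Gauss-Newton iteration of the Rodgers form, minimising a measurement-misfit term plus a prior term. The sketch below is a generic implementation of that iteration, not the project's multi-instrument retrieval; the forward model, Jacobian and covariances are supplied by the caller.

      import numpy as np

      def oe_retrieval(y, forward, jacobian, x_a, S_a, S_e, n_iter=10):
          """Gauss-Newton optimal estimation: minimise
          (y - F(x))^T Se^-1 (y - F(x)) + (x - xa)^T Sa^-1 (x - xa)."""
          x = np.asarray(x_a, dtype=float).copy()
          S_a_inv = np.linalg.inv(S_a)
          S_e_inv = np.linalg.inv(S_e)
          for _ in range(n_iter):
              K = jacobian(x)                                   # dF/dx at the current state
              rhs = K.T @ S_e_inv @ (y - forward(x) + K @ (x - x_a))
              x = x_a + np.linalg.solve(K.T @ S_e_inv @ K + S_a_inv, rhs)
          return x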

  17. Forecasting of the development of professional medical equipment engineering based on neuro-fuzzy algorithms

    NASA Astrophysics Data System (ADS)

    Vaganova, E. V.; Syryamkin, M. V.

    2015-11-01

    The purpose of the research is the development of evolutionary algorithms for assessing promising scientific directions. The present study focuses on evaluating foresight possibilities for identifying technological peaks and emerging technologies in professional medical equipment engineering in Russia and worldwide, on the basis of intellectual property items and neural network modeling. An automated information system has been developed, consisting of modules that implement various classification methods to improve forecast accuracy, together with an algorithm for constructing a neuro-fuzzy decision tree. According to the study results, modern trends in this field will focus on personalized smart devices, telemedicine, biomonitoring, «e-Health» and «m-Health» technologies.

  18. Development and Evaluation of Single-Microphone Noise Reduction Algorithms for Digital Hearing Aids

    NASA Astrophysics Data System (ADS)

    Marzinzik, Mark; Kollmeier, Birger

    In this study single-microphone noise reduction procedures were investigated for use in digital hearing aids. One widely reported artifact of most noise suppression systems, the musical noise phenomenon, can partly be overcome by the Ephraim-Malah noise suppression algorithms [1,2]. Based on these algorithms, three different versions have been implemented together with a procedure for automatically updating the noise-spectrum estimate. To evaluate the algorithms, different tests have been performed with six normal-hearing and six hearing-impaired subjects. With `standard' measurement methods no increase in speech intelligibility was found compared to the unprocessed signal. However, benefits with respect to reductions in listener fatigue and in the mental effort needed to listen to speech in noise over longer periods of time were found in this study by use of a newly developed ease-of-listening test. Subsequent paired comparison tests also revealed a clear preference of the hearing-impaired subjects for the noise-reduced signals in situations with rather stationary noise. In the case of strongly fluctuating noise at low SNR, however, the subjects preferred the unprocessed signal due to speech distortions caused by the noise reduction algorithms.
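
    The key ingredient of the Ephraim-Malah algorithms that suppresses musical noise is the decision-directed a priori SNR estimate. The sketch below combines that estimate with a simple Wiener-type gain instead of the full MMSE short-time spectral amplitude estimator, so it is a simplified stand-in rather than the algorithms evaluated in the study; the smoothing factor and gain floor are assumptions.

      import numpy as np

      def noise_reduce(frames, noise_psd, alpha=0.98, gain_floor=0.1):
          """Single-channel spectral noise suppression on STFT frames.

          frames    : complex STFT frames, shape (n_frames, n_bins)
          noise_psd : estimated noise power spectrum, shape (n_bins,)
          """
          out = np.empty_like(frames)
          prev_clean_power = noise_psd.copy()
          for m, X in enumerate(frames):
              gamma = np.abs(X) ** 2 / noise_psd                    # a posteriori SNR
              xi = (alpha * prev_clean_power / noise_psd
                    + (1.0 - alpha) * np.maximum(gamma - 1.0, 0.0)) # decision-directed a priori SNR
              gain = np.maximum(xi / (1.0 + xi), gain_floor)        # Wiener-type gain with floor
              out[m] = gain * X
              prev_clean_power = np.abs(out[m]) ** 2                # feeds the next frame's estimate
          return out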

  19. Development and application of efficient pathway enumeration algorithms for metabolic engineering applications.

    PubMed

    Liu, F; Vilaça, P; Rocha, I; Rocha, M

    2015-02-01

    Metabolic Engineering (ME) aims to design microbial cell factories towards the production of valuable compounds. In this endeavor, one important task relates to the search for the most suitable heterologous pathway(s) to add to the selected host. Different algorithms have been developed in the past towards this goal, following distinct approaches spanning constraint-based modeling, graph-based methods and knowledge-based systems based on chemical rules. While some of these methods search for pathways optimizing specific objective functions, here the focus will be on methods that address the enumeration of pathways that are able to convert a set of source compounds into desired targets and their posterior evaluation according to different criteria. Two pathway enumeration algorithms based on (hyper)graph-based representations are selected as the most promising ones and are analyzed in more detail: the Solution Structure Generation and the Find Path algorithms. Their capabilities and limitations are evaluated when designing novel heterologous pathways, by applying these methods on three case studies of synthetic ME related to the production of non-native compounds in E. coli and S. cerevisiae: 1-butanol, curcumin and vanillin. Some targeted improvements are implemented, extending both methods to address limitations identified that impair their scalability, improving their ability to extract potential pathways over large-scale databases. In all case-studies, the algorithms were able to find already described pathways for the production of the target compounds, but also alternative pathways that can represent novel ME solutions after further evaluation.
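
    To make the enumeration task concrete, the sketch below performs a naive breadth-first enumeration of linear reaction paths from a source to a target compound. Real enumeration methods such as those evaluated in the paper operate on (hyper)graphs with full stoichiometry and cofactor handling, so this only illustrates the search idea; the reaction tuples and length cap are assumptions.

      from collections import deque

      def enumerate_pathways(reactions, source, target, max_len=6):
          """Enumerate linear pathways from source to target.

          reactions : iterable of (name, substrate, product) tuples
          Returns a list of reaction-name lists, shortest first."""
          paths = []
          queue = deque([(source, [], {source})])
          while queue:
              compound, path, seen = queue.popleft()
              if compound == target:
                  paths.append(path)
                  continue
              if len(path) >= max_len:
                  continue
              for name, substrate, product in reactions:
                  if substrate == compound and product not in seen:
                      queue.append((product, path + [name], seen | {product}))
          return paths

    For example, with reactions [("r1", "glucose", "A"), ("r2", "A", "vanillin"), ("r3", "glucose", "vanillin")], calling enumerate_pathways on "glucose" and "vanillin" returns [["r3"], ["r1", "r2"]].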

  20. Development of Algorithms for Control of Humidity in Plant Growth Chambers

    NASA Technical Reports Server (NTRS)

    Costello, Thomas A.

    2003-01-01

    Algorithms were developed to control humidity in plant growth chambers used for research on bioregenerative life support at Kennedy Space Center. The algorithms used the computed water vapor pressure (based on measured air temperature and relative humidity) as the process variable, with time-proportioned outputs to operate the humidifier and de-humidifier. Algorithms were based upon proportional-integral-differential (PID) and Fuzzy Logic schemes and were implemented using I/O Control software (OPTO-22) to define and download the control logic to an autonomous programmable logic controller (PLC, ultimate ethernet brain and assorted input-output modules, OPTO-22), which performed the monitoring and control logic processing, as well as the physical control of the devices that effected the targeted environment in the chamber. During limited testing, the PLCs successfully implemented the intended control schemes and attained a control resolution for humidity of less than 1%. The algorithms have potential to be used not only with autonomous PLCs but could also be implemented within network-based supervisory control programs. This report documents unique control features that were implemented within the OPTO-22 framework and makes recommendations regarding future uses of the hardware and software for biological research by NASA.
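
    The process variable described above can be reproduced with a standard psychrometric approximation, and the control logic is a textbook PID whose output is time-proportioned between the humidifier and dehumidifier. The sketch below assumes the Tetens formula for saturation vapour pressure; the gains and the duty-cycle mapping are placeholders, not the tuning used in the growth chambers.

      import math

      def vapor_pressure_kpa(temp_c, rh_percent):
          """Actual water vapour pressure (kPa) from temperature and relative
          humidity, using the Tetens approximation for saturation pressure."""
          e_sat = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))
          return e_sat * rh_percent / 100.0

      class PID:
          """Textbook PID acting on the vapour-pressure error."""
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral, self.prev_err = 0.0, 0.0

          def update(self, setpoint, measured):
              err = setpoint - measured
              self.integral += err * self.dt
              deriv = (err - self.prev_err) / self.dt
              self.prev_err = err
              return self.kp * err + self.ki * self.integral + self.kd * deriv

      def duty_cycle(output, max_output=1.0):
          """Time-proportioned actuation: positive output drives the humidifier,
          negative output the dehumidifier, as a fraction of the control period."""
          frac = max(-1.0, min(1.0, output / max_output))
          return ("humidifier", frac) if frac >= 0 else ("dehumidifier", -frac)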

  1. MEMS-based sensing and algorithm development for fall detection and gait analysis

    NASA Astrophysics Data System (ADS)

    Gupta, Piyush; Ramirez, Gabriel; Lie, Donald Y. C.; Dallas, Tim; Banister, Ron E.; Dentino, Andrew

    2010-02-01

    Falls by the elderly are highly detrimental to health, frequently resulting in injury, high medical costs, and even death. Using a MEMS-based sensing system, algorithms are being developed for detecting falls and monitoring the gait of elderly and disabled persons. In this study, wireless sensors utilizing Zigbee protocols were incorporated into planar shoe insoles and a waist-mounted device. The insole contains four sensors to measure pressure applied by the foot. A MEMS-based tri-axial accelerometer is embedded in the insert and a second one is utilized by the waist-mounted device. The primary fall detection algorithm is derived from the waist accelerometer. The differential acceleration is calculated from samples received in 1.5s time intervals. This differential acceleration provides quantification via an energy index. From this index one may characterize different gaits and identify fall events. Once a pre-determined index threshold is exceeded, the algorithm will classify an event as a fall or a stumble. The secondary algorithm is derived from frequency analysis techniques. The analysis consists of wavelet transforms conducted on the waist accelerometer data. The insole pressure data are then used to highlight discrepancies in the transforms, providing more accurate data for classifying gait and/or detecting falls. The range of the transform amplitude in the fourth iteration of a Daubechies-6 transform was found sufficient to detect and classify fall events.
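
    A minimal version of the primary (energy-index) algorithm can be sketched as below: the 1.5 s window follows the interval mentioned in the abstract, while the sampling rate and the fall/stumble thresholds are assumptions for illustration.

      import numpy as np

      def energy_index(accel, fs=50.0, window_s=1.5):
          """Differential acceleration (max - min of the magnitude) per window
          from a waist-worn tri-axial accelerometer; accel has shape (n, 3) in g."""
          mag = np.linalg.norm(accel, axis=1)
          n = int(fs * window_s)
          return np.array([mag[s:s + n].max() - mag[s:s + n].min()
                           for s in range(0, len(mag) - n + 1, n)])

      def classify(indices, fall_thresh=1.8, stumble_thresh=1.0):
          """Threshold classification of each window's energy index."""
          return ["fall" if e > fall_thresh else "stumble" if e > stumble_thresh else "normal"
                  for e in indices]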

  2. A Focus Group on Dental Pain Complaints with General Medical Practitioners: Developing a Treatment Algorithm.

    PubMed

    Carter, Ava Elizabeth; Carter, Geoff; Abbey, Robyn

    2016-01-01

    Objective. The differential diagnosis of pain in the mouth can be challenging for general medical practitioners (GMPs) as many different dental problems can present with similar signs and symptoms. This study aimed to create a treatment algorithm for GMPs to effectively and appropriately refer the patients and prescribe antibiotics. Design. The study design is comprised of qualitative focus group discussions. Setting and Subjects. Groups of GMPs within the Gold Coast and Brisbane urban and city regions. Outcome Measures. Content thematically analysed and treatment algorithm developed. Results. There were 5 focus groups with 8-9 participants per group. Addressing whether antibiotics should be given to patients with dental pain was considered very important to GMPs to prevent overtreatment and creating antibiotic resistance. Many practitioners were unsure of what the different forms of dental pains represent. 90% of the practitioners involved agreed that the treatment algorithm was useful to daily practice. Conclusion. Common dental complaints and infections are seldom surgical emergencies but can result in prolonged appointments for those GMPs who do not regularly deal with these issues. The treatment algorithm for referral processes and prescriptions was deemed easily downloadable and simple to interpret and detailed but succinct enough for clinical use by GMPs.

  3. Development of a validated algorithm for the diagnosis of paediatric asthma in electronic medical records

    PubMed Central

    Cave, Andrew J; Davey, Christina; Ahmadi, Elaheh; Drummond, Neil; Fuentes, Sonia; Kazemi-Bajestani, Seyyed Mohammad Reza; Sharpe, Heather; Taylor, Matt

    2016-01-01

    An accurate estimation of the prevalence of paediatric asthma in Alberta and elsewhere is hampered by uncertainty regarding disease definition and diagnosis. Electronic medical records (EMRs) provide a rich source of clinical data from primary-care practices that can be used in better understanding the occurrence of the disease. The Canadian Primary Care Sentinel Surveillance Network (CPCSSN) database includes cleaned data extracted from the EMRs of primary-care practitioners. The purpose of the study was to develop and validate a case definition of asthma in children 1–17 who consult family physicians, in order to provide primary-care estimates of childhood asthma in Alberta as accurately as possible. The validation involved the comparison of the application of a theoretical algorithm (to identify patients with asthma) to a physician review of records included in the CPCSSN database (to confirm an accurate diagnosis). The comparison yielded 87.4% sensitivity, 98.6% specificity and a positive and negative predictive value of 91.2% and 97.9%, respectively, in the age group 1–17 years. The algorithm was also run for ages 3–17 and 6–17 years, and was found to have comparable statistical values. Overall, the case definition and algorithm yielded strong sensitivity and specificity metrics and was found valid for use in research in CPCSSN primary-care practices. The use of the validated asthma algorithm may improve insight into the prevalence, diagnosis, and management of paediatric asthma in Alberta and Canada. PMID:27882997

  4. A Focus Group on Dental Pain Complaints with General Medical Practitioners: Developing a Treatment Algorithm

    PubMed Central

    Carter, Geoff; Abbey, Robyn

    2016-01-01

    Objective. The differential diagnosis of pain in the mouth can be challenging for general medical practitioners (GMPs) as many different dental problems can present with similar signs and symptoms. This study aimed to create a treatment algorithm for GMPs to effectively and appropriately refer the patients and prescribe antibiotics. Design. The study design is comprised of qualitative focus group discussions. Setting and Subjects. Groups of GMPs within the Gold Coast and Brisbane urban and city regions. Outcome Measures. Content thematically analysed and treatment algorithm developed. Results. There were 5 focus groups with 8-9 participants per group. Addressing whether antibiotics should be given to patients with dental pain was considered very important to GMPs to prevent overtreatment and creating antibiotic resistance. Many practitioners were unsure of what the different forms of dental pains represent. 90% of the practitioners involved agreed that the treatment algorithm was useful to daily practice. Conclusion. Common dental complaints and infections are seldom surgical emergencies but can result in prolonged appointments for those GMPs who do not regularly deal with these issues. The treatment algorithm for referral processes and prescriptions was deemed easily downloadable and simple to interpret and detailed but succinct enough for clinical use by GMPs. PMID:27462469

  5. Progress on the development of automated data analysis algorithms and software for ultrasonic inspection of composites

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Coughlin, Chris; Forsyth, David S.; Welter, John T.

    2014-02-01

    Progress is presented on the development and implementation of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. ADA processing results are presented for test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions.

  6. Origine et developpement des industries de la langue (Origin and Development of Language Utilities). Publication K-8.

    ERIC Educational Resources Information Center

    L'Homme, Marie-Claude

    The evolution of "language utilities," a concept confined largely to the francophone world and relating to the uses of language in computer science and the use of computer science for languages, is chronicled. The language utilities are of three types: (1) tools for language development, primarily dictionary databases and related tools;…

  7. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1997-01-01

    Significant accomplishments made during the present reporting period are as follows: (1) We developed a new method for identifying the presence of absorbing aerosols and, simultaneously, performing atmospheric correction. The algorithm consists of optimizing the match between the top-of-atmosphere radiance spectrum and the result of models of both the ocean and aerosol optical properties; (2) We developed an algorithm for providing an accurate computation of the diffuse transmittance of the atmosphere given an aerosol model. A module for inclusion into the MODIS atmospheric-correction algorithm was completed; (3) We acquired reflectance data for oceanic whitecaps during a cruise on the RV Ka'imimoana in the Tropical Pacific (Manzanillo, Mexico to Honolulu, Hawaii). The reflectance spectrum of whitecaps was found to be similar to that for breaking waves in the surf zone measured by Frouin, Schwindling and Deschamps, however, the drop in augmented reflectance from 670 to 860 nm was not as great, and the magnitude of the augmented reflectance was significantly less than expected; and (4) We developed a method for the approximate correction for the effects of the MODIS polarization sensitivity. The correction, however, requires adequate characterization of the polarization sensitivity of MODIS prior to launch.

  8. Development of an algorithm for identifying rheumatoid arthritis in the Korean National Health Insurance claims database.

    PubMed

    Cho, Soo-Kyung; Sung, Yoon-Kyoung; Choi, Chan-Bum; Kwon, Jeong-Mi; Lee, Eui-Kyung; Bae, Sang-Cheol

    2013-12-01

    This study aimed to develop an identification algorithm for validating the International Classification of Diseases, Tenth Revision (ICD-10) diagnostic codes for rheumatoid arthritis (RA) in the Korean National Health Insurance (NHI) claims database. An individual copayment beneficiaries program for rare and intractable diseases, including seropositive RA (M05), began in South Korea in July 2009. Patients registered in this system pay only 10 % of their total medical costs, but registration requires an official report from a doctor documenting that the patient fulfills the 1987 ACR criteria. We regarded patients registered in this system as gold standard RA and examined the validity of several algorithms to define RA diagnosis using diagnostic codes and prescription data. We constructed nine algorithms using two highly specific prescriptions (positive predictive value >90 % and specificity >90 %) and one prescription with high sensitivity (>80 %) and accuracy (>75 %). A total of 59,823 RA patients were included in this validation study. Among them, 50,082 (83.7 %) were registered in the individual copayment beneficiaries program and considered true RA. We tested nine algorithms that incorporated two specific regimens [biologics and leflunomide alone, methotrexate plus leflunomide, or more than 3 disease-modifying anti-rheumatic drugs (DMARDs)] and one sensitive drug [any non-steroidal anti-inflammatory drug (NSAID), any DMARD, or any NSAID plus any DMARD]. The algorithm that included biologics, more than 3 DMARDs, and any DMARD yielded the highest accuracy (91.4 %). Patients with RA diagnostic codes and a prescription of biologics or any DMARD can be considered accurate cases of RA in the Korean NHI claims database.

  9. Development and evaluation of an articulated registration algorithm for human skeleton registration.

    PubMed

    Yip, Stephen; Perk, Timothy; Jeraj, Robert

    2014-03-21

    Accurate registration over multiple scans is necessary to assess treatment response of bone diseases (e.g. metastatic bone lesions). This study aimed to develop and evaluate an articulated registration algorithm for whole-body skeleton registration in human patients. In articulated registration, whole-body skeletons are registered by auto-segmenting them into individual bones using atlas-based segmentation, and then rigidly aligning them. Sixteen patients (weight = 80-117 kg, height = 168-191 cm) with advanced prostate cancer underwent pre- and mid-treatment PET/CT scans over a course of cancer therapy. Skeletons were extracted from the CT images by thresholding (HU>150). Skeletons were registered using the articulated, rigid, and deformable registration algorithms to account for position and postural variability between scans. The inter-observer agreement in the atlas creation, the agreement between the manually and atlas-based segmented bones, and the registration performances of all three registration algorithms were all assessed using the Dice similarity index (DSI): DSIobserved, DSIatlas, and DSIregister. The Hausdorff distance (dHausdorff) of the registered skeletons was also used for registration evaluation. Nearly negligible inter-observer variability was found in the bone atlas creation, as the DSIobserver was 96 ± 2%. Atlas-based and manually segmented bones were in excellent agreement, with DSIatlas of 90 ± 3%. Articulated (DSIregister = 75 ± 2%, dHausdorff = 0.37 ± 0.08 cm) and deformable registration algorithms (DSIregister = 77 ± 3%, dHausdorff = 0.34 ± 0.08 cm) considerably outperformed the rigid registration algorithm (DSIregister = 59 ± 9%, dHausdorff = 0.69 ± 0.20 cm) in the skeleton registration, as the rigid registration algorithm failed to capture the skeleton flexibility in the joints. Despite superior skeleton registration performance, the deformable registration algorithm failed to preserve the local rigidity of bones as over 60% of the skeletons
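
    The Dice similarity index used throughout this evaluation is simply twice the overlap of two binary masks divided by the sum of their sizes; a minimal implementation is shown below, with the convention for two empty masks chosen arbitrarily.

      import numpy as np

      def dice(mask_a, mask_b):
          """Dice similarity index: DSI = 2 |A ∩ B| / (|A| + |B|)."""
          a = np.asarray(mask_a, dtype=bool)
          b = np.asarray(mask_b, dtype=bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0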

  10. Development and evaluation of an articulated registration algorithm for human skeleton registration

    NASA Astrophysics Data System (ADS)

    Yip, Stephen; Perk, Timothy; Jeraj, Robert

    2014-03-01

    Accurate registration over multiple scans is necessary to assess treatment response of bone diseases (e.g. metastatic bone lesions). This study aimed to develop and evaluate an articulated registration algorithm for whole-body skeleton registration in human patients. In articulated registration, whole-body skeletons are registered by auto-segmenting them into individual bones using atlas-based segmentation, and then rigidly aligning them. Sixteen patients (weight = 80-117 kg, height = 168-191 cm) with advanced prostate cancer underwent pre- and mid-treatment PET/CT scans over a course of cancer therapy. Skeletons were extracted from the CT images by thresholding (HU>150). Skeletons were registered using the articulated, rigid, and deformable registration algorithms to account for position and postural variability between scans. The inter-observer agreement in the atlas creation, the agreement between the manually and atlas-based segmented bones, and the registration performances of all three registration algorithms were all assessed using the Dice similarity index (DSI): DSIobserved, DSIatlas, and DSIregister. The Hausdorff distance (dHausdorff) of the registered skeletons was also used for registration evaluation. Nearly negligible inter-observer variability was found in the bone atlas creation, as the DSIobserver was 96 ± 2%. Atlas-based and manually segmented bones were in excellent agreement, with DSIatlas of 90 ± 3%. Articulated (DSIregister = 75 ± 2%, dHausdorff = 0.37 ± 0.08 cm) and deformable registration algorithms (DSIregister = 77 ± 3%, dHausdorff = 0.34 ± 0.08 cm) considerably outperformed the rigid registration algorithm (DSIregister = 59 ± 9%, dHausdorff = 0.69 ± 0.20 cm) in the skeleton registration, as the rigid registration algorithm failed to capture the skeleton flexibility in the joints. Despite superior skeleton registration performance, the deformable registration algorithm failed to preserve the local rigidity of bones as over 60% of the

  11. Development of algorithms for building inventory compilation through remote sensing and statistical inferencing

    NASA Astrophysics Data System (ADS)

    Sarabandi, Pooya

    Building inventories are one of the core components of disaster vulnerability and loss estimation models, and as such, play a key role in providing decision support for risk assessment, disaster management and emergency response efforts. In many parts of the world, comprehensive building inventories suitable for use in catastrophe models cannot be found. Furthermore, there are serious shortcomings in the existing building inventories, including incomplete or outdated information on critical attributes as well as missing or erroneous attribute values. In this dissertation a set of methodologies for updating spatial and geometric information of buildings from single and multiple high-resolution optical satellite images is presented. Basic concepts, terminologies and fundamentals of 3-D terrain modeling from satellite images are first introduced. Different sensor projection models are then presented and sources of optical noise such as lens distortions are discussed. An algorithm for extracting height and creating 3-D building models from a single high-resolution satellite image is formulated. The proposed algorithm is a semi-automated supervised method capable of extracting attributes such as longitude, latitude, height, square footage, perimeter, and irregularity index. The associated errors due to the interactive nature of the algorithm are quantified and solutions for minimizing the human-induced errors are proposed. The height extraction algorithm is validated against independent survey data and results are presented. The validation results show that an average height modeling accuracy of 1.5% can be achieved using this algorithm. Furthermore, the concept of cross-sensor data fusion for the purpose of 3-D scene reconstruction using quasi-stereo images is developed in this dissertation. The developed algorithm utilizes two or more single satellite images acquired from different sensors and provides the means to construct 3-D building models in a more

  12. Development of Deterministic Disaggregation Algorithm for Remotely Sensed Soil Moisture Products

    NASA Astrophysics Data System (ADS)

    Shin, Y.; Mohanty, B. P.

    2011-12-01

    Soil moisture near the land surface and in the subsurface profile is an important issue for hydrology, agronomy, and meteorology. Soil moisture data are limited in their spatial and temporal coverage; to date, mostly point-scale measurements are available to represent regional scales. Remote sensing (RS) can be an alternative to direct measurement. However, the availability of RS datasets is limited by the scale discrepancy between the RS resolution and the local scale. A number of studies have been conducted to develop downscaling/disaggregation algorithms for extracting fine-scale soil moisture within a remote sensing product using stochastic methods. The stochastic downscaling/disaggregation schemes provide only the soil texture information and sub-area fractions contained in an RS pixel, meaning that their specific locations are not recognized. Thus, we developed a deterministic disaggregation algorithm (DDA) with a genetic algorithm (GA), adapting the inverse method to extract the soil textures and specific locations of sub-pixels within an RS soil moisture product, under numerical experiments and field validations. This approach performs quite well in disaggregating/recognizing the soil textures and their specific locations within an RS soil moisture footprint compared to the results of the stochastic method. On the basis of these findings, we suggest that the DDA can be useful for improving the availability of RS products.

  13. Development of an algorithm to predict comfort of wheelchair fit based on clinical measures

    PubMed Central

    Kon, Keisuke; Hayakawa, Yasuyuki; Shimizu, Shingo; Nosaka, Toshiya; Tsuruga, Takeshi; Matsubara, Hiroyuki; Nomura, Tomohiro; Murahara, Shin; Haruna, Hirokazu; Ino, Takumi; Inagaki, Jun; Kobayashi, Toshiki

    2015-01-01

    [Purpose] The purpose of this study was to develop an algorithm to predict the comfort of a subject seated in a wheelchair, based on common clinical measurements and without depending on verbal communication. [Subjects] Twenty healthy males (mean age: 21.5 ± 2 years; height: 171 ± 4.3 cm; weight: 56 ± 12.3 kg) participated in this study. [Methods] Each experimental session lasted for 60 min. The clinical measurements were obtained under 4 conditions (good posture, with and without a cushion; bad posture, with and without a cushion). Multiple regression analysis was performed to determine the relationship between a visual analogue scale and exercise physiology parameters (respiration and metabolism), autonomic nervous parameters (heart rate, blood pressure, and salivary amylase level), and 3D-coordinate posture parameters (good or bad posture). [Results] For the equation (algorithm) to predict the visual analogue scale score, the adjusted multiple correlation coefficient was 0.72, the residual standard deviation was 1.2, and the prediction error was 12%. [Conclusion] The algorithm developed in this study could predict the comfort of healthy males seated in a wheelchair with 72% accuracy. PMID:26504299

  14. Algorithm and code development for unsteady three-dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru

    1994-01-01

    Aeroelastic tests require extensive cost and risk. An aeroelastic wind-tunnel experiment is an order of magnitude more expensive than a parallel experiment involving only aerodynamics. By complementing the wind-tunnel experiments with numerical simulations, the overall cost of the development of aircraft can be considerably reduced. In order to accurately compute aeroelastic phenomenon it is necessary to solve the unsteady Euler/Navier-Stokes equations simultaneously with the structural equations of motion. These equations accurately describe the flow phenomena for aeroelastic applications. At ARC a code, ENSAERO, is being developed for computing the unsteady aerodynamics and aeroelasticity of aircraft, and it solves the Euler/Navier-Stokes equations. The purpose of this cooperative agreement was to enhance ENSAERO in both algorithm and geometric capabilities. During the last five years, the algorithms of the code have been enhanced extensively by using high-resolution upwind algorithms and efficient implicit solvers. The zonal capability of the code has been extended from a one-to-one grid interface to a mismatching unsteady zonal interface. The geometric capability of the code has been extended from a single oscillating wing case to a full-span wing-body configuration with oscillating control surfaces. Each time a new capability was added, a proper validation case was simulated, and the capability of the code was demonstrated.

  15. Development of the Anatomical Quality Assurance (AQUA) checklist: Guidelines for reporting original anatomical studies.

    PubMed

    Tomaszewski, Krzysztof A; Henry, Brandon Michael; Kumar Ramakrishnan, Piravin; Roy, Joyeeta; Vikse, Jens; Loukas, Marios; Tubbs, R Shane; Walocha, Jerzy A

    2017-01-01

    The rise of evidence-based anatomy has emphasized the need for original anatomical studies with high clarity, transparency, and comprehensiveness in reporting. Currently, inconsistencies in the quality and reporting of such studies have placed limits on accurate reliability and impact assessment. Our aim was to develop a checklist of reporting items that should be addressed by authors of original anatomical studies. The study steering committee formulated a preliminary conceptual design and began to generate items on the basis of a literature review and expert opinion. This led to the development of a preliminary checklist. The validity of this checklist was assessed by a Delphi procedure, and feedback from the Delphi panelists, who were experts in the area of anatomical research, was used to improve it. The Delphi procedure involved 12 experts in anatomical research. It comprised two rounds, after which unanimous consensus was reached regarding the items to be included in the checklist. The steering committee agreed to name the checklist AQUA. The preliminary AQUA Checklist consisted of 26 items divided into eight sections. Following round 1, some of the items underwent major revision and three new ones were introduced. The checklist was revised only for minor language inaccuracies after round 2. The final version of the AQUA Checklist consisted of the initial eight sections with a total of 29 items. The steering committee hopes the AQUA Checklist will improve the quality and reporting of anatomical studies. Clin. Anat. 30:14-20, 2017. © 2016 Wiley Periodicals, Inc.

  16. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Conboy, Barbara (Technical Monitor)

    1999-01-01

    This separation has been logical thus far; however, as launch of AM-1 approaches, it must be recognized that many of these activities will shift emphasis from algorithm development to validation. For example, the second, third, and fifth bullets will become almost totally validation-focussed activities in the post-launch era, providing the core of our experimental validation effort. Work under the first bullet will continue into the post-launch time frame, driven in part by algorithm deficiencies revealed as a result of validation activities. Prior to the start of the 1999 fiscal year (FY99) we were requested to prepare a brief plan for our FY99 activities. This plan is included as Appendix 1. The present report describes the progress made on our planned activities.

  17. Hybrid Neural-Network: Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics Developed and Demonstrated

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2002-01-01

    As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.

  18. Developments of global greenhouse gas retrieval algorithm based on Optimal Estimation Method

    NASA Astrophysics Data System (ADS)

    Kim, W. V.; Kim, J.; Lee, H.; Jung, Y.; Boesch, H.

    2013-12-01

    After the industrial revolution, the atmospheric carbon dioxide concentration increased drastically over the last 250 years. It is still increasing, and more than 400 ppm of carbon dioxide was measured at the Mauna Loa observatory for the first time, a value regarded as an important milestone. Therefore, understanding the sources, emissions, transport and sinks of global carbon dioxide is unprecedentedly important. Currently, the Total Carbon Column Observing Network (TCCON) observes CO2 concentrations with ground-based instruments. However, the number of sites is small, and they are concentrated in Europe and North America. Remote sensing of CO2 could help overcome these limitations. The Greenhouse Gases Observing SATellite (GOSAT), launched in 2009, is measuring the column density of CO2, and other satellites are planned for launch in a few years. GOSAT provides valuable measurement data, but its low spatial resolution and the poor retrieval success rate caused by aerosol and cloud limit the results to less than half of the globe. To improve data availability, accurate aerosol information is necessary, especially for the East Asia region, where aerosol concentrations are higher than in other regions. As a first step, we are developing a CO2 retrieval algorithm based on the optimal estimation method with VLIDORT, the vector discrete ordinate radiative transfer model. The prototype algorithm, developed from various combinations of state vectors to find the best combination, shows appropriate results and good agreement with TCCON measurements. To reduce the calculation cost, low-stream interpolation is applied to the model simulation, and the simulation time is drastically reduced. In further work, the GOSAT CO2 retrieval algorithm will be combined with the accurate GOSAT-CAI aerosol retrieval algorithm to obtain more accurate results, especially for East Asia.

  19. The origin and development of individual size variation in early pelagic stages of fish.

    PubMed

    Huss, Magnus; Persson, Lennart; Byström, Pär

    2007-08-01

    Size variation among individuals born at the same time in a common environment (within cohorts) is a common phenomenon in natural populations. Still, the mechanisms behind the development of such variation and its consequences for population processes are far from clear. We experimentally investigated the development of early within-cohort size variation in larval perch (Perca fluviatilis). Specifically, we tested the influence of initial variation, resulting from variation in egg strand size, and of intraspecific density on the development of size variation. Variation in egg strand size translated into variation in initial larval size and time of hatching, which, in turn, had effects on growth and development. Perch from the smallest egg strands performed on average equally well independent of density, whereas larvae originating from larger egg strands performed less well under high densities. We related this difference in density dependence to size asymmetries in competitive abilities leading to higher growth rates of groups consisting of initially small individuals under high resource limitation. In contrast, within a single group of larvae, smaller individuals grew substantially slower under high densities whereas large individuals performed equally well independent of density. As a result, size variation among individuals within groups (i.e. originating from the same clutch) increased under high densities. This result may be explained by social interactions or differential timing of diet shifts and a depressed resource base for the initially smaller individuals. It is concluded that to fully appreciate the effects of density-dependent processes on individual size variation and size-dependent growth, consumer feedbacks on resources need to be considered.

  20. Development of Serum Marker Models to Increase Diagnostic Accuracy of Advanced Fibrosis in Nonalcoholic Fatty Liver Disease: The New LINKI Algorithm Compared with Established Algorithms

    PubMed Central

    Lykiardopoulos, Byron; Hagström, Hannes; Fredrikson, Mats; Ignatova, Simone; Stål, Per; Hultcrantz, Rolf; Ekstedt, Mattias

    2016-01-01

    Background and Aim Detection of advanced fibrosis (F3-F4) in nonalcoholic fatty liver disease (NAFLD) is important for ascertaining prognosis. Serum markers have been proposed as alternatives to biopsy. We attempted to develop a novel algorithm for detection of advanced fibrosis based on a more efficient combination of serological markers and to compare this with established algorithms. Methods We included 158 patients with biopsy-proven NAFLD. Of these, 38 had advanced fibrosis. The following fibrosis algorithms were calculated: NAFLD fibrosis score, BARD, NIKEI, NASH-CRN regression score, APRI, FIB-4, King's score, GUCI, Lok index, Forns score, and ELF. The study population was randomly divided into a training and a validation group. A multiple logistic regression analysis using bootstrapping methods was applied to the training group. Among the many variables analyzed, age, fasting glucose, hyaluronic acid, and AST were included, and a model (LINKI-1) for predicting advanced fibrosis was created. Moreover, these variables were combined with platelet count in a mathematical way that exaggerates the opposing effects, and alternative models (LINKI-2) were also created. Models were compared using the area under the receiver operator characteristic curve (AUROC). Results Of the established algorithms, FIB-4 and King's score had the best diagnostic accuracy, with AUROCs of 0.84 and 0.83, respectively. Higher accuracy was achieved with the novel LINKI algorithms. In the total cohort, the AUROC was 0.91 for LINKI-1 and 0.89 for the LINKI-2 models. Conclusion The LINKI algorithms for detection of advanced fibrosis in NAFLD showed better accuracy than established algorithms and should be validated in further studies including larger cohorts. PMID:27936091
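
    The central modelling step described here, bootstrapped multiple logistic regression on a few serum markers judged by AUROC, can be sketched as follows. The synthetic data and bootstrap settings are assumptions for illustration and do not reproduce the LINKI coefficients or cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic stand-ins for age, fasting glucose, hyaluronic acid, and AST
# in a cohort of 158 patients (the real LINKI cohort is not reproduced here).
n = 158
X = rng.normal(size=(n, 4))
y = ((X @ np.array([0.8, 0.5, 1.2, 0.9]) + rng.normal(scale=1.0, size=n)) > 1.0).astype(int)

# Bootstrap the logistic fit and score each replicate on its out-of-bag rows,
# loosely mirroring the bootstrapped regression used to build LINKI-1.
aucs = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)                    # resample with replacement
    model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    oob = np.setdiff1d(np.arange(n), idx)               # rows not drawn this round
    if len(np.unique(y[oob])) == 2:
        aucs.append(roc_auc_score(y[oob], model.predict_proba(X[oob])[:, 1]))

print(f"bootstrap AUROC: {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
```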

  1. The development of line-scan image recognition algorithms for the detection of frass on mature tomatoes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this research, a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at two wavebands, 664 nm and 690 nm, for co...
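
    Although the abstract is truncated before the decision rule is stated, a two-waveband detection of this kind typically reduces to a band-ratio threshold. The sketch below assumes a simple ratio rule and synthetic images; the threshold is hypothetical.

```python
import numpy as np

# Synthetic fluorescence intensity images at the two wavebands (664 nm, 690 nm).
rng = np.random.default_rng(1)
band_664 = rng.uniform(0.2, 1.0, size=(100, 100))
band_690 = rng.uniform(0.2, 1.0, size=(100, 100))

# Hypothetical band-ratio rule: flag pixels whose 664/690 intensity ratio
# exceeds a threshold as potential frass contamination.
ratio = band_664 / np.clip(band_690, 1e-6, None)
frass_mask = ratio > 1.4

print(f"flagged pixels: {frass_mask.sum()} of {frass_mask.size}")
```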

  2. Microphysical particle properties derived from inversion algorithms developed in the framework of EARLINET

    NASA Astrophysics Data System (ADS)

    Müller, Detlef; Böckmann, Christine; Kolgotin, Alexei; Schneidenbach, Lars; Chemyakin, Eduard; Rosemann, Julia; Znak, Pavel; Romanov, Anton

    2016-10-01

    We present a summary of the current status of two inversion algorithms that are used in EARLINET (European Aerosol Research Lidar Network) for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on a manually controlled inversion of optical data which allows for detailed sensitivity studies. The algorithms allow us to derive particle effective radius as well as volume and surface area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies in which light absorption needs to be known with high accuracy. It is an extreme challenge to retrieve the real part with an accuracy better than 0.05 and the imaginary part with an accuracy better than 0.005-0.1 or ±50 %. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. On the basis of a few exemplary simulations with synthetic optical data we discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work. One algorithm was used with the purpose of testing how well microphysical parameters can be derived if the real part of the complex refractive index is known to within 0.05 or 0.1. The other algorithm was used to find out how well microphysical parameters can be derived if this constraint for the real part is not applied. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested

  3. Development and evaluation of collision warning/collision avoidance algorithms using an errable driver model

    NASA Astrophysics Data System (ADS)

    Yang, Hsin-Hsiang; Peng, Huei

    2010-12-01

    Collision warning/collision avoidance (CW/CA) systems must be designed to work seamlessly with a human driver, providing warning or control actions when the driver's response (or lack thereof) is deemed inappropriate. The effectiveness of CW/CA systems working with a human driver needs to be evaluated thoroughly because of legal/liability and other (e.g. traffic flow) concerns. CW/CA systems tuned only under open-loop manoeuvres were frequently found to work unsatisfactorily with a human in the loop. However, tuning CW/CA systems with human drivers in the loop is slow and non-repeatable. Driver models, if constructed and used properly, can capture human/control interactions and accelerate the CW/CA development process. Design and evaluation methods for CW/CA algorithms can be categorised into three approaches: scenario-based, performance-based and human-centred. The strengths and weaknesses of these approaches are discussed in this paper, and a humanised errable driver model is introduced to improve the development process. The errable driver model used in this paper is a model that emulates a human driver's functions and can generate both nominal (error-free) and devious (with error) behaviours. The car-following data used for developing and validating the model were obtained from a large-scale naturalistic driving database. Three error-inducing behaviours were introduced: human perceptual limitation, time delay and distraction. By including these error-inducing behaviours, rear-end collisions with a lead vehicle were found to occur at a probability similar to traffic accident statistics in the USA. This driver model is then used to evaluate the performance of several existing CW/CA algorithms. Finally, a new CW/CA algorithm was developed based on this errable driver model.
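
    As context for what such CW/CA logic evaluates, a minimal time-to-collision style warning check is sketched below; the thresholds and reaction-time margin are illustrative placeholders, not the algorithms tuned and evaluated in the paper.

```python
def collision_warning(range_m: float, range_rate_mps: float,
                      ttc_warn_s: float = 3.0, reaction_time_s: float = 1.2) -> str:
    """Toy CW/CA decision: warn when time-to-collision (range over closing speed)
    drops below a threshold that includes an assumed driver reaction time.
    All thresholds are illustrative placeholders."""
    closing_speed = -range_rate_mps          # positive when the gap is shrinking
    if closing_speed <= 0.0:
        return "no action"                   # gap constant or opening
    ttc = range_m / closing_speed
    if ttc < reaction_time_s:
        return "brake (avoidance)"
    if ttc < ttc_warn_s + reaction_time_s:
        return "warn driver"
    return "no action"

print(collision_warning(range_m=25.0, range_rate_mps=-8.0))   # -> warn driver
```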

  4. GLASS daytime all-wave net radiation product: Algorithm development and preliminary validation

    SciTech Connect

    Jiang, Bo; Liang, Shunlin; Ma, Han; Zhang, Xiaotong; Xiao, Zhiqiang; Zhao, Xiang; Jia, Kun; Yao, Yunjun; Jia, Aolin

    2016-03-09

    Mapping surface all-wave net radiation (Rn) is critically needed for various applications. Several existing Rn products from numerical models and satellite observations have coarse spatial resolutions and their accuracies may not meet the requirements of land applications. In this study, we develop the Global LAnd Surface Satellite (GLASS) daytime Rn product at a 5 km spatial resolution. Its algorithm for converting shortwave radiation to all-wave net radiation using the Multivariate Adaptive Regression Splines (MARS) model is determined after comparison with three other algorithms. The validation of the GLASS Rn product based on high-quality in situ measurements in the United States shows a coefficient of determination value of 0.879, an average root mean square error value of 31.61 Wm-2, and an average bias of 17.59 Wm-2. Furthermore, we also compare our product/algorithm with another satellite product (CERES-SYN) and two reanalysis products (MERRA and JRA55), and find that the accuracy of the much higher spatial resolution GLASS Rn product is satisfactory. The GLASS Rn product from 2000 to the present is operational and freely available to the public.

  5. GLASS daytime all-wave net radiation product: Algorithm development and preliminary validation

    DOE PAGES

    Jiang, Bo; Liang, Shunlin; Ma, Han; ...

    2016-03-09

    Mapping surface all-wave net radiation (Rn) is critically needed for various applications. Several existing Rn products from numerical models and satellite observations have coarse spatial resolutions and their accuracies may not meet the requirements of land applications. In this study, we develop the Global LAnd Surface Satellite (GLASS) daytime Rn product at a 5 km spatial resolution. Its algorithm for converting shortwave radiation to all-wave net radiation using the Multivariate Adaptive Regression Splines (MARS) model is determined after comparison with three other algorithms. The validation of the GLASS Rn product based on high-quality in situ measurements in the United States shows a coefficient of determination value of 0.879, an average root mean square error value of 31.61 Wm-2, and an average bias of 17.59 Wm-2. Furthermore, we also compare our product/algorithm with another satellite product (CERES-SYN) and two reanalysis products (MERRA and JRA55), and find that the accuracy of the much higher spatial resolution GLASS Rn product is satisfactory. The GLASS Rn product from 2000 to the present is operational and freely available to the public.

  6. Development of algorithms for understanding the temporal and spatial variability of the earth's radiation balance

    NASA Technical Reports Server (NTRS)

    Brooks, D. R.; Harrison, E. F.; Minnis, P.; Suttles, J. T.; Kandel, R. S.

    1986-01-01

    A brief description is given of how temporal and spatial variability in the earth's radiative behavior influences the goals of satellite radiation monitoring systems and how some previous systems have addressed the existing problems. Then, results of some simulations of radiation budget monitoring missions are presented. These studies led to the design of the Earth Radiation Budget Experiment (ERBE). A description is given of the temporal and spatial averaging algorithms developed for the ERBE data analysis. These algorithms are intended primarily to produce monthly averages of the net radiant exitance on regional, zonal, and global scales and to provide insight into the regional diurnal variability of radiative parameters such as albedo and long-wave radiant exitance. The algorithms are applied to scanner and nonscanner data for up to three satellites. Modeling of daily shortwave albedo and radiant exitance with satellite sampling that is insufficient to fully account for changing meteorology is discussed in detail. Studies performed during the ERBE mission and software design are reviewed. These studies provide quantitative estimates of the effects of temporally sparse and biased sampling on inferred diurnal and regional radiative parameters. Other topics covered include long-wave diurnal modeling, extraction of a regional monthly net clear-sky radiation budget, the statistical significance of observed diurnal variability, quality control of the analysis, and proposals for validating the results of ERBE time and space averaging.

  7. Ice surface temperature retrieval from AVHRR, ATSR, and passive microwave satellite data: Algorithm development and application

    NASA Technical Reports Server (NTRS)

    Key, Jeff; Maslanik, James; Steffen, Konrad

    1994-01-01

    During the first half of our second project year we have accomplished the following: (1) acquired a new AVHRR data set for the Beaufort Sea area spanning an entire year; (2) acquired additional ATSR data for the Arctic and Antarctic now totaling over seven months; (3) refined our AVHRR Arctic and Antarctic ice surface temperature (IST) retrieval algorithm, including work specific to Greenland; (4) developed ATSR retrieval algorithms for the Arctic and Antarctic, including work specific to Greenland; (5) investigated the effects of clouds and the atmosphere on passive microwave 'surface' temperature retrieval algorithms; (6) generated surface temperatures for the Beaufort Sea data set, both from AVHRR and SSM/I; and (7) continued work on compositing GAC data for coverage of the entire Arctic and Antarctic. During the second half of the year we will continue along these same lines, and will undertake a detailed validation study of the AVHRR and ATSR retrievals using LEADEX and the Beaufort Sea year-long data. Cloud masking methods used for the AVHRR will be modified for use with the ATSR. Methods of blending in situ and satellite-derived surface temperature data sets will be investigated.

  8. On developing B-spline registration algorithms for multi-core processors

    NASA Astrophysics Data System (ADS)

    Shackleford, J. A.; Kandasamy, N.; Sharp, G. C.

    2010-11-01

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.
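
    The benefit of the grid-alignment scheme can be seen in a small sketch: when the B-spline control grid is aligned to the voxel grid, every voxel at the same offset within a tile shares the same basis weights, so the weights are computed once per tile rather than once per voxel. The tile size and array shapes below are illustrative assumptions.

```python
import numpy as np

def cubic_bspline_basis(t):
    """Uniform cubic B-spline blending functions B0..B3 at local coordinates
    t in [0, 1); returns an array of shape (len(t), 4)."""
    return np.stack([(1 - t) ** 3 / 6.0,
                     (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
                     (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
                     t ** 3 / 6.0], axis=1)

# With the control-point grid aligned to the voxel grid (an illustrative tile of
# 8 voxels per knot span), every voxel at offset k within a tile has the same
# local coordinate k/8, so these weights are computed once and reused by every
# tile -- the saving behind the grid-alignment scheme described in the abstract.
tile = 8
weights = cubic_bspline_basis(np.arange(tile) / tile)    # shape (8, 4), precomputed

assert np.allclose(weights.sum(axis=1), 1.0)              # partition-of-unity check
print(weights.round(4))
```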

  9. Developing AEA system-of-systems mission plans with a multi-objective genetic algorithm

    NASA Astrophysics Data System (ADS)

    HandUber, Jason C.; Ridder, Jeffrey P.

    2007-04-01

    The role of an airborne electronic attack (AEA) system-of-systems (SoS) is to increase survivability of friendly aircraft by jamming hostile air defense radars. AEA systems are scarce, high-demand assets and have limited resources with which to engage a large number of radars. Given the limited resources, it is a significant challenge to plan their employment to achieve the desired results. Plans require specifying locations of jammers, as well as the mix of wide- and narrow-band jamming assignments delivered against particular radars. Further, the environment is uncertain as to the locations and emissions behaviors of radars. Therefore, we require plans that are not only capable, but also robust to the variability of the environment. In this paper, we use a multi-objective genetic algorithm to develop capable and robust AEA SoS mission plans. The algorithm seeks to determine the Pareto-front of three objectives - maximize the operational objectives achieved by friendly aircraft, minimize the threat to friendly aircraft, and minimize the expenditure of AEA assets. The results show that this algorithm is able to provide planners with the quantitative information necessary to intelligently construct capable and robust mission plans for an AEA SoS.
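
    The planning output described here is a Pareto front over three objectives. A minimal non-dominated filter over candidate plans is sketched below; the plan scores are invented for illustration.

```python
import numpy as np

def pareto_front(objectives):
    """Indices of non-dominated rows, assuming every column is to be maximized
    (e.g. objectives achieved, negated threat, negated asset expenditure).
    Brute-force O(n^2) check, adequate for small plan sets."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(objectives[j] >= objectives[i]) \
                      and np.any(objectives[j] > objectives[i]):
                keep[i] = False
                break
    return np.flatnonzero(keep)

# Hypothetical plans scored as (objectives achieved, -threat, -assets expended).
plans = np.array([[0.9, -0.3, -5.0],
                  [0.8, -0.2, -4.0],
                  [0.7, -0.4, -6.0],
                  [0.9, -0.2, -6.0]])
print("non-dominated plan indices:", pareto_front(plans))
```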

  10. The early life origin theory in the development of cardiovascular disease and type 2 diabetes.

    PubMed

    Lindblom, Runa; Ververis, Katherine; Tortorella, Stephanie M; Karagiannis, Tom C

    2015-04-01

    Life expectancy has been examined from a variety of perspectives in recent history. Epidemiology is one perspective which examines causes of morbidity and mortality at the population level. Over the past few hundred years there have been dramatic shifts in the major causes of death and expected life length. This change has been inconsistent across time and space, with vast inequalities observed between population groups. Currently in focus is the challenge of rising non-communicable diseases (NCDs), such as cardiovascular disease and type 2 diabetes mellitus. In the search for methods to combat the rising incidence of these diseases, a number of new theories on the development of morbidity have arisen. A pertinent example is the hypothesis published by David Barker in 1995, which postulates a prenatal and early developmental origin of adult-onset disease and highlights the importance of the maternal environment. This theory has been subject to criticism; however, it has gradually gained acceptance. In addition, the relatively new field of epigenetics is contributing evidence in support of the theory. This review aims to explore the implications and limitations of the developmental origin hypothesis, via an historical perspective, in order to enhance understanding of the increasing incidence of NCDs, and facilitate an improvement in planning public health policy.

  11. Development of a highly immunogenic Newcastle disease virus chicken vaccine strain of duck origin.

    PubMed

    Kim, J Y; Kye, S J; Lee, H J; Gaikwad, S; Lee, H S; Jung, S C; Choi, K S

    2016-04-01

    Newcastle disease virus (NDV) strain NDRL0901 was developed as a live vaccine candidate for control of Newcastle disease. NDV isolate KR/duck/13/07 (DK1307) of duck origin was used as the selected vaccine strain. DK1307 was passaged 6 times in chickens. Then a single clone from the chicken-adapted virus (DK1307C) was finally selected, and the vaccine strain was named NDRL0901. The DK1307C and clone NDRL0901 viruses showed enhanced immunogenicity compared to the DK1307 virus. Principal component analysis based on the fusion and hemagglutinin-neuraminidase genes revealed that the codon usage pattern in the dataset is distinct, separating duck viral sequences from avian sequences, and that passage of the duck-origin virus in the chicken host causes a deviation in the codon usage pattern. The NDRL0901 virus was avirulent and did not acquire viral virulence even after 7 back passages in chickens. When day-old chicks were vaccinated with the NDRL0901 virus via spray, eye drops, and drinking water, the vaccinated birds showed no clinical signs and had significant protection efficacy (>80%) against very virulent NDV (Kr005 strain) infection regardless of the administration route employed. The results indicate that the NDRL0901 strain is safe in chickens and can offer protective immunity.

  12. Development of a new time domain-based algorithm for train detection and axle counting

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Meli, E.; Pugi, L.

    2015-12-01

    This paper presents an innovative train detection algorithm able to perform train localisation and, at the same time, to estimate the train's speed, its crossing times at a fixed point of the track and the number of axles. The proposed solution uses the same approach to evaluate all these quantities, starting from the knowledge of generic track inputs directly measured on the track (for example, the vertical forces on the sleepers, the rail deformation and the rail stress). More particularly, all the inputs are processed through cross-correlation operations to extract the required information in terms of speed, crossing time instants and axle count. This approach has the advantage of being simple and less invasive than standard approaches (it requires less equipment) and represents a more reliable and robust solution against numerical noise because it exploits the whole shape of the input signal and not only the peak values. A suitable and accurate multibody model of railway vehicle and flexible track has also been developed by the authors to test the algorithm when experimental data are not available and, in general, under any operating conditions (essential for verifying the algorithm's accuracy and robustness). The railway vehicle chosen as benchmark is the Manchester Wagon, modelled in the Adams VI-Rail environment. The physical model of the flexible track has been implemented in the Matlab and Comsol Multiphysics environments. A simulation campaign has been performed to verify the performance and the robustness of the proposed algorithm, and the results are quite promising. The research has been carried out in cooperation with Ansaldo STS and ECM Spa.
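
    As a rough illustration of the cross-correlation idea, the sketch below estimates train speed from the time shift between synthetic force signals at two track sensors a known distance apart; the sensor spacing, sampling rate, and pulse shapes are assumptions, not the instrumented-track configuration used in the study.

```python
import numpy as np

fs = 1000.0          # sampling rate [Hz], illustrative
spacing = 5.0        # distance between the two track sensors [m], illustrative
speed_true = 20.0    # m/s, used only to synthesize the signals

# Synthetic vertical-force signals: four axle pulses seen first by sensor A,
# then by sensor B after the travel time spacing / speed.
t = np.arange(0.0, 4.0, 1.0 / fs)
def axle_pulses(t0):
    return sum(np.exp(-((t - t0 - k * 0.9) ** 2) / (2 * 0.01 ** 2)) for k in range(4))
sig_a = axle_pulses(1.0)
sig_b = axle_pulses(1.0 + spacing / speed_true)

# Cross-correlate the whole signals (not just their peaks) and convert the lag
# of the correlation maximum into a speed estimate.
corr = np.correlate(sig_b, sig_a, mode="full")
lag_s = (np.argmax(corr) - (len(sig_a) - 1)) / fs
print(f"estimated speed: {spacing / lag_s:.1f} m/s (true {speed_true} m/s)")
```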

  13. Development and Validation of a Diabetic Retinopathy Referral Algorithm Based on Single-Field Fundus Photography

    PubMed Central

    Srinivasan, Sangeetha; Shetty, Sharan; Natarajan, Viswanathan; Sharma, Tarun; Raman, Rajiv

    2016-01-01

    Purpose (i) To develop a simplified algorithm to identify and refer diabetic retinopathy (DR) from single-field retinal images, specifically sight-threatening diabetic retinopathy, for appropriate care; (ii) to determine the agreement and diagnostic accuracy of the algorithm, as a pilot study, among optometrists versus "gold standard" (retinal specialist grading). Methods The severity of DR was scored based on colour photo using a colour-coded algorithm, which included the lesions of DR and the number of quadrants involved. A total of 99 participants underwent training followed by evaluation. Data of the 99 participants were analyzed. Fifty posterior pole 45 degree retinal images with all stages of DR were presented. Kappa scores (κ), areas under the receiver operating characteristic curves (AUCs), sensitivity and specificity were determined, with further comparison between working optometrists and optometry students. Results Mean age of the participants was 22 years (range: 19–43 years), 87% being women. Participants correctly identified 91.5% of images that required immediate referral (κ = 0.696), 62.5% of images as requiring review after 6 months (κ = 0.462), and 51.2% of those requiring review after 1 year (κ = 0.532). The sensitivity and specificity of the optometrists were 91% and 78% for immediate referral, 62% and 84% for review after 6 months, and 51% and 95% for review after 1 year, respectively. The AUC was the highest (0.855) for immediate referral, second highest (0.824) for review after 1 year, and 0.727 for the review after 6 months criteria. Optometry students performed better than the working optometrists for all grades of referral. Conclusions The diabetic retinopathy algorithm assessed in this work is a simple and fairly accurate method for appropriate referral based on single-field 45 degree posterior pole retinal images. PMID:27661981

  14. Development of an Algorithm Suite for MODIS and VIIRS Cloud Data Record Continuity

    NASA Astrophysics Data System (ADS)

    Platnick, S. E.; Holz, R.; Heidinger, A. K.; Ackerman, S. A.; Meyer, K.; Frey, R.; Wind, G.; Amarasinghe, N.

    2014-12-01

    The launch of Suomi NPP in the fall of 2011 began the next generation of the U.S. operational polar orbiting environmental observations. Similar to MODIS, the VIIRS imager provides visible through IR observations at moderate spatial resolution with a 1330 LT equatorial crossing consistent with MODIS on the Aqua platform. However, unlike MODIS, VIIRS lacks key water vapor and CO2 absorbing channels used by the MODIS cloud algorithms for high cloud detection and cloud-top property retrievals (including emissivity), as well as multilayer cloud detection. In addition, there is a significant change in the spectral location of the 2.1 μm shortwave-infrared channel used by MODIS for cloud microphysical retrievals. The climate science community will face an interruption in the continuity of key global cloud data sets once the NASA EOS Terra and Aqua sensors cease operation. Given the instrument differences between MODIS EOS and VIIRS S-NPP/JPSS, we discuss methods for merging the 14+ year MODIS observational record with VIIRS/CrIS observations in order to generate cloud climate data record continuity across the observing systems. The main approach used by our team was to develop a cloud retrieval algorithm suite that is applied only to the common MODIS and VIIRS spectral channels. The suite uses heritage algorithms that produce the existing MODIS cloud mask (MOD35), MODIS cloud optical and microphysical properties (MOD06), and NOAA AWG/CLAVR-x cloud-top property products. Global monthly results from this hybrid algorithm suite (referred to as MODAWG) will be shown. Collocated CALIPSO comparisons will be shown that can independently evaluate inter-instrument product consistency for a subset of the MODAWG datasets.

  15. [Textual research on origin and development of genuine medicinal herbs of Shanyao].

    PubMed

    Feng, Xue-Feng; Huang, Lu-Qi; Ge, Xiao-Guang; Yang, Lian-Ju; Yang, Jing-Yu

    2008-04-01

    Based on textual research of the Bencao literature and other documents, this article inquires into the origin and development of the genuine medicinal herb Shanyao (Rhizoma Dioscoreae), focusing on changes in its growing areas and on the development of its cultivation and processing techniques and clinical uses. The study indicates that the medicinal use of Dioscorea oposita went through several periods: the use of wild D. oposita before the Tang dynasty, the mixed use of wild and cultivated D. oposita from the Song to the middle of the Qing dynasty, and the predominant use of cultivated D. oposita after the latter stage of the Qing dynasty (18th century). It considers that the growing area of the genuine medicinal herb Shanyao appeared in the Ming dynasty, with "Huaishanyao" finally forming in the early 20th century. The acknowledgement of Huaishanyao as a genuine medicinal herb is closely related to its cultivation and processing techniques and clinical uses. The development of cultivation techniques provided the resource of Shanyao, the invention of processing techniques improved its appearance and quality, and the clinical uses and practices of modern and contemporary famous physicians played an important role in the social approval and development of Huaishanyao.

  16. Development of Algorithms and Error Analyses for the Short Baseline Lightning Detection and Ranging System

    NASA Technical Reports Server (NTRS)

    Starr, Stanley O.

    1998-01-01

    NASA, at the John F. Kennedy Space Center (KSC), developed and operates a unique high-precision lightning location system to provide lightning-related weather warnings. These warnings are used to stop lightning- sensitive operations such as space vehicle launches and ground operations where equipment and personnel are at risk. The data is provided to the Range Weather Operations (45th Weather Squadron, U.S. Air Force) where it is used with other meteorological data to issue weather advisories and warnings for Cape Canaveral Air Station and KSC operations. This system, called Lightning Detection and Ranging (LDAR), provides users with a graphical display in three dimensions of 66 megahertz radio frequency events generated by lightning processes. The locations of these events provide a sound basis for the prediction of lightning hazards. This document provides the basis for the design approach and data analysis for a system of radio frequency receivers to provide azimuth and elevation data for lightning pulses detected simultaneously by the LDAR system. The intent is for this direction-finding system to correct and augment the data provided by LDAR and, thereby, increase the rate of valid data and to correct or discard any invalid data. This document develops the necessary equations and algorithms, identifies sources of systematic errors and means to correct them, and analyzes the algorithms for random error. This data analysis approach is not found in the existing literature and was developed to facilitate the operation of this Short Baseline LDAR (SBLDAR). These algorithms may also be useful for other direction-finding systems using radio pulses or ultrasonic pulse data.
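
    A simplified version of the direction-finding geometry, converting arrival-time differences across two short orthogonal baselines into direction cosines and from those into azimuth and elevation, is sketched below under a plane-wave assumption; the baseline lengths and example numbers are illustrative, not the SBLDAR design values.

```python
import numpy as np

C = 2.998e8          # propagation speed [m/s]

def azimuth_elevation(dt_x, dt_y, d_x, d_y):
    """Plane-wave direction of arrival from time differences measured across two
    orthogonal horizontal baselines of length d_x (east) and d_y (north).
    Returns (azimuth east of north, elevation) in degrees."""
    sx = C * dt_x / d_x                      # direction cosine along the east baseline
    sy = C * dt_y / d_y                      # direction cosine along the north baseline
    horizontal_sq = sx ** 2 + sy ** 2
    if horizontal_sq > 1.0:
        raise ValueError("inconsistent time differences (noise or non-planar wavefront)")
    sz = np.sqrt(1.0 - horizontal_sq)        # upward direction cosine
    azimuth = np.degrees(np.arctan2(sx, sy)) % 360.0
    elevation = np.degrees(np.arcsin(sz))
    return azimuth, elevation

# Illustrative pulse arriving from roughly 45 deg azimuth, 30 deg elevation,
# measured on 10 m baselines.
print(azimuth_elevation(dt_x=2.042e-8, dt_y=2.042e-8, d_x=10.0, d_y=10.0))
```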

  17. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    PubMed Central

    2010-01-01

    Background Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a starting point for modelers to

  18. Development of a Smart Release Algorithm for Mid-Air Separation of Parachute Test Articles

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is currently developing an autonomous method to separate a capsule-shaped parachute test vehicle from an air-drop platform for use in the test program to develop and validate the parachute system for the Orion spacecraft. The CPAS project seeks to perform air-drop tests of an Orion-like boilerplate capsule. Delivery of the boilerplate capsule to the test condition has proven to be a critical and complicated task. In the current concept, the boilerplate vehicle is extracted from an aircraft on top of a Type V pallet and then separated from the pallet in mid-air. The attitude of the vehicles at separation is critical to avoiding re-contact and successfully deploying the boilerplate into a heatshield-down orientation. Neither the pallet nor the boilerplate has an active control system. However, the attitude of the mated vehicle as a function of time is somewhat predictable. CPAS engineers have designed an avionics system to monitor the attitude of the mated vehicle as it is extracted from the aircraft and command a release when the desired conditions are met. The algorithm includes contingency capabilities designed to release the test vehicle before undesirable orientations occur. The algorithm was verified with simulation and ground testing. The pre-flight development and testing is discussed and limitations of ground testing are noted. The CPAS project performed a series of three drop tests as a proof-of-concept of the release technique. These tests helped to refine the attitude instrumentation and software algorithm to be used on future tests. The drop tests are described in detail and the evolution of the release system with each test is described.
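
    The release logic described here, command separation when the monitored attitude satisfies the desired conditions with a contingency path if they are never met, can be sketched as a simple decision function; every threshold below is invented for illustration and is not a CPAS value.

```python
def release_decision(pitch_deg, pitch_rate_dps, time_since_extraction_s,
                     pitch_window=(-10.0, 5.0), max_rate_dps=20.0,
                     contingency_timeout_s=6.0):
    """Toy release logic: command release when pitch and pitch rate sit inside a
    desired window, or force a contingency release if the timeout expires before
    nominal conditions are met. All thresholds are invented, not CPAS values."""
    in_window = pitch_window[0] <= pitch_deg <= pitch_window[1]
    slow_enough = abs(pitch_rate_dps) <= max_rate_dps
    if in_window and slow_enough:
        return "release (nominal)"
    if time_since_extraction_s >= contingency_timeout_s:
        return "release (contingency)"
    return "hold"

print(release_decision(pitch_deg=-3.0, pitch_rate_dps=8.0, time_since_extraction_s=2.5))
```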

  19. Childhood infections, the developing immune system, and the origins of asthma.

    PubMed

    Openshaw, Peter J M; Yamaguchi, Yuko; Tregoning, John S

    2004-12-01

    Asthma is an immune-mediated inflammatory condition characterized by increased responsiveness to bronchoconstrictive stimuli. Viruses have been shown to play an important role in asthma, with viral infection being present during about 85% of exacerbations. However, the role they play in the onset of asthma is more controversial. Some respiratory viral infections might be protective, but there is a strong association between respiratory syncytial virus-induced bronchiolitis in infancy and recurrent wheeze up to 12 years of age. Both the respiratory tract and the immune system undergo rapid maturation during the first year of life, and it seems that postnatal development is affected by and affects responses to viral infections. Understanding postnatal developmental changes in the immune system might help to explain the origins and pathogenesis of asthma and thus the effectiveness or ineffectiveness of specific asthma therapies.

  20. How the challenge of explaining learning influenced the origins and development of John B. Watson's behaviorism.

    PubMed

    Rilling, M

    2000-01-01

    Before he invented behaviorism, John B. Watson considered learning one of the most important topics in psychology. Watson conducted excellent empirical research on animal learning. He developed behaviorism in part to promote research and elevate the status of learning in psychology. Watson was much less successful in the adequacy and originality of the mechanisms he proposed to explain learning. By assimilating the method of classical conditioning and adopting Pavlov's theory of stimulus substitution, Watson linked behaviorism with a new method that could compete with both Titchener's method of introspection and Freud's methods of psychoanalysis. Watson's interest in explaining psychopathology led to the discovery of conditioned emotional responses and a behavioristic explanation for the learning of phobic behavior. Watson established learning as a central topic for basic research and application in American psychology.

  1. Development of an algorithm for automated enhancement of digital prototypes in machine engineering

    NASA Astrophysics Data System (ADS)

    Sokolova, E. A.; Dzhioev, G. A.

    2017-02-01

    The paper deals with the problem of processing digital prototypes in machine engineering using modern approaches to computer vision, methods of taxonomy (a branch of decision theory), and the automation of manual retouching techniques. Upon further study of the problem, different taxonomic methods were considered, among which the reference method was chosen as the most appropriate for the automated search for defective areas of the prototype. As a result, an algorithm for the automated enhancement of digital prototypes has been developed using modern information technologies.

  2. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but it faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.

  3. Multidisciplinary Design, Analysis, and Optimization Tool Development using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2008-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center to automate the analysis and design process by leveraging existing tools such as NASTRAN, ZAERO and CFD codes to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but it faces many challenges in large-scale, real-world application. This paper describes current approaches, recent results, and challenges for MDAO as demonstrated by our experience with the Ikhana fire pod design.

  4. Estimating aquifer recharge in Mission River watershed, Texas: model development and calibration using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Uddameri, V.; Kuchanur, M.

    2007-01-01

    Soil moisture balance studies provide a convenient approach to estimate aquifer recharge when only limited site-specific data are available. A monthly mass-balance approach has been utilized in this study to estimate recharge in a small watershed in the coastal bend of South Texas. The developed lumped parameter model employs four adjustable parameters to calibrate model predicted stream runoff to observations at a gaging station. A new procedure was developed to correctly capture the intermittent nature of rainfall. The total monthly rainfall was assigned to a single-equivalent storm whose duration was obtained via calibration. A total of four calibrations were carried out using an evolutionary computing technique called genetic algorithms as well as the conventional gradient descent (GD) technique. Ordinary least squares and the heteroscedastic maximum likelihood error (HMLE) based objective functions were evaluated as part of this study as well. While the genetic algorithm based calibrations were relatively better in capturing the peak runoff events, the GD based calibration did slightly better in capturing the low flow events. Treating the Box-Cox exponent in the HMLE function as a calibration parameter did not yield better estimates and the study corroborates the suggestion made in the literature of fixing this exponent at 0.3. The model outputs were compared against available information and results indicate that the developed modeling approach provides a conservative estimate of recharge.
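
    The calibration loop described here, adjusting a handful of lumped parameters until modelled runoff matches the gauge record, can be sketched with a toy monthly bucket model. The sketch uses SciPy's differential evolution as a stand-in for the genetic algorithm used in the study, an ordinary least squares objective, and entirely synthetic data.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 40.0, size=48)          # synthetic monthly rainfall [mm]

def bucket_runoff(params, rain, capacity=150.0):
    """Toy monthly soil-moisture balance: storage fills with rain, spills a
    fraction run_frac as runoff and loses a fraction et_frac to evapotranspiration."""
    run_frac, et_frac = params
    storage, runoff = 0.0, []
    for p in rain:
        storage = min(capacity, storage + p)
        q = run_frac * storage
        storage = max(storage - q - et_frac * storage, 0.0)
        runoff.append(q)
    return np.array(runoff)

# Synthetic "observed" runoff generated with known parameters plus noise.
obs = bucket_runoff((0.30, 0.45), rain) + rng.normal(0.0, 2.0, size=48)

# Ordinary least squares objective calibrated by an evolutionary optimizer.
def objective(params):
    return float(np.sum((bucket_runoff(params, rain) - obs) ** 2))

result = differential_evolution(objective, bounds=[(0.01, 0.9), (0.01, 0.9)], seed=1)
print("calibrated (run_frac, et_frac):", np.round(result.x, 3))
```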

  5. Development and evaluation of a predictive algorithm for telerobotic task complexity

    NASA Technical Reports Server (NTRS)

    Gernhardt, M. L.; Hunter, R. C.; Hedgecock, J. C.; Stephenson, A. G.

    1993-01-01

    There is a wide range of complexity in the various telerobotic servicing tasks performed in subsea, space, and hazardous material handling environments. Experience with telerobotic servicing has evolved into a knowledge base used to design tasks to be 'telerobot friendly.' This knowledge base generally resides in a small group of people. Written documentation and requirements are limited in conveying this knowledge base to serviceable equipment designers and are subject to misinterpretation. A mathematical model of task complexity based on measurable task parameters and telerobot performance characteristics would be a valuable tool to designers and operational planners. Oceaneering Space Systems and TRW have performed an independent research and development project to develop such a tool for telerobotic orbital replacement unit (ORU) exchange. This algorithm was developed to predict an ORU exchange degree of difficulty rating (based on the Cooper-Harper rating used to assess piloted operations). It is based on measurable parameters of the ORU, attachment receptacle and quantifiable telerobotic performance characteristics (e.g., link length, joint ranges, positional accuracy, tool lengths, number of cameras, and locations). The resulting algorithm can be used to predict task complexity as the ORU parameters, receptacle parameters, and telerobotic characteristics are varied.

  6. Programmatic features of aging originating in development: aging mechanisms beyond molecular damage?

    PubMed Central

    de Magalhães, João Pedro

    2012-01-01

    The idea that aging follows a predetermined sequence of events, a program, has been discredited by most contemporary authors. Instead, aging is largely thought to occur due to the accumulation of various forms of molecular damage. Recent work employing functional genomics now suggests that, indeed, certain facets of mammalian aging may follow predetermined patterns encoded in the genome as part of developmental processes. It appears that genetic programs coordinating some aspects of growth and development persist into adulthood and may become detrimental. This link between development and aging may occur due to regulated processes, including through the action of microRNAs and epigenetic mechanisms. Taken together with other results, in particular from worms, these findings provide evidence that some aging changes are not primarily a result of a build-up of stochastic damage but are rather a product of regulated processes. These processes are interpreted as forms of antagonistic pleiotropy, the product of a “shortsighted watchmaker,” and thus do not assume aging evolved for a purpose. Overall, it appears that the genome does, indeed, contain specific instructions that drive aging in animals, a radical shift in our perception of the aging process.—de Magalhães, J. P. Programmatic features of aging originating in development: aging mechanisms beyond molecular damage? PMID:22964300

  7. Microglia across the lifespan: from origin to function in brain development, plasticity and cognition.

    PubMed

    Tay, Tuan Leng; Savage, Julie C; Hui, Chin Wai; Bisht, Kanchan; Tremblay, Marie-Ève

    2017-03-15

    Microglia are the only immune cells that permanently reside in the central nervous system (CNS) alongside neurons and other types of glial cells. The past decade has witnessed a revolution in our understanding of their roles during normal physiological conditions. Cutting-edge techniques revealed that these resident immune cells are critical for proper brain development, actively maintain health in the mature brain, and rapidly adapt their function to physiological or pathophysiological needs. In this review, we highlight recent studies on microglial origin (from the embryonic yolk sac) and the factors regulating their differentiation and homeostasis upon brain invasion. Elegant experiments tracking microglia in the CNS allowed studies of their unique roles compared with other types of resident macrophages. Here we review the emerging roles of microglia in brain development, plasticity and cognition, and discuss the implications of the depletion or dysfunction of microglia for our understanding of disease pathogenesis. Immune activation, inflammation and various other conditions resulting in undesirable microglial activity at different stages of life could severely impair learning, memory and other essential cognitive functions. The diversity of microglial phenotypes across the lifespan, between compartments of the CNS, and sexes, as well as their crosstalk with the body and external environment, is also emphasised. Understanding what defines particular microglial phenotypes is of major importance for future development of innovative therapies controlling their effector functions, with consequences for cognition across chronic stress, ageing, neuropsychiatric and neurological diseases.

  8. NASA's Physics of the Cosmos and Cosmic Origins Technology Development Programs

    NASA Technical Reports Server (NTRS)

    Clampin, Mark; Pham, Thai

    2014-01-01

    NASA's Physics of the Cosmos (PCOS) and Cosmic Origins (COR) Program Offices, established in 2011, reside at the NASA Goddard Space Flight Center (GSFC). The offices serve as the implementation arm for the Astrophysics Division at NASA Headquarters. We present an overview of the programs' technology development activities and technology investment portfolio, funded by NASA's Strategic Astrophysics Technology (SAT) program. We currently fund 19 technology advancements to enable future PCOS and COR missions to help answer the questions "How did our universe begin and evolve?" and "How did galaxies, stars, and planets come to be?" We discuss the process for addressing community-provided technology gaps and Technology Management Board (TMB)-vetted prioritization and investment recommendations that inform the SAT program. The process improves the transparency and relevance of our technology investments, provides the community a voice in the process, and promotes targeted external technology investments by defining needs and identifying customers. The programs' goal is to promote and support technology development needed to enable missions envisioned by the National Research Council's (NRC) "New Worlds, New Horizons in Astronomy and Astrophysics" (NWNH) Decadal Survey report [1] and the Astrophysics Implementation Plan (AIP) [2]. These include technology development for dark energy, gravitational waves, X-ray and inflation probe science, and a 4m-class UV/optical telescope to conduct imaging and spectroscopy studies, as a post-Hubble observatory with significantly improved sensitivity and capability.

  9. Origin and development of the dorso-ventral flight muscles in Chironomus (Diptera; Nematocera).

    PubMed

    Lebart-Pedebas, M C

    1990-01-01

    The origin and development of the dorso-ventral flight muscles (DVM) was studied by light and electron microscopy in Chironomus (Diptera; Nematocera). Chironomus was chosen because unlike Drosophila, its flight muscles develop during the last larval instar, before the lytic process of metamorphosis. Ten fibrillar DVM were shown to develop from a larval muscle associated with myoblasts. This muscle is connected to the imaginal leg disc so that its cavity communicates with the adepithelial cells present in the disc; but no migration of myoblasts seems to take place from the imaginal leg disc towards the larval muscle or vice versa. At the beginning of the last larval instar, the myoblasts were always present together with the nerves in the larval muscle. In addition, large larval muscle cells incorporated to the imaginal discs were observed to border on the area occupied by adepithelial cells, and are probably involved in the formation of 4 other fibrillar DVM with adepithelial cells. Three factors seem to determine the number of DVM fibres: the initial number of larval fibres in the Anlage, the fusions of myoblasts with these larval fibres and the number of motor axons in the Anlage. The extrapolation of these observations to Drosophila, a higher dipteran, is discussed.

  10. NASA's Physics of the Cosmos and Cosmic Origins technology development programs

    NASA Astrophysics Data System (ADS)

    Clampin, Mark; Pham, Thai

    2014-07-01

    NASA's Physics of the Cosmos (PCOS) and Cosmic Origins (COR) Program Offices, established in 2011, reside at the NASA Goddard Space Flight Center (GSFC). The offices serve as the implementation arm for the Astrophysics Division at NASA Headquarters. We present an overview of the programs' technology development activities and technology investment portfolio, funded by NASA's Strategic Astrophysics Technology (SAT) program. We currently fund 19 technology advancements to enable future PCOS and COR missions to help answer the questions "How did our universe begin and evolve?" and "How did galaxies, stars, and planets come to be?" We discuss the process for addressing community-provided technology gaps and Technology Management Board (TMB)-vetted prioritization and investment recommendations that inform the SAT program. The process improves the transparency and relevance of our technology investments, provides the community a voice in the process, and promotes targeted external technology investments by defining needs and identifying customers. The programs' goal is to promote and support technology development needed to enable missions envisioned by the National Research Council's (NRC) "New Worlds, New Horizons in Astronomy and Astrophysics" (NWNH) Decadal Survey report [1] and the Astrophysics Implementation Plan (AIP) [2]. These include technology development for dark energy, gravitational waves, X-ray and inflation probe science, and a 4m-class UV/optical telescope to conduct imaging and spectroscopy studies, as a post-Hubble observatory with significantly improved sensitivity and capability.

  11. Algorithm development for intensity modulated continuous wave laser absorption spectrometry in atmospheric CO2 measurements

    NASA Astrophysics Data System (ADS)

    Lin, B.; Harrison, F. W.; Browell, E. V.; Dobler, J. T.; Bryant, R. B.

    2011-12-01

    Currently, NASA Langley Research Center (LaRC) and ITT are jointly developing algorithms for demonstration of range discrimination using ITT's laser absorption spectrometer (LAS), which is being evaluated for the future NASA Active Sensing of CO2 Emissions during Nights, Days, and Seasons (ASCENDS) mission. The objective of this Decadal Survey mission is to measure atmospheric column CO2 mixing ratios (XCO2) for improved determination of atmospheric carbon sources and sinks. Intensity Modulated Continuous Wave (IM-CW) techniques are used in this LAS approach. The LAS is designed to simultaneously measure CO2 and O2 columns, and these measurements are used to determine the required XCO2 column. The LAS measurements are enabled by the multi-channel operation of the instrument at 1.57 and 1.26 um for CO2 and O2, respectively. The algorithm development for the IM-CW techniques of the multi-channel LAS is focused on addressing key retrieval issues such as surface signal detection, thin cloud and/or aerosol layer rejection, vertical atmospheric range resolution, and optimizing the size of the measurement footprint. With these considerations, the modulation algorithm needs to maintain a high enough signal-to-noise ratio (SNR) that the mission scientific goals can be reached. Basic candidates among the modulation algorithms that make XCO2 measurement and thin cloud rejection possible are the stepped-frequency modulation scheme and the similar swept-sine modulation scheme. The differences between these two schemes for thin cloud rejection are small, assuming the proper selection of parameters is made. The stepped-frequency approach is essentially a quantized version of the swept-sine method for the frequencies used. The swept-sine scheme is a very common modulation technique for range discrimination, while the choice of the stepped-frequency scheme is based on the history of the rolling-tone modulation used in the instrument in previous successful column CO2 measurements. The
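
    A bare-bones illustration of how an intensity-modulated swept-sine (chirp) waveform supports range discrimination through matched filtering is given below; the chirp parameters, target range, and noise level are placeholders and do not represent the LAS flight parameters.

```python
import numpy as np

c = 3.0e8                    # speed of light [m/s]
fs = 2.0e6                   # sample rate [Hz], illustrative
T = 1.0e-2                   # sweep duration [s]
t = np.arange(0.0, T, 1.0 / fs)

# Intensity-modulation swept sine (linear chirp) from 50 kHz to 500 kHz.
f0, f1 = 5.0e4, 5.0e5
chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t ** 2))

# Echo from a hard target at an assumed 3 km range, attenuated and noisy.
range_true = 3000.0
delay = int(round(2 * range_true / c * fs))
echo = np.roll(chirp, delay) * 0.1
echo[:delay] = 0.0
echo += np.random.default_rng(0).normal(0.0, 0.05, size=echo.size)

# Matched filter: correlate the received signal with the transmitted sweep and
# convert the lag of the correlation peak back into range.
corr = np.correlate(echo, chirp, mode="full")
lag = np.argmax(corr) - (len(chirp) - 1)
print(f"estimated range: {lag / fs * c / 2 / 1000:.2f} km (true {range_true / 1000} km)")
```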

  12. Development of Elevation and Relief Databases for ICESat-2/ATLAS Receiver Algorithms

    NASA Astrophysics Data System (ADS)

    Leigh, H. W.; Magruder, L. A.; Carabajal, C. C.; Saba, J. L.; Urban, T. J.; Mcgarry, J.; Schutz, B. E.

    2013-12-01

    The Advanced Topographic Laser Altimeter System (ATLAS) is planned to launch onboard NASA's ICESat-2 spacecraft in 2016. ATLAS operates at a wavelength of 532 nm with a laser repeat rate of 10 kHz and 6 individual laser footprints. The satellite will be in a 500 km, 91-day repeat ground track orbit at an inclination of 92°. A set of onboard Receiver Algorithms has been developed to reduce the data volume and data rate to acceptable levels while still transmitting the relevant ranging data. The onboard algorithms limit the data volume by distinguishing between surface returns and background noise and selecting a small vertical region around the surface return to be included in telemetry. The algorithms make use of signal processing techniques, along with three databases, the Digital Elevation Model (DEM), the Digital Relief Map (DRM), and the Surface Reference Mask (SRM), to find the signal and determine the appropriate dynamic range of vertical data surrounding the surface for downlink. The DEM provides software-based range gating for ATLAS. This approach allows the algorithm to limit the surface signal search to the vertical region between minimum and maximum elevations provided by the DEM (plus some margin to account for uncertainties). The DEM is constructed in a nested, three-tiered grid to account for a hardware constraint limiting the maximum vertical range to 6 km. The DRM is used to select the vertical width of the telemetry band around the surface return. The DRM contains global values of relief calculated along 140 m and 700 m ground track segments consistent with a 92° orbit. The DRM must contain the maximum value of relief seen in any given area, but must be as close to truth as possible as the DRM directly affects data volume. The SRM, which has been developed independently from the DEM and DRM, is used to set parameters within the algorithm and select telemetry bands for downlink. Both the DEM and DRM are constructed from publicly available digital
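
    The DEM-driven, software-based range gating can be illustrated with a small helper that keeps only events whose heights fall between the gridded minimum and maximum elevation plus a margin, clipped to the hardware-limited vertical span; the margin, elevations, and photon heights below are simplified assumptions, not ATLAS parameters.

```python
import numpy as np

def telemetry_window(dem_min, dem_max, margin=250.0, max_span=6000.0):
    """Vertical window (metres) in which to search for surface returns: the DEM
    minimum/maximum elevation padded by an uncertainty margin and clipped to the
    hardware-limited vertical span. Values are illustrative, not ATLAS settings."""
    lo, hi = dem_min - margin, dem_max + margin
    if hi - lo > max_span:                    # enforce the 6 km constraint
        mid = 0.5 * (lo + hi)
        lo, hi = mid - max_span / 2.0, mid + max_span / 2.0
    return lo, hi

# Keep only photon heights inside the window (software range gate).
rng = np.random.default_rng(3)
photon_heights = rng.uniform(-1000.0, 4000.0, size=10_000)   # background + signal
lo, hi = telemetry_window(dem_min=350.0, dem_max=1200.0)
gated = photon_heights[(photon_heights >= lo) & (photon_heights <= hi)]
print(f"window: [{lo:.0f}, {hi:.0f}] m; kept {gated.size} of {photon_heights.size} events")
```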

  13. An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework

    NASA Astrophysics Data System (ADS)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong

    2016-07-01

    This paper describes the first part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of the aerosol optical properties, in which major principal components (PCs) for surface reflectance is assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for real and two for imaginary part of refractive index). New capabilities for computing the Jacobians of four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and the weighting coefficients for each PC of surface reflectance are added into the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of Jacobians are validated against the finite-difference calculations with relative error less than 0.2%. Finally, self-consistency check of the inversion algorithm is conducted for the idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can yield the reconstruction of spectral surface reflectance with errors less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and root-mean-square error of less than 0.003), which suggests self-consistency in the
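
    The surface-reflectance treatment, representing each spectrum by a few leading principal components and checking how well they reconstruct it, can be sketched with synthetic spectra as below; the spectra are stand-ins for the USGS library, and only the six-PC choice mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(7)
wavelengths = np.linspace(0.4, 0.7, 120)                 # visible band [um]

# Synthetic library of smooth surface-reflectance spectra (stand-ins for the
# USGS green-vegetation and rangeland spectra used in the paper).
basis = np.stack([np.sin(2 * np.pi * k * wavelengths) for k in range(1, 9)])
library = rng.uniform(0.0, 0.05, size=(200, 8)) @ basis + 0.4

# Principal components via SVD of the mean-centred library.
mean = library.mean(axis=0)
U, s, Vt = np.linalg.svd(library - mean, full_matrices=False)

n_pc = 6                                                  # first six PCs, as in the text
coeffs = (library - mean) @ Vt[:n_pc].T                   # per-spectrum PC weights
reconstruction = coeffs @ Vt[:n_pc] + mean

rel_err = np.abs(reconstruction - library).max() / library.max()
print(f"max relative reconstruction error with {n_pc} PCs: {rel_err:.2%}")
```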

  14. Development of algorithm for retrieving aerosols over land surfaces from NEMO-AM polarized measurements

    NASA Astrophysics Data System (ADS)

    Pandya, Mehul R.

    2016-04-01

    Atmospheric aerosols have a large effect on the Earth radiation budget through their direct and indirect effects. A systematic assessment of aerosol effects on Earth's climate requires global mapping of tropospheric aerosols through satellite remote sensing. However, aerosol retrieval over land surfaces remains a challenging task due to the bright background of the land surfaces. Polarized measurements can improve aerosol sensing by providing a means of decoupling the surface and atmospheric contributions. The Indian Space Research Organisation has planned a Multi-Angle Dual-Polarization Instrument (MADPI) onboard a Nano satellite for Earth Monitoring & Observations for Aerosol Monitoring (NEMO-AM). MADPI has three spectral bands in the blue, red and near infrared spectral regions with a nominal spatial resolution of 30 m from a 500 km polar orbit. A study has been taken up to develop an algorithm for retrieving aerosol optical thickness (AOT) over land surfaces from NEMO-AM polarized measurements. The study has three major components: (1) a detailed theoretical modelling exercise for computing the atmospheric and surface polarized contributions, (2) modelling of the total satellite-level polarized contribution, and (3) retrieval of AOT by comparing the modelled and measured polarized signals. The algorithm has been developed for the MADPI/NEMO-AM spectral bands and tested successfully on similar spectral bands of POLDER/PARASOL measurements to retrieve AOT over the Indian landmass under diverse atmospheric conditions. POLDER-derived AOT fields were compared with MODIS-AOT products. Results showed a very good match (R2 = 0.69, RMSE = 0.07). These initial results are encouraging; however, comprehensive analysis and testing must still be carried out to establish the proposed algorithm for retrieving AOT from NEMO-AM measurements.
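    Step (3) above amounts to selecting the AOT whose modelled polarized signal best matches the measurement; a minimal sketch follows, in which the grid search and the toy forward model are illustrative stand-ins for the study's radiative transfer modelling.

        import numpy as np

        def retrieve_aot(measured_pol, aot_grid, forward_model):
            # Pick the AOT whose modelled polarized signal is closest (least squares) to the measurement.
            residuals = [np.sum((forward_model(aot) - measured_pol) ** 2) for aot in aot_grid]
            return aot_grid[int(np.argmin(residuals))]

        # Toy forward model: polarized reflectance grows roughly linearly with AOT in three bands.
        toy_model = lambda aot: 0.01 + 0.05 * aot * np.ones(3)
        print(retrieve_aot(np.array([0.02, 0.02, 0.02]), np.linspace(0.0, 1.0, 101), toy_model))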

  15. Remote Sensing of Ocean Color in the Arctic: Algorithm Development and Comparative Validation. Chapter 9

    NASA Technical Reports Server (NTRS)

    Cota, Glenn F.

    2001-01-01

    The overall goal of this effort is to acquire a large bio-optical database, encompassing most environmental variability in the Arctic, to develop algorithms for phytoplankton biomass and production and other optically active constituents. A large suite of bio-optical and biogeochemical observations has been collected in a variety of high latitude ecosystems at different seasons. The Ocean Research Consortium of the Arctic (ORCA) is a collaborative effort between G.F. Cota of Old Dominion University (ODU), W.G. Harrison and T. Platt of the Bedford Institute of Oceanography (BIO), S. Sathyendranath of Dalhousie University and S. Saitoh of Hokkaido University. ORCA has now conducted 12 cruises and collected over 500 in-water optical profiles plus a variety of ancillary data. Observational suites typically include apparent optical properties (AOPs), inherent optical properties (IOPs), and a variety of ancillary observations including sun photometry, biogeochemical profiles, and productivity measurements. All quality-assured data have been submitted to NASA's SeaWIFS Bio-Optical Archive and Storage System (SeaBASS) data archive. Our algorithm development efforts address most of the potential bio-optical data products for the Sea-Viewing Wide Field-of-view Sensor (SeaWiFS), Moderate Resolution Imaging Spectroradiometer (MODIS), and GLI, and provide validation for specific areas of concern, i.e., high latitudes and coastal waters.

  16. Development of optimization model for sputtering process parameter based on gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.

    2016-07-01

    In the RF magnetron sputtering process, the desirable layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film has not reached its intended level, the experiments have to be repeated until the desired quality is met. This research proposes the Gravitational Search Algorithm (GSA) as the optimization model to reduce the time and cost spent in thin film fabrication. The optimization model's engine has been developed using Java. The model is based on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters: RF power, deposition time, oxygen flow rate and substrate temperature. The results are promising, and the model performs satisfactorily on this parameter optimization problem. Future work could compare GSA with other nature-based algorithms and test them on various sets of data.
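    A compact sketch of the GSA update loop for a four-parameter problem is given below; the parameter bounds, the placeholder fitness function and the hyperparameters are illustrative assumptions, not the values used in the study.

        import numpy as np

        def gsa(fitness, bounds, n_agents=20, n_iter=100, g0=100.0, alpha=20.0, seed=0):
            """Minimize `fitness` over the box `bounds` (shape (dim, 2)) with a basic GSA loop."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            x = rng.uniform(lo, hi, size=(n_agents, len(lo)))
            v = np.zeros_like(x)
            for t in range(n_iter):
                f = np.array([fitness(p) for p in x])
                g = g0 * np.exp(-alpha * t / n_iter)              # gravitational "constant" decays over time
                m = (f.max() - f) / (f.max() - f.min() + 1e-12)   # better fitness -> heavier mass
                m = m / (m.sum() + 1e-12)
                acc = np.zeros_like(x)
                for i in range(n_agents):                         # attraction of agent i toward all others
                    for j in range(n_agents):
                        if i == j:
                            continue
                        r = np.linalg.norm(x[i] - x[j]) + 1e-12
                        acc[i] += rng.random() * g * m[j] * (x[j] - x[i]) / r
                v = rng.random(x.shape) * v + acc
                x = np.clip(x + v, lo, hi)
            f = np.array([fitness(p) for p in x])
            return x[np.argmin(f)]

        # Bounds for RF power (W), deposition time (min), O2 flow (sccm), substrate temperature (C).
        bounds = np.array([[100, 400], [10, 120], [0, 50], [25, 400]], dtype=float)
        print(gsa(lambda p: np.sum((p - bounds.mean(axis=1)) ** 2), bounds))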

  17. Developing algorithms for predicting protein-protein interactions of homology modeled proteins.

    SciTech Connect

    Martin, Shawn Bryan; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Roe, Diana C.

    2006-01-01

    The goal of this project was to examine the protein-protein docking problem, especially as it relates to homology-based structures, identify the key bottlenecks in current software tools, and evaluate and prototype new algorithms that may be developed to relieve these bottlenecks. This report describes the current challenges in the protein-protein docking problem: correctly predicting the binding site for the protein-protein interaction and correctly placing the sidechains. Two different and complementary approaches are taken that can help with the protein-protein docking problem. The first approach is to predict interaction sites prior to docking, using bioinformatics studies of protein-protein interactions to predict these interaction sites. The second approach is to improve validation of predicted complexes after docking, using an improved scoring function for evaluating proposed docked poses that incorporates a solvation term. This scoring function demonstrates significant improvement over current state-of-the-art functions. Initial studies of both approaches are promising and argue for full development of these algorithms.

  18. Development of a Detection Algorithm for Use with Reflectance-Based, Real-Time Chemical Sensing

    PubMed Central

    Malanoski, Anthony P.; Johnson, Brandy J.; Erickson, Jeffrey S.; Stenger, David A.

    2016-01-01

    Here, we describe our efforts focused on development of an algorithm for identification of detection events in a real-time sensing application relying on reporting of color values using commercially available color sensing chips. The effort focuses on the identification of event occurrence, rather than target identification, and utilizes approaches suitable to onboard device incorporation to facilitate portable and autonomous use. The described algorithm first excludes electronic noise generated by the sensor system and determines response thresholds. This automatic adjustment provides the potential for use with device variations as well as accommodating differing indicator behaviors. Multiple signal channels (RGB) as well as multiple indicator array elements are combined for reporting of an event with a minimum of false responses. While the method reported was developed for use with paper-supported porphyrin and metalloporphyrin indicators, it should be equally applicable to other colorimetric indicators. Depending on device configurations, receiver operating characteristic (ROC) sensitivities of 1 could be obtained with specificities of 0.87 (threshold 160 ppb, ethanol). PMID:27854335
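    The noise-threshold-and-voting idea can be sketched as follows; the threshold multiplier and the voting rule are illustrative assumptions rather than the published parameter values.

        import numpy as np

        def detect_event(baseline_rgb, current_rgb, k=4.0, min_votes=2):
            """baseline_rgb: recent baseline readings, shape (n_samples, 3);
            current_rgb: latest RGB reading, shape (3,). Returns True if an event is declared."""
            noise = baseline_rgb.std(axis=0) + 1e-9               # per-channel noise estimate
            delta = np.abs(current_rgb - baseline_rgb.mean(axis=0))
            return int((delta > k * noise).sum()) >= min_votes    # require agreement across channels

        baseline = np.random.default_rng(1).normal(120.0, 1.0, size=(50, 3))
        print(detect_event(baseline, baseline.mean(axis=0) + np.array([0.2, 6.0, 7.0])))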

  19. Body wall development in lamprey and a new perspective on the origin of vertebrate paired fins.

    PubMed

    Tulenko, Frank J; McCauley, David W; Mackenzie, Ethan L; Mazan, Sylvie; Kuratani, Shigeru; Sugahara, Fumiaki; Kusakabe, Rie; Burke, Ann C

    2013-07-16

    Classical hypotheses regarding the evolutionary origin of paired appendages propose transformation of precursor structures (gill arches and lateral fin folds) into paired fins. During development, gnathostome paired appendages form as outgrowths of body wall somatopleure, a tissue composed of somatic lateral plate mesoderm (LPM) and overlying ectoderm. In amniotes, LPM contributes connective tissue to abaxial musculature and forms ventrolateral dermis of the interlimb body wall. The phylogenetic distribution of this character is uncertain because lineage analyses of LPM have not been generated in anamniotes. We focus on the evolutionary history of the somatopleure to gain insight into the tissue context in which paired fins first appeared. Lampreys diverged from other vertebrates before the acquisition of paired fins and provide a model for investigating the preappendicular condition. We present vital dye fate maps that suggest the somatopleure is eliminated in lamprey as the LPM is separated from the ectoderm and sequestered to the coelomic linings during myotome extension. We also examine the distribution of postcranial mesoderm in catshark and axolotl. In contrast to lamprey, our findings support an LPM contribution to the trunk body wall of these taxa, which is similar to published data for amniotes. Collectively, these data lead us to hypothesize that a persistent somatopleure in the lateral body wall is a gnathostome synapomorphy, and the redistribution of LPM was a key step in generating the novel developmental module that ultimately produced paired fins. These embryological criteria can refocus arguments on paired fin origins and generate hypotheses testable by comparative studies on the source, sequence, and extent of genetic redeployment.

  20. Simple Algorithms for Distributed Leader Election in Anonymous Synchronous Rings and Complete Networks Inspired by Neural Development in Fruit Flies.

    PubMed

    Xu, Lei; Jeavons, Peter

    2015-11-01

    Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
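    A simulation sketch of the one-bit broadcast idea for a complete network is given below; the fixed broadcast probability and the global termination test are illustrative simplifications, not the analysed algorithm.

        import random

        def elect_leader(n, p=0.5, seed=0):
            """Synchronous rounds: each remaining candidate broadcasts one bit with probability p;
            silent candidates that hear a broadcast withdraw. Returns (leader, rounds used)."""
            rng = random.Random(seed)
            candidates = set(range(n))
            rounds = 0
            while len(candidates) > 1:
                rounds += 1
                speakers = {v for v in candidates if rng.random() < p}
                if speakers:                  # listeners only distinguish silence from "one or more bits"
                    candidates = speakers     # silent candidates that heard a bit withdraw
            return candidates.pop(), rounds

        print(elect_leader(64))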

  1. Development of GIS Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm

    NASA Astrophysics Data System (ADS)

    Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.

    2014-11-01

    A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network that contains all of its nodes and whose total edge weight is minimum among all possible spanning trees of that network. In this study, we have developed a new GIS tool that uses the well-known Prim's algorithm to construct the minimum spanning tree of a connected, undirected and weighted road network. The algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve complex network MST problems easily, efficiently and effectively. Selecting an appropriate algorithm is essential; otherwise it is very hard to obtain an optimal result. For a road transportation network, it is essential to find optimal results by considering all the necessary points based on a cost factor (time or distance). This paper addresses the MST problem of a road network by finding its minimum span while considering all the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems and location-allocation problems. Therefore, in this study we have developed a customized GIS tool, using a Python script in ArcGIS software, for the solution of the MST problem for the road transportation network of Dehradun city, with distance and time as the impedance (cost) factors. The tool is user-friendly, so users do not need deep knowledge of the subject, and it gives access to information that is varied and adapted to the users' needs. This GIS tool for MST can be applied to a nationwide plan in India, the Prime Minister Gram Sadak Yojana, to provide optimal all-weather road connectivity to unconnected villages (points). This tool is also useful for constructing highways or railways spanning several
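    For reference, a minimal Prim's algorithm over a weighted adjacency list is sketched below; the toy road network and its weights are illustrative.

        import heapq

        def prim_mst(graph, start):
            """graph: {node: [(weight, neighbour), ...]} for an undirected weighted network.
            Returns the MST as a list of (weight, u, v) edges."""
            visited = {start}
            frontier = [(w, start, v) for w, v in graph[start]]
            heapq.heapify(frontier)
            mst = []
            while frontier and len(visited) < len(graph):
                w, u, v = heapq.heappop(frontier)
                if v in visited:
                    continue
                visited.add(v)
                mst.append((w, u, v))
                for w2, nxt in graph[v]:
                    if nxt not in visited:
                        heapq.heappush(frontier, (w2, v, nxt))
            return mst

        # Toy road network: junctions A-D, weights as distances (km).
        roads = {"A": [(4, "B"), (3, "C")],
                 "B": [(4, "A"), (2, "C"), (5, "D")],
                 "C": [(3, "A"), (2, "B"), (7, "D")],
                 "D": [(5, "B"), (7, "C")]}
        print(prim_mst(roads, "A"))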

  2. The origin and maintenance of nuclear endosperms: viewing development through a phylogenetic lens.

    PubMed Central

    Geeta, R

    2003-01-01

    The endosperm develops in fertilized ovules of angiosperms following fertilization of the central cell and nuclei in the female gametophyte. Endosperms differ in whether, and which, nuclear divisions are followed by cellular divisions; the variants are classified as cellular, nuclear or helobial. Functional correlates of this variation are little understood. Phylogenetic methods provide a powerful means of exploring taxonomic variation and phylogenetic patterns, to frame questions regarding biological processes. Data on endosperms across angiosperms were analysed in a phylogenetic context in order to determine homologies and detect biases in the direction of evolutionary transitions. Analyses confirm that neither all nuclear nor all helobial endosperms are homologous, raise the possibility that cellular development is a reversal in some derived angiosperms (e.g. asterids) and show that a statistically significant bias towards evolution of nuclear endosperms (and against reversals) prevails in angiosperms as a whole. This bias suggests strong selective advantages to having nuclear endosperm, developmental constraints to reversals or both. Homologies suggest that the microtubular cycle and cellularization pattern characteristic of reproductive cells across land plants may have been independently co-opted during multiple origins of nuclear endosperms, but information on cellular endosperms is essential to investigate further. PMID:12590768

  3. The origin and maintenance of nuclear endosperms: viewing development through a phylogenetic lens.

    PubMed

    Geeta, R

    2003-01-07

    The endosperm develops in fertilized ovules of angiosperms following fertilization of the central cell and nuclei in the female gametophyte. Endosperms differ in whether, and which, nuclear divisions are followed by cellular divisions; the variants are classified as cellular, nuclear or helobial. Functional correlates of this variation are little understood. Phylogenetic methods provide a powerful means of exploring taxonomic variation and phylogenetic patterns, to frame questions regarding biological processes. Data on endosperms across angiosperms were analysed in a phylogenetic context in order to determine homologies and detect biases in the direction of evolutionary transitions. Analyses confirm that neither all nuclear nor all helobial endosperms are homologous, raise the possibility that cellular development is a reversal in some derived angiosperms (e.g. asterids) and show that a statistically significant bias towards evolution of nuclear endosperms (and against reversals) prevails in angiosperms as a whole. This bias suggests strong selective advantages to having nuclear endosperm, developmental constraints to reversals or both. Homologies suggest that the microtubular cycle and cellularization pattern characteristic of reproductive cells across land plants may have been independently co-opted during multiple origins of nuclear endosperms, but information on cellular endosperms is essential to investigate further.

  4. Development and Implementation of a Hardware In-the-Loop Test Bed for Unmanned Aerial Vehicle Control Algorithms

    NASA Technical Reports Server (NTRS)

    Nyangweso, Emmanuel; Bole, Brian

    2014-01-01

    Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.

  5. Collaboration on Development and Validation of the AMSR-E Snow Water Equivalent Algorithm

    NASA Technical Reports Server (NTRS)

    Armstrong, Richard L.

    2000-01-01

    The National Snow and Ice Data Center (NSIDC) has produced a global SMMR and SSM/I Level 3 Brightness Temperature data set in the Equal Area Scalable Earth (EASE) Grid for the period 1978 to 2000. Processing of current data is ongoing. The EASE-Grid passive microwave data sets are appropriate for algorithm development and validation prior to the launch of AMSR-E. Having the lower frequency channels of SMMR (6.6 and 10.7 GHz) and the higher frequency channels of SSM/I (85.5 GHz) in the same format will facilitate the preliminary development of applications which could potentially make use of similar frequencies from AMSR-E (6.9, 10.7, 89.0 GHz).

  6. Development of a New De Novo Design Algorithm for Exploring Chemical Space.

    PubMed

    Mishima, Kazuaki; Kaneko, Hiromasa; Funatsu, Kimito

    2014-12-01

    In the first stage of development of new drugs, various lead compounds with high activity are required. To design such compounds, we focus on chemical space defined by structural descriptors. New compounds close to areas where highly active compounds exist are expected to show a similar degree of activity. We have developed a new de novo design system to search a target area in chemical space. First, highly active compounds are manually selected as initial seeds. Then, the seeds are entered into our system, and structures slightly different from the seeds are generated and pooled. Next, new seeds are selected from the generated structure pool based on their distance from the target coordinates on the map. To test the algorithm, we used two datasets of ligand binding affinity and showed that the proposed generator could produce diverse virtual compounds that had high activity in docking simulations.

  7. Development of a Machine Learning Algorithm for the Surveillance of Autism Spectrum Disorder

    PubMed Central

    Maenner, Matthew J.; Yeargin-Allsopp, Marshalyn; Van Naarden Braun, Kim; Christensen, Deborah L.; Schieve, Laura A.

    2016-01-01

    The Autism and Developmental Disabilities Monitoring (ADDM) Network conducts population-based surveillance of autism spectrum disorder (ASD) among 8-year old children in multiple US sites. To classify ASD, trained clinicians review developmental evaluations collected from multiple health and education sources to determine whether the child meets the ASD surveillance case criteria. The number of evaluations collected has dramatically increased since the year 2000, challenging the resources and timeliness of the surveillance system. We developed and evaluated a machine learning approach to classify case status in ADDM using words and phrases contained in children’s developmental evaluations. We trained a random forest classifier using data from the 2008 Georgia ADDM site which included 1,162 children with 5,396 evaluations (601 children met ADDM ASD criteria using standard ADDM methods). The classifier used the words and phrases from the evaluations to predict ASD case status. We evaluated its performance on the 2010 Georgia ADDM surveillance data (1,450 children with 9,811 evaluations; 754 children met ADDM ASD criteria). We also estimated ASD prevalence using predictions from the classification algorithm. Overall, the machine learning approach predicted ASD case statuses that were 86.5% concordant with the clinician-determined case statuses (84.0% sensitivity, 89.4% predictive value positive). The area under the resulting receiver-operating characteristic curve was 0.932. Algorithm-derived ASD “prevalence” was 1.46% compared to the published (clinician-determined) estimate of 1.55%. Using only the text contained in developmental evaluations, a machine learning algorithm was able to discriminate between children that do and do not meet ASD surveillance criteria at one surveillance site. PMID:28002438
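    A minimal sketch of a words-and-phrases classifier of this kind, assuming scikit-learn, is given below; the example evaluations, labels and hyperparameters are illustrative and are not the ADDM implementation.

        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.pipeline import make_pipeline

        evaluations = ["delayed speech and repetitive behaviors noted",
                       "age appropriate language and social play"]
        labels = [1, 0]                          # 1 = meets the surveillance case criteria

        model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),   # words and two-word phrases
                              RandomForestClassifier(n_estimators=200, random_state=0))
        model.fit(evaluations, labels)
        print(model.predict(["repetitive behaviors and delayed speech"]))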

  8. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1995-01-01

    Several significant accomplishments were made during the present reporting period. (1) Initial simulations to understand the applicability of the MODerate Resolution Imaging Spectrometer (MODIS) 1380 nm band for removing the effects of stratospheric aerosols and thin cirrus clouds were completed using a model for an aged volcanic aerosol. The results suggest that very simple procedures requiring no a priori knowledge of the optical properties of the stratospheric aerosol may be as effective as complex procedures requiring full knowledge of the aerosol properties, except the concentration which is estimated from the reflectance at 1380 nm. The limitations of this conclusion will be examined in the next reporting period; (2) The lookup tables employed in the implementation of the atmospheric correction algorithm have been modified in several ways intended to improve the accuracy and/or speed of processing. These have been delivered to R. Evans for implementation into the MODIS prototype processing algorithm for testing; (3) A method was developed for removal of the effects of the O2 'A' absorption band from SeaWiFS band 7 (745-785 nm). This is important in that SeaWiFS imagery will be used as a test data set for the MODIS atmospheric correction algorithm over the oceans; and (4) Construction of a radiometer, and associated deployment boom, for studying the spectral reflectance of oceanic whitecaps at sea was completed. The system was successfully tested on a cruise off Hawaii on which whitecaps were plentiful during October-November. This data set is now under analysis.

  9. Amphioxus and ascidian Dmbx homeobox genes give clues to the vertebrate origins of midbrain development.

    PubMed

    Takahashi, Tokiharu; Holland, Peter W H

    2004-07-01

    The ancestral chordate neural tube had a tripartite structure, comprising anterior, midbrain-hindbrain boundary (MHB) and posterior regions. The most anterior region encompasses both forebrain and midbrain in vertebrates. It is not clear when or how the distinction between these two functionally and developmentally distinct regions arose in evolution. Recently, we reported a mouse PRD-class homeobox gene, Dmbx1, expressed in the presumptive midbrain at early developmental stages, and the hindbrain at later stages, with exclusion from the MHB. This gene provides a route to investigate the evolution of midbrain development. We report the cloning, genomic structure, phylogeny and embryonic expression of Dmbx genes from amphioxus and from Ciona, representing the two most closely related lineages to the vertebrates. Our analyses show that Dmbx genes form a distinct, ancient, homeobox gene family, with highly conserved sequence and genomic organisation, albeit more divergent in Ciona. In amphioxus, no Dmbx expression is observed in the neural tube, supporting previous arguments that the MHB equivalent region has been secondarily modified in evolution. In Ciona, the CiDmbx gene is detected in neural cells caudal to Pax2/5/8-positive cells (MHB homologue), in the Hox-positive region, but, interestingly, not in any cells rostral to them. These results suggest that a midbrain homologue is missing in Ciona, and argue that midbrain development is a novelty that evolved specifically on the vertebrate lineage. We discuss the evolution of midbrain development in relation to the ancestry of the tripartite neural ground plan and the origin of the MHB organiser.

  10. [The origin, diffusion and development of healing doctrines in medical history--exemplified by homeopathy].

    PubMed

    Schmidt, Josef M

    2007-01-01

    As a paradigmatic case study of the origin, spread, and development of medical systems, this paper investigates the 200-year history of homeopathy from different perspectives of medical history. On the basis of new research on Samuel Hahnemann (1755-1843), first, a concise and critical overview of the principles, explanations, and implications of his doctrine is presented. The historical, conceptual, and social background of the founder of homeopathy is then elaborated in terms of the history of medicine, science, philosophy, sociology, culture, and ideas, as well as theory of science, theory of communication, and sociology of science. The process of the worldwide spread of homeopathy is examined from different points of view, ranging from the history of heroes, institutions, professionalisation, politics, economics, religion, and organisations to the history of patients, perception, and semiotics. Finally, a comparative approach to the different development and status of homeopathy in different countries results in the extraction of a set of crucial variables, such as charismatic personage, influential patronage, economic sponsorship, political protection, media support, and patients' demand, which might explain a major part of these differences. Eventually, the notorious splits of homeopathy's doctrine suggest the idea that--in analogy to the theory of evolution--a variety of concurrent strains (rather than one monolithic block) of a doctrine may prove to be a kind of survival advantage. In conclusion, acceptance and relevance of medical systems are determined by many factors. Since external ones usually outweigh internal ones, medical history may offer a broader and more comprehensive understanding of the dynamics of their spread and development than clinical trials and scientific objection alone.

  11. A prototype hail detection algorithm and hail climatology developed with the advanced microwave sounding unit (AMSU)

    NASA Astrophysics Data System (ADS)

    Ferraro, Ralph; Beauchamp, James; Cecil, Daniel; Heymsfield, Gerald

    2015-09-01

    In previous studies published in the open literature, a strong relationship between the occurrence of hail and the microwave brightness temperatures (primarily at 37 and 85 GHz) was documented. These studies were performed with the Nimbus-7 Scanning Multichannel Microwave Radiometer (SMMR), the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) and, most recently, the Aqua Advanced Microwave Scanning Radiometer (AMSR-E) sensor. This led to climatologies of hail frequency from TMI and AMSR-E; however, limitations included the geographical domain of the TMI sensor (35 S to 35 N) and the overpass time of the Aqua satellite (1:30 am/pm local time), both of which limit accurate mapping of hail events over the global domain and the full diurnal cycle. Nonetheless, these studies presented exciting, new applications for passive microwave sensors. NOAA and EUMETSAT have been operating the Advanced Microwave Sounding Unit (AMSU-A and -B) and the Microwave Humidity Sounder (MHS) on several operational satellites since 1998: NOAA-15 through NOAA-19; MetOp-A and -B. With multiple satellites in operation since 2000, the AMSU/MHS sensors provide near global coverage every 4 h, thus offering much greater spatial and temporal sampling than TRMM or AMSR-E. With similar observation frequencies near 30 and 85 GHz, one at 157 GHz, and additionally three at the 183 GHz water vapor band, the potential exists to detect strong convection associated with severe storms on a more comprehensive time and space scale. In this study, we develop a prototype AMSU-based hail detection algorithm through the use of collocated satellite and surface hail reports over the continental US for a 10-year period (2000-2009). Compared with the surface observations, the algorithm detects approximately 40% of hail occurrences. The simple threshold algorithm is then used to generate a hail climatology based on all available AMSU observations during 2000-2011 that is stratified in several ways

  12. A Prototype Hail Detection Algorithm and Hail Climatology Developed with the Advanced Microwave Sounding Unit (AMSU)

    NASA Technical Reports Server (NTRS)

    Ferraro, Ralph; Beauchamp, James; Cecil, Dan; Heymsfeld, Gerald

    2015-01-01

    In previous studies published in the open literature, a strong relationship between the occurrence of hail and the microwave brightness temperatures (primarily at 37 and 85 GHz) was documented. These studies were performed with the Nimbus-7 SMMR, the TRMM Microwave Imager (TMI) and, most recently, the Aqua AMSR-E sensor. This led to climatologies of hail frequency from TMI and AMSR-E; however, limitations included the geographical domain of the TMI sensor (35 S to 35 N) and the overpass time of the Aqua satellite (1:30 am/pm local time), both of which reduce an accurate mapping of hail events over the global domain and the full diurnal cycle. Nonetheless, these studies presented exciting, new applications for passive microwave sensors. Since 1998, NOAA and EUMETSAT have been operating the AMSU-A/B and the MHS on several operational satellites: NOAA-15 through NOAA-19; MetOp-A and -B. With multiple satellites in operation since 2000, the AMSU/MHS sensors provide near global coverage every 4 hours, thus offering much greater spatial and temporal sampling than TRMM or AMSR-E. With similar observation frequencies near 30 and 85 GHz and additionally three at the 183 GHz water vapor band, the potential exists to detect strong convection associated with severe storms on a more comprehensive time and space scale. In this study, we develop a prototype AMSU-based hail detection algorithm through the use of collocated satellite and surface hail reports over the continental U.S. for a 12-year period (2000-2011). Compared with the surface observations, the algorithm detects approximately 40 percent of hail occurrences. The simple threshold algorithm is then used to generate a hail climatology based on all available AMSU observations during 2000-11 that is stratified in several ways, including total hail occurrence by month (March through September), total annual, and over the diurnal cycle. Independent comparisons are made with similar data sets derived from other
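    The flavour of such a simple threshold algorithm can be sketched as follows; the channels and threshold values are placeholders, not the thresholds derived in the study.

        def likely_hail(tb_31ghz_k, tb_89ghz_k, tb_183ghz_k,
                        thr_31=270.0, thr_89=230.0, thr_183=240.0):
            """Flag a footprint as a hail candidate when window- and water-vapor-channel
            brightness temperatures are strongly depressed by ice scattering."""
            return tb_31ghz_k < thr_31 and tb_89ghz_k < thr_89 and tb_183ghz_k < thr_183

        print(likely_hail(250.0, 205.0, 215.0))   # strongly scattering storm -> True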

  13. DEVELOPMENT AND TESTING OF FAULT-DIAGNOSIS ALGORITHMS FOR REACTOR PLANT SYSTEMS

    SciTech Connect

    Grelle, Austin L.; Park, Young S.; Vilim, Richard B.

    2016-06-26

    Argonne National Laboratory is further developing fault diagnosis algorithms for use by the operator of a nuclear plant to aid in improved monitoring of overall plant condition and performance. The objective is better management of plant upsets through more timely, informed decisions on control actions with the ultimate goal of improved plant safety, production, and cost management. Integration of these algorithms with visual aids for operators is taking place through a collaboration under the concept of an operator advisory system. This is a software entity whose purpose is to manage and distill the enormous amount of information an operator must process to understand the plant state, particularly in off-normal situations, and how the state trajectory will unfold in time. The fault diagnosis algorithms were exhaustively tested using computer simulations of twenty different faults introduced into the chemical and volume control system (CVCS) of a pressurized water reactor (PWR). The algorithms are unique in that each new application to a facility requires providing only the piping and instrumentation diagram (PID) and no other plant-specific information; a subject-matter expert is not needed to install and maintain each instance of an application. The testing approach followed accepted procedures for verifying and validating software. It was shown that the code satisfies its functional requirement which is to accept sensor information, identify process variable trends based on this sensor information, and then to return an accurate diagnosis based on chains of rules related to these trends. The validation and verification exercise made use of GPASS, a one-dimensional systems code, for simulating CVCS operation. Plant components were failed and the code generated the resulting plant response. Parametric studies with respect to the severity of the fault, the richness of the plant sensor set, and the accuracy of sensors were performed as part of the validation

  14. Description of ALARMA: the alarm algorithm developed for the Nuclear Car Wash

    SciTech Connect

    Luu, T; Biltoft, P; Church, J; Descalle, M; Hall, J; Manatt, D; Mauger, J; Norman, E; Petersen, D; Pruet, J; Prussin, S; Slaughter, D

    2006-11-28

    The goal of any alarm algorithm should be to provide the necessary tools to derive confidence limits on whether fissile material is present in cargo containers. It should be able to extract these limits from (usually) noisy and/or weak data while maintaining a false alarm rate (FAR) that is economically suitable for port operations. It should also be able to perform its analysis within a reasonably short amount of time (i.e. {approx} seconds). To achieve this, it is essential that the algorithm be able to identify and subtract any interference signature that might otherwise be confused with a fissile signature. Lastly, the algorithm itself should be user-intuitive and user-friendly so that port operators with little or no experience with detection algorithms may use it with relative ease. In support of the Nuclear Car Wash project at Lawrence Livermore Laboratory, we have developed an alarm algorithm that satisfies the above requirements. The description of this alarm algorithm, dubbed ALARMA, is the purpose of this technical report. The experimental setup of the nuclear car wash has been well documented [1, 2, 3]. The presence of fissile materials is inferred by examining the {beta}-delayed gamma spectrum induced after a brief neutron irradiation of cargo, particularly in the high-energy region above approximately 2.5 MeV. In this region naturally occurring gamma rays are virtually non-existent. Thermal-neutron induced fission of {sup 235}U and {sup 239}Pu, on the other hand, leaves a unique {beta}-delayed spectrum [4]. This spectrum comes from decays of fission products having half-lives as large as 30 seconds, many of which have high Q-values. Since high-energy photons penetrate matter more freely, it is natural to look for unique fissile signatures in this energy region after neutron irradiation. The goal of this interrogation procedure is a 95% success rate of detection of as little as 5 kilograms of fissile material while retaining

  15. Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms

    SciTech Connect

    Birkholz, Adam B.; Schlegel, H. Bernhard

    2015-12-28

    The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster.
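    For reference, the quantity being minimized can be written compactly as follows (the notation is ours, with the path parameterized on [0, 1]):

        \mathrm{VRE}[\mathbf{x}] \;=\; \int_0^1 \bigl\lVert \nabla V(\mathbf{x}(t)) \bigr\rVert \,
        \bigl\lVert \mathbf{x}'(t) \bigr\rVert \, dt ,
        \qquad \mathbf{x}(t) \;=\; \sum_k c_k \, \phi_k(t),

    where V is the potential energy surface, the phi_k are the continuous basis functions, and the expansion coefficients c_k (subject to the endpoint conditions x(0) = reactants and x(1) = products) are the variables of the minimization.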

  16. Development of a pharmacogenetic-guided warfarin dosing algorithm for Puerto Rican patients

    PubMed Central

    Ramos, Alga S; Seip, Richard L; Rivera-Miranda, Giselle; Felici-Giovanini, Marcos E; Garcia-Berdecia, Rafael; Alejandro-Cowan, Yirelia; Kocherla, Mohan; Cruz, Iadelisse; Feliu, Juan F; Cadilla, Carmen L; Renta, Jessica Y; Gorowski, Krystyna; Vergara, Cunegundo; Ruaño, Gualberto; Duconge, Jorge

    2012-01-01

    Aim This study was aimed at developing a pharmacogenetic-driven warfarin-dosing algorithm in 163 admixed Puerto Rican patients on stable warfarin therapy. Patients & methods A multiple linear-regression analysis was performed using log-transformed effective warfarin dose as the dependent variable, and combining CYP2C9 and VKORC1 genotyping with other relevant nongenetic clinical and demographic factors as independent predictors. Results The model explained more than two-thirds of the observed variance in the warfarin dose among Puerto Ricans, and also produced significantly better ‘ideal dose’ estimates than two pharmacogenetic models and clinical algorithms published previously, with the greatest benefit seen in patients ultimately requiring <7 mg/day. We also assessed the clinical validity of the model using an independent validation cohort of 55 Puerto Rican patients from Hartford, CT, USA (R2 = 51%). Conclusion Our findings provide the basis for planning prospective pharmacogenetic studies to demonstrate the clinical utility of genotyping warfarin-treated Puerto Rican patients. PMID:23215886
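    The model form, ordinary least squares on the log-transformed dose with genotype and clinical covariates, can be sketched as follows; the predictors, their coding and the synthetic data are illustrative assumptions, not the published algorithm or its coefficients.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 163
        X = np.column_stack([np.ones(n),
                             rng.integers(0, 3, n),     # CYP2C9 variant allele count (0-2)
                             rng.integers(0, 3, n),     # VKORC1 -1639 A allele count (0-2)
                             rng.normal(70, 15, n),     # weight (kg)
                             rng.integers(21, 90, n)])  # age (years)
        log_dose = X @ np.array([2.0, -0.3, -0.4, 0.004, -0.006]) + rng.normal(0, 0.2, n)

        beta, *_ = np.linalg.lstsq(X, log_dose, rcond=None)   # fitted regression coefficients
        predicted_dose_mg_per_day = np.exp(X @ beta)          # back-transform to a dose estimate
        print(beta.round(3), predicted_dose_mg_per_day[:3].round(1))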

  17. White Light Modeling, Algorithm Development, and Validation on the Micro-arcsecond Metrology Testbed

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Regher, Martin; Shen, Tsae Pyng

    2004-01-01

    The Space Interferometry Mission (SIM), scheduled for launch in early 2010, is an optical interferometer that will perform narrow angle and global wide angle astrometry with unprecedented accuracy, providing differential position accuracies of 1 μas and global accuracies of 4 μas in position, proper motion and parallax. The astrometric observations of the SIM instrument are performed via delay measurements provided by three Michelson-type, white light interferometers. Two 'guide' interferometers acquire fringes on bright guide stars in order to make highly precise measurements of variations in spacecraft attitude, while the third interferometer performs the science measurement. SIM derives its performance from a combination of precise fringe measurements of the interfered starlight (a few ten-thousandths of a wave) and very precise (tens of picometers) relative distance measurements made between a set of fiducials. The focus of the present paper is on the development and analysis of algorithms for accurate white light estimation, and on validating some of these algorithms on the Micro-arcsecond Metrology Testbed.

  18. Development of a computer algorithm for feedback controlled electrical nerve fiber stimulation.

    PubMed

    Doruk, R Özgür

    2011-09-01

    The purpose of this research is to develop an algorithm for a feedback-controlled local electrical nerve fiber stimulation system intended to stop repetitive firing in a particular region of the nervous system. The electrophysiological behavior of the neurons (under electrical currents) is modeled by Hodgkin-Huxley (HH) type nonlinear nerve fiber dynamics. The repetitive firing in the modeled fiber is due to deviations in the channel parameters, which is known as a bifurcation in nonlinear systems theory. A washout filter is augmented to the HH dynamics, and the output of the filter is fed to the external current generator through a linear gain. This gain is computed by linear projective control theory, a linear output feedback control technique in which the closed-loop spectrum of the full state feedback design is partially maintained. By obtaining a spectrum of eigenvalues with completely negative real parts, the nerve fiber can be relaxed to the equilibrium point with or without a damped oscillation. The MATLAB script applying the theory of this work is provided at the end of the paper. A MATLAB-Simulink computer simulation is performed in order to verify the algorithm.

  19. Development of an algorithm for production of inactivated arbovirus antigens in cell culture

    PubMed Central

    Goodman, C.H.; Russell, B.J.; Velez, J.O.; Laven, J.J.; Nicholson, W.L; Bagarozzi, D.A.; Moon, J.L.; Bedi, K.; Johnson, B.W.

    2015-01-01

    Arboviruses are medically important pathogens that cause human disease ranging from a mild fever to encephalitis. Laboratory diagnosis is essential to differentiate arbovirus infections from other pathogens with similar clinical manifestations. The Arboviral Diseases Branch (ADB) reference laboratory at the CDC Division of Vector-Borne Diseases (DVBD) produces reference antigens used in serological assays such as the virus-specific immunoglobulin M antibody-capture enzyme-linked immunosorbent assay (MAC-ELISA). Antigen production in cell culture has largely replaced the use of suckling mice; however, the methods are not directly transferable. The development of a cell culture antigen production algorithm for nine arboviruses from the three main arbovirus families, Flaviviridae, Togaviridae, and Bunyaviridae, is described here. Virus cell culture growth and harvest conditions were optimized, inactivation methods were evaluated, and concentration procedures were compared for each virus. Antigen performance was evaluated by the MAC-ELISA at each step of the procedure. The antigen production algorithm is a framework for standardization of methodology and quality control; however, a single antigen production protocol was not applicable to all arboviruses and needed to be optimized for each virus. PMID:25102428

  20. Further development of image processing algorithms to improve detectability of defects in Sonic IR NDE

    NASA Astrophysics Data System (ADS)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2017-02-01

    Sonic Infrared imaging (SIR) technology is a relatively new NDE technique that has received significant acceptance in the NDE community. SIR NDE is a super-fast, wide-range NDE method. The technology uses short pulses of ultrasonic excitation together with infrared imaging to detect defects in the structures under inspection. Defects become visible to the IR camera when the temperature in the crack vicinity increases due to various heating mechanisms in the specimen. Defect detection is strongly affected by noise levels as well as mode patterns in the image. Mode patterns result from the superposition of sonic waves interfering within the specimen during the application of the sound pulse. Mode patterns can be a serious concern, especially in composite structures: they can either mimic real defects in the specimen or, alternatively, hide defects that they overlap. At last year's QNDE, we presented algorithms to improve defect detectability in severe noise. In this paper, we present our development of defect-extraction algorithms that specifically target mode patterns in SIR images.

  1. Development of an algorithm for production of inactivated arbovirus antigens in cell culture.

    PubMed

    Goodman, C H; Russell, B J; Velez, J O; Laven, J J; Nicholson, W L; Bagarozzi, D A; Moon, J L; Bedi, K; Johnson, B W

    2014-11-01

    Arboviruses are medically important pathogens that cause human disease ranging from a mild fever to encephalitis. Laboratory diagnosis is essential to differentiate arbovirus infections from other pathogens with similar clinical manifestations. The Arboviral Diseases Branch (ADB) reference laboratory at the CDC Division of Vector-Borne Diseases (DVBD) produces reference antigens used in serological assays such as the virus-specific immunoglobulin M antibody-capture enzyme-linked immunosorbent assay (MAC-ELISA). Antigen production in cell culture has largely replaced the use of suckling mice; however, the methods are not directly transferable. The development of a cell culture antigen production algorithm for nine arboviruses from the three main arbovirus families, Flaviviridae, Togaviridae, and Bunyaviridae, is described here. Virus cell culture growth and harvest conditions were optimized, inactivation methods were evaluated, and concentration procedures were compared for each virus. Antigen performance was evaluated by the MAC-ELISA at each step of the procedure. The antigen production algorithm is a framework for standardization of methodology and quality control; however, a single antigen production protocol was not applicable to all arboviruses and needed to be optimized for each virus.

  2. Development of a neonate lung reconstruction algorithm using a wavelet AMG and estimated boundary form.

    PubMed

    Bayford, R; Kantartzis, P; Tizzard, A; Yerworth, R; Liatsis, P; Demosthenous, A

    2008-06-01

    Objective, non-invasive measures of lung maturity and development, oxygen requirements and lung function, suitable for use in small, unsedated infants, are urgently required to define the nature and severity of persisting lung disease, and to identify risk factors for developing chronic lung problems. Disorders of lung growth, maturation and control of breathing are among the most important problems faced by neonatologists. At present, no system exists for continuous monitoring of neonate lung function in intensive care units to reduce the risk of chronic lung disease in infancy. We are in the process of developing a new integrated electrical impedance tomography (EIT) system, based on wearable technology, that integrates measures of the boundary diameter derived from the boundary form of neonates into the reconstruction algorithm. In principle, this approach could reduce image artefacts in the reconstructed image associated with incorrect boundary form assumptions. In this paper, we investigate the accuracy of the boundary form required to minimize artefacts in the reconstruction of neonate lung function. The number of data points needed to create the required boundary form is determined automatically using genetic algorithms. The approach presented in this paper examines the quality of the reconstruction under different approximations to the ideal boundary form. We also investigate the use of a wavelet algebraic multi-grid (WAMG) preconditioner to reduce the computational requirements of the reconstruction. Results are presented that demonstrate that a full 3D model is required to minimize artefacts in the reconstructed image, and the implementation of a WAMG for EIT is demonstrated.

  3. Not So Rare Earth? New Developments in Understanding the Origin of the Earth and Moon

    NASA Technical Reports Server (NTRS)

    Righter, Kevin

    2007-01-01

    A widely accepted model for the origin of the Earth and Moon has been a somewhat specific giant impact scenario involving an impactor to proto-Earth mass ratio of 3:7, occurring 50-60 Ma after T(sub 0), when the Earth was only half accreted, with the majority of Earth's water then accreted after the main stage of growth, perhaps from comets. There have been many changes to this specific scenario, due to advances in isotopic and trace element geochemistry, more detailed, improved, and realistic giant impact and terrestrial planet accretion modeling, and consideration of terrestrial water sources other than high D/H comets. The current scenario is that the Earth accreted faster and differentiated quickly, the Moon-forming impact could have been mid to late in the accretion process, and water may have been present during accretion. These new developments have broadened the range of conditions required to make an Earth-Moon system, and suggest there may be many new fruitful avenues of research. There are also some classic and unresolved problems such as the significance of the identical O isotopic composition of the Earth and Moon, the depletion of volatiles in the lunar mantle relative to Earth's, the relative contribution of the impactor and proto-Earth to the Moon's mass, and the timing of Earth's possible atmospheric loss relative to the giant impact.

  4. A Review: Origins of the Dielectric Properties of Proteins and Potential Development as Bio-Sensors

    PubMed Central

    Bibi, Fabien; Villain, Maud; Guillaume, Carole; Sorli, Brice; Gontard, Nathalie

    2016-01-01

    Polymers can be classified as synthetic polymers and natural polymers, and are often characterized by their most typical functions, namely their high mechanical resistivity, electrical conductivity and dielectric properties. This bibliography report consists of: (i) Defining the origins of the dielectric properties of natural polymers by reviewing proteins. Despite their complex molecular chains, proteins present several points of interest, particularly their charge content, which confers their electrical and dielectric properties; (ii) Identifying factors influencing the dielectric properties of protein films. The effects of vapors and gases such as water vapor, oxygen, carbon dioxide, ammonia and ethanol on the dielectric properties are put forward; (iii) Finally, potential development of protein films as bio-sensors coated on electronic devices for detection of environmental changes, particularly humidity or carbon dioxide content, in relation with dielectric property variations is discussed. As the study of the dielectric properties implies applying an electric field to the material, it was necessary to evaluate the impact of frequency on the polymers and subsequently on their structure. Characterization techniques, on the one hand dielectric spectroscopy devoted to the determination of the glass transition temperature among others, and on the other hand other techniques such as infra-red spectroscopy for structure characterization as a function of moisture content for instance, are also introduced. PMID:27527179

  5. A Review: Origins of the Dielectric Properties of Proteins and Potential Development as Bio-Sensors.

    PubMed

    Bibi, Fabien; Villain, Maud; Guillaume, Carole; Sorli, Brice; Gontard, Nathalie

    2016-08-04

    Polymers can be classified as synthetic polymers and natural polymers, and are often characterized by their most typical functions, namely their high mechanical resistivity, electrical conductivity and dielectric properties. This bibliography report consists of: (i) Defining the origins of the dielectric properties of natural polymers by reviewing proteins. Despite their complex molecular chains, proteins present several points of interest, particularly their charge content, which confers their electrical and dielectric properties; (ii) Identifying factors influencing the dielectric properties of protein films. The effects of vapors and gases such as water vapor, oxygen, carbon dioxide, ammonia and ethanol on the dielectric properties are put forward; (iii) Finally, potential development of protein films as bio-sensors coated on electronic devices for detection of environmental changes, particularly humidity or carbon dioxide content, in relation with dielectric property variations is discussed. As the study of the dielectric properties implies applying an electric field to the material, it was necessary to evaluate the impact of frequency on the polymers and subsequently on their structure. Characterization techniques, on the one hand dielectric spectroscopy devoted to the determination of the glass transition temperature among others, and on the other hand other techniques such as infra-red spectroscopy for structure characterization as a function of moisture content for instance, are also introduced.

  6. The Age Specific Incidence Anomaly Suggests that Cancers Originate During Development

    NASA Astrophysics Data System (ADS)

    Brody, James P.

    The accumulation of genetic alterations causes cancers. Since this accumulation takes time, the incidence of most cancers is thought to increase exponentially with age. However, careful measurements of the age-specific incidence show that the specific incidence for many forms of cancer rises with age to a maximum, and then decreases. This decrease in the age-specific incidence with age is an anomaly. Understanding this anomaly should lead to a better understanding of how tumors develop and grow. Here we derive the form of the age-specific incidence, showing that it should follow the shape of a Weibull distribution. Measurements indicate that the age-specific incidence for colon cancer does indeed follow a Weibull distribution. This analysis leads to the interpretation that for colon cancer two subpopulations exist in the general population: a susceptible population and an immune population. Colon tumors will only occur in the susceptible population. This analysis is consistent with the developmental origins of disease hypothesis and is generalizable to many other common forms of cancer.
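    The claim can be summarized compactly in our notation (not the paper's): with S the susceptible fraction of the population and k, lambda the Weibull shape and scale parameters,

        I(t) \;\approx\; S \, \frac{k}{\lambda} \left(\frac{t}{\lambda}\right)^{k-1} e^{-(t/\lambda)^{k}},

    so the age-specific incidence I(t) rises with age while the susceptible pool remains large and then falls as that pool is exhausted, while the immune fraction 1 - S never contributes.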

  7. Life origination and development hydrate theory (LOH-Theory) in the context of biological, physicochemical, astrophysical, and paleontological studies

    NASA Astrophysics Data System (ADS)

    Ostrovskii, V. E.; Kadyshevich, E. A.

    2014-04-01

    To date, we have formulated and developed the Life Origination Hydrate Theory (LOH-Theory) and the Mitosis and Replication Hydrate Theory (MRH-Theory) as instruments for understanding the physical and chemical mechanisms applied by Nature in the origination and propagation of living matter. This work aims to coordinate these theories with paleontological and astrophysical knowledge and with hypotheses about the remote histories of the Earth and Solar System.

  8. NASA's Physics of the Cosmos and Cosmic Origins Technology Development Programs

    NASA Technical Reports Server (NTRS)

    Pham, Thai; Seery, Bernard; Ganel, Opher

    2016-01-01

    The strategic astrophysics missions of the coming decades will help answer the questions "How did our universe begin and evolve?" and "How did galaxies, stars, and planets come to be?" Enabling these missions requires advances in key technologies far beyond the current state of the art. NASA's Physics of the Cosmos (PCOS) and Cosmic Origins (COR) Program Offices manage technology maturation projects funded through the Strategic Astrophysics Technology (SAT) program to accomplish such advances. The PCOS and COR Program Offices, residing at the NASA Goddard Space Flight Center (GSFC), were established in 2011, and serve as the implementation arm for the Astrophysics Division at NASA Headquarters. We present an overview of the Programs' technology development activities and the current technology investment portfolio of 23 technology advancements. We discuss the process for addressing community-provided technology gaps and Technology Management Board (TMB)-vetted prioritization and investment recommendations that inform the SAT program. The process improves the transparency and relevance of our technology investments, provides the community a voice in the process, and promotes targeted external technology investments by defining needs and identifying customers. The Programs' priorities are driven by strategic direction from the Astrophysics Division, which is informed by the National Research Council's (NRC) "New Worlds, New Horizons in Astronomy and Astrophysics" (NWNH) 2010 Decadal Survey report [1], the Astrophysics Implementation Plan (AIP) [2] as updated, and the Astrophysics Roadmap "Enduring Quests, Daring Visions" [3]. These priorities include technology development for missions to study dark energy, gravitational waves, X-ray and inflation probe science, and large far-infrared (IR) and ultraviolet (UV)/optical/IR telescopes to conduct imaging and spectroscopy studies. The SAT program is the Astrophysics Division's main investment method to mature technologies

  9. NASA's Physics of the Cosmos and Cosmic Origins programs manage Strategic Astrophysics Technology (SAT) development

    NASA Astrophysics Data System (ADS)

    Pham, Thai; Thronson, Harley; Seery, Bernard; Ganel, Opher

    2016-07-01

    The strategic astrophysics missions of the coming decades will help answer the questions "How did our universe begin and evolve?" "How did galaxies, stars, and planets come to be?" and "Are we alone?" Enabling these missions requires advances in key technologies far beyond the current state of the art. NASA's Physics of the Cosmos2 (PCOS), Cosmic Origins3 (COR), and Exoplanet Exploration Program4 (ExEP) Program Offices manage technology maturation projects funded through the Strategic Astrophysics Technology (SAT) program to accomplish such advances. The PCOS and COR Program Offices, residing at the NASA Goddard Space Flight Center (GSFC), were established in 2011, and serve as the implementation arm for the Astrophysics Division at NASA Headquarters. We present an overview of the Programs' technology development activities and the current technology investment portfolio of 23 technology advancements. We discuss the process for addressing community-provided technology gaps and Technology Management Board (TMB)-vetted prioritization and investment recommendations that inform the SAT program. The process improves the transparency and relevance of our technology investments, provides the community a voice in the process, and promotes targeted external technology investments by defining needs and identifying customers. The Programs' priorities are driven by strategic direction from the Astrophysics Division, which is informed by the National Research Council's (NRC) "New Worlds, New Horizons in Astronomy and Astrophysics" (NWNH) 2010 Decadal Survey report [1], the Astrophysics Implementation Plan (AIP) [2] as updated, and the Astrophysics Roadmap "Enduring Quests, Daring Visions" [3]. These priorities include technology development for missions to study dark energy, gravitational waves, X-ray and inflation probe science, and large far-infrared (IR) and ultraviolet (UV)/optical/IR telescopes to conduct imaging and spectroscopy studies. The SAT program is the

  10. Development of algorithms for detection of mechanical injury on white mushrooms (Agaricus bisporus) using hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Gowen, A. A.; O'Donnell, C. P.

    2009-05-01

    White mushrooms were subjected to mechanical injury by controlled shaking in a plastic box at 400 rpm for different times (0, 60, 120, 300 and 600 s). Immediately after shaking, hyperspectral images were obtained using two pushbroom line-scanning hyperspectral imaging instruments, one operating in the wavelength range of 400 - 1000 nm with spectroscopic resolution of 5 nm, the other operating in the wavelength range of 950 - 1700 nm with spectroscopic resolution of 7 nm. Different spectral and spatial pretreatments were investigated to reduce the effect of sample curvature on hyperspectral data. Algorithms based on Chemometric techniques (Principal Component Analysis and Partial Least Squares Discriminant Analysis) and image processing methods (masking, thresholding, morphological operations) were developed for pixel classification in hyperspectral images. In addition, correlation analysis, spectral angle mapping and scaled difference of sample spectra were investigated and compared with the chemometric approaches.
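    As a rough illustration of the chemometric pixel-classification step, the sketch below unfolds a hyperspectral cube into a pixel-by-wavelength matrix, projects it onto principal components, and thresholds a score image. The cube, component count, and threshold are invented placeholders, not the paper's calibrated data or decision rule.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        cube = rng.random((50, 50, 120))            # rows x cols x wavelengths (synthetic)
        rows, cols, bands = cube.shape

        X = cube.reshape(-1, bands)                 # unfold: one spectrum per pixel
        scores = PCA(n_components=3).fit_transform(X)

        pc1_image = scores[:, 0].reshape(rows, cols)
        damage_mask = pc1_image > np.percentile(pc1_image, 95)   # illustrative threshold
        print("flagged pixels:", int(damage_mask.sum()))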

  11. Soft sensor development for Mooney viscosity prediction in rubber mixing process based on GMMDJITGPR algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping

    2017-01-01

    In the rubber mixing process, the key quality parameter (Mooney viscosity), which is used to evaluate the property of the product, can only be obtained offline with a delay of 4-6 h. It would be quite helpful for industry if this parameter could be estimated online. Various data-driven soft sensors have been used for prediction in rubber mixing; however, they often do not perform well because of the phase behavior and nonlinearity of the process. The purpose of this paper is to develop an efficient soft-sensing algorithm to solve this problem. Based on the proposed GMMD local sample selection criterion, phase information is extracted during local modeling. Using Gaussian process local modeling within a just-in-time (JIT) learning framework, the nonlinearity of the process is handled well. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors, using samples from a real industrial rubber mixing process.
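    A minimal just-in-time local-modeling sketch follows. A plain Euclidean-distance neighbour selection stands in for the GMMD criterion named in the abstract, and the process data are synthetic stand-ins for the mixer variables and Mooney viscosity.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        X_hist = rng.random((500, 6))                                          # historical process variables
        y_hist = np.sin(6.0 * X_hist[:, 0]) + 0.05 * rng.standard_normal(500)  # stand-in for Mooney viscosity

        def jit_gpr_predict(x_query, k=50):
            """Fit a local GP on the k most similar historical samples and predict for the query."""
            idx = np.argsort(np.linalg.norm(X_hist - x_query, axis=1))[:k]
            gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
            gp.fit(X_hist[idx], y_hist[idx])
            return gp.predict(x_query[None, :], return_std=True)

        mean, std = jit_gpr_predict(rng.random(6))
        print(f"predicted viscosity = {mean[0]:.3f} +/- {std[0]:.3f}")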

  12. Development of a Low-Lift Chiller Controller and Simplified Precooling Control Algorithm - Final Report

    SciTech Connect

    Gayeski, N.; Armstrong, Peter; Alvira, M.; Gagne, J.; Katipamula, Srinivas

    2011-11-30

    KGS Buildings LLC (KGS) and Pacific Northwest National Laboratory (PNNL) have developed a simplified control algorithm and prototype low-lift chiller controller suitable for model-predictive control in a demonstration project of low-lift cooling. Low-lift cooling is a highly efficient cooling strategy conceived to enable low or net-zero energy buildings. A low-lift cooling system consists of a high efficiency low-lift chiller, radiant cooling, thermal storage, and model-predictive control to pre-cool thermal storage overnight on an optimal cooling rate trajectory. We call the properly integrated and controlled combination of these elements a low-lift cooling system (LLCS). This document is the final report for that project.

  13. Development of Web-Based Menu Planning Support System and its Solution Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Kashima, Tomoko; Matsumoto, Shimpei; Ishii, Hiroaki

    2009-10-01

    Recently lifestyle-related diseases have become an object of public concern, while at the same time people are being more health conscious. As an essential factor for causing the lifestyle-related diseases, we assume that the knowledge circulation on dietary habits is still insufficient. This paper focuses on everyday meals close to our life and proposes a well-balanced menu planning system as a preventive measure of lifestyle-related diseases. The system is developed by using a Web-based frontend and it provides multi-user services and menu information sharing capabilities like social networking services (SNS). The system is implemented on a Web server running Apache (HTTP server software), MySQL (database management system), and PHP (scripting language for dynamic Web pages). For the menu planning, a genetic algorithm is applied by understanding this problem as multidimensional 0-1 integer programming.
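    The 0-1 formulation can be sketched as follows: each candidate dish is a binary gene, and the fitness penalizes deviation of the selected menu's nutrient totals from daily targets. The dish table, targets, and GA settings below are invented for illustration; the paper's actual encoding and constraints may differ.

        import numpy as np

        rng = np.random.default_rng(2)
        n_dishes = 30
        nutrients = rng.uniform(0, 300, (n_dishes, 3))   # kcal, protein, fat per dish (synthetic)
        target = np.array([2000.0, 60.0, 70.0])          # daily targets (illustrative)

        def fitness(menu):
            """Negative squared deviation of the selected dishes' nutrient totals from the targets."""
            return -np.sum(((menu @ nutrients) - target) ** 2)

        def tournament(pop, scores):
            i, j = rng.integers(0, len(pop), 2)
            return pop[i] if scores[i] > scores[j] else pop[j]

        def evolve(pop_size=60, generations=200, p_mut=0.02):
            pop = rng.integers(0, 2, (pop_size, n_dishes))
            for _ in range(generations):
                scores = np.array([fitness(ind) for ind in pop])
                children = []
                for _ in range(pop_size):
                    a, b = tournament(pop, scores), tournament(pop, scores)
                    cut = int(rng.integers(1, n_dishes))            # one-point crossover
                    child = np.concatenate([a[:cut], b[cut:]])
                    child = child ^ (rng.random(n_dishes) < p_mut)  # bit-flip mutation
                    children.append(child)
                pop = np.array(children)
            scores = np.array([fitness(ind) for ind in pop])
            return pop[scores.argmax()]

        best = evolve()
        print("selected dishes:", np.flatnonzero(best))
        print("nutrient totals:", np.round(best @ nutrients, 1))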

  14. Developing image processing meta-algorithms with data mining of multiple metrics.

    PubMed

    Leung, Kelvin; Cunha, Alexandre; Toga, A W; Parker, D Stott

    2014-01-01

    People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation.
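    A small sketch of the meta-algorithm idea follows: several candidate registration results are scored with a battery of metrics, and the candidate with the best consensus rank is kept. The images, candidate set, and rank-averaging rule are assumptions for illustration only.

        import numpy as np

        def mse(a, b):
            return float(np.mean((a - b) ** 2))

        def ncc(a, b):
            a, b = a - a.mean(), b - b.mean()
            return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

        def hist_mi(a, b, bins=32):
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p = joint / joint.sum()
            px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
            nz = p > 0
            return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

        rng = np.random.default_rng(3)
        reference = rng.random((64, 64))
        candidates = [reference + 0.05 * rng.standard_normal((64, 64)) for _ in range(4)]

        # lower MSE is better; higher NCC and MI are better, so negate them before ranking
        table = np.array([[mse(c, reference), -ncc(c, reference), -hist_mi(c, reference)]
                          for c in candidates])
        ranks = table.argsort(axis=0).argsort(axis=0)     # per-metric rank, 0 = best
        best = int(ranks.mean(axis=1).argmin())
        print("selected candidate:", best)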

  15. Develop algorithms to improve detectability of defects in Sonic IR imaging NDE

    NASA Astrophysics Data System (ADS)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2016-02-01

    Sonic Infrared (IR) technology is a relatively new member of the NDE family. It is a fast, wide-area imaging method that combines ultrasound excitation with infrared imaging: the former applies ultrasound energy to the target, inducing frictional heating at defects, while the latter captures the resulting IR emission. This technology can detect both surface and subsurface defects, such as cracks and disbonds/delaminations, in various materials, including metals, metal alloys, and composites. However, certain defects may produce only a very small IR signature that is buried in noise or background heating patterns. In such cases, effectively extracting the defect signal becomes critical to identifying the defects. In this paper, we present algorithms developed to improve the detectability of defects in Sonic IR.

  16. Developing Image Processing Meta-Algorithms with Data Mining of Multiple Metrics

    PubMed Central

    Cunha, Alexandre; Toga, A. W.; Parker, D. Stott

    2014-01-01

    People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation. PMID:24653748

  17. A basis for the development of operational algorithms for simplified GPS integrity checking

    NASA Astrophysics Data System (ADS)

    Parkinson, Bradford W.; Axelrad, Penina

    Error models are developed for a least squares approach to GPS satellite failure detection, and a statistical analysis is presented. The algorithm assumes that the GPS user forms a navigation solution by performing a least squares fit to pseudorange measurements made to five or more satellites in view. Results for a C/A code receiver show that a nominal pseudorange measurement error can be realistically modelled as a normally distributed random variable with a mean ranging from -5 to +5 and a standard deviation of 0.4 m for Doppler aided, and 4.0 m for code only, measurements. Theoretical success rates are presented for specific user geometries and measurement errors.
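    The least-squares failure-detection idea can be illustrated with a redundant set of pseudoranges: solve for position and clock bias, then test the residual sum of squares against a chi-square threshold. The geometry matrix, injected fault, noise level, and threshold below are illustrative assumptions, not values from the paper.

        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(4)
        n_sats, sigma = 6, 4.0                              # ~4 m code-only noise, per the abstract
        H = np.hstack([rng.uniform(-1, 1, (n_sats, 3)),     # line-of-sight directions (illustrative)
                       np.ones((n_sats, 1))])               # receiver clock column
        x_true = np.array([10.0, -5.0, 3.0, 2.0])
        z = H @ x_true + sigma * rng.standard_normal(n_sats)
        z[0] += 50.0                                        # inject a satellite failure

        x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
        residuals = z - H @ x_hat
        test_stat = residuals @ residuals / sigma**2        # ~ chi-square with (n_sats - 4) dof if healthy
        threshold = chi2.ppf(0.999, df=n_sats - 4)
        print("failure detected" if test_stat > threshold else "measurements consistent")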

  18. Algorithm and code development for unsteady three-dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru

    1993-01-01

    In the last two decades, there have been extensive developments in computational aerodynamics, which constitutes a major part of the general area of computational fluid dynamics. Such developments are essential to advance the understanding of the physics of complex flows, to complement expensive wind-tunnel tests, and to reduce the overall design cost of an aircraft, particularly in the area of aeroelasticity. Aeroelasticity plays an important role in the design and development of aircraft, particularly modern aircraft, which tend to be more flexible. Several phenomena that can be dangerous and limit the performance of an aircraft occur because of the interaction of the flow with flexible components. For example, an aircraft with highly swept wings may experience vortex-induced aeroelastic oscillations. Also, undesirable aeroelastic phenomena due to the presence and movement of shock waves occur in the transonic range. Aeroelastically critical phenomena, such as a low transonic flutter speed, have been known to occur through limited wind-tunnel tests and flight tests. Aeroelastic tests require extensive cost and risk. An aeroelastic wind-tunnel experiment is an order of magnitude more expensive than a parallel experiment involving only aerodynamics. By complementing the wind-tunnel experiments with numerical simulations the overall cost of the development of aircraft can be considerably reduced. In order to accurately compute aeroelastic phenomenon it is necessary to solve the unsteady Euler/Navier-Stokes equations simultaneously with the structural equations of motion. These equations accurately describe the flow phenomena for aeroelastic applications. At Ames a code, ENSAERO, is being developed for computing the unsteady aerodynamics and aeroelasticity of aircraft and it solves the Euler/Navier-Stokes equations. The purpose of this contract is to continue the algorithm enhancements of ENSAERO and to apply the code to complicated geometries. During the last year

  19. Developing Multiple Diverse Potential Designs for Heat Transfer Utilizing Graph Based Evolutionary Algorithms

    SciTech Connect

    David J. Muth Jr.

    2006-09-01

    This paper examines the use of graph-based evolutionary algorithms (GBEAs) to find multiple acceptable solutions for heat transfer in engineering systems during the optimization process. GBEAs are a type of evolutionary algorithm (EA) in which a topology, or geography, is imposed on an evolving population of solutions. The rates at which solutions can spread within the population are controlled by the choice of topology. As in nature, geography can be used to develop and sustain diversity within the solution population. Altering the choice of graph can create a more or less diverse population of potential solutions. The choice of graph can also affect the convergence rate for the EA and the number of mating events required for convergence. The engineering system examined in this paper is a biomass-fueled cookstove used in developing nations for household cooking. In this cookstove, wood is combusted in a small combustion chamber and the resulting hot gases are utilized to heat the stove's cooking surface. The spatial temperature profile of the cooking surface is determined by a series of baffles that direct the flow of hot gases. The optimization goal is to find baffle configurations that provide an even temperature distribution on the cooking surface. Often in engineering, the goal of optimization is not to find the single optimum solution but rather to identify a number of good solutions that can be used as a starting point for detailed engineering design. Because of this, a key aspect of evolutionary optimization is the diversity of the solutions found. The key conclusion of this paper is that GBEAs can be used to create the multiple good solutions needed to support engineering design.
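    The role of the graph topology can be sketched with a ring of individuals in which mating events only pair neighbouring nodes, so a good solution spreads slowly and diversity persists. The placeholder fitness function below stands in for the cookstove baffle model.

        import numpy as np

        rng = np.random.default_rng(7)
        n_nodes, n_genes = 40, 16
        pop = rng.random((n_nodes, n_genes))

        def fitness(x):
            return -np.sum((x - 0.5) ** 2)                  # placeholder objective

        for _ in range(2000):                               # mating events
            i = rng.integers(n_nodes)
            j = (i + rng.choice([-1, 1])) % n_nodes         # ring topology: only adjacent nodes mate
            mask = rng.random(n_genes) < 0.5                # uniform crossover
            child = np.where(mask, pop[i], pop[j]) + 0.01 * rng.standard_normal(n_genes)
            loser = i if fitness(pop[i]) < fitness(pop[j]) else j
            if fitness(child) > fitness(pop[loser]):
                pop[loser] = child                          # replacement stays local to the neighbourhood

        print("best fitness:", max(fitness(ind) for ind in pop))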

  20. Development of a new genetic algorithm to solve the feedstock scheduling problem in an anaerobic digester

    NASA Astrophysics Data System (ADS)

    Cram, Ana Catalina

    As worldwide environmental awareness grows, alternative sources of energy have become important for mitigating climate change. Biogas in particular reduces greenhouse gas emissions that contribute to global warming and has the potential to provide 25% of the annual demand for natural gas in the U.S. In 2011, 55,000 metric tons of methane emissions were reduced and 301 metric tons of carbon dioxide emissions were avoided through the use of biogas alone. Biogas is produced by anaerobic digestion through the fermentation of organic material. It is composed mainly of methane, at concentrations ranging from 50 to 80%, with carbon dioxide making up 20 to 50% and small amounts of hydrogen, carbon monoxide, and nitrogen. Biogas production systems are anaerobic digestion facilities, and the optimal operation of an anaerobic digester requires the scheduling of all batches from multiple feedstocks over a specific time horizon. The availability times, biomass quantities, biogas production rates, and storage decay rates must all be taken into account for maximal biogas production to be achieved during the planning horizon. Little work has been done to optimize the scheduling of different types of feedstock in anaerobic digestion facilities so as to maximize the total biogas produced by these systems. Therefore, in the present thesis, a new genetic algorithm is developed to obtain the optimal sequence in which different feedstocks are processed and the optimal time to allocate to each feedstock in the digester, with the main objective of maximizing biogas production while considering different feedstock types, arrival times, and decay rates. Moreover, all batches must be processed in the digester within a specified time, with the restriction that only one batch can be processed at a time. The developed algorithm is applied to three different examples, and a comparison with results obtained in previous studies is presented.

  1. The development and concurrent validity of a real-time algorithm for temporal gait analysis using inertial measurement units.

    PubMed

    Allseits, E; Lučarević, J; Gailey, R; Agrawal, V; Gaunaurd, I; Bennett, C

    2017-04-11

    The use of inertial measurement units (IMUs) for gait analysis has emerged as a tool for clinical applications. Shank gyroscope signals have been utilized to identify heel-strike and toe-off, which serve as the foundation for calculating temporal parameters of gait such as single and double limb support time. Recent publications have shown that toe-off occurs later than predicted by the dual minima method (DMM), which has been adopted as an IMU-based gait event detection algorithm. In this study, a real-time algorithm, Noise-Zero Crossing (NZC), was developed to accurately compute temporal gait parameters. Our objective was to determine the concurrent validity of temporal gait parameters derived from the NZC algorithm against parameters measured by an instrumented walkway. The accuracy and precision of temporal gait parameters derived using NZC were compared to those derived using the DMM. The results from Bland-Altman analysis showed that the NZC algorithm had excellent agreement with the instrumented walkway for identifying the temporal gait parameters of Gait Cycle Time (GCT), Single Limb Support (SLS) time, and Double Limb Support (DLS) time. By utilizing the moment of zero shank angular velocity to identify toe-off, the NZC algorithm performed better than the DMM algorithm in measuring SLS and DLS times. Utilizing the NZC algorithm's gait event detection preserves DLS time, which has significant clinical implications for pathologic gait assessment.
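    The core step the abstract describes, locating moments of zero shank angular velocity, can be illustrated as below. The synthetic gyroscope trace, sampling rate, and sign convention are assumptions; the full NZC event logic (including noise handling and heel-strike identification) follows the paper and is not reproduced here.

        import numpy as np

        fs = 100.0                                          # Hz (assumed sampling rate)
        t = np.arange(0, 5, 1 / fs)
        rng = np.random.default_rng(5)
        omega = np.sin(2 * np.pi * 1.0 * t) + 0.02 * rng.standard_normal(t.size)  # shank angular velocity

        # indices where the signal changes sign between consecutive samples
        crossings = np.flatnonzero(np.signbit(omega[:-1]) != np.signbit(omega[1:]))
        zero_times = t[crossings]
        print("candidate zero-velocity events (s):", np.round(zero_times[:6], 2))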

  2. Prediction system of hydroponic plant growth and development using algorithm Fuzzy Mamdani method

    NASA Astrophysics Data System (ADS)

    Sudana, I. Made; Purnawirawan, Okta; Arief, Ulfa Mediaty

    2017-03-01

    Hydroponics is a method of farming without soil. One hydroponic plant is watercress (Nasturtium officinale). The development and growth of hydroponic watercress are influenced by nutrient levels, acidity, and temperature. These independent variables can be used as the system's input variables to predict the level of plant growth and development. The prediction system uses the Mamdani fuzzy inference method. The system was built using the Fuzzy Inference System (FIS) functions of the Fuzzy Logic Toolbox (FLT) in MATLAB R2007b. An FIS is a computing system that works on the principle of fuzzy reasoning, which is similar to human reasoning. Basically, an FIS consists of four units: a fuzzification unit, a fuzzy-logic reasoning unit, a knowledge-base unit, and a defuzzification unit. The effect of the independent variables on plant growth and development is visualized with the three-dimensional FIS output surface, and statistical tests on data from the prediction system are performed with multiple linear regression, including regression analysis, t-tests, F-tests, the coefficient of determination, and predictor contributions, calculated using SPSS (Statistical Product and Service Solutions) software.
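    A toy Mamdani inference over a single input illustrates the four units listed above (fuzzification, fuzzy reasoning, knowledge base, defuzzification). The membership functions, rule base, and universes below are invented, not those of the published system.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function."""
            return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

        growth_axis = np.linspace(0, 10, 201)               # output universe: growth score

        def predict_growth(nutrient_ppm):
            # fuzzification of the input
            low = tri(nutrient_ppm, 0, 0, 600)
            good = tri(nutrient_ppm, 400, 800, 1200)
            high = tri(nutrient_ppm, 1000, 1600, 1600)
            # rule base with Mamdani (min) implication and max aggregation
            agg = np.maximum.reduce([
                np.minimum(low, tri(growth_axis, 0, 2, 4)),    # low nutrients  -> poor growth
                np.minimum(good, tri(growth_axis, 4, 7, 9)),   # good nutrients -> strong growth
                np.minimum(high, tri(growth_axis, 2, 4, 6)),   # excess         -> moderate growth
            ])
            # centroid defuzzification
            return float(np.sum(growth_axis * agg) / (np.sum(agg) + 1e-12))

        print("predicted growth score:", round(predict_growth(750.0), 2))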

  3. Innovative approach in the development of computer assisted algorithm for spine pedicle screw placement.

    PubMed

    Solitro, Giovanni F; Amirouche, Farid

    2016-04-01

    Pedicle screws are typically used for fusion, percutaneous fixation, and means of gripping a spinal segment. The screws act as a rigid and stable anchor points to bridge and connect with a rod as part of a construct. The foundation of the fusion is directly related to the placement of these screws. Malposition of pedicle screws causes intraoperative complications such as pedicle fractures and dural lesions and is a contributing factor to fusion failure. Computer assisted spine surgery (CASS) and patient-specific drill templates were developed to reduce this failure rate, but the trajectory of the screws remains a decision driven by anatomical landmarks often not easily defined. Current data shows the need of a robust and reliable technique that prevents screw misplacement. Furthermore, there is a need to enhance screw insertion guides to overcome the distortion of anatomical landmarks, which is viewed as a limiting factor by current techniques. The objective of this study is to develop a method and mathematical lemmas that are fundamental to the development of computer algorithms for pedicle screw placement. Using the proposed methodology, we show how we can generate automated optimal safe screw insertion trajectories based on the identification of a set of intrinsic parameters. The results, obtained from the validation of the proposed method on two full thoracic segments, are similar to previous morphological studies. The simplicity of the method, being pedicle arch based, is applicable to vertebrae where landmarks are either not well defined, altered or distorted.

  4. Development of a Near-Real Time Hail Damage Swath Identification Algorithm for Vegetation

    NASA Technical Reports Server (NTRS)

    Bell, Jordan R.; Molthan, Andrew L.; Schultz, Lori A.; McGrath, Kevin M.; Burks, Jason E.

    2015-01-01

    The Midwest is home to one of the world's largest agricultural growing regions. Between late May and early September, and with irrigation and seasonal rainfall, these crops are able to reach full maturity. Using moderate- to high-resolution remote sensors, vegetation can be monitored in the red and near-infrared wavelengths. These wavelengths allow the calculation of vegetation indices, such as the Normalized Difference Vegetation Index (NDVI). Vegetation growth and greenness in this region evolve fairly uniformly as the growing season progresses. However, one of the biggest threats to Midwest vegetation during this period is thunderstorms that bring large hail and damaging winds. Hail and wind damage to crops can be very expensive for growers, and the damage can be spread over long swaths associated with the tracks of the damaging storms. Damage to the vegetation can be apparent in remotely sensed imagery and is visible from space: when storms lightly damage the crops, changes appear slowly over time as the crops wilt, while damage is more readily apparent if the storms strip material from the crops or destroy them completely. Previous work on identifying these hail damage swaths relied on manual interpretation of moderate- and higher-resolution satellite imagery. With the development of an automated, near-real-time hail damage swath identification algorithm, detection can be improved and more damage indicators can be created faster and more efficiently. The automated detection of hail damage swaths will examine short-term, large changes in the vegetation by differencing near-real-time eight-day NDVI composites against post-storm imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard Terra and Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard Suomi NPP. In addition, land surface temperatures from these instruments will be examined as
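    The NDVI-differencing step can be sketched as follows: compute NDVI before and after a storm and flag pixels with a large drop. The reflectance arrays and the drop threshold are synthetic placeholders rather than calibrated MODIS/VIIRS composites.

        import numpy as np

        def ndvi(nir, red):
            return (nir - red) / (nir + red + 1e-9)

        rng = np.random.default_rng(6)
        red_pre = rng.uniform(0.05, 0.10, (200, 200))
        nir_pre = rng.uniform(0.40, 0.60, (200, 200))
        red_post, nir_post = red_pre.copy(), nir_pre.copy()
        nir_post[80:120, :] *= 0.4                          # simulate a hail swath stripping vegetation

        delta = ndvi(nir_post, red_post) - ndvi(nir_pre, red_pre)
        damage_mask = delta < -0.2                          # illustrative drop threshold
        print("damaged pixels:", int(damage_mask.sum()))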

  5. Development of the Tardivo Algorithm to Predict Amputation Risk of Diabetic Foot

    PubMed Central

    Tardivo, João Paulo; Baptista, Maurício S.; Correa, João Antonio; Adami, Fernando; Pinhal, Maria Aparecida Silva

    2015-01-01

    Diabetes is a chronic disease that affects almost 19% of the elderly population in Brazil and similar percentages around the world. Amputation of lower limbs in diabetic patients who present foot complications is a common occurrence, with a significant reduction in quality of life and heavy costs to the health system. Unfortunately, there is no easy protocol to define the conditions that should be considered before proceeding to amputation. The main objective of the present study is to create a simple prognostic score to evaluate the diabetic foot, which is called the Tardivo Algorithm. Calculation of the score is based on three main factors: Wagner classification, signs of peripheral arterial disease (PAD), which is evaluated using the Peripheral Arterial Disease Classification, and the location of ulcers. The final score is obtained by multiplying the values of the individual factors. Patients with good peripheral vascularization received a value of 1, while clinical signs of ischemia received a value of 2 (PAD 2). Ulcer location was defined as forefoot, midfoot, or hindfoot. The conservative treatment used in patients with scores below 12 was based on a recently developed Photodynamic Therapy (PDT) protocol; 85.5% of these patients had a good outcome and avoided amputation. The results showed that scores of 12 or higher represented a significantly higher probability of amputation (odds ratio from logistic regression, 95% CI 12.2–1886.5). The Tardivo Algorithm is a simple prognostic score for the diabetic foot that is easily accessible to physicians. It helps to determine the amputation risk and the best treatment, whether conservative or surgical management. PMID:26281044
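    Because the score is a simple product of factors, it can be written in a few lines. The Wagner grade and ulcer-location values passed in below are illustrative assumptions; the abstract only states that PAD is scored 1 or 2 and that 12 is the decision cutoff.

        def tardivo_score(wagner_grade, pad_class, location_value):
            """Score = Wagner grade x PAD class (1 or 2) x ulcer-location value."""
            return wagner_grade * pad_class * location_value

        score = tardivo_score(wagner_grade=3, pad_class=2, location_value=2)
        print(score, "-> higher amputation risk" if score >= 12 else "-> conservative treatment candidate")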

  6. The development of a near-real time hail damage swath identification algorithm for vegetation

    NASA Astrophysics Data System (ADS)

    Bell, Jordan R.

    The central United States is primarily covered in agricultural lands with a growing season that peaks during the same time as the region's climatological maximum for severe weather. These severe thunderstorms can bring large hail that can cause extensive areas of crop damage, which can be difficult to survey from the ground. Satellite remote sensing can help with the identification of these damaged areas. This study examined three techniques for identifying damage using satellite imagery that could be used in the development of a near-real time algorithm formulated for the detection of damage to agriculture caused by hail. The three techniques: a short term Normalized Difference Vegetation Index (NDVI) change product, a modified Vegetation Health Index (mVHI) that incorporates both NDVI and land surface temperature (LST), and a feature detection technique based on NDVI and LST anomalies were tested on a single training case and five case studies. Skill scores were computed for each of the techniques during the training case and each case study. Among the best-performing case studies, the probability of detection (POD) for the techniques ranged from 0.527 - 0.742. Greater skill was noted for environments that occurred later in the growing season over areas where the land cover was consistently one or two types of uniform vegetation. The techniques struggled in environments where the land cover was not able to provide uniform vegetation, resulting in POD of 0.067 - 0.223. The feature detection technique was selected to be used for the near-real-time algorithm, based on the consistent performance throughout the entire growing season.

  7. Development and evaluation of a micro-macro algorithm for the simulation of polymer flow

    SciTech Connect

    Feigl, Kathleen . E-mail: feigl@mtu.edu; Tanner, Franz X.

    2006-07-20

    A micro-macro algorithm for the calculation of polymer flow is developed and numerically evaluated. The system being solved consists of the momentum and mass conservation equations from continuum mechanics coupled with a microscopic-based rheological model for polymer stress. Standard finite element techniques are used to solve the conservation equations for velocity and pressure, while stochastic simulation techniques are used to compute polymer stress from the simulated polymer dynamics in the rheological model. The rheological model considered combines aspects of reptation, network and continuum models. Two types of spatial approximation are considered for the configuration fields defining the dynamics in the model: piecewise constant and piecewise linear. The micro-macro algorithm is evaluated by simulating the abrupt planar die entry flow of a polyisobutylene solution described in the literature. The computed velocity and stress fields are found to be essentially independent of mesh size and ensemble size, while there is some dependence of the results on the order of spatial approximation to the configuration fields close to the die entry. Comparison with experimental data shows that the piecewise linear approximation leads to better predictions of the centerline first normal stress difference. Finally, the computational time associated with the piecewise constant spatial approximation is found to be about 2.5 times lower than that associated with the piecewise linear approximation. This is the result of the more efficient time integration scheme that is possible with the former type of approximation due to the pointwise incompressibility guaranteed by the choice of velocity-pressure finite element.

  8. Development and validation of a simple algorithm for initiation of CPAP in neonates with respiratory distress in Malawi

    PubMed Central

    Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth

    2015-01-01

    Background Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress, in developing countries including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory Distress and Y=Yes) CPAP, a simple algorithm to identify neonates with respiratory distress who would benefit from CPAP. Objective To validate the TRY CPAP algorithm for neonates with respiratory distress in a low-resource setting. Methods We constructed an algorithm using a combination of vital signs, tone and birth weight to determine the need for CPAP in neonates with respiratory distress. Neonates admitted to the neonatal ward of Queen Elizabeth Central Hospital, in Blantyre, Malawi, were assessed in a prospective, cross-sectional study. Nurses and paediatricians-in-training assessed neonates to determine whether they required CPAP using the TRY CPAP algorithm. To establish the accuracy of the TRY CPAP algorithm in evaluating the need for CPAP, their assessment was compared with the decision of a neonatologist blinded to the TRY CPAP algorithm findings. Results 325 neonates were evaluated over a 2-month period; 13% were deemed to require CPAP by the neonatologist. The inter-rater reliability with the algorithm was 0.90 for nurses and 0.97 for paediatricians-in-training using the neonatologist's assessment as the reference standard. Conclusions The TRY CPAP algorithm has the potential to be a simple and reliable tool to assist nurses and clinicians in identifying neonates who require treatment with CPAP in low-resource settings. PMID:25877290

  9. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  10. Timing of human preimplantation embryonic development is confounded by embryo origin

    PubMed Central

    Kirkegaard, K.; Sundvall, L.; Erlandsen, M.; Hindkjær, J.J.; Knudsen, U.B.; Ingerslev, H.J.

    2016-01-01

    STUDY QUESTION To what extent do patient- and treatment-related factors explain the variation in morphokinetic parameters proposed as embryo viability markers? SUMMARY ANSWER Up to 31% of the observed variation in timing of embryo development can be explained by embryo origin, but no single factor elicits a systematic influence. WHAT IS KNOWN ALREADY Several studies report that culture conditions, patient characteristics and treatment influence timing of embryo development, which have promoted the perception that each clinic must develop individual models. Most of the studies have, however, treated embryos from one patient as independent observations, and only very few studies that evaluate the influence from patient- and treatment-related factors on timing of development or time-lapse parameters as predictors of viability have controlled for confounding, which implies a high risk of overestimating the statistical significance of potential correlations. STUDY DESIGN, SIZE, DURATION Infertile patients were prospectively recruited to a cohort study at a hospital fertility clinic from February 2011 to May 2013. Patients aged <38 years without endometriosis were eligible if ≥8 oocytes were retrieved. Patients were included only once. All embryos were monitored for 6 days in a time-lapse incubator. PARTICIPANTS/MATERIALS, SETTING, METHODS A total of 1507 embryos from 243 patients were included. The influence of fertilization method, BMI, maternal age, FSH dose and number of previous cycles on timing of t2-t5, duration of the 2- and 3-cell stage, and development of a blastocoel (tEB) and full blastocoel (tFB) was tested in multivariate, multilevel linear regression analysis. Predictive parameters for live birth were tested in a logistic regression analysis for 223 single transferred blastocysts, where time-lapse parameters were investigated along with patient and embryo characteristics. MAIN RESULTS AND THE ROLE OF CHANCE Moderate intra-class correlation coefficients

  11. Calibration and Algorithm Development for Estimation of Nitrogen in Wheat Crop Using Tractor Mounted N-Sensor

    PubMed Central

    Singh, Manjeet; Kumar, Rajneesh; Sharma, Ankit; Singh, Bhupinder; Thind, S. K.

    2015-01-01

    The experiment was planned to investigate a tractor-mounted N-sensor (make: Yara International) for predicting nitrogen (N) in a wheat crop under different nitrogen levels. It was observed that, for the tractor-mounted N-sensor, the spectrometers can scan about 32% of the total crop area under consideration. An algorithm was developed using a linear relationship between the sensor sufficiency index (SI_sensor) and SI_SPAD to calculate the N application rate (N_app) as a function of SI_SPAD. There was a strong correlation between the sensor attributes (sensor value, sensor biomass, and sensor NDVI) and the different N levels. It was concluded that, using the sensor attributes, the tillering stage is the most suitable stage for predicting crop yield compared with the other stages. The algorithms developed for the tillering and booting stages are useful for predicting N application rates for a wheat crop. N application rates predicted by the developed algorithm and by the sensor value were almost the same for plots with different levels of N applied. PMID:25811039

  12. Ice surface temperature retrieval from AVHRR, ATSR, and passive microwave satellite data: Algorithm development and application

    NASA Technical Reports Server (NTRS)

    Key, Jeff; Maslanik, James; Steffen, Konrad

    1994-01-01

    One essential parameter used in the estimation of radiative and turbulent heat fluxes from satellite data is surface temperature. Sea and land surface temperature (SST and LST) retrieval algorithms that utilize the thermal infrared portion of the spectrum have been developed, with the degree of success dependent primarily upon the variability of the surface and atmospheric characteristics. However, little effort has been directed to the retrieval of the sea ice surface temperature (IST) in the Arctic and Antarctic pack ice or the ice sheet surface temperature over Antarctica and Greenland. The reason is not one of methodology, but rather our limited knowledge of atmospheric temperature, humidity, and aerosol vertical, spatial and temporal distributions, the microphysical properties of polar clouds, and the spectral characteristics of snow, ice, and water surfaces. Over the open ocean the surface is warm, dark, and relatively homogeneous. This makes SST retrieval, including cloud clearing, a fairly straightforward task. Over the ice, however, the surface within a single satellite pixel is likely to be highly heterogeneous, a mixture of ice of various thicknesses, open water, and snow cover in the case of sea ice. Additionally, the Arctic is cloudy - very cloudy - with typical cloud cover amounts ranging from 60-90 percent. There are few observations of cloud cover amounts over Antarctica. The goal of this research is to increase our knowledge of surface temperature patterns and magnitudes in both polar regions, by examining existing data and improving our ability to use satellite data as a monitoring tool. Four instruments are of interest in this study: the AVHRR, ATSR, SMMR, and SSM/I. Our objectives are as follows. Refine the existing AVHRR retrieval algorithm defined in Key and Haefliger (1992; hereafter KH92) and applied elsewhere. Develop a method for IST retrieval from ATSR data similar to the one used for SST. Further investigate the possibility of estimating

  13. Development of multi-objective genetic algorithm concurrent subspace optimization (MOGACSSO) method with robustness

    NASA Astrophysics Data System (ADS)

    Parashar, Sumeet

    Most engineering design problems are complex and multidisciplinary in nature, and quite often require more than one objective (cost) function to be extremized simultaneously. For multi-objective optimization problems, there is not a single optimum solution, but a set of optimum solutions called the Pareto set. The primary goal of this research is to develop a heuristic solution strategy to enable multi-objective optimization of highly coupled multidisciplinary design applications, wherein each discipline is able to retain some degree of autonomous control during the process. To achieve this goal, this research extends the capability of the Multi-Objective Pareto Concurrent Subspace Optimization (MOPCSSO) method to generate large numbers of non-dominated solutions in each cycle, with subsequent update and refinement, thereby greatly increasing efficiency. While the conventional MOPCSSO approach is easily able to generate Pareto solutions, it will only generate one Pareto solution at a time. In order to generate the complete Pareto front, MOPCSSO requires multiple runs (translating into many system convergence cycles) using different initial starting points. In this research, a Genetic Algorithm-based heuristic solution strategy is developed for multi-objective problems in coupled multidisciplinary design. The Multi-Objective Genetic Algorithm Concurrent Subspace Optimization (MOGACSSO) method allows for the generation of relatively evenly distributed Pareto solutions in a faster and more efficient manner than repeated implementation of MOPCSSO. While achieving an optimum design, it is often also desirable that the optimum design be robust to uncontrolled parameter variations. In this research, the capability of the MOGACSSO method is also extended to generate Pareto points that are robust in terms of performance and feasibility, for given uncontrolled parameter variations. The Robust MOGACSSO method developed in this research can generate a large number of designs

  14. Development of algorithms and approximations for rapid operational air quality modelling

    NASA Astrophysics Data System (ADS)

    Barrett, Steven R. H.; Britter, Rex E.

    In regulatory and public health contexts the long-term average pollutant concentration in the vicinity of a source is frequently of interest. Well-developed modelling tools such as AERMOD and ADMS are able to generate time-series air quality estimates of considerable accuracy, applying an up-to-date understanding of atmospheric boundary layer behaviour. However, such models incur a significant computational cost with runtimes of hours to days. These approaches are often acceptable when considering a single industrial complex, but for widespread policy analyses the computational cost rapidly becomes intractable. In this paper we present some mathematical techniques and algorithmic approaches that can make air quality estimates several orders of magnitude faster. We show that, for long-term average concentrations, lateral dispersion need not be accounted for explicitly. This is applied to a simple reference case of a ground-level point source in a neutral boundary layer. A scaling law is also developed for the area in exceedance of a regulatory limit value.

  15. Design and development of guidance navigation and control algorithms for spacecraft rendezvous and docking experimentation

    NASA Astrophysics Data System (ADS)

    Guglieri, Giorgio; Maroglio, Franco; Pellegrino, Pasquale; Torre, Liliana

    2014-01-01

    This paper presents the design of the GNC system of a ground test-bed for spacecraft rendezvous and docking experiments. The test-bed is developed within the STEPS project (Systems and Technologies for Space Exploration). The facility consists of a flat floor and two scaled vehicles, one active chaser and one “semi-active” target. Rendezvous and docking maneuvers are performed floating on the plane with pierced plates as lifting systems. The system is designed to work both with inertial and non-inertial reference frame, receiving signals from navigation sensors as: accelerometers, gyroscopes, laser meter, radio finder and video camera, and combining them with a digital filter. A Proportional-Integrative-Derivative control law and Pulse Width Modulators are used to command the cold gas thrusters of the chaser, and to follow an assigned trajectory with its specified velocity profile. The design and development of the guidance, navigation and control system and its architecture-including the software algorithms-are detailed in the paper, presenting a performance analysis based on a simulated environment. A complete description of the integrated subsystems is also presented.

  16. Development of fast line scanning imaging algorithm for diseased chicken detection

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Chao, Kuanglin; Chen, Yud-Ren; Kim, Moon S.

    2005-11-01

    A hyperspectral line-scan imaging system for automated inspection of wholesome and diseased chickens was developed and demonstrated. The hyperspectral imaging system consisted of an electron-multiplying charge-coupled-device (EMCCD) camera and an imaging spectrograph. The system used a spectrograph to collect spectral measurements across a pixel-wide vertical linear field of view through which moving chicken carcasses passed. After a series of image calibration procedures, the hyperspectral line-scan images were collected for chickens on a laboratory simulated processing line. From spectral analysis, four key wavebands for differentiating between wholesome and systemically diseased chickens were selected: 413 nm, 472 nm, 515 nm, and 546 nm, and a reference waveband, 622 nm. The ratio of relative reflectance between each key wavelength and the reference wavelength was calculated as an image feature. A fuzzy logic-based algorithm utilizing the key wavebands was developed to identify individual pixels on the chicken surface exhibiting symptoms of systemic disease. Two differentiation methods were built to successfully differentiate 72 systemically diseased chickens from 65 wholesome chickens.

  17. Development of a Robotic Colonoscopic Manipulation System, Using Haptic Feedback Algorithm

    PubMed Central

    Woo, Jaehong; Choi, Jae Hyuk; Seo, Jong Tae

    2017-01-01

    Purpose Colonoscopy is one of the most effective diagnostic and therapeutic tools for colorectal diseases. We propose a master-slave robotic colonoscopy system that is controllable from a remote site using a conventional colonoscope. Materials and Methods The master and slave robots were developed to use a conventional flexible colonoscope. The robotic colonoscopic procedure was performed on a colonoscope training model by one expert endoscopist and two inexperienced engineers. To provide haptic sensation, the insertion force and the rotating torque were measured and sent to the master robot. Results A slave robot was developed to hold the colonoscope and its knob and to perform insertion, rotation, and two tilting motions of the colonoscope. A master robot was designed to teach motions to the slave robot. The measured force and torque were scaled down by one tenth to provide the operator with reflection force and torque at the haptic device. The haptic sensation and feedback system was successful and helped the operator feel the constrained force or torque in the colon. The insertion time using the robotic system decreased with repeated procedures. Conclusion This work proposed a robotic approach to colonoscopy using a haptic feedback algorithm; this robotic device could effectively perform colonoscopy with reduced burden and comparable safety for patients from a remote site. PMID:27873506

  18. Development of an Innovative Algorithm for Aerodynamics-Structure Interaction Using Lattice Boltzmann Method

    NASA Technical Reports Server (NTRS)

    Mei, Ren-Wei; Shyy, Wei; Yu, Da-Zhi; Luo, Li-Shi; Rudy, David (Technical Monitor)

    2001-01-01

    The lattice Boltzmann equation (LBE) is a kinetic formulation which offers an alternative computational method capable of solving fluid dynamics for various systems. Major advantages of the method are owing to the fact that the solution for the particle distribution functions is explicit, easy to implement, and the algorithm is natural to parallelize. In this final report, we summarize the works accomplished in the past three years. Since most works have been published, the technical details can be found in the literature. Brief summary will be provided in this report. In this project, a second-order accurate treatment of boundary condition in the LBE method is developed for a curved boundary and tested successfully in various 2-D and 3-D configurations. To evaluate the aerodynamic force on a body in the context of LBE method, several force evaluation schemes have been investigated. A simple momentum exchange method is shown to give reliable and accurate values for the force on a body in both 2-D and 3-D cases. Various 3-D LBE models have been assessed in terms of efficiency, accuracy, and robustness. In general, accurate 3-D results can be obtained using LBE methods. The 3-D 19-bit model is found to be the best one among the 15-bit, 19-bit, and 27-bit LBE models. To achieve desired grid resolution and to accommodate the far field boundary conditions in aerodynamics computations, a multi-block LBE method is developed by dividing the flow field into various blocks each having constant lattice spacing. Substantial contribution to the LBE method is also made through the development of a new, generalized lattice Boltzmann equation constructed in the moment space in order to improve the computational stability, detailed theoretical analysis on the stability, dispersion, and dissipation characteristics of the LBE method, and computational studies of high Reynolds number flows with singular gradients. Finally, a finite difference-based lattice Boltzmann method is

  19. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1994-01-01

    During CY 1994 there are five objectives under this task: (1) investigate the effects of stratospheric aerosol on the proposed correction algorithm, and investigate the use of the 1380 nm MODIS band to remove the stratospheric aerosol perturbation; (2) investigate the effect of vertical structure in aerosol concentration and type on the behavior of the proposed correction algorithm; (3) investigate the effects of polarization on the accuracy of the algorithm; (4) improve the accuracy and speed of the existing algorithm; and (5) investigate removal of the O2 'A' absorption band at 762 nm from the 765 nm SeaWiFS band so the latter can be used in atmospheric correction of SeaWiFS. The importance of this to MODIS is that SeaWiFS data will be used extensively to test and improve the MODIS algorithm. Thus it is essential that the O2 absorption be adequately dealt with for SeaWiFS.

  20. The Centennial of Counselor Education: Origin and Early Development of a Discipline

    ERIC Educational Resources Information Center

    Savickas, Mark L.

    2011-01-01

    July 7, 2011, marks the centennial of counselor education as a formal discipline. In recognition of its 100th birthday, the author of this article describes the origins of the discipline, beginning with its prehistory in the work of Frank Parsons to establish the practice of vocational guidance, describing the 1st course in counselor education at…

  1. Utilization of Ancillary Data Sets for Conceptual SMAP Mission Algorithm Development and Product Generation

    NASA Technical Reports Server (NTRS)

    O'Neill, P.; Podest, E.

    2011-01-01

    The planned Soil Moisture Active Passive (SMAP) mission is one of the first Earth observation satellites being developed by NASA in response to the National Research Council's Decadal Survey, Earth Science and Applications from Space: National Imperatives for the Next Decade and Beyond [1]. Scheduled to launch late in 2014, the proposed SMAP mission would provide high resolution and frequent revisit global mapping of soil moisture and freeze/thaw state, utilizing enhanced Radio Frequency Interference (RFI) mitigation approaches to collect new measurements of the hydrological condition of the Earth's surface. The SMAP instrument design incorporates an L-band radar (3 km) and an L band radiometer (40 km) sharing a single 6-meter rotating mesh antenna to provide measurements of soil moisture and landscape freeze/thaw state [2]. These observations would (1) improve our understanding of linkages between the Earth's water, energy, and carbon cycles, (2) benefit many application areas including numerical weather and climate prediction, flood and drought monitoring, agricultural productivity, human health, and national security, (3) help to address priority questions on climate change, and (4) potentially provide continuity with brightness temperature and soil moisture measurements from ESA's SMOS (Soil Moisture Ocean Salinity) and NASA's Aquarius missions. In the planned SMAP mission prelaunch time frame, baseline algorithms are being developed for generating (1) soil moisture products both from radiometer measurements on a 36 km grid and from combined radar/radiometer measurements on a 9 km grid, and (2) freeze/thaw products from radar measurements on a 3 km grid. These retrieval algorithms need a variety of global ancillary data, both static and dynamic, to run the retrieval models, constrain the retrievals, and provide flags for indicating retrieval quality. The choice of which ancillary dataset to use for a particular SMAP product would be based on a number of factors

  2. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy

    PubMed Central

    Zhong, Hualiang; Adams, Jeffrey; Glide-Hurst, Carri; Zhang, Hualin; Li, Haisen; Chetty, Indrin J.

    2016-01-01

    Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass-and energy congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms by 1.8% at the center of the tumor volume and by 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well to the TLD measurements with their differences <2.5% at four measured locations. When the VelocityAI DVF was used, their difference increased up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements which are slightly over the rate of 5% for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung tissues, supporting

  3. Development and validation of algorithms for heart failure patient care: a Delphi study

    PubMed Central

    Gopal, Cynthia Priyadarshini; Ranga, Asri; Joseph, Kevin Louis; Tangiisuran, Balamurugan

    2015-01-01

    INTRODUCTION Although heart failure (HF) management is available at primary and secondary care facilities in Malaysia, the optimisation of drug therapy is still suboptimal. Although pharmacists can help bridge the gap in optimising HF therapy, pharmacists in Malaysia currently do not manage and titrate HF pharmacotherapy. The aim of this study was to develop treatment algorithms and monitoring protocols for angiotensin-converting enzyme inhibitors, angiotensin II receptor blockers, beta-blockers and spironolactone based on extensive literature review for validation and utilisation by pharmacists involved in HF management. METHODS A Delphi survey involving 32 panellists from private and government hospitals that provide cardiac services in Malaysia was conducted to obtain a consensus of opinion on the treatment protocols. The panellists completed two rounds of self-administered questionnaires to determine their level of agreement with all the components in the protocols. RESULTS Consensus was achieved for most of the sections of the protocols for the four classes of drugs. The panellists’ opinions were taken into consideration when amending the components of the protocols that did not achieve consensus of opinion. Full consensus was achieved with the second survey conducted, enabling the finalisation of the drug titration protocols. CONCLUSION The resulting validated HF titration protocols can be used as a guide for pharmacists when recommending the initiation and titration of HF drug therapy in daily clinical practice. Recommendations should be made in collaboration with the patient’s treating physician, with concomitant monitoring of the patient’s response to the drugs. PMID:25532514

  4. Development of Pressurized Water Reactor Integrated Safety Analysis Methodology Using Multilevel Coupling Algorithm

    SciTech Connect

    Ziabletsev, Dmitri; Avramova, Maria; Ivanov, Kostadin

    2004-11-15

    The subchannel code COBRA-TF has been introduced for an evaluation of thermal margins on the local pin-by-pin level in a pressurized water reactor. The coupling of COBRA-TF with TRAC-PF1/NEM is performed by providing from TRAC to COBRA-TF axial and radial thermal-hydraulic boundary conditions and relative pin-power profiles, obtained with the pin power reconstruction model of the nodal expansion method (NEM). An efficient algorithm for coupling of the subchannel code COBRA-TF with TRAC-PF1/NEM in the parallel virtual machine environment was developed addressing the issues of time synchronization, data exchange, spatial overlays, and coupled convergence. Local feedback modeling on the pin level was implemented into COBRA-TF, which enabled updating the local form functions and the recalculation of the pin powers in TRAC-PF1/NEM after obtaining the local feedback parameters. The coupled TRAC-PF1/NEM/COBRA-TF code system was tested on the rod ejection accident and main steam line break benchmark problems. In both problems, the local results are closer than before the introduced multilevel coupling to the corresponding critical limits. This fact indicates that the assembly average results tend to underestimate the accident consequences in terms of local safety margins. The capability of local safety evaluation, performed simultaneously (online) with coupled global three-dimensional neutron kinetics/thermal-hydraulic calculations, is introduced and tested. The obtained results demonstrate the importance of the current work.

  5. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
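
    To make the block-wise Chebyshev idea concrete, the following is a minimal, hedged sketch (not the flight code) of compressing one fitting interval into a short coefficient vector and reconstructing it; the 256-sample block and the degree-8 fit are illustrative choices only.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        def compress(block, degree=8):
            # Fit a Chebyshev series over the fitting interval mapped to [-1, 1];
            # only the (degree + 1) coefficients need to be stored or transmitted.
            x = np.linspace(-1.0, 1.0, len(block))
            return C.chebfit(x, block, degree)

        def decompress(coeffs, n):
            x = np.linspace(-1.0, 1.0, n)
            return C.chebval(x, coeffs)

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 256)
        signal = np.sin(2 * np.pi * 3 * t) + 0.01 * rng.standard_normal(t.size)
        coeffs = compress(signal)                    # 9 numbers instead of 256 samples
        recovered = decompress(coeffs, signal.size)
        print("max abs reconstruction error:", np.abs(signal - recovered).max())

    The least-squares fit above merely stands in for whatever fitting strategy the flight implementation uses; the equal-error and min-max properties quoted in the abstract apply to the true minimax Chebyshev approximation.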

  6. Development of Variational Guiding Center Algorithms for Parallel Calculations in Experimental Magnetic Equilibria

    SciTech Connect

    Ellison, C. Leland; Finn, J. M.; Qin, H.; Tang, William M.

    2014-10-01

    Structure-preserving algorithms obtained via discrete variational principles exhibit strong promise for the calculation of guiding center test particle trajectories. The non-canonical Hamiltonian structure of the guiding center equations forms a novel and challenging context for geometric integration. To demonstrate the practical relevance of these methods, a prototypical variational midpoint algorithm is applied to an experimental magnetic equilibrium. The stability characteristics, conservation properties, and implementation requirements associated with the variational algorithms are addressed. Furthermore, computational run time is reduced for large numbers of particles by parallelizing the calculation on GPU hardware.
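
    The guiding-center discretization itself is involved, but the midpoint construction at its core can be illustrated with a generic implicit midpoint step for dz/dt = f(z), solved here by fixed-point iteration. This is only a toy sketch under that simplification, not the variational guiding-center integrator described above.

        import numpy as np

        def implicit_midpoint_step(f, z, dt, tol=1e-12, max_iter=50):
            z_new = z + dt * f(z)                       # explicit Euler predictor
            for _ in range(max_iter):
                z_next = z + dt * f(0.5 * (z + z_new))  # evaluate f at the midpoint
                if np.linalg.norm(z_next - z_new) < tol:
                    return z_next
                z_new = z_next
            return z_new

        # Example: harmonic oscillator, z = (q, p); the midpoint rule preserves the
        # quadratic energy, illustrating the structure-preserving behavior.
        f = lambda z: np.array([z[1], -z[0]])
        z = np.array([1.0, 0.0])
        for _ in range(1000):
            z = implicit_midpoint_step(f, z, dt=0.1)
        print("energy (should stay near 0.5):", 0.5 * (z[0]**2 + z[1]**2))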

  7. Development, analysis, and testing of robust nonlinear guidance algorithms for space applications

    NASA Astrophysics Data System (ADS)

    Wibben, Daniel R.

    This work focuses on the analysis and application of various nonlinear, autonomous guidance algorithms that utilize sliding mode control to guarantee system stability and robustness. While the basis for the algorithms has previously been proposed, past efforts barely scratched the surface of the theoretical details and implications of these algorithms. Of the three algorithms that are the subject of this research, two are directly derived from optimal control theory and augmented using sliding mode control. Analysis of the derivation of these algorithms has shown that they are two different representations of the same result, one of which uses a simple error state model (Delta r/Delta v) and the other uses definitions of the zero-effort miss and zero-effort velocity (ZEM/ZEV) values. By investigating the dynamics of the defined sliding surfaces and their impact on the overall system, many implications have been deduced regarding the behavior of these systems which are noted to feature time-varying sliding modes. A formal finite time stability analysis has also been performed to theoretically demonstrate that the algorithms globally stabilize the system in finite time in the presence of perturbations and unmodeled dynamics. The third algorithm that has been subject to analysis is derived from a direct application of higher-order sliding mode control and Lyapunov stability analysis without consideration of optimal control theory and has been named the Multiple Sliding Surface Guidance (MSSG). Via use of reinforcement learning methods an optimal set of gains has been found that makes the guidance perform similarly to an open-loop optimal solution. Careful side-by-side inspection of the MSSG and Optimal Sliding Guidance (OSG) algorithms has shown some striking similarities. A detailed comparison of the algorithms has demonstrated that though they are nearly indistinguishable at first glance, there are some key differences between the two algorithms and they are indeed
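
    For reference, a hedged sketch of the underlying ZEM/ZEV feedback terms (without the sliding-mode augmentation analyzed in this work) is given below; the constant-gravity propagation and the example numbers are illustrative assumptions, not values from the dissertation.

        import numpy as np

        def zem_zev_acceleration(r, v, r_target, v_target, g, t_go):
            # Zero-effort miss/velocity: where the vehicle would end up if it coasted
            # under gravity alone for the remaining time-to-go t_go.
            zem = r_target - (r + v * t_go + 0.5 * g * t_go**2)
            zev = v_target - (v + g * t_go)
            # Classic energy-optimal feedback gains on ZEM and ZEV.
            return 6.0 * zem / t_go**2 - 2.0 * zev / t_go

        # Example: planar landing-like geometry with constant gravity (made-up numbers).
        r = np.array([0.0, 1000.0]); v = np.array([50.0, -20.0])
        r_t = np.array([500.0, 0.0]); v_t = np.array([0.0, -1.0])
        g = np.array([0.0, -3.7])
        print(zem_zev_acceleration(r, v, r_t, v_t, g, t_go=30.0))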

  8. Development of an algorithm to improve the accuracy of dose delivery in Gamma Knife radiosurgery

    NASA Astrophysics Data System (ADS)

    Cernica, George Dumitru

    2007-12-01

    Gamma Knife stereotactic radiosurgery has demonstrated decades of successful treatments. Despite its high spatial accuracy, the Gamma Knife's planning software, GammaPlan, uses a simple exponential as the TPR curve for all four collimator sizes, and a skull scaling device to acquire ruler measurements to interpolate a three-dimensional spline to model the patient's skull. The consequences of these approximations have not been previously investigated. The true TPR curves of the four collimators were measured by blocking 200 of the 201 sources with steel plugs. Additional attenuation was provided through the use of a 16 cm tungsten sphere, designed to enable beamlet measurements along one axis. TPR, PDD, and beamlet profiles were obtained using both an ion chamber and GafChromic EBT film for all collimators. Additionally, an in-house planning algorithm able to calculate the contour of the skull directly from an image set and implement the measured beamlet data in shot time calculations was developed. Clinical and theoretical Gamma Knife cases were imported into our algorithm. The TPR curves showed small deviations from a simple exponential curve, with average discrepancies under 1%, but with a maximum discrepancy of 2% found for the 18 mm collimator beamlet at shallow depths. The consequences for the PDDs of the beamlets were slight, with a maximum of 1.6% found with the 18 mm collimator beamlet. Beamlet profiles of the 4 mm, 8 mm, and 14 mm collimators showed some underestimates of the off-axis ratio near the shoulders (up to 10%). The toes of the profiles were underestimated for all collimators, with differences up to 7%. Shot times were affected by up to 1.6% due to TPR differences, but clinical cases showed deviations by no more than 0.5%. The beamlet profiles affected the dose calculations more significantly, with shot time calculations differing by as much as 0.8%. The skull scaling affected the shot time calculations the most significantly, with differences of up to 5

  9. Informing radar retrieval algorithm development using an alternative soil moisture validation technique

    NASA Astrophysics Data System (ADS)

    Crow, W. T.; Wagner, W.

    2009-12-01

    incidence angle on retrieval skill. Results imply the need for a significant interaction term in vegetation backscatter models in order to match the observed relationship between incidence angle and retrieval skill. Implications for the development of radar retrieval algorithms for the NASA Soil Moisture Active/Passive (SMAP) mission will be discussed.

  10. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.
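
    The abstract does not spell out the search procedure, so the following is only a generic, hypothetical illustration of regression term reduction to limit overfitting: candidate terms are greedily dropped while a held-out prediction error stays essentially unchanged. It is not the Ames algorithm and enforces none of the statistical quality constraints mentioned above.

        import numpy as np

        def term_reduction(X_train, y_train, X_val, y_val, tol=1.01):
            terms = list(range(X_train.shape[1]))

            def val_error(idx):
                coef, *_ = np.linalg.lstsq(X_train[:, idx], y_train, rcond=None)
                return np.sqrt(np.mean((X_val[:, idx] @ coef - y_val) ** 2))

            best = val_error(terms)
            improved = True
            while improved and len(terms) > 1:
                improved = False
                for t in list(terms):
                    trial = [k for k in terms if k != t]
                    err = val_error(trial)
                    if err <= tol * best:        # removal barely hurts: prune the term
                        terms, best, improved = trial, min(err, best), True
                        break
            return terms, best

        # Synthetic example: only columns 0 and 2 actually drive the response.
        rng = np.random.default_rng(1)
        X = rng.standard_normal((100, 6))
        y = X[:, 0] - 2.0 * X[:, 2] + 0.05 * rng.standard_normal(100)
        print(term_reduction(X[:70], y[:70], X[70:], y[70:]))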

  11. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  12. Successive smoothing algorithm for constructing the semiempirical model developed at ONERA to predict unsteady aerodynamic forces. [aeroelasticity in helicopters]

    NASA Technical Reports Server (NTRS)

    Petot, D.; Loiseau, H.

    1982-01-01

    Unsteady aerodynamic methods adopted for the study of aeroelasticity in helicopters are considered with focus on the development of a semiempirical model of unsteady aerodynamic forces acting on an oscillating profile at high incidence. The successive smoothing algorithm described leads to the model's coefficients in a very satisfactory manner.

  13. Detection of fruit-fly infestation in olives using X-ray imaging: Algorithm development and prospects

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An algorithm using a Bayesian classifier was developed to automatically detect olive fruit fly infestations in x-ray images of olives. The data set consisted of 249 olives with various degrees of infestation and 161 non-infested olives. Each olive was x-rayed on film and digital images were acquired...
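
    The abstract names a Bayesian classifier but not its exact form; as a hedged stand-in, the sketch below trains a Gaussian naive Bayes model on two invented image features (mean intensity and a texture statistic), purely to illustrate the classification step rather than the actual olive x-ray features.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(0)
        # Made-up feature distributions for infested (1) and non-infested (0) olives.
        infested = np.column_stack([rng.normal(0.45, 0.05, 200), rng.normal(0.30, 0.05, 200)])
        clean    = np.column_stack([rng.normal(0.60, 0.05, 200), rng.normal(0.15, 0.05, 200)])
        X = np.vstack([infested, clean])
        y = np.array([1] * 200 + [0] * 200)

        model = GaussianNB().fit(X, y)
        print(model.predict([[0.48, 0.28], [0.62, 0.12]]))   # expected: [1 0]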

  14. Development of double-pair double difference earthquake location algorithm for improving earthquake locations

    NASA Astrophysics Data System (ADS)

    Guo, Hao; Zhang, Haijiang

    2017-01-01

    The event-pair double-difference (DD) earthquake location method, as incorporated in hypoDD, has been widely used to improve relative earthquake locations by using event-pair differential arrival times from pairs of events to common stations, because some common path anomalies outside the source region can be cancelled out due to similar ray paths. Similarly, station-pair differential arrival times from one event to pairs of stations can also be used to improve earthquake locations by cancelling out the event origin time and some path anomalies inside the source region. To utilize the advantages of both DD location methods, we have developed a new double-pair DD location method that uses differential times constructed from pairs of events to pairs of stations to determine higher-precision relative earthquake locations. Compared to the event-pair and station-pair DD location methods, the new method can remove event origin times and station correction terms from the inversion system and cancel out path anomalies both outside and inside the source region at the same time. The new method is tested on earthquakes around the San Andreas Fault, California to validate its performance. The earthquake relocations demonstrate that the double-pair DD location method is able to better sharpen the images of seismicity, with smaller relative location uncertainties than the event-pair DD location method, and thus to reveal more fine-scale structures. Among the three DD location methods, the station-pair method best improves the absolute earthquake locations. For this reason, we further propose a hybrid double-pair DD location method combining station-pair and double-pair differential times to determine accurate absolute and relative locations at the same time, which is validated by both synthetic and real data sets.
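
    The key construction is easy to state: for events i and j observed at stations k and l, the double-pair datum (t_i^k - t_j^k) - (t_i^l - t_j^l) cancels both event origin times and station static terms. A small illustrative sketch of forming these data from absolute arrival times (the times below are invented) follows.

        from itertools import combinations

        arrivals = {                      # arrivals[event][station] = absolute time (s)
            "ev1": {"STA": 3.21, "STB": 4.05, "STC": 5.10},
            "ev2": {"STA": 3.34, "STB": 4.00, "STC": 5.32},
            "ev3": {"STA": 3.05, "STB": 4.22, "STC": 5.01},
        }

        def double_pair_times(arrivals):
            out = []
            for ev_i, ev_j in combinations(arrivals, 2):
                common = set(arrivals[ev_i]) & set(arrivals[ev_j])
                for st_k, st_l in combinations(sorted(common), 2):
                    # Origin times and station terms cancel in this combination.
                    dt = (arrivals[ev_i][st_k] - arrivals[ev_j][st_k]) \
                       - (arrivals[ev_i][st_l] - arrivals[ev_j][st_l])
                    out.append((ev_i, ev_j, st_k, st_l, dt))
            return out

        for row in double_pair_times(arrivals):
            print(row)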

  15. Development of Turbulent Diffusion Transfer Algorithms to Estimate Lake Tahoe Water Budget

    NASA Astrophysics Data System (ADS)

    Sahoo, G. B.; Schladow, S. G.; Reuter, J. E.

    2012-12-01

    Evaporative loss is a dominant component of the Lake Tahoe hydrologic budget because the watershed area (813 km2) is small compared to the lake surface area (501 km2). The 5.5 m high dam built at the lake's only outlet, the Truckee River at Tahoe City, can increase the lake's capacity by approximately 0.9185 km3. The lake provides flood protection for downstream areas and a source of water supply for downstream cities, irrigation, hydropower, and instream environmental requirements. When the lake water level falls below the natural rim, cessation of flows from the lake causes problems for water supply, irrigation, and fishing. Therefore, it is important to develop algorithms that correctly estimate the lake hydrologic budget. We developed a turbulent diffusion transfer model and coupled it to the dynamic lake model (DLM-WQ). We generated the stream flows and pollutant loadings of the streams using the US Environmental Protection Agency (USEPA) supported watershed model, Loading Simulation Program in C++ (LSPC). The bulk transfer coefficients were calibrated using the correlation coefficient (R2) as the objective function. Sensitivity analysis was conducted for the meteorological inputs and model parameters. The DLM-WQ estimates of lake water level and water temperature were in agreement with the measured records, with R2 equal to 0.96 and 0.99, respectively, for the period 1994 to 2008. The estimated average evaporation from the lake, stream inflow, precipitation over the lake, groundwater fluxes, and outflow from the lake during 1994 to 2008 were found to be 32.0%, 25.0%, 19.0%, 0.3%, and 11.7%, respectively.

  16. A preliminary report on the development of MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-07-01

    We describe three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. We present a tensor class for manipulating tensors which allows for tensor multiplication and 'matricization.' We have further added two classes for representing tensors in decomposed format: cp{_}tensor and tucker{_}tensor. We demonstrate the use of these classes by implementing several algorithms that have appeared in the literature.
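
    As a point of comparison outside MATLAB, mode-n matricization (unfolding) can be written in a few lines of numpy; note that column-ordering conventions differ between implementations, so this sketch only fixes the ordering that numpy's reshape produces.

        import numpy as np

        def unfold(tensor, mode):
            # Move the chosen mode to the front, then flatten the remaining modes.
            return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

        T = np.arange(24).reshape(2, 3, 4)
        print(unfold(T, 0).shape)   # (2, 12)
        print(unfold(T, 1).shape)   # (3, 8)
        print(unfold(T, 2).shape)   # (4, 6)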

  17. Preliminary Development and Evaluation of Lightning Jump Algorithms for the Real-Time Detection of Severe Weather

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Previous studies have demonstrated that rapid increases in total lightning activity (intracloud + cloud-to-ground) are often observed tens of minutes in advance of the occurrence of severe weather at the ground. These rapid increases in lightning activity have been termed "lightning jumps." Herein, we document a positive correlation between lightning jumps and the manifestation of severe weather in thunderstorms occurring across the Tennessee Valley and Washington D.C. A total of 107 thunderstorms were examined in this study, with 69 falling into the non-severe category and 38 into the severe category. From the dataset of 69 isolated non-severe thunderstorms, an average peak 1 minute flash rate of 10 flashes/min was determined. A variety of severe thunderstorm types were examined for this study, including an MCS, MCV, tornadic outer rainbands of tropical remnants, supercells, and pulse severe thunderstorms. Of the 107 thunderstorms, 85 (47 non-severe, 38 severe) from the Tennessee Valley and Washington D.C. were used to test 6 lightning jump algorithm configurations (Gatlin, Gatlin 45, 2(sigma), 3(sigma), Threshold 10, and Threshold 8). Performance metrics for each algorithm were then calculated, yielding encouraging results from the limited sample of 85 thunderstorms. The 2(sigma) lightning jump algorithm had a high probability of detection (POD; 87%), a modest false alarm rate (FAR; 33%), and a solid Heidke Skill Score (HSS; 0.75). A second and more simplistic lightning jump algorithm named the Threshold 8 lightning jump algorithm also shows promise, with a POD of 81% and a FAR of 41%. Average lead times to severe weather occurrence for these two algorithms were 23 minutes and 20 minutes, respectively. The overall goal of this study is to advance the development of an operationally-applicable jump algorithm that can be used with either total lightning observations made from the ground, or in the near future from space using the
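
    As a hedged illustration of the flavor of the 2(sigma) configuration (the window lengths, flash-rate averaging, and activation thresholds here are simplified placeholders, not the operational settings), a jump can be flagged when the latest rate of change of the total flash rate exceeds twice the standard deviation of its recent history:

        import numpy as np

        def lightning_jump_2sigma(flash_rate, history=5):
            """flash_rate: per-interval average total flash rates, oldest first."""
            dfrdt = np.diff(flash_rate)                 # rate of change of flash rate
            if len(dfrdt) <= history:
                return False
            recent, current = dfrdt[-history - 1:-1], dfrdt[-1]
            return current > 2.0 * np.std(recent)       # "2-sigma" jump test

        rates = [2, 3, 3, 4, 5, 5, 6, 14]               # sudden increase in last interval
        print(lightning_jump_2sigma(rates))             # True: candidate lightning jump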

  18. Characterizing the Preturbulence Environment for Sensor Development, New Hazard Algorithms and NASA Experimental Flight Planning

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Lin, Yuh-Lang

    2004-01-01

    During the grant period, several tasks were performed in support of the NASA Turbulence Prediction and Warning Systems (TPAWS) program. The primary focus of the research was on characterizing the preturbulence environment by developing predictive tools and simulating atmospheric conditions that preceded severe turbulence. The goal of the research was to provide both a dynamical understanding of the conditions that preceded turbulence and predictive tools in support of operational NASA B-757 turbulence research flights. The advancements in characterizing the preturbulence environment will be applied by NASA to sensor development for predicting turbulence onboard commercial aircraft. Numerical simulations with atmospheric models, together with multi-scale observational analyses, provided insights into the environment organizing turbulence in a total of forty-eight specific case studies of severe, accident-producing turbulence, all involving commercial aircraft. A paradigm was developed which diagnosed specific atmospheric circulation systems, from the synoptic scale down to the meso-gamma scale, that preceded turbulence in both clear air and in proximity to convection. The emphasis was primarily on convective turbulence, as that is the focus of the TPAWS program in terms of developing improved sensors for turbulence warning and avoidance. However, the dynamical paradigm also has applicability to clear-air and mountain turbulence. This dynamical sequence of events was then employed to formulate and test new hazard prediction indices that were first tested in research simulation studies and then further tested in support of the NASA B-757 turbulence research flights. The new hazard characterization algorithms were utilized in a Real Time Turbulence Model (RTTM) that was operationally employed to support the NASA B-757 turbulence research flights. Improvements in the RTTM were implemented in an

  19. Planning fuel-conservative descents in an airline environment using a small programmable calculator: algorithm development and flight test results

    SciTech Connect

    Knox, C.E.; Vicroy, D.D.; Simmon, D.A.

    1985-05-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated with considerations given for the descent Mach/airspeed schedule, gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  20. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

    1985-01-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated with considerations given for the descent Mach/airspeed schedule, gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.
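
    The calculator implementation is not reproduced in the abstract; the sketch below is only a generic illustration of the kind of time-mode arithmetic involved, assuming a constant-rate descent and made-up speeds, and ignoring the Mach/airspeed schedule, wind, and temperature corrections the actual algorithm handles.

        def plan_descent(dist_to_fix_nm, alt_to_lose_ft, gs_cruise_kt, gs_descent_kt,
                         descent_rate_fpm, time_to_fix_min):
            # Time and ground track consumed by a constant-rate descent.
            t_descent_min = alt_to_lose_ft / descent_rate_fpm
            d_descent_nm = gs_descent_kt * t_descent_min / 60.0
            # Remaining cruise segment before the top of descent.
            d_cruise_nm = dist_to_fix_nm - d_descent_nm
            t_cruise_min = 60.0 * d_cruise_nm / gs_cruise_kt
            slack_min = time_to_fix_min - (t_cruise_min + t_descent_min)
            return {"top_of_descent_nm_from_fix": d_descent_nm,
                    "start_descent_after_min": t_cruise_min,
                    "schedule_slack_min": slack_min}

        print(plan_descent(dist_to_fix_nm=120, alt_to_lose_ft=25000,
                           gs_cruise_kt=450, gs_descent_kt=380,
                           descent_rate_fpm=2200, time_to_fix_min=22.0))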

  1. Development of a dose algorithm for the modified panasonic UD-802 personal dosimeter used at three mile island

    SciTech Connect

    Miklos, J. A.; Plato, P.

    1988-01-01

    During the fall of 1981, the personnel dosimetry group at GPU Nuclear Corporation at Three Mile Island (TMI) requested assistance from The University of Michigan (UM) in developing a dose algorithm for use at TMI-2. The dose algorithm had to satisfy the specific needs of TMI-2, particularly the need to distinguish beta-particle emitters of different energies, as well as having the capability of satisfying the requirements of the American National Standards Institute (ANSI) N13.11-1983 standard. A standard Panasonic UD-802 dosimeter was modified by having the plastic filter over element 2 removed. The dosimeter and hanger consist of elements with a 14 mg/cm2 density thickness and the filtrations shown. The hanger on this dosimeter had a double open window to facilitate monitoring for low-energy beta particles. The dose algorithm was written to satisfy the requirements of the ANSI N13.11-1983 standard, to include 204Tl and mixtures of 204Tl with 90Sr/90Y and 137Cs, and to include 81- and 200-keV average energy X-ray spectra. Stress tests were conducted to observe the algorithm's performance at low doses and its response to temperature, humidity, and the residual signal following high-dose irradiations. The ability of the algorithm to determine dose from the beta particles of 147Pm was also investigated.

  2. Challenges and Recent Developments in Hearing Aids: Part I. Speech Understanding in Noise, Microphone Technologies and Noise Reduction Algorithms

    PubMed Central

    Chung, King

    2004-01-01

    This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized. PMID:15678225
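
    As a hedged illustration of the first-order directional microphone principle mentioned above (delay-and-subtract of two omnidirectional ports), the sketch below uses a one-sample internal delay and a port spacing chosen to match it, so a rear-arriving tone cancels while a front-arriving tone does not. The sample rate and geometry are illustrative assumptions, not a hearing aid design.

        import numpy as np

        fs = 48_000            # sample rate (Hz); at 343 m/s, one sample ~ 7 mm of travel

        def directional_output(front, rear, internal_delay_samples=1):
            # Delay the rear-port signal internally, then subtract it from the front port.
            rear_delayed = np.concatenate([np.zeros(internal_delay_samples),
                                           rear[:-internal_delay_samples]])
            return front - rear_delayed

        t = np.arange(1024) / fs
        tone = np.sin(2 * np.pi * 1000 * t)
        delayed_tone = np.concatenate([[0.0], tone[:-1]])    # one-sample propagation delay

        # Rear arrival: rear port hears the tone first, front port one sample later -> cancels.
        print("rear arrival residual:", np.abs(directional_output(delayed_tone, tone)).max())
        # Front arrival: front port hears the tone first -> signal passes (with some shaping).
        print("front arrival output:", np.abs(directional_output(tone, delayed_tone)).max())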

  3. Development of a blended-control, predictor-corrector guidance algorithm for a crewed Mars aerocapture vehicle

    NASA Astrophysics Data System (ADS)

    Jits, Roman Yuryevich

    A robust blended-control guidance system for a crewed Mars aerocapture vehicle is developed. The key features of its guidance algorithm are the use of both bank-angle and angle-of-attack modulation to control the aerobraking vehicle, and the use of multiple controls (sequenced pairs of bank-angles and angles-of-attack) within its numeric predictor-corrector targeting routine. The guidance algorithm macrologic is based on extensive open loop trajectory analyses, described in the present research, which led to the selection of a blended-control scheme. A heuristic approach to recover from situations where no converged guidance solution could be found by the numeric predictor-corrector is implemented in the guidance algorithm, and has been successfully demonstrated in a large number of test runs. In this research both the outer and inner loop of the guidance and control system employ the POST (Program to Optimize Simulated Trajectories) computer code as the basic simulation module. At each guidance update, the inner loop solves the rigorous three-dimensional equations of motion and computes the control (bank-angle and angle-of-attack) sequence that is required to meet the required atmospheric exit conditions. Throughout the aerocapture trajectory, the guidance algorithm modifies this control sequence computed by the inner loop, and generates commanded controls for the vehicle, which, when implemented by the outer loop, meet an imposed g-load constraint of 5 Earth g's and compensate for unexpected off-nominal conditions. This blended-control, predictor-corrector guidance algorithm has been successfully developed, implemented and tested and has been shown to be capable of meeting the prescribed g-load constraint and guiding the vehicle to the desired exit conditions for a range of off-nominal factors much wider than those which could be accommodated by prior algorithms and bank-angle-only guidance.

  4. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

    The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent compared with the direct method.

  5. Feather Development Genes and Associated Regulatory Innovation Predate the Origin of Dinosauria

    PubMed Central

    Lowe, Craig B.; Clarke, Julia A.; Baker, Allan J.; Haussler, David; Edwards, Scott V.

    2015-01-01

    The evolution of avian feathers has recently been illuminated by fossils and the identification of genes involved in feather patterning and morphogenesis. However, molecular studies have focused mainly on protein-coding genes. Using comparative genomics and more than 600,000 conserved regulatory elements, we show that patterns of genome evolution in the vicinity of feather genes are consistent with a major role for regulatory innovation in the evolution of feathers. Rates of innovation at feather regulatory elements exhibit an extended period of innovation with peaks in the ancestors of amniotes and archosaurs. We estimate that 86% of such regulatory elements and 100% of the nonkeratin feather gene set were present prior to the origin of Dinosauria. On the branch leading to modern birds, we detect a strong signal of regulatory innovation near insulin-like growth factor binding protein (IGFBP) 2 and IGFBP5, which have roles in body size reduction, and may represent a genomic signature for the miniaturization of dinosaurian body size preceding the origin of flight. PMID:25415961

  6. Feather development genes and associated regulatory innovation predate the origin of Dinosauria.

    PubMed

    Lowe, Craig B; Clarke, Julia A; Baker, Allan J; Haussler, David; Edwards, Scott V

    2015-01-01

    The evolution of avian feathers has recently been illuminated by fossils and the identification of genes involved in feather patterning and morphogenesis. However, molecular studies have focused mainly on protein-coding genes. Using comparative genomics and more than 600,000 conserved regulatory elements, we show that patterns of genome evolution in the vicinity of feather genes are consistent with a major role for regulatory innovation in the evolution of feathers. Rates of innovation at feather regulatory elements exhibit an extended period of innovation with peaks in the ancestors of amniotes and archosaurs. We estimate that 86% of such regulatory elements and 100% of the nonkeratin feather gene set were present prior to the origin of Dinosauria. On the branch leading to modern birds, we detect a strong signal of regulatory innovation near insulin-like growth factor binding protein (IGFBP) 2 and IGFBP5, which have roles in body size reduction, and may represent a genomic signature for the miniaturization of dinosaurian body size preceding the origin of flight.

  7. Adjusting for COPD severity in database research: developing and validating an algorithm

    PubMed Central

    Goossens, Lucas MA; Baker, Christine L; Monz, Brigitta U; Zou, Kelly H; Mölken, Maureen PMH Rutten-van

    2011-01-01

    Purpose When comparing chronic obstructive pulmonary disease (COPD) interventions in database research, it is important to adjust for severity. Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines grade severity according to lung function. Most databases lack data on lung function. Previous database research has approximated COPD severity using demographics and healthcare utilization. This study aims to derive an algorithm for COPD severity using baseline data from a large respiratory trial (UPLIFT). Methods Partial proportional odds logit models were developed for probabilities of being in GOLD stages II, III and IV. Concordance between predicted and observed stage was assessed using kappa-statistics. Models were estimated in a random selection of 2/3 of patients and validated in the remainder. The analysis was repeated in a subsample with a balanced distribution across severity stages. Univariate associations of COPD severity with the covariates were tested as well. Results More severe COPD was associated with being male and younger, having quit smoking, lower BMI, osteoporosis, hospitalizations, using certain medications, and oxygen. After adjusting for these variables, co-morbidities, previous healthcare resource use (e.g., emergency room visits, hospitalizations) and inhaled corticosteroids, xanthines, or mucolytics were no longer independently associated with COPD severity, although they were in univariate tests. The concordance was poor (kappa = 0.151) and only slightly better in the balanced sample (kappa = 0.215). Conclusion COPD severity cannot be reliably predicted from demographics and healthcare use. This limitation should be considered when interpreting findings from database studies, and additional research should explore other methods to account for COPD severity. PMID:22259243
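
    For readers unfamiliar with the concordance measure quoted above, a small sketch of Cohen's kappa computed from predicted versus observed categories follows; the GOLD-stage labels below are invented, not study data.

        import numpy as np

        def cohens_kappa(observed, predicted):
            cats = sorted(set(observed) | set(predicted))
            idx = {c: i for i, c in enumerate(cats)}
            m = np.zeros((len(cats), len(cats)))
            for o, p in zip(observed, predicted):
                m[idx[o], idx[p]] += 1
            n = m.sum()
            p_obs = np.trace(m) / n                    # observed agreement
            p_exp = (m.sum(1) @ m.sum(0)) / n**2       # agreement expected by chance
            return (p_obs - p_exp) / (1.0 - p_exp)

        obs = ["II", "II", "III", "IV", "III", "II", "IV", "III"]
        pred = ["II", "III", "III", "III", "II", "II", "IV", "III"]
        print(round(cohens_kappa(obs, pred), 3))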

  8. A personal view on the origins and developments of the metamaterial concept

    NASA Astrophysics Data System (ADS)

    Tretyakov, Sergei A.

    2017-01-01

    This review paper is a personal attempt to understand the current state of metamaterials science and its development directions, analyzing the main historical steps of its development from the late 19th century to the present day.

  9. Using a multi-objective genetic algorithm for developing aerial sensor team search strategies

    NASA Astrophysics Data System (ADS)

    Ridder, Jeffrey P.; Herweg, Jared A.; Sciortino, John C., Jr.

    2008-04-01

    Finding certain associated signals in the modern electromagnetic environment can prove a difficult task due to signal characteristics and associated platform tactics as well as the systems used to find these signals. One approach to finding such signal sets is to employ multiple small unmanned aerial systems (UASs) equipped with RF sensors in a team to search an area. The search environment may be partially known, but with a significant level of uncertainty as to the locations and emissions behavior of the individual signals and their associated platforms. The team is likely to benefit from a combination of using uncertain a priori information for planning and online search algorithms for dynamic tasking of the team. Two search algorithms are examined for effectiveness: Archimedean spirals, in which the UASs comprising the team do not respond to the environment, and artificial potential fields, in which they use environmental perception and interactions to dynamically guide the search. A multi-objective genetic algorithm (MOGA) is used to explore the desirable characteristics of search algorithms for this problem using two performance objectives. The results indicate that the MOGA can successfully use uncertain a priori information to set the parameters of the search algorithms. Also, we find that artificial potential fields may result in good performance, but that each of the fields has a different contribution that may be appropriate only in certain states.
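
    A hedged sketch of the artificial-potential-field idea is shown below: each vehicle takes a step down the gradient of an attractive potential toward its goal plus repulsive potentials away from nearby teammates. The quadratic and inverse-distance forms, gains, and positions are generic illustrations, not the parameter values tuned by the MOGA.

        import numpy as np

        def potential_step(pos, goal, teammates, k_att=1.0, k_rep=5.0,
                           rep_radius=20.0, step=0.5):
            force = k_att * (goal - pos)                     # attractive term toward goal
            for mate in teammates:
                d = pos - mate
                dist = np.linalg.norm(d)
                if 1e-6 < dist < rep_radius:
                    # simple inverse-distance repulsion, active only inside rep_radius
                    force += k_rep * (1.0 / dist - 1.0 / rep_radius) * (d / dist)
            # fixed-length step along the net force direction
            return pos + step * force / (np.linalg.norm(force) + 1e-9)

        pos = np.array([0.0, 0.0])
        goal = np.array([100.0, 50.0])
        mates = [np.array([5.0, 2.0]), np.array([60.0, 40.0])]
        for _ in range(300):
            pos = potential_step(pos, goal, mates)
        print(pos)   # ends near the goal, having steered around nearby teammates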

  10. Development of an Algorithm for MODIS and VIIRS Cloud Optical Property Data Record Continuity

    NASA Astrophysics Data System (ADS)

    Meyer, K.; Platnick, S. E.; Ackerman, S. A.; Heidinger, A. K.; Holz, R.; Wind, G.; Amarasinghe, N.; Marchant, B.

    2015-12-01

    The launch of Suomi NPP in the fall of 2011 began the next generation of U.S. operational polar orbiting environmental observations. Similar to MODIS, the VIIRS imager provides visible through IR observations at moderate spatial resolution with a 1330 LT equatorial crossing consistent with MODIS on the Aqua platform. However, unlike MODIS, VIIRS lacks key water vapor and CO2 absorbing channels used by the MODIS cloud algorithms for high cloud detection and cloud-top property retrievals. In addition, there is a significant change in the spectral location of the 2.1μm shortwave-infrared channel used by MODIS for cloud optical/microphysical retrievals. Given the instrument differences between MODIS EOS and VIIRS S-NPP/JPSS, we discuss our adopted method for merging the 15+ year MODIS observational record with VIIRS in order to generate cloud optical property data record continuity across the observing systems. The optical property retrieval code uses heritage algorithms that produce the existing MODIS cloud optical and microphysical properties product (MOD06). As explained in other presentations submitted to this session, the NOAA AWG/CLAVR-x cloud-top property algorithm and a common MODIS-VIIRS cloud mask feed into the optical property algorithm to account for the different channel sets of the two imagers. Data granule and aggregated examples for the current version of the algorithm will be shown.

  11. Algorithm and code development for unsteady three-dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru

    1991-01-01

    A streamwise upwind algorithm for solving the unsteady 3-D Navier-Stokes equations was extended to handle the moving grid system. It is noted that the finite volume concept is essential to extend the algorithm. The resulting algorithm is conservative for any motion of the coordinate system. Two extensions to an implicit method were considered and the implicit extension that makes the algorithm computationally efficient is implemented into Ames's aeroelasticity code, ENSAERO. The new flow solver has been validated through the solution of test problems. Test cases include three-dimensional problems with fixed and moving grids. The first test case shown is an unsteady viscous flow over an F-5 wing, while the second test considers the motion of the leading edge vortex as well as the motion of the shock wave for a clipped delta wing. The resulting algorithm has been implemented into ENSAERO. The upwind version leads to higher accuracy in both steady and unsteady computations than the previously used central-difference method does, while the increase in the computational time is small.

  12. The development of a bearing spectral analyzer and algorithms to detect turbopump bearing wear from deflectometer and strain gage data

    NASA Astrophysics Data System (ADS)

    Martinez, Carol L.

    1992-07-01

    Over the last several years, Rocketdyne has actively developed condition and health monitoring techniques and their elements for rocket engine components, specifically high pressure turbopumps. Of key interest is the development of bearing signature analysis systems for real-time monitoring of the cryogen-cooled turbopump shaft bearings, which spin at speeds up to 36,000 RPM. These system elements include advanced bearing vibration sensors, signal processing techniques, wear mode algorithms, and integrated control software. Results of development efforts in the areas of signal processing and wear mode identification and quantification algorithms based on strain gage and deflectometer data are presented. Wear modes investigated include: inner race wear, cage pocket wear, outer race wear, differential ball wear, cracked inner race, and nominal wear.

  13. Towards developing robust algorithms for solving partial differential equations on MIMD machines

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Naik, Vijay K.

    1988-01-01

    Methods for efficient computation of numerical algorithms on a wide variety of MIMD machines are proposed. These techniques reorganize the data dependency patterns to improve the processor utilization. The model problem finds the time-accurate solution to a parabolic partial differential equation discretized in space and implicitly marched forward in time. The algorithms are extensions of Jacobi and SOR. The extensions consist of iterating over a window of several timesteps, allowing efficient overlap of computation with communication. The methods increase the degree to which work can be performed while data are communicated between processors. The effect of the window size and of domain partitioning on the system performance is examined both by implementing the algorithm on a simulated multiprocessor system.
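
    A serial, hedged sketch of the windowed idea for an implicit (backward Euler) 1-D heat equation follows: Jacobi sweeps are applied across a window of future time levels in each pass, each level using the most recent iterate of the level before it. Grid size, window length, and sweep counts are illustrative, and the overlap of communication with computation on a multiprocessor is not modeled here.

        import numpy as np

        def windowed_jacobi_heat(u0, r, n_steps, window=3, sweeps_per_pass=2, passes=40):
            levels = [u0.copy()]
            for start in range(0, n_steps, window):
                w = min(window, n_steps - start)
                guesses = [levels[-1].copy() for _ in range(w)]   # window of future levels
                for _ in range(passes):
                    for k in range(w):
                        # Right-hand side is the latest iterate of the previous time level.
                        rhs = levels[-1] if k == 0 else guesses[k - 1]
                        u = guesses[k]
                        for _ in range(sweeps_per_pass):
                            new = u.copy()
                            # Jacobi sweep for (1 + 2r) u_i - r (u_{i+1} + u_{i-1}) = rhs_i
                            new[1:-1] = (rhs[1:-1] + r * (u[2:] + u[:-2])) / (1.0 + 2.0 * r)
                            u = new
                        guesses[k] = u
                levels.extend(guesses)
            return levels

        u0 = np.sin(np.pi * np.linspace(0.0, 1.0, 21))   # fixed (zero) ends
        out = windowed_jacobi_heat(u0, r=0.5, n_steps=6)
        print(out[-1].max())    # amplitude decays toward zero, as expected for heat flow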

  14. Towards developing robust algorithms for solving partial differential equations on MIMD machines

    NASA Technical Reports Server (NTRS)

    Saltz, J. H.; Naik, V. K.

    1985-01-01

    Methods for efficient computation of numerical algorithms on a wide variety of MIMD machines are proposed. These techniques reorganize the data dependency patterns to improve the processor utilization. The model problem finds the time-accurate solution to a parabolic partial differential equation discretized in space and implicitly marched forward in time. The algorithms are extensions of Jacobi and SOR. The extensions consist of iterating over a window of several timesteps, allowing efficient overlap of computation with communication. The methods increase the degree to which work can be performed while data are communicated between processors. The effect of the window size and of domain partitioning on the system performance is examined both by implementing the algorithm on a simulated multiprocessor system.

  15. Advances in biosensor development for the screening of antibiotic residues in food products of animal origin - A comprehensive review.

    PubMed

    Gaudin, Valérie

    2017-04-15

    Antibiotic residues may be found in food of animal origin, since veterinary drugs are used for preventive and curative purposes to treat animals. The control of veterinary drug residues in food is necessary to ensure consumer safety. Screening methods are the first step in the control of antibiotic residues in food of animal origin. Conventional screening methods are based on different technologies: microbiological, immunological or physico-chemical methods (e.g. thin-layer chromatography, HPLC, LC-MS/MS). Screening methods should be simple, quick, inexpensive and specific, with low detection limits and high sample throughput. Biosensors can meet some of these requirements, and their development for the screening of antibiotic residues has therefore been increasing since the 1980s. The present review provides extensive and up-to-date findings on biosensors for the screening of antibiotic residues in food products of animal origin. Biosensors consist of a bioreceptor and a transducer. Although antibodies were the first bioreceptors used for the detection of antibiotic residues, new kinds of bioreceptors (enzymes, aptamers, MIPs) are increasingly being developed; their advantages and drawbacks are discussed in this review. The different categories of transducers (electrochemical, mass-based, optical and thermal) and their potential applications for the screening of antibiotic residues in food are presented, and the advantages and drawbacks of the different types of transducers are discussed. Lastly, the outlook for and future development of biosensors for the control of antibiotic residues in food are highlighted.

  16. The Development of a Parameterized Scatter Removal Algorithm for Nuclear Materials Identification System Imaging

    SciTech Connect

    Grogan, Brandon Robert

    2010-03-01

    This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using
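
    The PSRA itself relies on Monte Carlo-derived, geometry-specific point scatter functions; the toy sketch below only illustrates the general idea of a parameterized (Gaussian) scatter model being estimated and subtracted from a 1-D measurement. Every number, and the one-dimensional convolution model, is an invented stand-in rather than the NMIS correction.

        import numpy as np

        def gaussian_pscf(x, amplitude, sigma):
            return amplitude * np.exp(-0.5 * (x / sigma) ** 2)

        def remove_scatter(measured, amplitude, sigma, iterations=3):
            x = np.arange(-20, 21)
            kernel = gaussian_pscf(x, amplitude, sigma)
            corrected = measured.copy()
            for _ in range(iterations):
                # Estimate the scattered contribution from the current corrected signal,
                # then subtract it from the measurement (a simple fixed-point refinement).
                scatter = np.convolve(corrected, kernel, mode="same")
                corrected = measured - scatter
            return corrected

        pixels = np.arange(128)
        true = np.where((pixels > 50) & (pixels < 78), 0.3, 1.0)   # attenuating object
        scatter = np.convolve(true, gaussian_pscf(np.arange(-20, 21), 0.02, 6.0), mode="same")
        measured = true + scatter
        print("raw error:", np.abs(measured - true).max(),
              "corrected error:", np.abs(remove_scatter(measured, 0.02, 6.0) - true).max())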

  17. THE DEVELOPMENT OF A PARAMETERIZED SCATTER REMOVAL ALGORITHM FOR NUCLEAR MATERIALS IDENTIFICATION SYSTEM IMAGING

    SciTech Connect

    Grogan, Brandon R

    2010-05-01

    This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the

  18. Development of a Near Real-Time Hail Damage Swath Identification Algorithm for Vegetation

    NASA Technical Reports Server (NTRS)

    Bell, Jordan R.; Molthan, Andrew L.; Schultz, Kori A.; McGrath, Kevin M.; Burks, Jason E.

    2015-01-01

    Every year in the Midwest and Great Plains, widespread greenness forms in conjunction with the latter part of the spring-summer growing season. This prevalent greenness results from the high concentration of agricultural areas whose crops reach maturity before the fall harvest. This time of year also coincides with an enhanced hail frequency for the Great Plains (Cintineo et al. 2012). These severe thunderstorms can bring damaging winds and large hail that result in damage to the surface vegetation. The spatial extent of the damage can be a relatively small concentrated area or a vast swath that is visible from space. These large areas of damage have been well documented over the years. In the late 1960s, aerial photography was used to evaluate crop damage caused by hail. As satellite remote sensing technology has evolved, the identification of these hail damage streaks has increased, and satellites have made it possible to view the streaks in additional spectral bands. Parker et al. (2005) documented two streaks in South Dakota using the Moderate Resolution Imaging Spectroradiometer (MODIS), noting the potential impact that these streaks had on surface temperature and the associated surface fluxes. Gallo et al. (2012), also using MODIS, examined the correlation between radar signatures and ground observations from storms that produced a hail damage swath in central Iowa. Finally, Molthan et al. (2013) identified hail damage streaks in MODIS, Landsat-7, and SPOT observations of different resolutions for the development of potential near-real-time applications. The manual analysis of hail damage streaks in satellite imagery is both tedious and time consuming, and may be inconsistent from event to event. This study focuses on the development of an objective and automatic algorithm to detect these areas of damage in a more efficient and timely manner. This study utilizes the

  19. Development of a short form and scoring algorithm from the validated actionable bladder symptom screening tool

    PubMed Central

    2013-01-01

    Background The majority of multiple sclerosis (MS) patients develop some form of lower urinary tract dysfunction, usually as a result of neurogenic detrusor overactivity (NDO). Patients identify urinary incontinence as one of the worst aspects of this disease. Despite the high prevalence of NDO, urological evaluation and treatment are significantly under-accessed in this population. The objectives of this study were: 1) to adapt the previously validated Actionable Bladder Symptom Screening Tool (ABSST) into a clinically meaningful short form that can be applied quickly and easily in a clinical setting; and 2) to develop a scoring algorithm that would be interpretable in terms of referring/considering precise diagnosis and treatment. Methods A US-based, non-randomized, multi-center, stand-alone observational study was conducted to assess the psychometric properties of the ABSST among patients who have MS with and without NDO. Mixed psychometric methods (e.g., classical statistics (Psychometric theory (3rd ed.). New York: McGraw-Hill; 1994) and item response methods (Applying the Rasch Model: Fundamental Measurement in the Human Sciences. New Jersey: Lawrence Earlbaum Associates; 2001)) were used to evaluate the predictive and clinical validity of the shortened form. The latter included clinicians flagging clinically meaningful items and associated response options which would indicate the need for further evaluation or treatment. Results A total of 151 patients, all with MS and with and without NDO, were recruited by 28 clinicians in various US geographical locations. Approximately 41% of patients reported a history of or currently having urinary incontinence and/or urinary urgency. The prediction model across the entire range of classification thresholds was evaluated, plotting the true positive identification rate against the false positive rate (1-Specificity) for various cut scores. In this study, the cut-point or total score of greater than or equal to 6 had
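
    The threshold evaluation described above (true positive rate against false positive rate for various cut scores) can be illustrated with a short sketch; the scores and reference labels below are invented, not study data.

        import numpy as np

        def roc_points(scores, labels):
            points = []
            for cut in sorted(set(scores)):
                flag = scores >= cut
                tpr = np.mean(flag[labels == 1])         # sensitivity
                fpr = np.mean(flag[labels == 0])         # 1 - specificity
                points.append((cut, tpr, fpr))
            return points

        scores = np.array([2, 5, 6, 8, 3, 7, 9, 1, 6, 4])
        labels = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])   # 1 = needs further evaluation
        for cut, tpr, fpr in roc_points(scores, labels):
            print(f"cut >= {cut}: TPR={tpr:.2f}, FPR={fpr:.2f}")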

  20. Development of Molecular Markers for Determining Continental Origin of Wood from White Oaks (Quercus L. sect. Quercus)

    PubMed Central

    Schroeder, Hilke; Cronn, Richard; Yanbaev, Yulai; Jennings, Tara; Mader, Malte; Degen, Bernd; Kersten, Birgit

    2016-01-01

    To detect and avoid illegal logging of valuable tree species, identification methods for the origin of timber are necessary. We used next-generation sequencing to identify chloroplast genome regions that differentiate the origin of white oaks from three continents: Asia, Europe, and North America. By using the chloroplast genome of Asian Q. mongolica as a reference, we identified 861 variant sites (672 single nucleotide polymorphisms (SNPs); 189 insertion/deletion (indel) polymorphisms) from representative species of three continents (Q. mongolica from Asia; Q. petraea and Q. robur from Europe; Q. alba from North America), and we identified additional chloroplast polymorphisms in pools of 20 individuals each from Q. mongolica (789 variant sites) and Q. robur (346 variant sites). Genome sequences were screened for indels to develop markers that identify continental origin of oak species, and that can be easily evaluated using a variety of detection methods. We identified five indels and one SNP that reliably identify continent-of-origin, based on evaluations of up to 1078 individuals representing 13 white oak species and three continents. Due to the size of length polymorphisms revealed, this marker set can be visualized using capillary electrophoresis or high resolution gel (acrylamide or agarose) electrophoresis. With these markers, we provide the wood trading market with an instrument to comply with the U.S. and European laws that require timber companies to avoid the trade of illegally harvested timber. PMID:27352242

  1. Development of Molecular Markers for Determining Continental Origin of Wood from White Oaks (Quercus L. sect. Quercus).

    PubMed

    Schroeder, Hilke; Cronn, Richard; Yanbaev, Yulai; Jennings, Tara; Mader, Malte; Degen, Bernd; Kersten, Birgit

    2016-01-01

    To detect and avoid illegal logging of valuable tree species, identification methods for the origin of timber are necessary. We used next-generation sequencing to identify chloroplast genome regions that differentiate the origin of white oaks from three continents: Asia, Europe, and North America. By using the chloroplast genome of Asian Q. mongolica as a reference, we identified 861 variant sites (672 single nucleotide polymorphisms (SNPs); 189 insertion/deletion (indel) polymorphisms) from representative species of three continents (Q. mongolica from Asia; Q. petraea and Q. robur from Europe; Q. alba from North America), and we identified additional chloroplast polymorphisms in pools of 20 individuals each from Q. mongolica (789 variant sites) and Q. robur (346 variant sites). Genome sequences were screened for indels to develop markers that identify continental origin of oak species, and that can be easily evaluated using a variety of detection methods. We identified five indels and one SNP that reliably identify continent-of-origin, based on evaluations of up to 1078 individuals representing 13 white oak species and three continents. Due to the size of length polymorphisms revealed, this marker set can be visualized using capillary electrophoresis or high resolution gel (acrylamide or agarose) electrophoresis. With these markers, we provide the wood trading market with an instrument to comply with the U.S. and European laws that require timber companies to avoid the trade of illegally harvested timber.

  2. Vibrational self-consistent field calculations for spectroscopy of biological molecules: new algorithmic developments and applications.

    PubMed

    Roy, Tapta Kanchan; Gerber, R Benny

    2013-06-28

    This review describes the vibrational self-consistent field (VSCF) method and its other variants for computing anharmonic vibrational spectroscopy of biological molecules. The superiority and limitations of this algorithm are discussed with examples. The spectroscopic accuracy of the VSCF method is compared with experimental results and other available state-of-the-art algorithms for various biologically important systems. For large biological molecules with many vibrational modes, the scaling of computational effort is investigated. The accuracy of the vibrational spectra of biological molecules using the VSCF approach for different electronic structure methods is also assessed. Finally, a few open problems and challenges in this field are discussed.

  3. Development of homotopy algorithms for fixed-order mixed H2/H(infinity) controller synthesis

    NASA Technical Reports Server (NTRS)

    Whorton, M.; Buschek, H.; Calise, A. J.

    1994-01-01

    A major difficulty associated with H-infinity and mu-synthesis methods is the order of the resulting compensator. Whereas model and/or controller reduction techniques are sometimes applied, performance and robustness properties are not preserved. By directly constraining compensator order during the optimization process, these properties are better preserved, albeit at the expense of computational complexity. This paper presents a novel homotopy algorithm to synthesize fixed-order mixed H2/H-infinity compensators. Numerical results are presented for a four-disk flexible structure to evaluate the efficiency of the algorithm.

  4. The origin of spontaneous activity in developing networks of the vertebrate nervous system.

    PubMed

    O'Donovan, M J

    1999-02-01

    Spontaneous neuronal activity has been detected in many parts of the developing vertebrate nervous system. Recent studies suggest that this activity depends on properties that are probably shared by all developing networks. Of particular importance is the high excitability of recurrently connected, developing networks and the presence of activity-induced transient depression of network excitability. In the spinal cord, it has been proposed that the interaction of these properties gives rise to spontaneous, periodic activity.

  5. Litter of origin effects on gilt development in a commercial setting

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The preweaning litter environment of gilts can affect subsequent development. In a recent experiment designed to test the effects of dietary ME and lysine on gilt development, individual birth weights, immunocrits (related to colostrum intake), sow parity, number weaned, individual weaning weights, ...

  6. [Historical review of the treatment of fractures. Contribution of the Belgian surgery to the origin and development of osteosynthesis].

    PubMed

    Andrianne, Y; Hinsenkamp, M

    2011-01-01

    The word osteosynthesis was proposed by A. Lambotte in 1904. His definition, given in 1908, is still valid today: "Osteosynthesis is the artificial contention of the bone fragments of fractures, by special devices acting directly on bones, exposed or not, with the aim to strongly fix them in their original position". The authors review the methods of fracture contention used before the invention of osteosynthesis and the later developments of bone fixation techniques. They focus in particular on the durable innovations of various pioneers, including A. Lambotte, R. Danis, R. Hoffmann and G. Küntscher. The School of Brussels was closely involved in the development and conceptualisation of osteosynthesis.

  7. Structured interview for mild traumatic brain injury after military blast: inter-rater agreement and development of diagnostic algorithm.

    PubMed

    Walker, William C; Cifu, David X; Hudak, Anne M; Goldberg, Gary; Kunz, Richard D; Sima, Adam P

    2015-04-01

    The existing gold standard for diagnosing a suspected previous mild traumatic brain injury (mTBI) is the clinical interview, but it is prone to bias, especially for parsing the physical versus psychological effects of traumatic combat events, and its inter-rater reliability is unknown. Several standardized TBI interview instruments have been developed for research use but have similar limitations. Therefore, we developed the Virginia Commonwealth University (VCU) retrospective concussion diagnostic interview, blast version (VCU rCDI-B), and undertook this cross-sectional study aiming to 1) measure agreement among clinicians' mTBI diagnosis ratings, 2) develop a fully structured diagnostic algorithm using clinician consensus, and 3) assess the accuracy of this algorithm in a separate sample. Two samples (n = 66; n = 37) of individuals within 2 years of experiencing blast effects during military deployment underwent a semistructured interview regarding their worst blast experience. Five highly trained TBI physicians independently reviewed and interpreted the interview content and gave blinded ratings of whether or not the experience was probably an mTBI. Paired inter-rater reliability was extremely variable, with kappa ranging from 0.194 to 0.825. In sample 1, the physician consensus prevalence of probable mTBI was 84%. Using these diagnosis ratings, an algorithm was developed and refined from the fully structured portion of the VCU rCDI-B. The final algorithm considered certain symptom patterns more specific for mTBI than others. For example, an isolated symptom of "saw stars" was deemed sufficient to indicate mTBI, whereas an isolated symptom of "dazed" was not. The accuracy of this algorithm, when applied against the actual physician consensus in sample 2, was almost perfect (correctly classified = 97%; Cohen's kappa = 0.91). In conclusion, we found that highly trained clinicians often disagree on historical blast-related mTBI determinations. A fully structured interview
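
    As a purely illustrative sketch of the kind of fully structured, symptom-pattern rule described in this abstract, the following Python fragment encodes the idea that certain symptoms (e.g., "saw stars") are treated as sufficient for a probable mTBI call while others (e.g., "dazed") are not. The symptom sets and the two-symptom threshold are assumptions for illustration, not the published VCU rCDI-B rules.

        # Hypothetical rule-based sketch; symptom names and thresholds are illustrative.
        SPECIFIC_SYMPTOMS = {"saw_stars", "loss_of_consciousness", "post_event_amnesia"}
        NONSPECIFIC_SYMPTOMS = {"dazed", "headache", "ringing_ears"}

        def probable_mtbi(reported):
            """Return True if the reported symptom set meets the illustrative rule."""
            reported = set(reported)
            if reported & SPECIFIC_SYMPTOMS:            # one specific symptom suffices
                return True
            return len(reported & NONSPECIFIC_SYMPTOMS) >= 2   # else require a combination

        print(probable_mtbi({"saw_stars"}))             # True
        print(probable_mtbi({"dazed"}))                 # False
        print(probable_mtbi({"dazed", "headache"}))     # True under the assumed threshold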

  8. Developments of aerosol retrieval algorithm for Geostationary Environmental Monitoring Spectrometer (GEMS) and the retrieval accuracy test

    NASA Astrophysics Data System (ADS)

    KIM, M.; Kim, J.; Jeong, U.; Ahn, C.; Bhartia, P. K.; Torres, O.

    2013-12-01

    A scanning UV-Visible spectrometer, the GEMS (Geostationary Environment Monitoring Spectrometer) onboard the GEO-KOMPSAT2B (Geostationary Korea Multi-Purpose Satellite), is planned to be launched into geostationary orbit in 2018. The GEMS employs hyper-spectral imaging with 0.6 nm resolution to observe solar backscatter radiation in the UV and visible range. In the UV range, the low surface contribution to the backscattered radiation and the strong interaction between aerosol absorption and molecular scattering are advantageous for retrieving aerosol optical properties such as aerosol optical depth (AOD) and single scattering albedo (SSA). Taking advantage of this, the OMI UV aerosol algorithm has provided information on absorbing aerosol (Torres et al., 2007; Ahn et al., 2008). This study presents a UV-VIS algorithm to retrieve AOD and SSA from GEMS. The algorithm is based on a general inversion method, which uses a pre-calculated look-up table with assumed aerosol properties and measurement conditions. To assess the retrieval accuracy, the error of the look-up table method introduced by the interpolation of pre-calculated radiances is estimated using a reference dataset, and the uncertainties associated with aerosol type and height are evaluated. The GEMS aerosol algorithm is also tested with measured normalized radiance from OMI, a provisional data set for GEMS measurement, and the results are compared with values from AERONET measurements over Asia. Additionally, a method for simultaneous retrieval of AOD and aerosol height is discussed.
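
    The retrieval described above is a look-up-table (LUT) inversion: pre-computed radiances over a grid of assumed aerosol properties are interpolated and compared with the measured radiance to pick the best-fitting AOD and SSA. The sketch below shows that structure in Python with a synthetic placeholder table; the grid nodes, band count, and brute-force search are assumptions for illustration, not the GEMS implementation.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Placeholder LUT: top-of-atmosphere radiances on a coarse (AOD, SSA, band) grid.
        aod_grid = np.linspace(0.0, 2.0, 9)
        ssa_grid = np.linspace(0.80, 1.00, 5)
        n_bands = 3
        lut = np.random.default_rng(0).random((aod_grid.size, ssa_grid.size, n_bands))

        def retrieve(measured, refine=10):
            """Return the (AOD, SSA) pair whose interpolated radiances best match 'measured'."""
            interp = RegularGridInterpolator((aod_grid, ssa_grid, np.arange(n_bands)), lut)
            aod_fine = np.linspace(aod_grid[0], aod_grid[-1], aod_grid.size * refine)
            ssa_fine = np.linspace(ssa_grid[0], ssa_grid[-1], ssa_grid.size * refine)
            best, best_err = None, np.inf
            for a in aod_fine:
                for s in ssa_fine:
                    pts = [(a, s, b) for b in range(n_bands)]
                    err = np.sum((interp(pts) - measured) ** 2)   # radiance misfit
                    if err < best_err:
                        best, best_err = (a, s), err
            return best

        print(retrieve(lut[3, 2, :]))   # a "measurement" drawn from the table itself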

  9. Algorithms for Developing Test Questions from Sentences in Instructional Materials. Interim Report, January-September 1977.

    ERIC Educational Resources Information Center

    Roid, Gale; Finn, Patrick

    The feasibility of generating multiple-choice test questions by transforming sentences from prose instructional materials was examined. A computer-based algorithm was used to analyze prose subject matter and to identify high-information words. Sentences containing selected words were then transformed into multiple-choice items by four writers who…

  10. Algorithms for Developing Test Questions from Sentences in Instructional Materials: An Extension of an Earlier Study.

    ERIC Educational Resources Information Center

    Roid, Gale H.; And Others

    An earlier study was extended and replicated to examine the feasibility of generating multiple-choice test questions by transforming sentences from prose instructional material. In the first study, a computer-based algorithm was used to analyze prose subject matter and to identify high-information words. Sentences containing selected words were…

  11. TrackNTrace: A simple and extendable open-source framework for developing single-molecule localization and tracking algorithms.

    PubMed

    Stein, Simon Christoph; Thiart, Jan

    2016-11-25

    Super-resolution localization microscopy and single particle tracking are important tools for fluorescence microscopy. Both rely on detecting, and tracking, a large number of fluorescent markers using increasingly sophisticated computer algorithms. However, this rise in complexity makes it difficult to fine-tune parameters and detect inconsistencies, improve existing routines, or develop new approaches founded on established principles. We present an open-source MATLAB framework for single molecule localization, tracking and super-resolution applications. The purpose of this software is to facilitate the development, distribution, and comparison of methods in the community by providing a unique, easily extendable plugin-based system and combining it with a novel visualization system. This graphical interface incorporates possibilities for quick inspection of localization and tracking results, giving direct feedback of the quality achieved with the chosen algorithms and parameter values, as well as possible sources for errors. This is of great importance in practical applications and even more so when developing new techniques. The plugin system greatly simplifies the development of new methods as well as adapting and tailoring routines towards any research problem's individual requirements. We demonstrate its high speed and accuracy with plugins implementing state-of-the-art algorithms and show two biological applications.

  12. TrackNTrace: A simple and extendable open-source framework for developing single-molecule localization and tracking algorithms

    PubMed Central

    Stein, Simon Christoph; Thiart, Jan

    2016-01-01

    Super-resolution localization microscopy and single particle tracking are important tools for fluorescence microscopy. Both rely on detecting, and tracking, a large number of fluorescent markers using increasingly sophisticated computer algorithms. However, this rise in complexity makes it difficult to fine-tune parameters and detect inconsistencies, improve existing routines, or develop new approaches founded on established principles. We present an open-source MATLAB framework for single molecule localization, tracking and super-resolution applications. The purpose of this software is to facilitate the development, distribution, and comparison of methods in the community by providing a unique, easily extendable plugin-based system and combining it with a novel visualization system. This graphical interface incorporates possibilities for quick inspection of localization and tracking results, giving direct feedback of the quality achieved with the chosen algorithms and parameter values, as well as possible sources for errors. This is of great importance in practical applications and even more so when developing new techniques. The plugin system greatly simplifies the development of new methods as well as adapting and tailoring routines towards any research problem’s individual requirements. We demonstrate its high speed and accuracy with plugins implementing state-of-the-art algorithms and show two biological applications. PMID:27885259

  13. Development and Evaluation of Algorithms to Improve Small- and Medium-Size Commercial Building Operations

    SciTech Connect

    Kim, Woohyun; Katipamula, Srinivas; Lutes, Robert G.; Underhill, Ronald M.

    2016-10-31

    Small- and medium-sized (<100,000 sf) commercial buildings (SMBs) represent over 95% of the U.S. commercial building stock and consume over 60% of total site energy consumption. Many of these buildings use rudimentary controls that are mostly manual, with limited scheduling capability and no monitoring or failure management. Therefore, many of these buildings are operated inefficiently and consume excess energy. SMBs typically utilize packaged rooftop units (RTUs) that are controlled by an individual thermostat. There is increased urgency to improve the operating efficiency of the existing commercial building stock in the U.S. for many reasons, chief among them mitigating climate change impacts. Studies have shown that managing set points and schedules of the RTUs can result in up to 20% energy and cost savings. Another problem associated with RTUs is short-cycling, where an RTU goes through ON and OFF cycles too frequently. Excessive cycling can cause increased wear and premature failure of the compressor or its components, and it can decrease average efficiency significantly (by up to 10%) even if there are no physical failures in the equipment. Also, SMBs use time-of-day scheduling to start the RTUs before the building is occupied and shut them off when it is unoccupied. Ensuring correct use of the zone set points and eliminating frequent cycling of RTUs, thereby leading to persistent building operations, can significantly increase the operational efficiency of SMBs. A growing trend is to use low-cost control infrastructure that can enable scalable and cost-effective intelligent building operations. This report describes three algorithms, for zone set point temperature detection, RTU cycling rate detection, and occupancy schedule detection, that can be deployed on this low-cost infrastructure. These algorithms require only the zone temperature data for detection. The algorithms have been tested and validated using
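
    A minimal sketch of one of the three detectors named above, cycling-rate detection from zone temperature alone, is given below (an illustration of the idea, not the reported implementation). Cooling cycles appear as a sawtooth in zone temperature, so counting local maxima per hour gives a rough cycles-per-hour estimate.

        import numpy as np

        def cycles_per_hour(zone_temp, minutes_per_sample=1.0):
            """Count local maxima of the zone temperature as a proxy for RTU cycles."""
            t = np.asarray(zone_temp, dtype=float)
            peaks = (t[1:-1] > t[:-2]) & (t[1:-1] >= t[2:])
            hours = len(t) * minutes_per_sample / 60.0
            return peaks.sum() / hours

        # Synthetic one-hour trace: a 72 F zone with a roughly 10-minute sawtooth.
        minutes = np.arange(60)
        temp = 72 + 0.5 * ((minutes % 10) / 10.0)
        print(cycles_per_hour(temp))   # 5.0 cycles/hour for this synthetic trace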

  14. Development of a Satellite-based evapotranspiration algorithm: A case study for Two Deciduous Forest Sites

    NASA Astrophysics Data System (ADS)

    Elmasri, B.; Rahman, A. F.

    2011-12-01

    We introduce a new methodology to estimate 8-day average daily evapotranspiration (ET) using both routinely available data and the Penman-Monteith (P-M) equation. Our algorithm considers the environmental constraints on surface resistance and ET by (1) including vapor pressure deficit (VPD), incoming solar radiation, soil moisture, and temperature constraints on stomatal conductance; (2) using leaf area index (LAI) to scale from leaf to canopy conductance; and (3) calculating canopy resistance as a function of environmental variables such as net radiation, a precipitation index, and VPD. Remote sensing data from the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) were used to estimate ET: MODIS land surface temperature (LST) to estimate VPD, AMSR-E soil moisture to estimate canopy conductance, and MODIS surface emissivity and albedo to estimate shortwave and net radiation. The algorithm was evaluated using ET observations from two AmeriFlux eddy covariance flux towers located at the Morgan Monroe State Forest (MMSF) in Indiana and the Harvard Forest (HarvF) in Massachusetts for the period 2003-2008. ET estimates from our algorithm were compared to the flux observations. Results indicated a root mean square error (RMSE) of the 8-day average ET of 0.57 mm for the HarvF and 0.47 mm for the MMSF. A significant correlation was found between the estimated and observed 8-day average ET, with r2 of 0.84 for the HarvF and 0.88 for the MMSF. Using tower meteorological data, the r2 increased slightly to 0.90 for the MMSF. The algorithms for VPD and radiation were tested against flux observations and showed a strong correlation, with r2 ranging from 0.68 to 0.82. Sensitivity analysis revealed that the modeled ET predictions are highly sensitive to changes in the canopy resistance values, so accurate estimation of canopy resistance is essential for improving ET predictions. Our algorithm
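
    For reference, the Penman-Monteith combination equation at the heart of the algorithm can be written as LE = (Δ(Rn − G) + ρ·cp·VPD/ra) / (Δ + γ(1 + rs/ra)). The short Python sketch below evaluates this form with illustrative mid-day values; the constants and the canopy-resistance value are placeholders and do not reproduce the paper's LAI- and VPD-constrained resistance parameterisation.

        def penman_monteith_le(rn, g, vpd, delta, gamma, rho_cp, ra, rs):
            """Latent heat flux (W m-2) from the Penman-Monteith combination equation."""
            return (delta * (rn - g) + rho_cp * vpd / ra) / (delta + gamma * (1.0 + rs / ra))

        le = penman_monteith_le(
            rn=450.0,      # net radiation, W m-2
            g=45.0,        # soil heat flux, W m-2
            vpd=1.2e3,     # vapour pressure deficit, Pa
            delta=145.0,   # slope of the saturation vapour pressure curve, Pa K-1
            gamma=66.0,    # psychrometric constant, Pa K-1
            rho_cp=1.2e3,  # air density times specific heat, J m-3 K-1
            ra=30.0,       # aerodynamic resistance, s m-1
            rs=100.0,      # canopy (surface) resistance, s m-1
        )
        lambda_v = 2.45e6  # latent heat of vaporisation, J kg-1
        print(round(le, 1), "W m-2 ~", round(le / lambda_v * 3600, 2), "mm/h of ET")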

  15. Common Origins of Diverse Misconceptions: Cognitive Principles and the Development of Biology Thinking

    PubMed Central

    Coley, John D.; Tanner, Kimberly D.

    2012-01-01

    Many ideas in the biological sciences seem especially difficult to understand, learn, and teach successfully. Our goal in this feature is to explore how these difficulties may stem not from the complexity or opacity of the concepts themselves, but from the fact that they may clash with informal, intuitive, and deeply held ways of understanding the world that have been studied for decades by psychologists. We give a brief overview of the field of developmental cognitive psychology. Then, in each of the following sections, we present a number of common challenges faced by students in the biological sciences. These may be in the form of misconceptions, biases, or simply concepts that are difficult to learn and teach, and they occur at all levels of biological analysis (molecular, cellular, organismal, population, and ecosystem). We then introduce the notion of a cognitive construal and discuss specific examples of how these cognitive principles may explain what makes some misconceptions so alluring and some biological concepts so challenging for undergraduates. We will argue that seemingly unrelated misconceptions may have common origins in a single underlying cognitive construal. These ideas emerge from our own ongoing cross-disciplinary conversation, and we think that expanding this conversation to include other biological scientists and educators, as well as other cognitive scientists, could have significant utility in improving biology teaching and learning. PMID:22949417

  16. Benign paroxysmal positioning vertigo: classic descriptions, origins of the provocative positioning technique, and conceptual developments.

    PubMed

    Lanska, D J; Remler, B

    1997-05-01

    The original description of benign paroxysmal positioning vertigo (BPPV) has been variously attributed to Bárány, Adler, and others. In addition, the proper eponymic designation for the provocative positioning test used to diagnose BPPV has been unclear, because authors use a variety of different terms, including Bárány, Nylén-Bárány, Nylén, Hallpike, Hallpike-Dix, and Dix-Hallpike to refer to the procedure in current use. Based on a review of the extant medical literature, Bárány was the first to describe the condition in detail, and Dix and Hallpike were the first to clearly describe both the currently used provocative positioning technique and the essential clinical manifestations of benign paroxysmal positioning vertigo elicited by that technique. Nevertheless, despite their important contributions, neither Bárány nor Dix and Hallpike understood the pathophysiology of BPPV nor did they appreciate that the positioning techniques they used actually demonstrated pathology in the semicircular canals rather than the utricle. The modern understanding of the pathophysiology of BPPV began with Schuknecht's proposal that the dysfunction resulted from the gravity-dependent movement of loose or fixed dense material within the posterior semicircular canal ("cupulolithiasis"). Although Schuknecht's formulations were not consistent with all clinical features of the disease, they led to the modern "canalolithiasis theory" and highly effective canalith repositioning or "liberatory" maneuvers for BPPV.

  17. Human development x: Explanation of macroevolution--top-down evolution materializes consciousness. The origin of metamorphosis.

    PubMed

    Hermansen, Tyge Dahl; Ventegodt, Søren; Merrick, Joav

    2006-12-15

    In this paper, we first give a short discussion of the macroevolution viewing life as information-directed, complex, dynamic systems. On this basis, we give our explanation of the origin of life and discuss the top-down evolution of molecules, proteins, and macroevolution. We discuss these subjects according to our new holistic biological paradigm. In view of this, we discuss the macroevolution of the organism, the species, the biosphere, and human society. After this, we discuss the shift in evolution from natural selection to a new proposed process of nature called the "metamorphous top-down" evolution. We discuss the capability of the evolutionary shift to govern some of the processes that lead to the formation of new species. We discuss the mechanisms we think are behind this proposed shift in evolution and conclude that this event is able to explain the huge biological diversity of nature in combination with evolutionary natural selection. We also discuss this event of nature as an isolated, but integrated, part of the universe. We propose the most important genetic and biochemical process that we think is behind the evolutionary shift as a complicated symbiosis of mechanisms leading to metamorphosis in all biological individuals, from bacteria to humans. The energetic superorbital that manifests the consciousness governs all these processes through quantum chemical activity. This is the key to evolutionary shift through the consciousness, and we propose to call this process "adult human metamorphosis".

  18. A novel hybrid classification model of genetic algorithms, modified k-Nearest Neighbor and developed backpropagation neural network.

    PubMed

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods for classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained by combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strengths of each algorithm and is an approach to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, feature ranking techniques such as Fisher's discriminant ratio and class separability criteria were first used to prioritize features. Second, the resulting arrays of top-ranked features were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on the optimum arrays of features selected by the genetic algorithm. The performance of the proposed model was compared with thirteen well-known classification models based on seven datasets. Furthermore, statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance than all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best ones reported as the state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that performance of the
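
    A compact illustration of the three-stage idea (rank features, evolve feature subsets with a genetic algorithm, score subsets with a k-Nearest Neighbor classifier) is sketched below. A plain scikit-learn k-NN stands in for the modified k-NN and backpropagation network of the paper, and the dataset, population size, and GA operators are arbitrary illustrative choices.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        X, y = make_classification(n_samples=200, n_features=20, n_informative=5, random_state=0)
        rng = np.random.default_rng(0)

        def fisher_ratio(X, y):
            """Per-feature Fisher discriminant ratio for a two-class problem."""
            m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
            v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
            return (m0 - m1) ** 2 / (v0 + v1 + 1e-12)

        def fitness(mask):
            """Cross-validated k-NN accuracy for the features selected by a binary mask."""
            if mask.sum() == 0:
                return 0.0
            clf = KNeighborsClassifier(n_neighbors=5)
            return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

        # Initial population biased toward the top-ranked features.
        order = np.argsort(fisher_ratio(X, y))[::-1]
        pop = np.zeros((10, X.shape[1]), dtype=int)
        for i in range(10):
            pop[i, order[:rng.integers(3, 8)]] = 1

        for generation in range(10):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[-4:]]              # keep the best masks
            children = []
            while len(children) < len(pop) - len(parents):
                a, b = parents[rng.integers(4)], parents[rng.integers(4)]
                cut = int(rng.integers(1, X.shape[1]))
                child = np.r_[a[:cut], b[cut:]]                 # one-point crossover
                child[rng.integers(X.shape[1])] ^= 1            # single-bit mutation
                children.append(child)
            pop = np.vstack([parents] + children)

        best = pop[np.argmax([fitness(m) for m in pop])]
        print("selected features:", np.flatnonzero(best), "accuracy:", round(fitness(best), 3))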

  19. A Novel Hybrid Classification Model of Genetic Algorithms, Modified k-Nearest Neighbor and Developed Backpropagation Neural Network

    PubMed Central

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods for classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained by combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strengths of each algorithm and is an approach to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, feature ranking techniques such as Fisher's discriminant ratio and class separability criteria were first used to prioritize features. Second, the resulting arrays of top-ranked features were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on the optimum arrays of features selected by the genetic algorithm. The performance of the proposed model was compared with thirteen well-known classification models based on seven datasets. Furthermore, statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance than all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best ones reported as the state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that performance of the

  20. TH-E-BRE-07: Development of Dose Calculation Error Predictors for a Widely Implemented Clinical Algorithm

    SciTech Connect

    Egan, A; Laub, W

    2014-06-15

    Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high-risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. And third, AAA overestimates dose in regions of very low density (< 0.2 g/cm³). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: The error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. The algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head and neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking this method can be developed into a clinical tool that may help estimate AAA dose calculation errors and indicate when it might be advisable to use Monte Carlo calculations.
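
    As a toy illustration of how one such predictor could be formed, the sketch below computes a crude low-density index: the fraction of dose-bearing voxels whose density falls below the 0.2 g/cm³ level at which the abstract says AAA overestimates dose. The grids are synthetic and the definition is an assumption for illustration; the published FSI, HSI, and LDI definitions are not reproduced here.

        import numpy as np

        def low_density_index(density, dose, dose_cutoff_fraction=0.1, density_cutoff=0.2):
            """Fraction of voxels receiving appreciable dose that are very low density."""
            dose_region = dose > dose_cutoff_fraction * dose.max()
            low_density = density < density_cutoff
            return np.count_nonzero(dose_region & low_density) / np.count_nonzero(dose_region)

        rng = np.random.default_rng(6)
        density = rng.uniform(0.05, 1.2, size=(20, 20, 20))   # g/cm3, synthetic patient grid
        dose = rng.uniform(0.0, 2.0, size=(20, 20, 20))       # Gy, synthetic dose grid
        print(round(low_density_index(density, dose), 3))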

  1. Development of micro-flow hydrothermal monitoring systems and their applications to the origin of life study on Earth.

    PubMed

    Kawamura, Kunio

    2011-01-01

    Extensive studies on thermophilic organisms have suggested that life emerged at hydrothermal systems on the primitive Earth; hydrothermal reactions are therefore very important to fields closely related to the study of the origin of life. Furthermore, the importance of hydrothermal and solvothermal systems is now recognized in both fundamental and practical areas. Here, our recent investigations are described concerning the development of real-time and in situ monitoring systems for hydrothermal reactions. The systems were primarily developed for the origin-of-life study, but they are also applicable to fundamental and practical areas. The present techniques are based on the concept that a sample solution injected into a narrow-tubing flow reactor held at high temperature is heated up within a very short time. This enables millisecond to second time-scale monitoring, in real time and/or in situ, at temperatures of up to 400°C. Using these techniques, a series of studies on the hydrothermal origin of life have been successfully carried out.

  2. Lightning Jump Algorithm Development for the GOES-R Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    Schultz, E.; Schultz, C.; Chronis, T.; Stough, S.; Carey, L.; Calhoun, K.; Ortega, K.; Stano, G.; Cecil, D.; Bateman, M.; Goodman, S.

    2014-01-01

    Current work on the lightning jump algorithm to be used in the GOES-R Geostationary Lightning Mapper (GLM) data stream is multifaceted due to the intricate interplay between the storm tracking, the GLM proxy data, and the performance of the lightning jump itself. This work outlines the progress of the last year, in which the analysis and performance of the lightning jump algorithm with automated storm tracking and GLM proxy data were assessed using over 700 storms from North Alabama. The cases analyzed coincide with previous semi-objective work performed using total Lightning Mapping Array (LMA) measurements in Schultz et al. (2011). Analysis shows that key components of the algorithm (flash rate and sigma thresholds) have the greatest influence on the performance of the algorithm when validating using severe storm reports. Automated objective analysis using the GLM proxy data has shown probability of detection (POD) values around 60% with false alarm rates (FAR) around 73% using methodology similar to Schultz et al. (2011). However, when applying verification methods similar to those employed by the National Weather Service, POD values increase slightly (69%) and FAR values decrease (63%). The relationship between storm tracking and the lightning jump has also been tested in a real-time framework at NSSL. This system includes fully automated tracking by radar alone, real-time LMA and radar observations, and the lightning jump. Results indicate that the POD is strong at 65%. However, the FAR is significantly higher than in Schultz et al. (2011) (50-80% depending on various tracking/lightning jump parameters) when using storm reports for verification. Given known issues with Storm Data, the performance of the real-time jump algorithm is also being tested with high density radar and surface observations from the NSSL Severe Hazards Analysis & Verification Experiment (SHAVE).
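
    The sigma-type jump test at the core of this work can be sketched as follows: compute the flash rate per tracking period, take its time derivative, and declare a jump when the latest increase exceeds a multiple (the sigma threshold) of the recent variability. The Python fragment below shows that structure with synthetic counts; the 2-sigma value, activity floor, and period length are the kinds of tunable parameters the abstract identifies, not the operational settings.

        import numpy as np

        def lightning_jump(flash_counts, period_min=2.0, sigma_thresh=2.0, min_rate=10.0):
            """Flag a jump when the latest flash-rate increase is anomalously large."""
            rates = np.asarray(flash_counts, dtype=float) / period_min   # flashes per minute
            if rates[-1] < min_rate:          # activity floor to suppress weak storms
                return False
            dfrdt = np.diff(rates)            # rate of change per period
            history, latest = dfrdt[:-1], dfrdt[-1]
            return latest > sigma_thresh * history.std(ddof=1)

        counts = [8, 10, 12, 14, 13, 15, 16, 40]   # synthetic 2-minute flash counts
        print(lightning_jump(counts))              # True: the last increase is anomalous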

  3. Passive microwave remote sensing of rainfall with SSM/I: Algorithm development and implementation

    NASA Technical Reports Server (NTRS)

    Ferriday, James G.; Avery, Susan K.

    1994-01-01

    A physically based algorithm sensitive to emission and scattering is used to estimate rainfall using the Special Sensor Microwave/Imager (SSM/I). The algorithm is derived from radiative transfer calculations through an atmospheric cloud model specifying vertical distributions of ice and liquid hydrometeors as a function of rain rate. The algorithm is structured in two parts: SSM/I brightness temperatures are screened to detect rainfall and are then used in rain-rate calculation. The screening process distinguishes between nonraining background conditions and emission and scattering associated with hydrometeors. Thermometric temperature and polarization thresholds determined from the radiative transfer calculations are used to detect rain, whereas the rain-rate calculation is based on a linear function fit to a linear combination of channels. Separate calculations for ocean and land account for different background conditions. The rain-rate calculation is constructed to respond to both emission and scattering, reduce extraneous atmospheric and surface effects, and correct for beam filling. The resulting SSM/I rain-rate estimates are compared to three precipitation radars as well as to a dynamically simulated rainfall event. Global estimates from the SSM/I algorithm are also compared to continental and shipboard measurements over a 4-month period. The algorithm is found to accurately describe both localized instantaneous rainfall events and global monthly patterns over both land and ocean. Over land the 4-month mean difference between SSM/I and the Global Precipitation Climatology Center continental rain gauge database is less than 10%. Over the ocean, the mean difference between SSM/I and the Legates and Willmott global shipboard rain gauge climatology is less than 20%.
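
    The two-stage structure described above (threshold-based rain screening followed by a rain rate computed as a linear function of a linear combination of channels) can be sketched as below. All thresholds and coefficients in this fragment are invented placeholders, not the published regression values.

        def rain_rate_ocean(tb19v, tb19h, tb22v, tb85v):
            """Return a rain rate in mm/h, or 0.0 if the scene screens as non-raining."""
            polarization_19 = tb19v - tb19h
            # Screening: a strongly polarized 19-GHz signal indicates a rain-free ocean
            # surface, while a depressed 85-GHz signal indicates scattering by precipitation ice.
            raining = (polarization_19 < 30.0) or (tb85v < 250.0)
            if not raining:
                return 0.0
            combo = 1.0 * tb19v + 0.5 * tb22v - 1.2 * tb85v   # hypothetical channel combination
            return max(0.05 * combo - 2.0, 0.0)               # hypothetical linear fit

        print(rain_rate_ocean(tb19v=230.0, tb19h=205.0, tb22v=250.0, tb85v=210.0))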

  4. Algorithm Development and Validation of CDOM Properties for Estuarine and Continental Shelf Waters Along the Northeastern U.S. Coast

    NASA Technical Reports Server (NTRS)

    Mannino, Antonio; Novak, Michael G.; Hooker, Stanford B.; Hyde, Kimberly; Aurin, Dick

    2014-01-01

    An extensive set of field measurements has been collected throughout the continental margin of the northeastern U.S. from 2004 to 2011 to develop and validate ocean color satellite algorithms for the retrieval of the absorption coefficient of chromophoric dissolved organic matter (aCDOM) and CDOM spectral slopes for the 275:295 nm and 300:600 nm spectral ranges (S275:295 and S300:600). Remote sensing reflectance (Rrs) measurements computed from in-water radiometry profiles, along with aCDOM(λ) data, are applied to develop several types of algorithms for the SeaWiFS and MODIS-Aqua ocean color satellite sensors, which involve least squares linear regression of aCDOM(λ) with (1) Rrs band ratios, (2) quasi-analytical algorithm-based (QAA-based) products of total absorption coefficients, (3) multiple Rrs bands within a multiple linear regression (MLR) analysis, and (4) the diffuse attenuation coefficient (Kd). The relative errors (mean absolute percent difference; MAPD) for the MLR retrievals of aCDOM(275), aCDOM(355), aCDOM(380), aCDOM(412) and aCDOM(443) for our study region range from 20.4% to 23.9% for MODIS-Aqua and 27.3% to 30% for SeaWiFS. Because of the narrower range of CDOM spectral slope values, the MAPD for the MLR S275:295 and QAA-based S300:600 algorithms are much lower: 9.9% and 8.3%, respectively, for SeaWiFS, and 8.7% and 6.3%, respectively, for MODIS. Seasonal and spatial MODIS-Aqua and SeaWiFS distributions of aCDOM, S275:295 and S300:600 processed with these algorithms are consistent with field measurements and with the processes that impact CDOM levels along the continental shelf of the northeastern U.S. Several satellite data processing factors correlate with higher uncertainty in satellite retrievals of aCDOM, S275:295 and S300:600 within the coastal ocean, including solar zenith angle, sensor viewing angle, and the atmospheric products applied for atmospheric corrections. Algorithms that include ultraviolet Rrs bands provide a better fit to field measurements than
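
    The MLR flavour of algorithm described above amounts to regressing aCDOM (or its logarithm) on several Rrs bands. The sketch below fits such a regression to synthetic data with NumPy least squares and reports a MAPD-style error; band choices, coefficients, and data are all illustrative stand-ins for the published fits.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100
        rrs = rng.uniform(0.001, 0.02, size=(n, 4))             # Rrs at four bands, sr-1
        true_coefs = np.array([-1.0, 50.0, -80.0, 30.0, 10.0])  # intercept + band weights
        log_acdom = true_coefs[0] + rrs @ true_coefs[1:] + rng.normal(0, 0.05, n)

        X = np.column_stack([np.ones(n), rrs])                  # design matrix with intercept
        coefs, *_ = np.linalg.lstsq(X, log_acdom, rcond=None)   # least squares MLR fit

        pred = np.exp(X @ coefs)
        obs = np.exp(log_acdom)
        mapd = np.mean(np.abs(pred - obs) / obs) * 100
        print("recovered coefficients:", np.round(coefs, 1))
        print("MAPD of the synthetic aCDOM retrievals: %.1f%%" % mapd)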

  5. Development of a voltage-dependent current noise algorithm for conductance-based stochastic modelling of auditory nerve fibres.

    PubMed

    Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J

    2016-12-01

    This study presents the development of an alternative noise current term and a novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus, which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour, or microscopic fluctuations, of the node of Ranvier's voltage-dependent sodium channels, whose fluctuation intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: they are independent of membrane voltage, and they are unable to inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining the low computational cost and ease of implementation of current noise models compared to other conductance- and Markovian-based stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics, such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter, and a decrease in latency with an increase in stimulus intensity.
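
    Conceptually, the proposed term replaces a fixed-intensity Gaussian noise current with one whose standard deviation depends on membrane voltage through the sodium channel population. The fragment below is only a generic stand-in for that idea (a sigmoid open-probability and a binomial channel-count fluctuation); it does not reproduce the fitted relationship or the Hodgkin-Huxley model of the paper.

        import numpy as np

        def noise_current(v_m, n_na=1000, i_single=1e-12, rng=np.random.default_rng(2)):
            """One sample of a voltage-dependent noise current (A); v_m in mV above rest."""
            p_open = 1.0 / (1.0 + np.exp(-(v_m - 25.0) / 5.0))          # generic activation sigmoid
            sigma = i_single * np.sqrt(n_na * p_open * (1.0 - p_open))  # binomial channel fluctuation
            return rng.normal(0.0, sigma)

        for v in (0.0, 20.0, 40.0):
            print(v, "mV ->", abs(noise_current(v)), "A")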

  6. Development of a Real-Time Pulse Processing Algorithm for TES-Based X-Ray Microcalorimeters

    NASA Technical Reports Server (NTRS)

    Tan, Hui; Hennig, Wolfgang; Warburton, William K.; Doriese, W. Bertrand; Kilbourne, Caroline A.

    2011-01-01

    We report here a real-time pulse processing algorithm for superconducting transition-edge sensor (TES) based x-ray microcalorimeters. TES-based microcalorimeters offer ultra-high energy resolutions, but the small volume of each pixel requires that large arrays of identical microcalorimeter pixels be built to achieve sufficient detection efficiency. That in turn requires that as much pulse processing as possible be performed at the front end of the readout electronics to avoid transferring large amounts of data to a host computer for post-processing. Therefore, a real-time pulse processing algorithm that not only can be implemented in the readout electronics but also achieves satisfactory energy resolution is desired. We have developed an algorithm that can be easily implemented in hardware. We then tested the algorithm offline using several data sets acquired with an 8 x 8 Goddard TES x-ray calorimeter array and a 2 x 16 NIST time-division SQUID multiplexer. We obtained an average energy resolution of close to 3.0 eV at 6 keV for the multiplexed pixels while preserving over 99% of the events in the data sets.

  7. The development of a line-scan imaging algorithm for the detection of fecal contamination on leafy greens

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung

    2013-05-01

    This paper reports the development of a multispectral algorithm, using a line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, using the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the less dilute fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the more dilute spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with a line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
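
    The detection rule itself is a simple band-ratio threshold, which the following sketch illustrates on a synthetic image pair; the threshold value is an assumed placeholder rather than the value used in the study.

        import numpy as np

        def contamination_mask(img_666, img_688, ratio_threshold=1.05, eps=1e-6):
            """Boolean mask where the 666 nm / 688 nm fluorescence ratio exceeds the threshold."""
            return img_666 / (img_688 + eps) > ratio_threshold

        # Synthetic 3x3 example with one "contaminated" pixel of elevated 666 nm signal.
        band_666 = np.array([[10.0, 11.0, 10.0],
                             [10.0, 30.0, 11.0],
                             [10.0, 10.0, 10.0]])
        band_688 = np.full((3, 3), 12.0)
        print(contamination_mask(band_666, band_688))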

  8. Evidence of Selection against Complex Mitotic-Origin Aneuploidy during Preimplantation Development

    PubMed Central

    McCoy, Rajiv C.; Demko, Zachary P.; Ryan, Allison; Banjevic, Milena; Hill, Matthew; Sigurjonsson, Styrmir; Rabinowitz, Matthew; Petrov, Dmitri A.

    2015-01-01

    Whole-chromosome imbalances affect over half of early human embryos and are the leading cause of pregnancy loss. While these errors frequently arise in oocyte meiosis, many such whole-chromosome abnormalities affecting cleavage-stage embryos are the result of chromosome missegregation occurring during the initial mitotic cell divisions. The first wave of zygotic genome activation at the 4–8 cell stage results in the arrest of a large proportion of embryos, the vast majority of which contain whole-chromosome abnormalities. Thus, the full spectrum of meiotic and mitotic errors can only be detected by sampling after the initial cell divisions, but prior to this selective filter. Here, we apply 24-chromosome preimplantation genetic screening (PGS) to 28,052 single-cell day-3 blastomere biopsies and 18,387 multi-cell day-5 trophectoderm biopsies from 6,366 in vitro fertilization (IVF) cycles. We precisely characterize the rates and patterns of whole-chromosome abnormalities at each developmental stage and distinguish errors of meiotic and mitotic origin without embryo disaggregation, based on informative chromosomal signatures. We show that mitotic errors frequently involve multiple chromosome losses that are not biased toward maternal or paternal homologs. This outcome is characteristic of spindle abnormalities and chaotic cell division detected in previous studies. In contrast to meiotic errors, our data also show that mitotic errors are not significantly associated with maternal age. PGS patients referred due to previous IVF failure had elevated rates of mitotic error, while patients referred due to recurrent pregnancy loss had elevated rates of meiotic error, controlling for maternal age. These results support the conclusion that mitotic error is the predominant mechanism contributing to pregnancy losses occurring prior to blastocyst formation. This high-resolution view of the full spectrum of whole-chromosome abnormalities affecting early embryos provides insight

  9. The early origins of food preferences: targeting the critical windows of development.

    PubMed

    Gugusheff, Jessica Rose; Ong, Zhi Yi; Muhlhausler, Beverly Sara

    2015-02-01

    The nutritional environment to which an individual is exposed during the perinatal period plays a crucial role in determining his or her future metabolic health outcomes. Studies in rodent models have demonstrated that excess maternal intake of high-fat and/or high-sugar "junk foods" during pregnancy and lactation can alter the development of the central reward pathway, particularly the opioid and dopamine systems, and program an increased preference for junk foods in the offspring. More recently, there have been attempts to define the critical windows of development during which the opioid and dopamine systems within the reward pathway are most susceptible to alteration and to determine whether it is possible to reverse these effects through nutritional interventions applied later in development. This review discusses the progress made to date in these areas, highlights the apparent importance of sex in determining these effects, and considers the potential implications of the findings from rodent models in the human context.

  10. The design and development of signal-processing algorithms for an airborne x-band Doppler weather radar

    NASA Technical Reports Server (NTRS)

    Nicholson, Shaun R.

    1994-01-01

    Improved measurements of precipitation will aid our understanding of the role of latent heating on global circulations. Spaceborne meteorological sensors such as the planned precipitation radar and microwave radiometers on the Tropical Rainfall Measurement Mission (TRMM) provide for the first time a comprehensive means of making these global measurements. Pre-TRMM activities include development of precipitation algorithms using existing satellite data, computer simulations, and measurements from limited aircraft campaigns. Since the TRMM radar will be the first spaceborne precipitation radar, there is limited experience with such measurements, and only recently have airborne radars become available that can attempt to address the issue of the limitations of a spaceborne radar. There are many questions regarding how much attenuation occurs in various cloud types and the effect of cloud vertical motions on the estimation of precipitation rates. The EDOP program being developed by NASA GSFC will provide data useful for testing both rain-retrieval algorithms and the importance of vertical motions on the rain measurements. The purpose of this report is to describe the design and development of real-time embedded parallel algorithms used by EDOP to extract reflectivity and Doppler products (velocity, spectrum width, and signal-to-noise ratio) as the first step in the aforementioned goals.
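
    The Doppler products named above (velocity, spectrum width, signal-to-noise ratio) are conventionally obtained from lag-0 and lag-1 autocorrelations of the I/Q time series, the so-called pulse-pair estimators. The sketch below shows that textbook formulation on synthetic data; it is not the EDOP embedded implementation, and the wavelength, PRT, and noise power are illustrative.

        import numpy as np

        def pulse_pair(iq, wavelength=0.032, prt=1.0e-3, noise_power=1e-3):
            """Pulse-pair velocity (m/s), spectrum width (m/s) and SNR (dB) for one range gate."""
            r0 = np.mean(np.abs(iq) ** 2)                     # lag-0 power
            r1 = np.mean(iq[1:] * np.conj(iq[:-1]))           # lag-1 autocorrelation
            velocity = wavelength / (4.0 * np.pi * prt) * np.angle(r1)
            ratio = np.abs(r1) / r0
            width = (wavelength / (2.0 * np.pi * prt * np.sqrt(2.0))) * np.sqrt(
                max(np.log(1.0 / ratio), 0.0))
            snr_db = 10.0 * np.log10(max(r0 - noise_power, 1e-12) / noise_power)
            return velocity, width, snr_db

        rng = np.random.default_rng(3)
        true_v, n = 5.0, 64                                   # m/s, number of pulses
        phase = np.exp(1j * 4.0 * np.pi * true_v * 1.0e-3 * np.arange(n) / 0.032)
        iq = phase + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
        print([round(float(x), 2) for x in pulse_pair(iq)])   # velocity close to 5 m/s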

  11. Accreditation in the USA: Origins, Developments and Future Prospects. Improving the Managerial Effectiveness of Higher Education.

    ERIC Educational Resources Information Center

    El-Khawas, Elaine

    This study analyzes the accreditation experience in the United States with special emphasis on the issues and decisions that surrounded the development of evaluation procedures and standards. Attention is given to the relationship between accrediting agencies and governmental agencies, the effect of accrediting requirements on the way that…

  12. The Origins and Development of the Diffusion of Innovations Paradigm as an Example of Scientific Growth.

    ERIC Educational Resources Information Center

    Valente, Thomas W.; Rogers, Everett M.

    1995-01-01

    Describes some of the history of rural sociological research on the diffusion of agricultural innovations, and shows how research followed (and deviated from) the Kuhnian concept of paradigm development. Examines the Iowa Hybrid Seed Corn Study which contributed to the rise of sociological diffusion research. (103 references) (AEF)

  13. Developing and Implementing an Interdisciplinary Origins Course at a State University

    ERIC Educational Resources Information Center

    Miller, Keith; Totten, Iris

    2009-01-01

    A truly interdisciplinary course was successfully developed and taught that presented an overview of the historical sciences with an emphasis on the nature of scientific inquiry and its relationship to other ways of knowing. The course included contributions from faculty in physics, biology, geology, philosophy, and English. (Contains 2 figures.)

  14. Identity Development Theories in Student Affairs: Origins, Current Status, and New Approaches

    ERIC Educational Resources Information Center

    Torres, Vasti; Jones, Susan R.; Renn, Kristen A.

    2009-01-01

    This article focuses on understanding how identity development is conceptualized in student affairs. The need to understand the person, context, and interactions between the two advances identity theories as relevant to student affairs practice. The more practitioners understand how students make meaning of their identities, the better they are…

  15. Engines of Economic Development: The Origins and Evolution of Iowa's Comprehensive Community Colleges

    ERIC Educational Resources Information Center

    Friedel, Janice

    2010-01-01

    One of the most remarkable developments in American education in the past half century has been the creation and rapid growth of the nation's community colleges. Built on the curricular pillars of vocational education, transfer programs, and community education, community colleges today are considered the "engines of statewide economic…

  16. Cultural or Political? Origin and Development of Educational Policy of the Tibetan "Neidi" Education in China

    ERIC Educational Resources Information Center

    Zhiyong, Zhu; Meng, Deng

    2015-01-01

    In order to cultivate talents and speed up development in Tibet, Tibetan "Neidi" Classes/Schools were established in other parts of China from the mid-1980s with the approval and support of the Chinese central government. The authors provide details about the 20-year existence of the "Neidi" Classes/Schools, including student…

  17. Geology of the Thaumasia region, Mars: Plateau development, valley origins, and magmatic evolution

    USGS Publications Warehouse

    Dohm, J.M.; Tanaka, K.L.

    1999-01-01

    rock occurs there. The overall volcanotectonic history at Thaumasia fits into a model for Tharsis as a whole in which long-lived Syria Planum-centered activity is ringed by a few significant, shorter-lived centers of activity like the Thaumasia plateau. Valley formation, like tectonism in the region, peaked during the Noachian and declined substantially during the Hesperian and Amazonian. Temporal and spatial associations of single erosional valleys and valley networks with volcanoes, rift systems, and large impact craters suggest that the majority of valleys formed by hydrothermal, deformational, and seismic-induced processes. The origin of scattered, mainly Noachian valleys is more conjectural; possible explanations include local precipitation, seismic disturbance of aquifers, or unrecognized intrusions. © 1999 Elsevier Science Ltd. All rights reserved.

  18. Atmospheric Correction, Vicarious Calibration and Development of Algorithms for Quantifying Cyanobacteria Blooms from Oceansat-1 OCM Satellite Data

    NASA Astrophysics Data System (ADS)

    Dash, P.; Walker, N. D.; Mishra, D. R.; Hu, C.; D'Sa, E. J.; Pinckney, J. L.

    2011-12-01

    Cyanobacteria represent a major harmful algal group in fresh to brackish water environments. Lac des Allemands, a freshwater lake located southwest of New Orleans, Louisiana on the upper end of the Barataria Estuary, provides a natural laboratory for remote characterization of cyanobacteria blooms because of their seasonal occurrence. The Ocean Colour Monitor (OCM) sensor provides radiance measurements similar to SeaWiFS but with higher spatial resolution. However, OCM does not have a standard atmospheric correction procedure, and it is difficult to find a detailed description of the entire atmospheric correction procedure for ocean (or lake) in one place. Atmospheric correction of satellite data over small lakes and estuaries (Case 2 waters) is also challenging due to difficulties in estimation of aerosol scattering accurately in these areas. Therefore, an atmospheric correction procedure was written for processing OCM data, based on the extensive work done for SeaWiFS. Since OCM-retrieved radiances were abnormally low in the blue wavelength region, a vicarious calibration procedure was also developed. Empirical inversion algorithms were developed to convert the OCM remote sensing reflectance (Rrs) at bands centered at 510.6 and 556.4 nm to concentrations of phycocyanin (PC), the primary cyanobacterial pigment. A holistic approach was followed to minimize the influence of other optically active constituents on the PC algorithm. Similarly, empirical algorithms to estimate chlorophyll a (Chl a) concentrations were developed using OCM bands centered at 556.4 and 669 nm. The best PC algorithm (R2=0.7450, p<0.0001, n=72) yielded a root mean square error (RMSE) of 36.92 μg/L with a relative RMSE of 10.27% (PC from 2.75-363.50 μg/L, n=48). The best algorithm for Chl a (R2=0.7510, p<0.0001, n=72) produced an RMSE of 31.19 μg/L with a relative RMSE of 16.56% (Chl a from 9.46-212.76 μg/L, n=48). While more field data are required to further validate the long

  19. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Conboy, B. (Technical Monitor)

    1999-01-01

    Significant accomplishments made during the present reporting period include: 1) Installed the spectral optimization algorithm in the SeaDas image processing environment and successfully processed SeaWiFS imagery. The results were superior to the standard SeaWiFS algorithm (the MODIS prototype) in a turbid atmosphere off the US East Coast, but similar in a clear (typical) oceanic atmosphere; 2) Inverted ACE-2 LIDAR measurements coupled with sun photometer-derived aerosol optical thickness to obtain the vertical profile of aerosol optical thickness. The profile was validated with simultaneous aircraft measurements; and 3) Obtained LIDAR and CIMEL measurements of typical maritime and mineral dust-dominated marine atmospheres in the U.S. Virgin Islands. Contemporaneous SeaWiFS imagery was also acquired.

  20. Transform methods for developing parallel algorithms for cyclic-block signal processing

    NASA Astrophysics Data System (ADS)

    Marshall, T. G., Jr.

    A class of FIR and IIR single and multirate parallel filtering algorithms is introduced in which blocks of inputs and outputs are processed on-the-fly in a cyclic manner. There is no inherent latency introduced by the decomposition procedure giving the parallelism, the system latency being primarily due to the component processors. The structure is particularly well-suited for systems in which the component processors are the familiar DSP chips optimized for convolution although other component structures can be accommodated. In particular, the automatic data shifting feature of the TMS320 series processors can be utilized in these algorithms. A transform notation, introduced for digital filter banks, is recast in the desired form for this application. The resulting structure of the system, in this notation, is a circulant matrix for FIR filtering or a related matrix in other cases. The cyclic properties of the system and useful implementation flexibility result from this matrix structure.
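
    The circulant-matrix view mentioned above can be demonstrated in a few lines: filtering one block of samples with an FIR impulse response is a matrix-vector product with a circulant matrix, which the FFT diagonalises, so the per-block work splits into independent frequency-bin multiplications that map naturally onto parallel processors. The sketch below (overlap handling between successive blocks omitted) only verifies this equivalence on a toy example.

        import numpy as np
        from scipy.linalg import circulant

        block_len = 8
        h = np.array([0.5, 0.3, 0.2])                          # example FIR impulse response
        h_padded = np.r_[h, np.zeros(block_len - h.size)]
        C = circulant(h_padded)                                # circulant filtering matrix

        x = np.arange(block_len, dtype=float)                  # one input block
        y_matrix = C @ x                                       # time-domain block filtering
        y_fft = np.fft.ifft(np.fft.fft(h_padded) * np.fft.fft(x)).real   # FFT equivalent
        print(np.allclose(y_matrix, y_fft))                    # True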

  1. On the development and application of a continuous-discrete recursive prediction error algorithm.

    PubMed

    Stigter, J D; Beck, M B

    2004-10-01

    Recursive state and parameter reconstruction is a well-established field in control theory. In the current paper we derive a continuous-discrete version of the recursive prediction error algorithm and apply the filter in an environmental and biological setting as a possible alternative to the well-known extended Kalman filter. The framework from which the derivation starts is the so-called 'innovations format' of the (continuous-time) system model, including (discrete-time) measurements. After the algorithm has been motivated and derived, it is applied to hypothetical and real-life case studies, including reconstruction of biokinetic parameters and of parameters characterizing the dynamics of a river in the United Kingdom. Advantages and characteristics of the method are discussed.
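
    A heavily simplified, discrete-time illustration of the prediction error recursion is given below: each new measurement produces an innovation (measurement minus one-step prediction) that updates the parameter estimate through a recursively propagated gain. The paper derives the continuous-discrete version for ODE models with discrete measurements; this scalar equation-error example, estimating a decay rate k in x[t+1] = (1 - k*dt)*x[t], only conveys the flavour of the recursion.

        import numpy as np

        rng = np.random.default_rng(4)
        dt, k_true, n = 0.1, 0.5, 200
        x = 10.0 * (1.0 - k_true * dt) ** np.arange(n)   # true trajectory
        y = x + rng.normal(0.0, 0.02, n)                 # noisy discrete measurements

        k_hat, p = 0.1, 10.0                             # initial estimate and gain "covariance"
        for t in range(n - 1):
            prediction = (1.0 - k_hat * dt) * y[t]       # one-step-ahead prediction
            innovation = y[t + 1] - prediction
            grad = -dt * y[t]                            # sensitivity of the prediction to k
            p = p / (1.0 + grad * grad * p)              # scalar covariance update
            k_hat = k_hat + p * grad * innovation        # gain-weighted correction
        print(round(k_hat, 3))                           # converges close to k_true = 0.5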

  2. Predicting pregnancy rate following multiple embryo transfers using algorithms developed through static image analysis.

    PubMed

    Tian, Yun; Wang, Wei; Yin, Yabo; Wang, Weizhou; Duan, Fuqing; Zhao, Shifeng

    2017-02-16

    Single-embryo image assessment involves a high degree of inaccuracy because of the imprecise labelling of transferred embryo images. In this study, we considered the entire transfer cycle to predict the implantation potential of embryos, and we propose a novel algorithm based on a combination of local binary pattern texture features and Adaboost classifiers to predict pregnancy rate. The first step of the proposed method was to extract features from the embryo images using the local binary pattern operator. After this, multiple embryo images in a transfer cycle were considered as one entity, and the pregnancy rate was predicted using three classifiers: Real Adaboost, Gentle Adaboost, and Modest Adaboost. Finally, the prediction was determined via the majority vote rule based on the classification results of the three Adaboost classifiers. The proposed algorithm was verified to have good predictive performance and may assist embryologists and clinicians in selecting embryos to transfer and in turn improve pregnancy rates.
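
    The pipeline described above can be sketched as: a local binary pattern (LBP) histogram per image, the histograms of all images in a transfer cycle pooled into one feature vector, and a majority vote over several boosted classifiers. In the illustrative Python below, three scikit-learn AdaBoost models stand in for the Real, Gentle, and Modest Adaboost variants, and the images and outcomes are synthetic.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier

        def lbp_histogram(img):
            """8-neighbour LBP code histogram (256 bins) for a 2-D grayscale image."""
            c = img[1:-1, 1:-1]
            shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
            code = np.zeros_like(c, dtype=np.int32)
            for bit, (dy, dx) in enumerate(shifts):
                neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
                code |= (neigh >= c).astype(np.int32) << bit
            hist, _ = np.histogram(code, bins=256, range=(0, 256))
            return hist / hist.sum()

        def cycle_features(images):
            """Average the LBP histograms of all embryo images transferred in one cycle."""
            return np.mean([lbp_histogram(im) for im in images], axis=0)

        rng = np.random.default_rng(5)
        X = np.array([cycle_features(rng.random((2, 32, 32))) for _ in range(60)])
        y = rng.integers(0, 2, 60)                             # synthetic pregnancy outcomes
        clfs = [AdaBoostClassifier(n_estimators=50, random_state=s).fit(X, y) for s in range(3)]
        votes = np.array([c.predict(X[:1])[0] for c in clfs])
        print("majority vote for the first cycle:", int(np.round(votes.mean())))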

  3. Smart respiratory monitoring: clinical development and validation of the IPI™ (Integrated Pulmonary Index) algorithm.

    PubMed

    Ronen, M; Weissbrod, R; Overdyk, F J; Ajizian, S

    2017-04-01

    Continuous electronic monitoring of patient respiratory status frequently includes PetCO2 (end tidal CO2), RR (respiration rate), SpO2 (arterial oxygen saturation), and PR (pulse rate). Interpreting and integrating these vital signs as numbers or waveforms is routinely done by anesthesiologists and intensivists but is challenging for clinicians in low acuity areas such as medical wards, where continuous electronic respiratory monitoring is becoming more commonplace. We describe a heuristic algorithm that simplifies the interpretation of these four parameters in assessing a patient's respiratory status, the Integrated Pulmonary Index (IPI). The IPI algorithm is a mathematical model combining SpO2, RR, PR, and PetCO2 into a single value between 1 and 10 that summarizes the adequacy of ventilation and oxygenation at that point in time. The algorithm was designed using a fuzzy logic inference model to incorporate expert clinical opinions. The algorithm was verified by comparison to experts' scoring of clinical scenarios. The validity of the index was tested in a retrospective analysis of continuous SpO2, RR, PR, and PetCO2 readings obtained from 523 patients in a variety of clinical settings. IPI correlated well with expert interpretation of the continuous respiratory data (R = 0.83, p < 0.001), with agreement of -0.5 ± 1.4. Receiver operating characteristic (ROC) curve analysis resulted in high levels of sensitivity (ranging from 0.83 to 1.00) and corresponding specificity (ranging from 0.96 to 0.74), based on IPI thresholds of 3-6. The IPI reliably interpreted the respiratory status of patients in multiple areas of care using off-line continuous respiratory data. Further prospective studies are required to evaluate IPI in real time in clinical settings.
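
    The exact membership functions and rule base of the IPI are not given in the abstract, so the sketch below is purely illustrative of the general idea: each vital sign is mapped through a trapezoidal 'normal range' membership function, a fuzzy AND (minimum) combines them, and the result is rescaled onto a 1-10 index. All break-points and the aggregation rule are invented for the example and should not be read as the clinical algorithm.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear ramps between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def toy_pulmonary_index(spo2, rr, pr, etco2):
    """Illustrative 1-10 index: each vital sign gets a 'normal' membership value,
    a min (fuzzy AND) combines them, and the result is scaled onto 1-10.
    All membership break-points below are invented for the example."""
    normal = [
        trapezoid(spo2, 88, 94, 100, 101),   # SpO2 (%)
        trapezoid(rr, 6, 10, 20, 30),        # respiration rate (breaths/min)
        trapezoid(pr, 40, 55, 100, 130),     # pulse rate (beats/min)
        trapezoid(etco2, 25, 35, 45, 60),    # end-tidal CO2 (mmHg)
    ]
    adequacy = min(normal)                   # the worst parameter dominates
    return round(1 + 9 * adequacy)

print(toy_pulmonary_index(spo2=97, rr=14, pr=72, etco2=38))   # -> 10 (all normal)
print(toy_pulmonary_index(spo2=90, rr=26, pr=110, etco2=50))  # -> 4 with these break-points
```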

  4. Intermediate Level Computer Vision Processing Algorithm Development for the Content Addressable Array Parallel Processor.

    DTIC Science & Technology

    1986-11-29


  5. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1996-01-01

    An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm is nearly complete. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. Simple algorithms, such as subtracting the reflectance at 1380 nm from the visible and near-infrared bands, can significantly reduce the error, but only if the diffuse transmittance of the aerosol layer is taken into account. The atmospheric correction code has been modified for use with absorbing aerosols. Tests of the code showed that, in contrast to non-absorbing aerosols, the retrievals were strongly influenced by the vertical structure of the aerosol, even when the candidate aerosol set was restricted to a set appropriate to the absorbing aerosol. This will further complicate the problem of atmospheric correction in an atmosphere with strongly absorbing aerosols. Our whitecap radiometer system and solar aureole camera were both tested at sea and performed well. Investigation of a technique to remove the effects of residual instrument polarization sensitivity was initiated and applied to an instrument possessing approximately 3-4 times the polarization sensitivity expected for MODIS. Preliminary results suggest that for such an instrument, elimination of the polarization effect is possible at the required level of accuracy by estimating the polarization of the top-of-atmosphere radiance to be that expected for a pure Rayleigh-scattering atmosphere. This may be of significance for the design of a follow-on MODIS instrument. W.M. Balch participated in two month-long cruises to the Arabian Sea, measuring coccolithophore abundance, production, and optical properties. A thorough understanding of the relationship between calcite abundance and light scatter, in situ, will provide the basis for a generic suspended-calcite algorithm.
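
    One plausible reading of the simple 1380 nm correction mentioned above is sketched below: the reflectance observed in the 1380 nm band (dominated by the stratospheric aerosol) is subtracted from the visible and near-infrared bands after weighting by the diffuse transmittance of the aerosol layer. The band set, reflectances, and transmittance values are placeholder numbers, not MODIS products.

```python
import numpy as np

# top-of-atmosphere reflectances for a few MODIS-like bands (toy numbers)
bands_nm = np.array([443, 555, 865, 1380])
rho_toa  = np.array([0.080, 0.055, 0.020, 0.004])
rho_1380 = rho_toa[bands_nm == 1380][0]        # stratospheric aerosol signal

# diffuse transmittance of the stratospheric layer at each band (placeholder values);
# the abstract notes the subtraction only works if this factor is accounted for
t_diffuse = np.array([0.97, 0.97, 0.98, 1.00])

rho_corrected = rho_toa - t_diffuse * rho_1380
print(dict(zip(bands_nm.tolist(), np.round(rho_corrected, 4))))
```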

  6. Development and evaluation of a data-adaptive alerting algorithm for univariate temporal biosurveillance data.

    PubMed

    Elbert, Yevgeniy; Burkom, Howard S

    2009-11-20

    This paper discusses further advances in making robust predictions with the Holt-Winters forecasts for a variety of syndromic time series behaviors and introduces a control-chart detection approach based on these forecasts. Using three collections of time series data, we compare biosurveillance alerting methods with quantified measures of forecast agreement, signal sensitivity, and time-to-detect. The study presents practical rules for initialization and parameterization of biosurveillance time series. Several outbreak scenarios are used for detection comparison. We derive an alerting algorithm from forecasts using Holt-Winters-generalized smoothing for prospective application to daily syndromic time series. The derived algorithm is compared with simple control-chart adaptations and to more computationally intensive regression modeling methods. The comparisons are conducted on background data from both authentic and simulated data streams. Both types of background data include time series that vary widely by both mean value and cyclic or seasonal behavior. Plausible, simulated signals are added to the background data for detection performance testing at signal strengths calculated to be neither too easy nor too hard to separate the compared methods. Results show that both the sensitivity and the timeliness of the Holt-Winters-based algorithm proved to be comparable or superior to that of the more traditional prediction methods used for syndromic surveillance.
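
    A minimal sketch of the approach, assuming additive Holt-Winters smoothing with weekly seasonality and a simple residual-based control chart, is shown below on a synthetic daily count series with an injected outbreak; the smoothing constants, warm-up length, and threshold are illustrative choices, not the parameterization derived in the paper.

```python
import numpy as np

def holt_winters_additive(y, season=7, alpha=0.3, beta=0.05, gamma=0.2):
    """One-step-ahead additive Holt-Winters forecasts for a daily syndromic series."""
    level = float(np.mean(y[:season]))
    trend = 0.0
    seasonal = list(y[:season] - level)
    forecasts = []
    for t, obs in enumerate(y):
        s = seasonal[t % season]
        forecasts.append(level + trend + s)
        prev_level = level
        level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seasonal[t % season] = gamma * (obs - level) + (1 - gamma) * s
    return np.array(forecasts)

def alerts(y, forecasts, warmup=28, k=3.0):
    """Control-chart style alerting: flag days where the forecast residual
    exceeds k standard deviations of the recent residuals."""
    resid = y - forecasts
    flags = np.zeros(len(y), dtype=bool)
    for t in range(warmup, len(y)):
        sigma = resid[t - warmup:t].std() + 1e-9
        flags[t] = resid[t] > k * sigma
    return flags

# toy daily counts with weekly seasonality and an injected outbreak on days 90-96
rng = np.random.default_rng(0)
days = np.arange(120)
background = 20 + 5 * np.sin(2 * np.pi * days / 7) + rng.poisson(3, size=days.size)
background = background.astype(float)
background[90:97] += 25
f = holt_winters_additive(background)
print(np.where(alerts(background, f))[0])   # outbreak days should appear here
```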

  7. [Assessment of the impact of GMO of plant origin on rat progeny development in 3 generations].

    PubMed

    Tyshko, N V; Zhminchenko, V M; Pashorina, V A; Seliaskin, K E; Saprykin, V P; Utembaeva, N T; Tutel'ian, V A

    2011-01-01

    The publication presents the results of an assessment of the impact of genetically modified (GM) maize Liberty Link on the prenatal and postnatal development of the progeny of 3 generations of Wistar rats. A total of 630 adult animals and 2837 pups were used in the experiment. The animals were divided into 5 groups fed diets containing maize: the experimental group received the GM maize, the control group received a near-isogenic conventional analogue of the GM maize, and the 1st, 2nd, and 3rd reference groups received the conventional maize varieties ROSS 144 MV, ROSS 197 MVW, and Dokuchayevskaya 250 MV, respectively. The maize was included in the diet at the maximum level possible without violating the balance of basic nutrients. Analysis of the data obtained during the study did not reveal any impact of the GM maize on rat progeny development.

  8. Origin and development of plasma membrane derived invaginations in Vinca rosea l.

    NASA Technical Reports Server (NTRS)

    Mahlberg, P.; Walkinshaw, C.; Olson, K.

    1971-01-01

    The occurrence, morphology, and possible ontogeny of plasma-membrane-related structures that can develop into invaginations or intravacuolar formations are described. A study of meristematic tissues from the shoot of Vinca rosea supports the interpretation that endocytosis does occur in plant cells and that it is appropriate to refer to these structures as endocytoses. The function of these invaginations, or of their content, remains to be elucidated.

  9. The Origins and Development of the National Training Center 1976-1984

    DTIC Science & Technology

    1992-01-01

    sophisticated instrumentation needed for engagement simulation was already under development. This was true of the Army's Multiple Integrated Laser Engagement...program also came from General Dynamics/Electronics (GD/E), which was responsible for the installation and testing of a position location system, and...businesses. Thereupon, AMEX Systems Corp., a minority-owned small business in California, examined the RFP. AMEX solicited support from SAI and GD/E in

  10. NASA Astrophysics Cosmic Origins (COR) and Physics of the Cosmos (PCOS) Strategic Technology Development Program

    NASA Astrophysics Data System (ADS)

    Pham, Thai; Seery, Bernard D.

    2015-01-01

    The COR and PCOS Program Offices (PO) reside at the NASA Goddard Space Flight Center (GSFC), serving as the NASA Astrophysics Division's implementation arm for matters relating to the two programs. One aspect of the PO's activities is managing the COR and PCOS Strategic Astrophysics Technology (SAT) program, helping mature technologies to enable and enhance future astrophysics missions. The PO is guided by the National Research Council's 'New Worlds, New Horizons in Astronomy and Astrophysics' Decadal Survey report, and NASA's Astrophysics Implementation Plan. Strategic goals include dark energy; gravitational waves; X-ray observatories, e.g., US participation in ATHENA; Inflation probe; and a large UV/Visible telescope. To date, 51 COR and 65 PCOS SAT proposals have been received, of which 11 COR and 18 PCOS projects were funded. Notable successes include maturation of a new far-IR detector, later adopted by the SOFIA HAWC instrument; maturation of the H4RG near-IR detector, adopted by WFIRST; development of an antenna-coupled transition-edge superconducting bolometer, a technology deployed by BICEP2 that allowed measurement of B-mode polarization in the CMB signal, a possible signature of Inflation; and finally, the REXIS instrument on OSIRIS-REx is incorporating CCDs with directly deposited optical blocking filters developed by another SAT-funded project. We discuss our technology development process, with community input and strategic prioritization informing calls for SAT proposals and guiding investment decisions. We also present results of this year's technology gap prioritization and showcase our current portfolio of technology development projects. These include five newly selected projects, kicking off in FY 2015. For more information, visit the COR Program website at cor.gsfc.nasa.gov and the PCOS website at pcos.gsfc.nasa.gov.

  11. The U.S. Navy’s Consultant Development and Qualification Program: Origin and Issues.

    DTIC Science & Technology

    1984-03-01

    attended and brought with him HRMC Norfolk's Professional Qualification and Development Program. Prior to this task force meeting each HRMC/D had its...has completed qualification criteria for the intern level and possesses a basic understanding and knowledge of OD principles and exhibits minimum required ability to employ appropriate skills. SPECIALIST (CERTIFIED) - Works

  12. Cell chirality: its origin and roles in left–right asymmetric development

    PubMed Central

    Inaki, Mikiko; Liu, Jingyang

    2016-01-01

    An item is chiral if it cannot be superimposed on its mirror image. Most biological molecules are chiral. The homochirality of amino acids ensures that proteins are chiral, which is essential for their functions. Chirality also occurs at the whole-cell level, which was first studied mostly in ciliates, single-celled protozoans. Ciliates show chirality in their cortical structures, which is not determined by genetics, but by ‘cortical inheritance’. These studies suggested that molecular chirality directs whole-cell chirality. Intriguingly, chirality in cellular structures and functions is also found in metazoans. In Drosophila, intrinsic cell chirality is observed in various left–right (LR) asymmetric tissues, and appears to be responsible for their LR asymmetric morphogenesis. In other invertebrates, such as snails and Caenorhabditis elegans, blastomere chirality is responsible for subsequent LR asymmetric development. Various cultured cells of vertebrates also show intrinsic chirality in their cellular behaviours and intracellular structural dynamics. Thus, cell chirality may be a general property of eukaryotic cells. In Drosophila, cell chirality drives the LR asymmetric development of individual organs, without establishing the LR axis of the whole embryo. Considering that organ-intrinsic LR asymmetry is also reported in vertebrates, this mechanism may contribute to LR asymmetric development across phyla. This article is part of the themed issue ‘Provocative questions in left–right asymmetry’. PMID:27821533

  13. On the Ethnic Origins of African Development: Chiefs and Precolonial Political Centralization

    PubMed Central

    Michalopoulos, Stelios; Papaioannou, Elias

    2015-01-01

    We report on recent findings of a fruitful research agenda that explores the importance of ethnic-specific traits in shaping African development. First, using recent surveys from Sub-Saharan African countries, we document that individuals identify with their ethnic group as often as with the nation, pointing to the salience of ethnicity. Second, we focus on the various historical and contemporary functions of tribal leaders (chiefs) and illustrate their influence on various aspects of the economy and the polity. Third, we elaborate on a prominent dimension of ethnicity, that of the degree of complexity of pre-colonial political organization. Building on insights from the African historiography, we review recent works showing a strong association between pre-colonial centralization and contemporary comparative development both across and within countries. We also document that the link between pre-colonial political centralization and regional development (as captured by satellite images of light density at night) is particularly strong in areas outside the vicinity of the capitals, where, due to population mixing and the salience of national institutions, ethnic traits play a lesser role. Overall, our evidence is supportive of theories and narratives on the presence of a “dual” economic and institutional environment in Africa. PMID:27011760

  14. Cell chirality: its origin and roles in left-right asymmetric development.

    PubMed

    Inaki, Mikiko; Liu, Jingyang; Matsuno, Kenji

    2016-12-19

    An item is chiral if it cannot be superimposed on its mirror image. Most biological molecules are chiral. The homochirality of amino acids ensures that proteins are chiral, which is essential for their functions. Chirality also occurs at the whole-cell level, which was first studied mostly in ciliates, single-celled protozoans. Ciliates show chirality in their cortical structures, which is not determined by genetics, but by 'cortical inheritance'. These studies suggested that molecular chirality directs whole-cell chirality. Intriguingly, chirality in cellular structures and functions is also found in metazoans. In Drosophila, intrinsic cell chirality is observed in various left-right (LR) asymmetric tissues, and appears to be responsible for their LR asymmetric morphogenesis. In other invertebrates, such as snails and Caenorhabditis elegans, blastomere chirality is responsible for subsequent LR asymmetric development. Various cultured cells of vertebrates also show intrinsic chirality in their cellular behaviours and intracellular structural dynamics. Thus, cell chirality may be a general property of eukaryotic cells. In Drosophila, cell chirality drives the LR asymmetric development of individual organs, without establishing the LR axis of the whole embryo. Considering that organ-intrinsic LR asymmetry is also reported in vertebrates, this mechanism may contribute to LR asymmetric development across phyla. This article is part of the themed issue 'Provocative questions in left-right asymmetry'.

  15. The Food Production Environment and the Development of Antimicrobial Resistance in Human Pathogens of Animal Origin

    PubMed Central

    Lekshmi, Manjusha; Ammini, Parvathi; Kumar, Sanath; Varela, Manuel F.

    2017-01-01

    Food-borne pathogens are a serious human health concern worldwide, and the emergence of antibiotic-resistant food pathogens has further confounded this problem. Once-highly-efficacious antibiotics are gradually becoming ineffective against many important pathogens, resulting in severe treatment crises. Among several reasons for the development and spread of antimicrobial resistance, their overuse in animal food production systems for purposes other than treatment of infections is prominent. Many pathogens of animals are zoonotic, and therefore any development of resistance in pathogens associated with food animals can spread to humans through the food chain. Human infections by antibiotic-resistant pathogens such as Campylobacter spp., Salmonella spp., Escherichia coli and Staphylococcus aureus are increasing. Considering the human health risk due to emerging antibiotic resistance in food animal–associated bacteria, many countries have banned the use of antibiotic growth promoters and the application in animals of antibiotics critically important in human medicine. Concerted global efforts are necessary to minimize the use of antimicrobials in food animals in order to control the development of antibiotic resistance in these systems and their spread to humans via food and water. PMID:28335438

  16. The Food Production Environment and the Development of Antimicrobial Resistance in Human Pathogens of Animal Origin.

    PubMed

    Lekshmi, Manjusha; Ammini, Parvathi; Kumar, Sanath; Varela, Manuel F

    2017-03-14

    Food-borne pathogens are a serious human health concern worldwide, and the emergence of antibiotic-resistant food pathogens has further confounded this problem. Once-highly-efficacious antibiotics are gradually becoming ineffective against many important pathogens, resulting in severe treatment crises. Among several reasons for the development and spread of antimicrobial resistance, their overuse in animal food production systems for purposes other than treatment of infections is prominent. Many pathogens of animals are zoonotic, and therefore any development of resistance in pathogens associated with food animals can spread to humans through the food chain. Human infections by antibiotic-resistant pathogens such as Campylobacter spp., Salmonella spp., Escherichia coli and Staphylococcus aureus are increasing. Considering the human health risk due to emerging antibiotic resistance in food animal-associated bacteria, many countries have banned the use of antibiotic growth promoters and the application in animals of antibiotics critically important in human medicine. Concerted global efforts are necessary to minimize the use of antimicrobials in food animals in order to control the development of antibiotic resistance in these systems and their spread to humans via food and water.

  17. 1 + 1 = 3: Development and validation of a SNP-based algorithm to identify genetic contributions from three distinct inbred mouse strains.

    PubMed

    Gorham, James D; Ranson, Matthew S; Smith, Janebeth C; Gorham, Beverly J; Muirhead, Kristen-Ashley

    2012-12-01

    State-of-the-art, genome-wide assessment of mouse genetic background uses single nucleotide polymorphism (SNP) PCR. As SNP analysis can use multiplex testing, it is amenable to high-throughput analysis and is the preferred method for shared resource facilities that offer genetic background assessment of mouse genomes. However, a typical individual SNP query yields only two alleles (A vs. B), limiting the application of this methodology to distinguishing contributions from no more than two inbred mouse strains. By contrast, simple sequence length polymorphism (SSLP) analysis yields multiple alleles but is not amenable to high-throughput testing. We sought to devise a SNP-based technique to identify donor strain origins when three distinct mouse strains potentially contribute to the genetic makeup of an individual mouse. A computational approach was used to devise a three-strain analysis (3SA) algorithm that would permit identification of three genetic backgrounds while still using a binary-output SNP platform. A panel of 15 mosaic mice with contributions from BALB/c, C57Bl/6, and DBA/2 genetic backgrounds was bred and analyzed using a genome-wide SNP panel of 1449 markers. The 3SA algorithm was applied and then validated using SSLP. The 3SA algorithm assigned 85% of 1449 SNPs as informative for the C57Bl/6, BALB/c, or DBA/2 backgrounds, respectively. Testing the panel of 15 F2 mice, the 3SA algorithm predicted donor strain origins genome-wide. Donor strain origins predicted by the 3SA algorithm correlated perfectly with results from individual SSLP markers located on five different chromosomes (n=70 tests). We have established and validated an analysis algorithm based on binary SNP data that can successfully identify the donor strain origins of chromosomal regions in mice that are bred from three distinct inbred mouse strains.
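
    The core idea that a binary SNP readout can still separate three strains can be illustrated as follows: for each SNP the A/B calls of the three parental strains form a pattern, and the SNP is informative for whichever strain carries the minority allele; a region is then assigned to the donor strain whose diagnostic alleles the sample carries. The toy panel below (four SNPs, homozygous calls only, heterozygosity ignored) is a simplification invented for the example, not the published 3SA rules.

```python
from collections import Counter

# reference A/B calls of the three parental strains at each SNP (toy panel)
PANEL = {
    "rs1": {"C57BL/6": "A", "BALB/c": "B", "DBA/2": "B"},   # informative for C57BL/6
    "rs2": {"C57BL/6": "B", "BALB/c": "A", "DBA/2": "B"},   # informative for BALB/c
    "rs3": {"C57BL/6": "B", "BALB/c": "B", "DBA/2": "A"},   # informative for DBA/2
    "rs4": {"C57BL/6": "A", "BALB/c": "A", "DBA/2": "A"},   # uninformative
}

def informative_strain(calls):
    """A SNP is informative for the strain whose allele differs from the other two."""
    counts = Counter(calls.values())
    for strain, allele in calls.items():
        if counts[allele] == 1:
            return strain, allele
    return None, None   # all three strains share the allele -> uninformative

def assign_region(sample_genotypes):
    """Vote on the donor-strain origin of a chromosomal region from the sample's
    calls at the informative SNPs that fall inside it."""
    votes = Counter()
    for snp, sample_allele in sample_genotypes.items():
        strain, allele = informative_strain(PANEL[snp])
        if strain and sample_allele == allele:
            votes[strain] += 1
    return votes.most_common(1)[0][0] if votes else "undetermined"

# a region where the mouse carries the BALB/c-diagnostic allele at rs2
print(assign_region({"rs1": "B", "rs2": "A", "rs3": "B", "rs4": "A"}))  # BALB/c
```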

  18. Development and validation of a segmentation-free polyenergetic algorithm for dynamic perfusion computed tomography.

    PubMed

    Lin, Yuan; Samei, Ehsan

    2016-07-01

    Dynamic perfusion imaging can provide the morphologic details of the scanned organs as well as the dynamic information of blood perfusion. However, due to the polyenergetic property of the x-ray spectra, beam hardening effect results in undesirable artifacts and inaccurate CT values. To address this problem, this study proposes a segmentation-free polyenergetic dynamic perfusion imaging algorithm (pDP) to provide superior perfusion imaging. Dynamic perfusion usually is composed of two phases, i.e., a precontrast phase and a postcontrast phase. In the precontrast phase, the attenuation properties of diverse base materials (e.g., in a thorax perfusion exam, base materials can include lung, fat, breast, soft tissue, bone, and metal implants) can be incorporated to reconstruct artifact-free precontrast images. If patient motions are negligible or can be corrected by registration, the precontrast images can then be employed as a priori information to derive linearized iodine projections from the postcontrast images. With the linearized iodine projections, iodine perfusion maps can be reconstructed directly without the influence of various influential factors, such as iodine location, patient size, x-ray spectrum, and background tissue type. A series of simulations were conducted on a dynamic iodine calibration phantom and a dynamic anthropomorphic thorax phantom to validate the proposed algorithm. The simulations with the dynamic iodine calibration phantom showed that the proposed algorithm could effectively eliminate the beam hardening effect and enable quantitative iodine map reconstruction across various influential factors. The error range of the iodine concentration factors ([Formula: see text]) was reduced from [Formula: see text] for filtered back-projection (FBP) to [Formula: see text] for pDP. The quantitative results of the simulations with the dynamic anthropomorphic thorax phantom indicated that the maximum error of iodine concentrations can be reduced from
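
    Under a strongly simplified monoenergetic assumption, the central 'prior image' step can be illustrated as follows: reproject the precontrast reconstruction, subtract it from the postcontrast sinogram to isolate the iodine projections, and reconstruct those directly into an iodine map. The sketch below uses scikit-image's Radon transform as a stand-in geometry and does not attempt the polyenergetic beam-hardening modelling that is the actual contribution of the paper.

```python
import numpy as np
from skimage.transform import radon, iradon

# toy precontrast object and an added "iodine" blob (monoenergetic simplification)
size = 128
yy, xx = np.mgrid[:size, :size]
precontrast = (((xx - 64) ** 2 + (yy - 64) ** 2) < 50 ** 2).astype(float)   # body
iodine_true = 0.5 * ((((xx - 80) ** 2 + (yy - 60) ** 2)) < 10 ** 2)         # perfused region

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino_pre  = radon(precontrast, theta=theta)                  # precontrast scan
sino_post = radon(precontrast + iodine_true, theta=theta)    # postcontrast scan

# use the precontrast data as a prior: the difference isolates the iodine projections,
# which can then be reconstructed directly into an iodine map
sino_iodine = sino_post - sino_pre
iodine_map = iradon(sino_iodine, theta=theta)
print(float(iodine_map[60, 80]))   # should be close to the inserted 0.5
```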

  19. Development and evaluation of a modis vegetation index compositing algorithm for long-term climate studies

    NASA Astrophysics Data System (ADS)

    Solano Barajas, Ramon

    The acquisition of remote sensing data of a well-characterized quality level is an important step in advancing our understanding of the vegetation response to environmental factors. Spaceborne sensors introduce additional challenges that must be addressed to ensure that derived findings are based on real phenomena and are not biased or misguided by instrument features or processing artifacts. As a consequence, updates to incorporate new advances and user requirements are regularly found on most cutting-edge systems such as the Moderate Resolution Imaging Spectroradiometer (MODIS) system. In this dissertation, the objective was to design and characterize a MODIS vegetation index (VI) algorithm for restoring the continuity of the 16-day 1-km product based on the new 8-day 500-m MODIS surface reflectance (SR) product scheduled for the forthcoming MODIS Collection 6 (C6), and to assess any possible departure from current values. Additionally, the impact of increasing the time resolution (by reducing the compositing period) from 16 to 8 days for the future basic MODIS C6 VI product was also assessed. The performance of the proposed algorithm was evaluated using high-quality reference data and known biophysical relationships at several spatial and temporal scales. Firstly, it was evaluated using data from the AERONET-based Surface Reflectance Validation Network (ASRVN), FLUXNET-derived ecosystem gross primary productivity (GPP), and an analysis of the seasonality parameters derived from the current Collection 5 (C5) and proxy C6 VI collections. The performance of the 8-day VI version was evaluated and contrasted with the current 16-day version using the reported correlation of the Enhanced Vegetation Index (EVI) with GPP derived from CO2 flux measurements. Secondly, an analysis was performed at the spatial level using entire images (or "tiles") to assess the effects of the Bidirectional Reflectance Distribution Function (BRDF) on the VI product, as these can cause biases in the SR and VIs from scanning
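
    Two ingredients of such a compositing algorithm can be sketched compactly: the standard MODIS EVI formula and a simple maximum-value composite over an 8-day window. The snippet below uses the published EVI coefficients (G=2.5, C1=6, C2=7.5, L=1) on synthetic reflectances; the operational Collection 6 compositing additionally applies quality and view-angle screening that is not reproduced here.

```python
import numpy as np

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """MODIS Enhanced Vegetation Index from surface reflectances."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

def composite(daily_evi, period=8):
    """Toy maximum-value compositing: keep the highest EVI in each window.
    (The operational algorithm also applies quality and view-angle screening.)"""
    n = (len(daily_evi) // period) * period
    return np.nanmax(np.reshape(daily_evi[:n], (-1, period)), axis=1)

# one pixel's daily reflectances over 16 days (synthetic values)
rng = np.random.default_rng(0)
nir  = 0.30 + 0.05 * rng.random(16)
red  = 0.05 + 0.02 * rng.random(16)
blue = 0.03 + 0.01 * rng.random(16)
daily = evi(nir, red, blue)
print(composite(daily, period=8))   # two 8-day composite values
```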

  20. Placental development during early pregnancy in sheep: effects of embryo origin on vascularization.

    PubMed

    Grazul-Bilska, Anna T; Johnson, Mary Lynn; Borowicz, Pawel P; Bilski, Jerzy J; Cymbaluk, Taylor; Norberg, Spencer; Redmer, Dale A; Reynolds, Lawrence P

    2014-05-01

    Utero-placental growth and vascular development are critical for pregnancy establishment that may be altered by various factors including assisted reproductive technologies (ART), nutrition, or others, leading to compromised pregnancy. We hypothesized that placental vascularization and expression of angiogenic factors are altered early in pregnancies after transfer of embryos created using selected ART methods. Pregnancies were achieved through natural mating (NAT), or transfer of embryos from NAT (NAT-ET), or IVF or in vitro activation (IVA). Placental tissues were collected on day 22 of pregnancy. In maternal caruncles (CAR), vascular cell proliferation was less (P<0.05) for IVA than other groups. Compared with NAT, density of blood vessels was less (P<0.05) for IVF and IVA in fetal membranes (FM) and for NAT-ET, IVF, and IVA in CAR. In FM, mRNA expression was decreased (P<0.01-0.08) in NAT-ET, IVF, and IVA compared with NAT for vascular endothelial growth factor (VEGF) and its receptor FLT1, placental growth factor (PGF), neuropilin 1 (NP1) and NP2, angiopoietin 1 (ANGPT1) and ANGPT2, endothelial nitric oxide synthase 3 (NOS3), hypoxia-inducible factor 1A (HIF1A), fibroblast growth factor 2 (FGF2), and its receptor FGFR2. In CAR, mRNA expression was decreased (P<0.01-0.05) in NAT-ET, IVF, and IVA compared with NAT for VEGF, FLT1, PGF, ANGPT1, and TEK. Decreased mRNA expression for 12 of 14 angiogenic factors across FM and CAR in NAT-ET, IVF, and IVA pregnancies was associated with reduced placental vascular development, which would lead to poor placental function and compromised fetal and placental growth and development.

  1. Practical, Asymmetric Route to Sitagliptin and Derivatives: Development and Origin of Diastereoselectivity

    PubMed Central

    2016-01-01

    The development of a practical and scalable process for the asymmetric synthesis of sitagliptin is reported. Density functional theory calculations reveal that two noncovalent interactions are responsible for the high diastereoselection. The first is an intramolecular hydrogen bond between the enamide NH and the boryl mesylate S=O, consistent with MsOH being crucial for high selectivity. The second is a novel C–H···F interaction between the aryl C5-fluoride and the methyl of the mesylate ligand. PMID:25799267

  2. The Canadian Healthy Infant Longitudinal Development (CHILD) Study: examining developmental origins of allergy and asthma.

    PubMed

    Subbarao, Padmaja; Anand, Sonia S; Becker, Allan B; Befus, A Dean; Brauer, Michael; Brook, Jeffrey R; Denburg, Judah A; HayGlass, Kent T; Kobor, Michael S; Kollmann, Tobias R; Kozyrskyj, Anita L; Lou, W Y Wendy; Mandhane, Piushkumar J; Miller, Gregory E; Moraes, Theo J; Pare, Peter D; Scott, James A; Takaro, Tim K; Turvey, Stuart E; Duncan, Joanne M; Lefebvre, Diana L; Sears, Malcolm R

    2015-10-01

    The Canadian Healthy Infant Longitudinal Development (CHILD) birth cohort study recruited 3624 pregnant women, most of their partners, and 3542 eligible offspring. We hypothesise that early-life physical and psychosocial environments and immunological, physiological, nutritional, hormonal and metabolic influences interact with genetics to influence the development of allergic diseases, including asthma. Environmental and biological sampling, innate and adaptive immune responses, gene expression, DNA methylation, gut microbiome and nutrition studies complement repeated environmental and clinical assessments to age 5. This rich data set, linking prenatal and postnatal environments, diverse biological samples and rigorous phenotyping, will inform early developmental pathways to allergy, asthma and other chronic inflammatory diseases.

  3. Development and tuning of an original search engine for patent libraries in medicinal chemistry

    PubMed Central

    2014-01-01

    Background: The large increase in the size of patent collections has led to the need for efficient search strategies. However, the development of advanced text-mining applications dedicated to patents in the biomedical field remains rare, in particular applications addressing the needs of the pharmaceutical and biotech industry, which uses patent libraries intensively for competitive intelligence and drug development. Methods: We describe here the development of an advanced retrieval engine to search information in patent collections in the field of medicinal chemistry. We investigate and combine different strategies and evaluate their respective impact on the performance of the search engine applied to various search tasks, which cover the putatively most frequent search behaviours of intellectual property officers in medicinal chemistry: 1) a prior art search task; 2) a technical survey task; and 3) a variant of the technical survey task, sometimes called a known-item search task, where a single patent is targeted. Results: The optimal tuning of our engine resulted in a top-precision of 6.76% for the prior art search task, 23.28% for the technical survey task and 46.02% for the variant of the technical survey task. We observed that co-citation boosting was an appropriate strategy to improve prior art search tasks, while IPC classification of queries improved retrieval effectiveness for technical survey tasks. Surprisingly, the use of the full body of the patent was always detrimental to search effectiveness. It was also observed that normalizing biomedical entities using curated dictionaries had simply no impact on the search tasks we evaluated. The search engine was finally implemented as a web application within Novartis Pharma. The application is briefly described in the report. Conclusions: We have presented the development of a search engine dedicated to patent search, based on state-of-the-art methods applied to patent corpora. We have shown that a proper tuning of the system to
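
    As a toy illustration of the co-citation boosting found useful for prior-art search, the sketch below reranks an initial term-overlap ranking by rewarding patents that share citations with the top-ranked seed documents. The corpus, the plain overlap score (standing in for BM25 or similar), and the boost weight are all invented; this is not the Novartis engine described in the paper.

```python
from collections import Counter

# toy patent corpus: text terms and the set of patents each one cites
PATENTS = {
    "EP1": {"terms": {"kinase", "inhibitor", "pyrimidine"}, "cites": {"US9", "US7"}},
    "EP2": {"terms": {"kinase", "inhibitor", "salt"},       "cites": {"US9", "US7"}},
    "EP3": {"terms": {"antibody", "oncology"},              "cites": {"US3"}},
    "EP4": {"terms": {"pyrimidine", "formulation"},         "cites": {"US9"}},
}

def base_score(query_terms, doc):
    """Plain term-overlap score (stand-in for BM25 or similar)."""
    return len(query_terms & doc["terms"])

def cocitation_boosted(query_terms, top_k=2, weight=0.5):
    """Re-rank: patents sharing citations with the initial top-k hits get a boost."""
    ranked = sorted(PATENTS, key=lambda p: base_score(query_terms, PATENTS[p]), reverse=True)
    seed_citations = Counter()
    for pid in ranked[:top_k]:
        seed_citations.update(PATENTS[pid]["cites"])
    def final(pid):
        shared = sum(seed_citations[c] for c in PATENTS[pid]["cites"])
        return base_score(query_terms, PATENTS[pid]) + weight * shared
    return sorted(PATENTS, key=final, reverse=True)

print(cocitation_boosted({"kinase", "inhibitor", "pyrimidine"}))
```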

  4. Drowsiness/alertness algorithm development and validation using synchronized EEG and cognitive performance to individualize a generalized model

    PubMed Central

    Johnson, Robin R.; Popovic, Djordje P.; Olmstead, Richard E.; Stikic, Maja; Levendowski, Daniel J.; Berka, Chris

    2011-01-01

    A great deal of research over the last century has focused on drowsiness/alertness detection, as fatigue-related physical and cognitive impairments pose a serious risk to public health and safety. Available drowsiness/alertness detection solutions are unsatisfactory for a number of reasons: 1) lack of generalizability, 2) failure to address individual variability in generalized models, and/or 3) they lack a portable, un-tethered application. The current study aimed to address these issues, and determine if an individualized electroencephalography (EEG) based algorithm could be defined to track performance decrements associated with sleep loss, as this is the first step in developing a field deployable drowsiness/alertness detection system. The results indicated that an EEG-based algorithm, individualized using a series of brief "identification" tasks, was able to effectively track performance decrements associated with sleep deprivation. Future development will address the need for the algorithm to predict performance decrements due to sleep loss, and provide field applicability. PMID:21419826

  5. Drowsiness/alertness algorithm development and validation using synchronized EEG and cognitive performance to individualize a generalized model.

    PubMed

    Johnson, Robin R; Popovic, Djordje P; Olmstead, Richard E; Stikic, Maja; Levendowski, Daniel J; Berka, Chris

    2011-05-01

    A great deal of research over the last century has focused on drowsiness/alertness detection, as fatigue-related physical and cognitive impairments pose a serious risk to public health and safety. Available drowsiness/alertness detection solutions are unsatisfactory for a number of reasons: (1) lack of generalizability, (2) failure to address individual variability in generalized models, and/or (3) lack of a portable, un-tethered application. The current study aimed to address these issues, and determine if an individualized electroencephalography (EEG) based algorithm could be defined to track performance decrements associated with sleep loss, as this is the first step in developing a field deployable drowsiness/alertness detection system. The results indicated that an EEG-based algorithm, individualized using a series of brief "identification" tasks, was able to effectively track performance decrements associated with sleep deprivation. Future development will address the need for the algorithm to predict performance decrements due to sleep loss, and provide field applicability.

  6. Development of a remote sensing algorithm for cyanobacterial phycocyanin pigment in the Baltic Sea using neural network approach

    NASA Astrophysics Data System (ADS)

    Riha, Stefan; Krawczyk, Harald

    2011-11-01

    Water quality monitoring in the Baltic Sea is of high ecological importance for all its neighbouring countries. They are highly interested in regular monitoring of the water quality parameters of their regional zones. Special attention is paid to the occurrence and spread of algal blooms. Among these blooms, the possibly toxic or harmful cyanobacteria are a special case for investigation, due to their specific optical properties and their negative influence on the ecological state of the aquatic system. Satellite remote sensing, with its high temporal and spatial resolution, allows frequent observation of large areas of the Baltic Sea, with special focus on its two seasonal algal blooms. For better monitoring of the cyanobacteria-dominated summer blooms, adapted algorithms are needed which take into account the special optical properties of blue-green algae. Standard chlorophyll-a algorithms typically fail to recognize these occurrences correctly. To significantly improve the observation and tracking of cyanobacteria blooms, the Marine Remote Sensing group of DLR has started the development of a model-based inversion algorithm that includes a four-component bio-optical water model for Case-2 waters, which extends the commonly calculated parameter set (chlorophyll, suspended matter and CDOM) with an additional parameter for the estimation of phycocyanin absorption. It was necessary to carry out detailed optical laboratory measurements with different cyanobacteria cultures occurring in the Baltic Sea for the generation of a specific bio-optical model. The inversion of satellite remote sensing data is based on an artificial neural network technique, a model-based multivariate non-linear inversion approach. The specifically designed neural network is trained with a comprehensive dataset of simulated reflectance values taking into account the laboratory-obtained specific optical
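
    A minimal sketch of the model-based neural-network inversion idea is given below: a stand-in forward model generates reflectance spectra from four constituents (chlorophyll, suspended matter, CDOM, phycocyanin), a small multilayer perceptron is trained on those simulated pairs, and the trained network is then applied to a new spectrum. The forward model, band set, and parameter ranges are crude placeholders, not the DLR bio-optical model or its radiative-transfer simulations.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate_reflectance(params, wavelengths):
    """Stand-in forward model: maps [chl, spm, cdom, phycocyanin] to reflectance.
    A real application would use a radiative-transfer / bio-optical model here."""
    chl, spm, cdom, pc = params.T
    wl = wavelengths[None, :] / 550.0
    backscatter = 0.02 + 0.01 * spm[:, None] * wl ** -0.5
    absorption = (0.06 * chl[:, None] * wl ** -0.2
                  + 0.3 * cdom[:, None] * np.exp(-4 * (wl - 0.75))
                  + 0.05 * pc[:, None] * np.exp(-60 * (wl - 620 / 550.0) ** 2)
                  + 0.04)
    return backscatter / (backscatter + absorption)

wavelengths = np.array([443, 490, 510, 560, 620, 665, 709], dtype=float)  # MERIS-like bands
rng = np.random.default_rng(0)
params = rng.uniform([0.5, 0.5, 0.05, 0.0], [30.0, 20.0, 1.0, 10.0], size=(5000, 4))
X = simulate_reflectance(params, wavelengths)

# train the inversion network: reflectance spectra -> water constituents
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, params)

# invert one "satellite" spectrum (here just a held-out simulated case)
test = simulate_reflectance(np.array([[5.0, 3.0, 0.2, 4.0]]), wavelengths)
print(net.predict(test).round(2))   # estimated [chl, spm, cdom, phycocyanin]
```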

  7. [Elementary exploration of the origin and development of marine Chinese materia medica].

    PubMed

    Guan, Hua-Shi; Fu, Xian-Jun; Wu, Qiang-Ming; Wang, Chang-Yun; Wang, Yu; Jiang, Deng-Zhao

    2009-05-01

    According to archaeological discoveries, humans began to make use of marine natural resources as early as the Palaeolithic era. In the Spring and Autumn period and the Warring States period, they began to use marine life as medicines and had a simple understanding of its efficacy and processing. In the Qin and Han dynasties, people further deepened their understanding of marine Chinese materia medica and created prescriptions making use of marine drugs. In the Tang and Song periods, the number of marine Chinese materia medica species and corresponding prescriptions increased markedly. The understanding of the property, flavor and efficacy, as well as the compatibility principles, of marine Chinese materia medica was further deepened, and the scope of their treatment also expanded significantly. In the Ming and Qing dynasties, knowledge of marine Chinese materia medica consisted mainly of conclusions drawn from previous experience. After the founding of the People's Republic of China (PRC), with the development of science and technology, the ability to exploit and utilize marine Chinese materia medica increased dramatically, and the number of marine Chinese materia medica species reached more than one thousand. However, the development of marine Chinese materia medica is confronted with new problems: although the number of species of marine Chinese materia medica has increased, the understanding of their property and flavor clearly lags behind, which seriously affects the clinical application of marine Chinese materia medica.

  8. Data and software tools for gamma radiation spectral threat detection and nuclide identification algorithm development and evaluation

    NASA Astrophysics Data System (ADS)

    Portnoy, David; Fisher, Brian; Phifer, Daniel

    2015-06-01

    The detection of radiological and nuclear threats is extremely important to national security. The federal government is spending significant resources developing new detection systems and attempting to increase the performance of existing ones. The detection of illicit radionuclides that may pose a radiological or nuclear threat is a challenging problem complicated by benign radiation sources (e.g., cat litter and medical treatments), shielding, and large variations in background radiation. Although there is a growing acceptance within the community that concentrating efforts on algorithm development (independent of the specifics of fully assembled systems) has the potential for significant overall system performance gains, there are two major hindrances to advancements in gamma spectral analysis algorithms under the current paradigm: limited access to data, and the lack of common performance metrics and baseline performance measures. Because many of the signatures collected during performance measurement campaigns are classified, dissemination to algorithm developers is extremely limited. This leaves developers no choice but to collect their own data if they are lucky enough to have access to material and sensors. This is often combined with their own definitions of metrics for measuring performance. These two conditions make it all but impossible for developers and external reviewers to make meaningful comparisons between algorithms. Without meaningful comparisons, performance advancements become very hard to achieve and (more importantly) recognize. The objective of this work is to overcome these obstacles by developing and freely distributing real and synthetically generated gamma-spectra data sets, as well as software tools for performance evaluation with associated performance baselines, to national labs, academic institutions, government agencies, and industry. At present, datasets for two tracks, or application domains, have been developed: one that includes temporal

  9. Some computational challenges of developing efficient parallel algorithms for data-dependent computations in thermal-hydraulics supercomputer applications

    SciTech Connect

    Woodruff, S.B.

    1992-01-01

    The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data-parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses why developers should consider algorithms, such as a neural net representation, that do not exhibit load-balancing problems.

  10. Study report on interfacing major physiological subsystem models: An approach for developing a whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Fitzjerrell, D. G.; Grounds, D. J.; Leonard, J. I.

    1975-01-01

    Using a whole body algorithm simulation model, a wide variety and large number of stresses as well as different stress levels were simulated including environmental disturbances, metabolic changes, and special experimental situations. Simulation of short term stresses resulted in simultaneous and integrated responses from the cardiovascular, respiratory, and thermoregulatory subsystems and the accuracy of a large number of responding variables was verified. The capability of simulating significantly longer responses was demonstrated by validating a four week bed rest study. In this case, the long term subsystem model was found to reproduce many experimentally observed changes in circulatory dynamics, body fluid-electrolyte regulation, and renal function. The value of systems analysis and the selected design approach for developing a whole body algorithm was demonstrated.

  11. Waveband selection and algorithm development to distinguish fecal contamination using multispectral imaging with solar light

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Fecal contamination in fresh produce fields caused by animals or livestock entering the fields can lead to outbreaks of foodborne illnesses. E. coli O157:H7 originating in the intestines of animals can transfer onto leafy greens via fecal matter. Leafy greens are often eaten fresh without thermal tr...

  12. Watershed model calibration framework developed using an influence coefficient algorithm and a genetic algorithm and analysis of pollutant discharge characteristics and load reduction in a TMDL planning area.

    PubMed

    Cho, Jae Heon; Lee, Jong Ho

    2015-11-01

    Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to calibrate the parameters manually, individually and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity, but shortcomings regarding understanding of the indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and a genetic algorithm (WMCIG) to automatically calibrate distributed models. The optimization problem, minimizing the sum of squares of the normalized residuals of the observed and predicted values, was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals over all iterations. The WMCIG was applied to the Gomakwoncheon watershed, located in an area subject to a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Continuous flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality during the frequent heavy rainfall that occurs in the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction
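
    The GA step of such a framework can be sketched independently of HSPF: a population of parameter vectors for a toy rainfall-runoff model is evolved to minimize the sum of squares of the normalized residuals between observed and simulated flow. Everything below (the surrogate model, bounds, GA settings) is assumed for the example; the real WMCIG wraps HSPF and an influence coefficient algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def runoff_model(rain, a, b):
    """Toy rainfall-runoff model standing in for the real simulator (e.g., HSPF)."""
    flow, store = [], 0.0
    for r in rain:
        store = store * (1 - b) + r
        flow.append(a * store)
    return np.array(flow)

def objective(params, rain, observed):
    """Sum of squares of normalized residuals, as in the calibration framework."""
    sim = runoff_model(rain, *params)
    return np.sum(((observed - sim) / (observed + 1e-6)) ** 2)

def genetic_algorithm(rain, observed, bounds, pop=40, gens=60, mut=0.1):
    lo, hi = np.array(bounds).T
    population = rng.uniform(lo, hi, size=(pop, len(bounds)))
    for _ in range(gens):
        fitness = np.array([objective(p, rain, observed) for p in population])
        parents = population[np.argsort(fitness)[: pop // 2]]      # keep the best half
        children = []
        for _ in range(pop - len(parents)):
            p1, p2 = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(len(bounds)) < 0.5, p1, p2)      # uniform crossover
            child += mut * (hi - lo) * rng.standard_normal(len(bounds))  # mutation
            children.append(np.clip(child, lo, hi))
        population = np.vstack([parents, children])
    best = population[np.argmin([objective(p, rain, observed) for p in population])]
    return best

rain = rng.gamma(2.0, 3.0, size=200)
observed = runoff_model(rain, a=0.35, b=0.2) * (1 + 0.05 * rng.standard_normal(200))
print(genetic_algorithm(rain, observed, bounds=[(0.05, 1.0), (0.01, 0.9)]))  # ~[0.35, 0.2]
```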

  13. Sixty years of the Interamerican Society of Psychology (SIP): origins and development.

    PubMed

    Gallegos, Miguel

    2013-01-01

    This paper presents a historical overview of the Interamerican Society of Psychology, which was founded on December 17, 1951, in Mexico City. Firstly, the historical circumstances of the foundation period are presented, as well as the people who made this organization possible, and the state of psychology on the American continent at that time. Secondly, the most important activities that the Interamerican Society of Psychology has developed during its 60 years are mentioned, such as the publication of books and scientific journals, the creation of several task forces and the Interamerican Congresses of Psychology. Basically, the purpose of this paper is to review the history of the Interamerican Society of Psychology through the recovery and use of various documentary sources.

  14. Climate change and the origin and development of rice cultivation in the Yangtze River basin, China.

    PubMed

    Yasuda, Yoshinori

    2008-11-01

    The forest hunter-gatherers of the middle Yangtze River basin, who were the first to invent pottery and led a sedentary lifestyle, may have begun to cultivate rice during the Bølling-Allerød interstadial global warming period. The earliest rice cultivation may have dated back to 14,000 calibrated (cal.) years before present (YBP). The global warming at 9000 cal. YBP in the early Holocene brought the development of the rice cultivation to the middle Yangtze River basin. On the other hand, ancient rice-cultivating and piscatorial society met a crisis at 4200-4000 cal. YBP that was characterized by a significant cooling of the climate. This climate deterioration led the northern wheat/barley-cultivating pastoral people to migrate to the south and invade, ultimately bringing about the collapse of the rice-cultivating and piscatorial society in the Yangtze River basin.

  15. Family-centered theory: origins, development, barriers, and supports to implementation in rehabilitation medicine.

    PubMed

    Bamm, Elena L; Rosenbaum, Peter

    2008-08-01

    The concept of family-centered care was introduced to the public more than 4 decades ago, stressing the importance of the family in children's well being. Since then, family-centered values and practices have been widely implemented in child health. The purpose of this article is to offer an overview of the development and evolution of family-centered theory as an underlying conceptual foundation for contemporary health services. The focus includes key concepts, accepted definitions, barriers, and supports that can influence successful implementation, and discussion of the valid quantitative measures of family-centeredness currently available to evaluate service delivery. The article also provides the foundation, and proposes questions, for future research.

  16. Evolutionary origins and development of saw-teeth on the sawfish and sawshark rostrum (Elasmobranchii; Chondrichthyes)

    PubMed Central

    Welten, Monique; Smith, Moya Meredith; Underwood, Charlie; Johanson, Zerina

    2015-01-01

    A well-known characteristic of chondrichthyans (e.g. sharks, rays) is their covering of external skin denticles (placoid scales), but less well understood is the wide morphological diversity that these skin denticles can show. Some of the more unusual of these are the tooth-like structures associated with the elongate cartilaginous rostrum ‘saw’ in three chondrichthyan groups: Pristiophoridae (sawsharks; Selachii), Pristidae (sawfish; Batoidea) and the fossil Sclerorhynchoidea (Batoidea). Comparative topographic and developmental studies of the ‘saw-teeth’ were undertaken in adults and embryos of these groups, by means of three-dimensional-rendered volumes from X-ray computed tomography. This provided data on development and relative arrangement in embryos, with regenerative replacement in adults. Saw-teeth are morphologically similar on the rostra of the Pristiophoridae and the Sclerorhynchoidea, with the same replacement modes, despite the lack of a close phylogenetic relationship. In both, tooth-like structures develop under the skin of the embryos, aligned with the rostrum surface, before rotating into lateral position and then attaching through a pedicel to the rostrum cartilage. As well, saw-teeth are replaced and added to as space becomes available. By contrast, saw-teeth in Pristidae insert into sockets in the rostrum cartilage, growing continuously and are not replaced. Despite superficial similarity to oral tooth developmental organization, saw-tooth spatial initiation arrangement is associated with rostrum growth. Replacement is space-dependent and more comparable to that of dermal skin denticles. We suggest these saw-teeth represent modified dermal denticles and lack the ‘many-for-one’ replacement characteristic of elasmobranch oral dentitions. PMID:26473044

  17. Evolutionary origins and development of saw-teeth on the sawfish and sawshark rostrum (Elasmobranchii; Chondrichthyes).

    PubMed

    Welten, Monique; Smith, Moya Meredith; Underwood, Charlie; Johanson, Zerina

    2015-09-01

    A well-known characteristic of chondrichthyans (e.g. sharks, rays) is their covering of external skin denticles (placoid scales), but less well understood is the wide morphological diversity that these skin denticles can show. Some of the more unusual of these are the tooth-like structures associated with the elongate cartilaginous rostrum 'saw' in three chondrichthyan groups: Pristiophoridae (sawsharks; Selachii), Pristidae (sawfish; Batoidea) and the fossil Sclerorhynchoidea (Batoidea). Comparative topographic and developmental studies of the 'saw-teeth' were undertaken in adults and embryos of these groups, by means of three-dimensional-rendered volumes from X-ray computed tomography. This provided data on development and relative arrangement in embryos, with regenerative replacement in adults. Saw-teeth are morphologically similar on the rostra of the Pristiophoridae and the Sclerorhynchoidea, with the same replacement modes, despite the lack of a close phylogenetic relationship. In both, tooth-like structures develop under the skin of the embryos, aligned with the rostrum surface, before rotating into lateral position and then attaching through a pedicel to the rostrum cartilage. As well, saw-teeth are replaced and added to as space becomes available. By contrast, saw-teeth in Pristidae insert into sockets in the rostrum cartilage, growing continuously and are not replaced. Despite superficial similarity to oral tooth developmental organization, saw-tooth spatial initiation arrangement is associated with rostrum growth. Replacement is space-dependent and more comparable to that of dermal skin denticles. We suggest these saw-teeth represent modified dermal denticles and lack the 'many-for-one' replacement characteristic of elasmobranch oral dentitions.

  18. Origin and structural development of the LaSalle Arch, Louisiana

    SciTech Connect

    Lawless, P.N. )

    1990-05-01

    The LaSalle arch is a basement high separating the Louisiana and Mississippi interior salt basins. Using reflection seismic data, an area located on the southern end of the LaSalle arch was shown to be composed of relict Paleozoic continental crust that was left behind and partially rifted during the breakup of Pangea in the Triassic. Rifting preferentially occurred to the north of a Paleozoic thrust fault nose, and crustal extension took place in a northeast-southwest direction. The LaSalle arch, as seen in post-Triassic stratigraphy, formed by a two-part process. The western limb developed syndepositionally due to differential subsidence, and the eastern limb developed due to relative regional tilting to the east after deposition of the Claibornian Sparta Formation. The LaSalle arch acted as only a minor impediment to sediment transport, with very low relief except during the Tayloran Stage of the Upper Cretaceous. A single truncational unconformity in the post-Triassic stratigraphy is present in the Tayloran Demopolis Formation, indicating a period of relatively major uplift of the LaSalle arch. This contrasts with the Sabine arch in eastern Texas, which experienced uplift during the Eagle Fordian and Sabinian stages. A recently proposed hypothesis calling for overthrusting in the Western Cordillera as the mechanism for uplift of the Sabine arch cannot explain movement of the LaSalle arch, because horizontal stress would predict synchronous uplift of basement highs. A more satisfactory uplift mechanism calls upon lateral heat flow from the mantle as the driving force for uplift.

  19. SeaWiFS Technical Report Series. Volume 42; Satellite Primary Productivity Data and Algorithm Development: A Science Plan for Mission to Planet Earth

    NASA Technical Reports Server (NTRS)

    Falkowski, Paul G.; Behrenfeld, Michael J.; Esaias, Wayne E.; Balch, William; Campbell, Janet W.; Iverson, Richard L.; Kiefer, Dale A.; Morel, Andre; Yoder, James A.; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor)

    1998-01-01

    Two issues regarding primary productivity, as it pertains to the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Program and the National Aeronautics and Space Administration (NASA) Mission to Planet Earth (MTPE) are presented in this volume. Chapter 1 describes the development of a science plan for deriving primary production for the world ocean using satellite measurements, by the Ocean Primary Productivity Working Group (OPPWG). Chapter 2 presents discussions by the same group, of algorithm classification, algorithm parameterization and data availability, algorithm testing and validation, and the benefits of a consensus primary productivity algorithm.

  20. Placental development during early pregnancy in sheep: effects of embryo origin on fetal and placental growth and global methylation.

    PubMed

    Grazul-Bilska, Anna T; Johnson, Mary Lynn; Borowicz, Pawel P; Baranko, Loren; Redmer, Dale A; Reynolds, Lawrence P

    2013-01-01

    The origin of embryos including those created through assisted reproductive technologies might have profound effects on placental and fetal development, possibly leading to compromised pregnancies associated with poor placental development. To determine the effects of embryo origin on fetal size, and maternal and fetal placental cellular proliferation and global methylation, pregnancies were achieved through natural mating (NAT), or transfer of embryos generated through in vivo (NAT-ET), IVF, or in vitro activation (IVA). On Day 22 of pregnancy, fetuses were measured and placental tissues were collected to immunologically detect Ki67 (a marker of proliferating cells) and 5-methyl cytosine followed by image analysis, and determine mRNA expression for three DNA methyltransferases. Fetal length and labeling index (proportion of proliferating cells) in maternal caruncles (maternal placenta) and fetal membranes (fetal placenta) were less (P < 0.001) in NAT-ET, IVF, and IVA than in NAT. In fetal membranes, expression of 5-methyl cytosine was greater (P < 0.02) in IVF and IVA than in NAT. In maternal caruncles, mRNA expression for DNMT1 was greater (P < 0.01) in IVA compared with the other groups, but DNMT3A expression was less (P < 0.04) in NAT-ET and IVA than in NAT. In fetal membranes, expression of mRNA for DNMT3A was greater (P < 0.01) in IVA compared with the other groups, and was similar in NAT, NAT-ET, and IVF groups. Thus, embryo origin might have specific effects on growth and function of ovine uteroplacental and fetal tissues through regulation of tissue growth, DNA methylation, and likely other mechanisms. These data provide a foundation for determining expression of specific factors regulating placental and fetal tissue growth and function in normal and compromised pregnancies, including those achieved with assisted reproductive technologies.

  1. On the Late Development and Possible Astronomical Origin of the Gyroscope

    NASA Astrophysics Data System (ADS)

    Brecher, Kenneth

    2013-01-01

    The invention of the gyroscope is usually attributed to the French physicist Jean-Bernard-Leon Foucault in the year 1852. He certainly created the word and also used his gyroscope to demonstrate the rotation of the Earth. However, the gyroscope was actually invented around 1812 by the German scientist Johann Bohnenberger who called his device simply the “machine”. Bohnenberger was a professor of astronomy and mathematics and published a book about astronomy in 1811. Several other scientists, including American physicist Walter R. Johnson (who called his apparatus the “rotascope”), independently invented the gyroscope. Each of these devices employed a central object (sphere or disc) that could spin on a shaft. This object was placed between three independent gimbals, two of which could move freely. Bohnenberger’s “machine” has much the same appearance as an armillary sphere. Those astronomical devices had been produced for at least the preceding three centuries and were widely dispersed and well known throughout Europe. They were used to display the apparent motion of celestial bodies. However, armillary spheres were used only as simulations of celestial appearances, not as actual demonstrations of physical phenomena. It is not known if the inertial properties of armillary spheres (and also of terrestrial and celestial globes) had been studied before about 1800. Nonetheless, as a matter of practice, gimbal systems similar to those found in gyroscopes were used on ships to level oil lamps at least as early as the sixteenth century AD. And the ideas behind armillary spheres date back at least a millennium before that. So why did the invention of the gyroscope in its modern form take such a long time when the individual underlying components had been around and utilized for some two millennia? Perhaps because the understanding of angular momentum, including its conservation, was not developed until the start of the 19th century and also because the

  2. MinUrals: Mineral resources of the Urals -- origin, development, and environmental impact

    NASA Astrophysics Data System (ADS)

    Leistel, J. M.; Minurals Team

    2003-04-01

    The MinUrals project (supported by the European Commission under the 5th F.P.- INCO2 - contract ICA2-CT-2000-10011) is focusing on the South Urals mining sector, in order to improve local socio-economic conditions, through: 1) The reinterpretation of the geodynamics of the South Urals and of the different types of ore deposits, and the development of tools for mineral exploration (new geophysical and geochemical technology). The convergence setting and the formation of arc, fore-arc and back-arc systems explain the volcano-sedimentary and structural features. This geodynamic setting largely controls the distribution and characteristics of the different types of mineralisation; 2) The evaluation of local mining-related risks to the environment, with the development of methodologies for assessing and reducing the environmental impact and for localizing areas of high metal potential and low environmental constraints. Three pilot sites were investigated: Sibay and Uchaly (with mining installations), and Karabash (with mining installations and a smelter); 3) The implementation of a Geographical Information System taking into account the mineral potential and the environmental constraints which, through data ranking and combination of the key parameters of areas with high metal potential and environmental constraints, will enable the production of a Mineral Potential and Environmental Constraints Map of the South Urals; 4) The elaboration of recommendations for suitable, environmentally aware mining-industry legislation, based on a comparison with European legislation, to be addressed to the Commission on the demarcation of powers and subjects between the federal government, the governments of the subjects of the Russian Federation, and local authorities. More information can be found on the project web sites [http://minurals.brgm.fr] or [http://www.nhm.ac.uk/mineralogy/minurals/minurals.htm] or [http://www.anrb.ru/geol/MinUrals] or [http://minurals.ilmeny.ac.ru] MinUrals Team (*): Aug

  3. A multi-channel feedback algorithm for the development of active liners to reduce noise in flow duct applications

    NASA Astrophysics Data System (ADS)

    Mazeaud, B.; Galland, M.-A.

    2007-10-01

    The present paper deals with the design and development of the active part of a hybrid acoustic treatment combining porous material properties and active control techniques. Such an acoustic system was developed to reduce time-varying tonal noise in flow duct applications. Attention was particularly focused on the optimization process of the controller part of the hybrid cell. A piezoelectric transducer combining efficiency and compactness was selected as the secondary source. A digital adaptive feedback control algorithm was specially developed in order to operate independently cell by cell, and to facilitate a subsequent increase in the liner surface. An adaptive bandpass filter was used to prevent the development of instabilities due to the coupling occurring between cells. Special care was taken in the development of such systems for time-varying primary signals. An automatic frequency detection loop was therefore introduced into the control algorithm, enabling continuous adaptation of the bandpass filtering. The multi-cell structure was experimentally validated for a four-cell system located on a duct wall in the presence of flow. Substantial noise reduction was obtained throughout the 0.7-2.5 kHz frequency range, with flow velocities up to 50 m/s.
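
    Although the controller itself is not detailed in the abstract above, the following minimal sketch illustrates the combination of features it mentions: an automatic tone-frequency detection loop, a bandpass filter re-centered on the detected tone, and an adaptive FIR feedback filter updated with normalized LMS. The sampling rate, filter lengths, step size, and the idealized secondary path are assumptions for illustration only, not the authors' implementation.

        # Hypothetical single-cell sketch: detect the dominant tone, band-pass around
        # it, and adapt an FIR control filter with normalized LMS to cancel the tone.
        import numpy as np
        from scipy.signal import butter, lfilter

        fs = 8000.0                                   # sampling rate (Hz), assumed
        t = np.arange(0, 2.0, 1.0 / fs)
        primary = np.sin(2 * np.pi * 1200.0 * t) + 0.05 * np.random.randn(t.size)

        def detect_tone(x, fs):
            """Automatic frequency detection: dominant bin of a windowed FFT."""
            spectrum = np.abs(np.fft.rfft(x * np.hanning(x.size)))
            return np.fft.rfftfreq(x.size, 1.0 / fs)[np.argmax(spectrum)]

        f0 = detect_tone(primary[: int(fs)], fs)

        # Band-pass filter re-centered on the detected tone (+/- 100 Hz).
        b, a = butter(2, [(f0 - 100) / (fs / 2), (f0 + 100) / (fs / 2)], btype="band")
        reference = lfilter(b, a, primary)

        # Normalized LMS adaptation of a short FIR control filter (one cell).
        L, mu = 32, 0.05
        w = np.zeros(L)
        error = np.zeros_like(primary)
        for n in range(L, primary.size):
            x_n = reference[n - L + 1 : n + 1][::-1]
            y_n = w @ x_n                             # secondary-source drive (idealized path)
            error[n] = primary[n] - y_n               # residual at the error microphone
            w += mu * error[n] * x_n / (x_n @ x_n + 1e-9)

        print(f"detected tone: {f0:.0f} Hz, residual/primary power: "
              f"{np.mean(error[-4000:] ** 2) / np.mean(primary[-4000:] ** 2):.3f}")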

  4. Stream-reach Identification for New Run-of-River Hydropower Development through a Merit Matrix Based Geospatial Algorithm

    SciTech Connect

    Pasha, M. Fayzul K.; Yeasmin, Dilruba; Kao, Shih-Chieh; Hadjerioua, Boualem; Wei, Yaxing; Smith, Brennan T

    2014-01-01

    Even after a century of development, the total hydropower potential from undeveloped rivers is still considered to be abundant in the United States. However, unlike evaluating hydropower potential at existing hydropower plants or non-powered dams, locating a feasible new hydropower plant involves many unknowns, and hence the total undeveloped potential is harder to quantify. In light of the rapid development of multiple national geospatial datasets for topography, hydrology, and environmental characteristics, a merit matrix based geospatial algorithm is proposed to help identify possible hydropower stream-reaches for future development. These hydropower stream-reaches, sections of natural streams with suitable head, flow, and slope for possible future development, are identified and compared using three different scenarios. A case study was conducted in the Alabama-Coosa-Tallapoosa (ACT) and Apalachicola-Chattahoochee-Flint (ACF) hydrologic subregions. It was found that a merit matrix based algorithm, built on the product of hydraulic head, annual mean flow, and average channel slope, can help effectively identify stream-reaches with high power density and small surface inundation. The identified stream-reaches can then be efficiently evaluated for their potential environmental impact, land development cost, and other competing water usage in detailed feasibility studies. Given that the selected datasets are available nationally (at least within the conterminous US), the proposed methodology will have wide applicability across the country.
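
    As a concrete illustration of the merit-matrix idea, a candidate reach can be scored by the product of its hydraulic head, annual mean flow, and average channel slope, and candidates ranked by that score; a gross power estimate follows from P = rho * g * Q * H. The sketch below is a minimal, hypothetical example with made-up field names and values, not the study's dataset or scoring scheme.

        # Minimal merit-matrix sketch: rank candidate stream-reaches by
        # head * mean flow * slope, and report a gross power estimate.
        from dataclasses import dataclass

        RHO_G = 9810.0  # rho * g for water (N/m^3)

        @dataclass
        class Reach:
            name: str
            head_m: float          # hydraulic head over the reach (m)
            mean_flow_cms: float   # annual mean flow (m^3/s)
            slope: float           # average channel slope (dimensionless)

            @property
            def power_kw(self) -> float:
                # Gross hydraulic power P = rho * g * Q * H, in kW
                return RHO_G * self.mean_flow_cms * self.head_m / 1000.0

            @property
            def merit(self) -> float:
                # Merit score: product of head, flow, and slope; higher values
                # favor high power density with a short, steep footprint.
                return self.head_m * self.mean_flow_cms * self.slope

        reaches = [
            Reach("A", head_m=12.0, mean_flow_cms=35.0, slope=0.004),
            Reach("B", head_m=8.0,  mean_flow_cms=90.0, slope=0.001),
            Reach("C", head_m=20.0, mean_flow_cms=15.0, slope=0.006),
        ]

        for r in sorted(reaches, key=lambda r: r.merit, reverse=True):
            print(f"{r.name}: merit={r.merit:7.2f}  gross power={r.power_kw:7.1f} kW")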

  5. Development of an apnea detection algorithm based on temporal analysis of thoracic respiratory effort signal

    NASA Astrophysics Data System (ADS)

    Dell'Aquila, C. R.; Cañadas, G. E.; Correa, L. S.; Laciar, E.

    2016-04-01

    This work describes the design of an algorithm for detecting apnea episodes, based on analysis of the thoracic respiratory effort signal. Inspiration and expiration times and the amplitude range of each respiratory cycle were evaluated. For the range analysis, the standard deviation was computed over temporal windows of the respiratory signal. Performance was validated on 8 records of the Apnea-ECG database, which includes annotations of apnea episodes. The results are: sensitivity (Se) 73%, specificity (Sp) 83%. These values could be improved by eliminating artifacts from the signal records.
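
    A minimal sketch of the windowed amplitude analysis described above: the thoracic effort signal is split into fixed temporal windows, the standard deviation is computed per window, and windows whose amplitude collapses relative to a baseline are flagged as candidate apnea episodes. The sampling rate, window length, threshold, and synthetic signal are illustrative assumptions, not the authors' parameters.

        # Windowed standard-deviation detector for candidate apnea episodes
        # in a (synthetic) thoracic respiratory effort signal.
        import numpy as np

        fs = 10.0                                    # effort-signal sampling rate (Hz), assumed
        t = np.arange(0, 120.0, 1.0 / fs)            # two minutes of signal
        effort = np.sin(2 * np.pi * 0.25 * t)        # ~15 breaths per minute
        effort[600:800] *= 0.05                      # simulated 20 s apnea at t = 60-80 s
        effort += 0.02 * np.random.randn(t.size)

        win = int(10 * fs)                           # 10 s analysis window
        baseline = np.std(effort[: 3 * win])         # reference amplitude from early signal

        apnea_windows = []
        for start in range(0, effort.size - win + 1, win):
            segment = effort[start:start + win]
            if np.std(segment) < 0.2 * baseline:     # amplitude collapse => candidate apnea
                apnea_windows.append((start / fs, (start + win) / fs))

        print("candidate apnea windows (s):", apnea_windows)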

  6. The original SPF10 LiPA25 algorithm is more sensitive and suitable for epidemiologic HPV research than the SPF10 INNO-LiPA Extra.

    PubMed

    Geraets, Daan T; Struijk, Linda; Kleter, Bernhard; Molijn, Anco; van Doorn, Leen-Jan; Quint, Wim G V; Colau, Brigitte

    2015-04-01

    Two commercial HPV tests target the same 65 bp fragment of the human papillomavirus genome (designated SPF10): the original HPV SPF10 PCR-DEIA-LiPA25 system, version 1, (LiPA25) and the INNO-LiPA HPV Genotyping Extra (INNO-LiPA). The original SPF10 LiPA25 system was designed to have high analytical sensitivity and has been applied in HPV vaccine and epidemiology studies worldwide. However, due to apparent similarities, this test can easily be confused with INNO-LiPA, a more recent assay whose intended use, i.e., epidemiological or clinical, is currently unclear. The aim was to compare the analytical sensitivity of SPF10 LiPA25 to that of INNO-LiPA at the level of general HPV detection and genotyping. HPV testing by both assays was performed on the same DNA isolated from cervical swab (n = 365) and biopsy (n = 42) specimens. In cervical swabs, SPF10 LiPA25 and INNO-LiPA identified multiple infections in 35.3% and 29.3%, single infections in 52.6% and 51.5%, and no HPV type in 12.1% and 19.2%, respectively. Genotyping results were 64.7% identical, 26.0% compatible, and 9.3% discordant between the two methods. SPF10 LiPA25 detected significantly more genotypes (p < 0.001). The higher analytical sensitivity of SPF10 LiPA25 was confirmed by the MPTS123 genotyping assay. HPV positivity by the general probes in the SPF10 DEIA was significantly higher (87.9%) than by those on INNO-LiPA (77.0%) (kappa = 0.592, p < 0.001). In cervical biopsies, SPF10 LiPA25 and INNO-LiPA identified multiple types in 21.4% and 9.5%, single types in 76.2% and 81.0%, and no type in 2.4% and 9.5%, respectively. Between the two tests, the identification of genotypes was 76.3% identical, 14.3% compatible, and 9.5% discordant. Overall, significantly more genotypes were detected by SPF10 LiPA25 (kappa = 0.853, p = 0.022). HPV positivity was higher by the SPF10 DEIA (97.6%) than by the INNO-LiPA strip (92.9%). These results demonstrate that SPF10 LiPA25 is more suitable for HPV genotyping in epidemiologic and vaccine
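
    The agreement between the two assays is summarized above with Cohen's kappa. As a small worked example of how such a value is computed from paired positive/negative calls, the sketch below uses a hypothetical 2x2 table of counts; the numbers are made up for illustration and are not the study's data.

        # Cohen's kappa for agreement between two assays tested on the same specimens.
        def cohens_kappa(both_pos, test1_only, test2_only, both_neg):
            n = both_pos + test1_only + test2_only + both_neg
            observed = (both_pos + both_neg) / n               # observed agreement
            p1 = (both_pos + test1_only) / n                   # positivity rate, assay 1
            p2 = (both_pos + test2_only) / n                   # positivity rate, assay 2
            expected = p1 * p2 + (1 - p1) * (1 - p2)           # chance agreement
            return (observed - expected) / (1 - expected)

        # Hypothetical counts for 365 swabs: both positive, assay-1 only,
        # assay-2 only, both negative.
        print(f"kappa = {cohens_kappa(275, 46, 6, 38):.3f}")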

  7. ORIGIN, DEVELOPMENT, AND NATURE OF INTRANUCLEAR RODLETS AND ASSOCIATED BODIES IN CHICKEN SYMPATHETIC NEURONS

    PubMed Central

    Masurovsky, Edmund B.; Benitez, Helena H.; Kim, Seung U.; Murray, Margaret R.

    1970-01-01

    Correlative data are presented here on the developmental history, dynamics, histochemistry, and fine structure of intranuclear rodlets in chicken sympathetic neurons from in vivo material and long-term organized tissue cultures. The rodlets consist of bundles of ∼70 ± 10 Å proteinaceous filaments closely associated with ∼0.4–0.8 µ spheroidal, granulofibrillar (gf) bodies of a related nature. These bodies are already present in the developing embryo a week or more in advance of the rodlets. In early formative stages, rodlets consist of small clusters of aligned filaments contiguous with the gf-bodies. As neuronal differentiation progresses, these filaments increase in number and become organized into well-ordered polyhedral arrays. Time-lapse cinemicrography reveals transient changes in rodlet contour associated with intrinsic factors, changes in form and position of the nucleolus with respect to the rodlet, and activity of the gf-bodies. With the electron microscope, filaments may be seen extending between the nucleolus, gf-bodies, and rodlets; nucleoli display circumscribed regions with fine structural features and staining reactions reminiscent of those of gf-bodies. We suggest that the latter may be derivatives of the nucleolus and that the two may act together in the assemblage and functional dynamics of the rodlet. The egress of rodlet filaments into the cytoplasm raises the possibility that these might represent a source of the cell's filamentous constituents. PMID:4901373

  8. The origins and development of the diffusion of innovations paradigm as an example of scientific growth.

    PubMed

    Valente, T W; Rogers, E M

    1995-03-01

    Diffusion is the process by which an innovation is communicated through certain channels over time among members of a social system. The diffusion of innovations is a communication theory that has laid the groundwork for behavior change models across the social sciences, representing a widely applicable perspective. The diffusion of innovations paradigm began with the 1943 publication of the results of a hybrid seed corn study conducted by Bryce Ryan and Neal C. Gross, rural sociologists at Iowa State University. The diffusion paradigm spread among midwestern rural sociological researchers in the 1950s and 1960s, and then to a larger, interdisciplinary field of diffusi