Science.gov

Sample records for algorithm developed earlier

  1. Algorithms for Developing Test Questions from Sentences in Instructional Materials: An Extension of an Earlier Study.

    ERIC Educational Resources Information Center

    Roid, Gale H.; And Others

    An earlier study was extended and replicated to examine the feasibility of generating multiple-choice test questions by transforming sentences from prose instructional material. In the first study, a computer-based algorithm was used to analyze prose subject matter and to identify high-information words. Sentences containing selected words were…

  2. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  3. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA), an O(Λ^κ(ℓ^2 + κ)) sample complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.

  4. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The current priority is the algorithm for determining chlorophyll a concentration (Chl a) and gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data.

  5. Motivation before meaning: motivational information encoded in meerkat alarm calls develops earlier than referential information.

    PubMed

    Hollén, Linda I; Manser, Marta B

    2007-06-01

    In contrast to historical assumptions about the affective nature of animal vocalizations, it is now clear that many vertebrates are capable of producing specific alarm calls in response to different predators, calls that provide information that goes beyond the motivational state of a caller. However, although these calls function referentially, it does not mean that they are devoid of motivational content. Studies on meerkats (Suricata suricatta) directly support this conclusion. The acoustic structure of their alarm calls simultaneously encodes information that is both motivational (level of urgency) and referential (predator specific). In this study, we investigated whether alarm calls of young meerkats undergo developmental modification and whether the motivational or the referential aspect of calls changes more over time. We found that, based on their acoustic structure, calls of young showed a high correct assignment to low- and high-urgency contexts but, in contrast to adults, low assignment to specific predator types. However, the discrimination among predator types was better in high-urgency than in low-urgency contexts. Our results suggest that acoustic features related to level of urgency are expressed earlier than those related to predator-specific information and may support the idea that referential calls evolve from motivational signals. PMID:17479462

  6. Planning steps forward in development: in girls earlier than in boys.

    PubMed

    Unterrainer, Josef M; Ruh, Nina; Loosli, Sandra V; Heinze, Katharina; Rahm, Benjamin; Kaller, Christoph P

    2013-01-01

    The development of planning ability in children initially aged four and five was examined longitudinally with a retest interval of 12 months using the Tower of London task. As expected, problems that could be solved straightforwardly, without mental look-ahead, were mastered by most, even the youngest, children. Problems demanding look-ahead were more difficult, and accuracy improved significantly with age and over time. This development was strongly moderated by sex: in contrast to coeval boys, four-year-old girls showed an impressive performance enhancement at age five, reaching the performance of six-year-olds, whereas four-year-old boys lagged behind and caught up with girls at the age of six, the typical age of school enrollment. This sex-specific development of planning was clearly separated from overall intelligence: young boys showed a steeper increase in raw intelligence scores than girls, whereas in the older groups scores developed similarly. The observed sex differences in planning development are evident even within a narrow time window of twelve months and may relate to differences in maturational trajectories for girls and boys in dorsolateral prefrontal cortex. PMID:24312240

  7. Solar Occultation Retrieval Algorithm Development

    NASA Technical Reports Server (NTRS)

    Lumpe, Jerry D.

    2004-01-01

    This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Work to date includes initial development of generalized forward model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on: completion of forward model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.

  8. Developing dataflow algorithms

    SciTech Connect

    Hiromoto, R.E.; Bohm, A.P.W. (Dept. of Computer Science)

    1991-01-01

    Our goal is to study the performance of a collection of numerical algorithms written in Id, which is available to users of Motorola's dataflow machine Monsoon. We will study the dataflow performance of these implementations first under the parallel profiling simulator Id World, and second in comparison with actual dataflow execution on the Motorola Monsoon. This approach will allow us to follow the computational and structural details of the parallel algorithms as implemented on dataflow systems. When running our programs on the Id World simulator we will examine the behaviour of algorithms at the dataflow graph level, where each instruction takes one timestep and data becomes available at the next. This implies that important machine-level phenomena, such as the effect that global communication time may have on the computation, are not addressed. These phenomena will be addressed when we run our programs on the Monsoon hardware. Potential ramifications for compilation techniques, functional programming style, and program efficiency are significant to this study. In a later stage of our research we will compare the efficiency of Id programs to programs written in other languages. This comparison will be of a rather qualitative nature as there are too many degrees of freedom in a language implementation for a quantitative comparison to be of interest. We begin our study by examining one routine that exhibits distinct computational characteristics: the Fast Fourier Transform, with computational parallelism and data dependences between the butterfly shuffles.

  9. Genetic Contribution to the Development of Radiographic Knee Osteoarthritis in a Population Presenting with Nonacute Knee Symptoms a Decade Earlier

    PubMed Central

    Huétink, Kasper; van der Voort, Paul; Bloem, Johan L.; Nelissen, Rob G. H. H.; Meulenbelt, Ingrid

    2016-01-01

    This study examined the contribution of the osteoarthritis (OA) susceptibility genes ASPN, GDF5, DIO2, and the 7q22 region to the development of radiographic knee OA in patients with a mean age of 40.6 ± 7.9 years (standard deviation) who suffered from nonacute knee complaints a decade earlier. Dose–response associations of four single nucleotide polymorphisms (SNPs) in the susceptibility genes were determined by comparing 36 patients who showed the development of OA on radiographs (Kellgren and Lawrence score ≥1) with 88 patients having normal cartilage with no development of OA on radiographs. Multivariate logistic regression analysis including the variables age, gender, body mass index, and reported knee trauma was performed. A dose–response association of DIO2 SNP rs225014 (odds ratio (OR) 2.3, 95% confidence interval (CI) 1.1–4.5; P = 0.019) and GDF5 SNP rs143383 (OR 2.0, 95% CI 1.1–3.8; P = 0.031) with knee OA development was observed. The ASPN and 7q22 SNPs were not associated with OA development. PMID:27158223

  10. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
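
    As an illustration of the wavelet-based fusion idea described above (not the report's actual implementation), the sketch below fuses two co-registered single-band images by averaging the coarse approximation coefficients and keeping the stronger detail coefficients. It assumes NumPy and PyWavelets are available; the wavelet choice and decomposition level are placeholders.

```python
# Generic wavelet image fusion sketch: average approximations, keep the
# larger-magnitude detail coefficients. Illustrative only; not the LDRD code.
import numpy as np
import pywt

def wavelet_fuse(img_a: np.ndarray, img_b: np.ndarray,
                 wavelet: str = "db2", level: int = 2) -> np.ndarray:
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)

    fused = [0.5 * (ca[0] + cb[0])]  # average the coarse approximation
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        # keep whichever detail coefficient has larger magnitude (sharper feature)
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)
```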

  11. Developing Scoring Algorithms

    Cancer.gov

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  12. SMAP's Radar OBP Algorithm Development

    NASA Technical Reports Server (NTRS)

    Le, Charles; Spencer, Michael W.; Veilleux, Louise; Chan, Samuel; He, Yutao; Zheng, Jason; Nguyen, Kayla

    2009-01-01

    An approach to algorithm specification and development is described for SMAP's radar onboard processor, which uses a multi-stage demodulation and decimation bandpass digital filter. Point-target simulation is used to verify and validate the filter design against the usual radar performance parameters. A preliminary FPGA implementation is also discussed.
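
    The sketch below illustrates, in general terms, what a multi-stage demodulation and decimation chain does: complex mixing to baseband followed by repeated lowpass filtering and downsampling. It is a generic illustration, not the SMAP OBP design; the sample rate, mixing frequency, and stage factors are assumed placeholders.

```python
# Generic digital down-conversion and multi-stage decimation sketch.
import numpy as np
from scipy.signal import decimate

def demod_and_decimate(x: np.ndarray, fs: float, f_center: float,
                       stage_factors=(4, 4, 2)) -> np.ndarray:
    n = np.arange(x.size)
    # complex mix to baseband (demodulation)
    baseband = x * np.exp(-2j * np.pi * f_center * n / fs)
    # multi-stage lowpass filtering + decimation keeps each stage's filter short
    for q in stage_factors:
        baseband = decimate(baseband, q, ftype="fir", zero_phase=False)
    return baseband
```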

  13. ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.

    USGS Publications Warehouse

    Claire, Robert W.

    1984-01-01

    An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.
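
    A minimal sketch of overlap, separation, and intersection operators defined on one basic generic element (an axis-aligned bounding box) is given below; the Box type and function names are illustrative assumptions rather than the paper's actual interface.

```python
# Spatial operators on a generic primitive: axis-aligned bounding boxes.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Box:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def overlap(a: Box, b: Box) -> bool:
    # boxes overlap unless one lies entirely to one side of the other
    return not (a.xmax < b.xmin or b.xmax < a.xmin or
                a.ymax < b.ymin or b.ymax < a.ymin)

def separation(a: Box, b: Box) -> float:
    # minimum gap between the two boxes (0 if they overlap)
    dx = max(b.xmin - a.xmax, a.xmin - b.xmax, 0.0)
    dy = max(b.ymin - a.ymax, a.ymin - b.ymax, 0.0)
    return (dx * dx + dy * dy) ** 0.5

def intersection(a: Box, b: Box) -> Optional[Box]:
    # common rectangle of two overlapping boxes
    if not overlap(a, b):
        return None
    return Box(max(a.xmin, b.xmin), max(a.ymin, b.ymin),
               min(a.xmax, b.xmax), min(a.ymax, b.ymax))
```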

  14. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  15. An Autopsied Case of Malignant Sarcomatoid Pleural Mesothelioma in Which Chest Pain Developed Several Months Earlier without Abnormality on Imaging

    PubMed Central

    Yaguchi, Daizo; Ichikawa, Motoshi; Inoue, Noriko; Kobayashi, Daisuke; Matsuura, Akinobu; Shizu, Masato; Imai, Naoyuki; Watanabe, Kazuko

    2015-01-01

    The patient experienced chest pain for about 7 months, but a diagnosis could not be made until after death. He was diagnosed with malignant sarcomatoid pleural mesothelioma on autopsy. In this case report, difficult aspects of the diagnosis are discussed. The 70-year-old Japanese man was a driver who transported ceramic-related products. Right chest pain developed in July 2013, but no abnormality was detected on a chest computed tomography (CT) performed in September 2013, and the pain was managed as right intercostal neuralgia. A chest CT performed in late October 2013 revealed a right pleural effusion, and the patient was referred to our hospital in early November 2013. Thoracentesis was performed, but the cytology was negative, and no diagnosis could be made. Close examination was postponed because the patient developed a subarachnoid hemorrhage. He underwent 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) after discharge from the neurosurgery department, and extensive right pleural thickening and 18F-FDG accumulation in this region were observed. Based on these findings, malignant pleural mesothelioma was suspected, and a thoracoscopy was performed under local anesthesia in early December 2013, but no definite diagnosis could be made. The patient selected best supportive care and died about 7 months after the initial development of right chest pain. The disease was definitively diagnosed as malignant sarcomatoid pleural mesothelioma by a pathological autopsy. When chronic chest pain of unknown cause is observed and past exposure to asbestos is suspected, actions to prevent delay in diagnosis should be taken, including testing for suspicion of malignant pleural mesothelioma. PMID:26600776

  16. An Autopsied Case of Malignant Sarcomatoid Pleural Mesothelioma in Which Chest Pain Developed Several Months Earlier without Abnormality on Imaging.

    PubMed

    Yaguchi, Daizo; Ichikawa, Motoshi; Inoue, Noriko; Kobayashi, Daisuke; Matsuura, Akinobu; Shizu, Masato; Imai, Naoyuki; Watanabe, Kazuko

    2015-01-01

    The patient experienced chest pain for about 7 months, but a diagnosis could not be made until after death. He was diagnosed with malignant sarcomatoid pleural mesothelioma on autopsy. In this case report, difficult aspects of the diagnosis are discussed. The 70-year-old Japanese man was a driver who transported ceramic-related products. Right chest pain developed in July 2013, but no abnormality was detected on a chest computed tomography (CT) performed in September 2013, and the pain was managed as right intercostal neuralgia. A chest CT performed in late October 2013 revealed a right pleural effusion, and the patient was referred to our hospital in early November 2013. Thoracentesis was performed, but the cytology was negative, and no diagnosis could be made. Close examination was postponed because the patient developed a subarachnoid hemorrhage. He underwent (18)F-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) after discharge from the neurosurgery department, and extensive right pleural thickening and (18)F-FDG accumulation in this region were observed. Based on these findings, malignant pleural mesothelioma was suspected, and a thoracoscopy was performed under local anesthesia in early December 2013, but no definite diagnosis could be made. The patient selected best supportive care and died about 7 months after the initial development of right chest pain. The disease was definitively diagnosed as malignant sarcomatoid pleural mesothelioma by a pathological autopsy. When chronic chest pain of unknown cause is observed and past exposure to asbestos is suspected, actions to prevent delay in diagnosis should be taken, including testing for suspicion of malignant pleural mesothelioma. PMID:26600776

  17. High atmospheric temperatures and 'ambient incubation' drive embryonic development and lead to earlier hatching in a passerine bird.

    PubMed

    Griffith, Simon C; Mainwaring, Mark C; Sorato, Enrico; Beckmann, Christa

    2016-02-01

    Tropical and subtropical species typically experience relatively high atmospheric temperatures during reproduction, and are subject to climate-related challenges that are largely unexplored, relative to more extensive work conducted in temperate regions. We studied the effects of high atmospheric and nest temperatures during reproduction in the zebra finch. We characterized the temperature within nests in a subtropical population of this species in relation to atmospheric temperature. Temperatures within nests frequently exceeded the level at which embryos develop optimally, even in the absence of parental incubation. We experimentally manipulated internal nest temperature to demonstrate that an average difference of 6°C in the nest temperature during the laying period reduced hatching time by an average of 3% of the total incubation time, owing to 'ambient incubation'. Given the avian constraint of laying a single egg per day, the first eggs of a clutch are subject to prolonged effects of nest temperature relative to later laid eggs, potentially increasing hatching asynchrony. While birds may ameliorate the negative effects of ambient incubation on embryonic development by varying the location and design of their nests, high atmospheric temperatures are likely to constitute an important selective force on avian reproductive behaviour and physiology in subtropical and tropical regions, particularly in the light of predicted climate change that in many areas is leading to a higher frequency of hot days during the periods when birds breed. PMID:26998315

  18. High atmospheric temperatures and ‘ambient incubation’ drive embryonic development and lead to earlier hatching in a passerine bird

    PubMed Central

    Griffith, Simon C.; Mainwaring, Mark C.; Sorato, Enrico; Beckmann, Christa

    2016-01-01

    Tropical and subtropical species typically experience relatively high atmospheric temperatures during reproduction, and are subject to climate-related challenges that are largely unexplored, relative to more extensive work conducted in temperate regions. We studied the effects of high atmospheric and nest temperatures during reproduction in the zebra finch. We characterized the temperature within nests in a subtropical population of this species in relation to atmospheric temperature. Temperatures within nests frequently exceeded the level at which embryos develop optimally, even in the absence of parental incubation. We experimentally manipulated internal nest temperature to demonstrate that an average difference of 6°C in the nest temperature during the laying period reduced hatching time by an average of 3% of the total incubation time, owing to ‘ambient incubation’. Given the avian constraint of laying a single egg per day, the first eggs of a clutch are subject to prolonged effects of nest temperature relative to later laid eggs, potentially increasing hatching asynchrony. While birds may ameliorate the negative effects of ambient incubation on embryonic development by varying the location and design of their nests, high atmospheric temperatures are likely to constitute an important selective force on avian reproductive behaviour and physiology in subtropical and tropical regions, particularly in the light of predicted climate change that in many areas is leading to a higher frequency of hot days during the periods when birds breed. PMID:26998315

  19. A Unifying Multibody Dynamics Algorithm Development Workbench

    NASA Technical Reports Server (NTRS)

    Ziegler, John L.

    2005-01-01

    The development of new and efficient algorithms for multibody dynamics has been an important research area. These algorithms are used for modeling, simulation, and control of systems such as spacecraft, robotic systems, automotive applications, the human body, manufacturing operations, and micro-electromechanical systems (MEMS). At JPL's Dynamics and Real Time Simulation (DARTS) Laboratory we have developed software that serves as a computational workbench for these algorithms. This software utilizes the mathematical perspective of the spatial operator algebra, which allows the development of dynamics algorithms and new insights into multibody dynamics.

  20. Evolutionary development of path planning algorithms

    SciTech Connect

    Hage, M

    1998-09-01

    This paper describes the use of evolutionary software techniques for developing both genetic algorithms and genetic programs. Genetic algorithms are evolved to solve a specific problem within a fixed and known environment. While genetic algorithms can evolve to become very optimized for their task, they often are very specialized and perform poorly if the environment changes. Genetic programs are evolved through simultaneous training in a variety of environments to develop a more general controller behavior that operates in unknown environments. Performance of genetic programs is less optimal than a specially bred algorithm for an individual environment, but the controller performs acceptably under a wider variety of circumstances. The example problem addressed in this paper is evolutionary development of algorithms and programs for path planning in nuclear environments, such as Chernobyl.

  1. Development and Testing of Data Mining Algorithms for Earth Observation

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    The new algorithms developed under this project included a principled procedure for classification of objects, events, or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables (called the Markov blanket) sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented, and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm, developed and implemented in TETRAD IV for time series, elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD-style algorithms to the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer duration climate measurements of temperature teleconnections.

  2. Passive microwave algorithm development and evaluation

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1995-01-01

    The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.

  3. An earlier de motu cordis.

    PubMed Central

    Daly, Walter J.

    2004-01-01

    Thirteenth century medical science, like medieval scholarship in general, was directed at reconciling Greek philosophy and science with prevailing medieval theology and philosophy. Peter of Spain [later Pope John XXI] was the leading medical scholar of his time. Peter wrote a long book on the soul; embedded in it was a chapter on the motion of the heart. Peter's De Motu was based on his own medical experience and on Galen's De Usu Partium, De Usu Respirationis, and De Usu Pulsuum. This earlier De Motu defines a point on the continuum of intellectual development leading to us and into the future. Thirteenth century scholarship relied on past authority to a degree that continues to puzzle and beg explanation. PMID:17060956

  4. Aerosol Exposure to Rift Valley Fever Virus Causes Earlier and More Severe Neuropathology in the Murine Model, which Has Important Implications for Therapeutic Development

    PubMed Central

    Reed, Christopher; Lin, Kenny; Wilhelmsen, Catherine; Friedrich, Brian; Nalca, Aysegul; Keeney, Ashley; Donnelly, Ginger; Shamblin, Joshua; Hensley, Lisa E.; Olinger, Gene; Smith, Darci R.

    2013-01-01

    Rift Valley fever virus (RVFV) is an important mosquito-borne veterinary and human pathogen that can cause severe disease including acute-onset hepatitis, delayed-onset encephalitis, retinitis and blindness, or a hemorrhagic syndrome. Currently, no licensed vaccine or therapeutics exist to treat this potentially deadly disease. Detailed studies describing the pathogenesis of RVFV following aerosol exposure have not been completed and candidate therapeutics have not been evaluated following an aerosol exposure. These studies are important because while mosquito transmission is the primary means for human infection, it can also be transmitted by aerosol or through mucosal contact. Therefore, we directly compared the pathogenesis of RVFV following aerosol exposure to a subcutaneous (SC) exposure in the murine model by analyzing survival, clinical observations, blood chemistry, hematology, immunohistochemistry, and virus titration of tissues. Additionally, we evaluated the effectiveness of the nucleoside analog ribavirin administered prophylactically to treat mice exposed by aerosol and SC. The route of exposure did not significantly affect the survival, chemistry or hematology results of the mice. Acute hepatitis occurred regardless of the route of exposure. However, the development of neuropathology occurred much earlier and was more severe in mice exposed by aerosol compared to SC exposed mice. Mice treated with ribavirin and exposed SC were partially protected, whereas treated mice exposed by aerosol were not protected. Early and aggressive viral invasion of brain tissues following aerosol exposure likely played an important role in ribavirin's failure to prevent mortality among these animals. Our results highlight the need for more candidate antivirals to treat RVFV infection, especially in the case of a potential aerosol exposure. Additionally, our study provides an account of the key pathogenetic differences in RVF disease following two potential exposure routes and

  5. Infrared algorithm development for ocean observations

    NASA Technical Reports Server (NTRS)

    Brown, Otis B.

    1995-01-01

    Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared retrievals. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, and participation in MODIS (project) related activities. Efforts in this contract period have focused on radiative transfer modeling, evaluation of atmospheric correction methodologies, involvement in field studies, production and evaluation of new computer networking strategies, and objective analysis approaches.

  6. Further development of an improved altimeter wind speed algorithm

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Wentz, Frank J.

    1986-01-01

    A previous altimeter wind speed retrieval algorithm was developed on the basis of wind speeds in the limited range from about 4 to 14 m/s. In this paper, a new approach which gives a wind speed model function applicable over the range 0 to 21 m/s is used. The method is based on comparing 50 km along-track averages of the altimeter normalized radar cross section measurements with neighboring off-nadir scatterometer wind speed measurements. The scatterometer winds are constructed from 100 km binned measurements of radar cross section and are located approximately 200 km from the satellite subtrack. The new model function agrees very well with earlier versions up to wind speeds of 14 m/s, but differs significantly at higher wind speeds. The relevance of these results to the Geosat altimeter launched in March 1985 is discussed.
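
    A minimal sketch of the general approach, binning collocated sigma-0 averages against scatterometer winds to tabulate an empirical model function and then inverting it by interpolation, is shown below. The bin count and variable names are assumptions, and the published model function coefficients are not reproduced here.

```python
# Empirical altimeter wind-speed model function sketch (illustrative only).
import numpy as np

def build_model_function(sigma0: np.ndarray, scat_wind: np.ndarray,
                         n_bins: int = 40):
    """Tabulate mean collocated scatterometer wind speed as a function of sigma0 (dB)."""
    edges = np.linspace(sigma0.min(), sigma0.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(sigma0, edges) - 1, 0, n_bins - 1)
    mean_wind = np.array([scat_wind[idx == k].mean() if np.any(idx == k)
                          else np.nan for k in range(n_bins)])
    keep = ~np.isnan(mean_wind)
    return centers[keep], mean_wind[keep]

def retrieve_wind(sigma0_meas, table_sigma0, table_wind):
    # np.interp needs ascending x values, so sort the tabulated sigma0 first
    order = np.argsort(table_sigma0)
    return np.interp(sigma0_meas, table_sigma0[order], table_wind[order])
```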

  7. Algorithm Development Library for Environmental Satellite Missions

    NASA Astrophysics Data System (ADS)

    Smith, D. C.; Grant, K. D.; Miller, S. W.; Jamilkowski, M. L.

    2012-12-01

    science will need to migrate into the operational system. In addition, as new techniques are found to improve, supplement, or replace existing products, these changes will also require implementation into the operational system. In the past, operationalizing science algorithms and integrating them into active systems often required months of work. In order to significantly shorten the time and effort required for this activity, Raytheon has developed the Algorithm Development Library (ADL). The ADL enables scientist and researchers to develop algorithms on their own platforms, and provide these to Raytheon in a form that can be rapidly integrated directly into the operational baseline. As the JPSS CGS is a multi-mission ground system, algorithms are not restricted to Suomi NPP or JPSS missions. The ADL provides a development environment that any environmental remote sensing mission scientist can use to create algorithms that will plug into a JPSS CGS instantiation. This paper describes the ADL and how scientists and researchers can use it in their own environments.

  8. Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz

    2009-01-01

    This slide presentation reviews the approach to the development of the Global Precipitation Measurement algorithm. This presentation includes information about the responsibilities for the development of the algorithm, and the calibration. Also included is information about the orbit, and the sun angle. The test of the algorithm code will be done with synthetic data generated from the Precipitation Processing System (PPS).

  9. New developments in astrodynamics algorithms for autonomous rendezvous

    NASA Technical Reports Server (NTRS)

    Klumpp, Allan R.

    1991-01-01

    At the core of any autonomous rendezvous guidance system must be two algorithms for solving Lambert's and Kepler's problems, the two fundamental problems in classical astrodynamics. Lambert's problem is to determine the trajectory connecting specified initial and terminal position vectors in a specified transfer time. The solution is the initial and terminal velocity vectors. Kepler's problem is to determine the trajectory that stems from a given initial state (position and velocity). The solution is the state at an earlier or later specified time. To be suitable for flight software, astrodynamics algorithms must be totally reliable, compact, and fast. Although solving Lambert's and Kepler's problems has challenged some of the world's finest minds for over two centuries, only in the last year have algorithms appeared that satisfy all three requirements just stated. This paper presents an evaluation of the most highly regarded Lambert and Kepler algorithms.
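
    For reference, the classical core of a Kepler solver is Newton's iteration on Kepler's equation M = E - e sin E (elliptical case). The sketch below is only the textbook illustration; flight-quality algorithms of the kind the paper evaluates typically use more robust formulations such as universal variables.

```python
# Newton's iteration for Kepler's equation (elliptical orbits only).
import math

def solve_kepler(mean_anomaly: float, ecc: float, tol: float = 1e-12) -> float:
    """Return the eccentric anomaly E satisfying M = E - e*sin(E)."""
    M = math.fmod(mean_anomaly, 2.0 * math.pi)
    E = M if ecc < 0.8 else math.pi          # standard starting guess
    for _ in range(50):
        f = E - ecc * math.sin(E) - M
        E -= f / (1.0 - ecc * math.cos(E))   # Newton step
        if abs(f) < tol:
            break
    return E
```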

  10. Comparisons between in vitro whole cell imaging and in vivo zebrafish-based approaches for identifying potential human hepatotoxicants earlier in pharmaceutical development.

    PubMed

    Hill, Adrian; Mesens, Natalie; Steemans, Margino; Xu, Jinghai James; Aleo, Michael D

    2012-02-01

    Drug-induced liver injury (DILI) is a major cause of attrition during both the early and later stages of the drug development and marketing process. Reducing or eliminating drug-induced severe liver injury, especially those that lead to liver transplants or death, would be tremendously beneficial for patients. Therefore, developing new pharmaceuticals that have the highest margins and attributes of hepatic safety would be a great accomplishment. Given the current low productivity of pharmaceutical companies and the high costs of bringing new medicines to market, any early screening assay(s) to identify and eliminate pharmaceuticals with the potential to cause severe liver injury in humans would be of economic value as well. The present review discusses the background, proof-of-concept, and validation studies associated with high-content screening (HCS) by two major pharmaceutical companies (Pfizer Inc and Jansen Pharmaceutical Companies of Johnson & Johnson) for detecting compounds with the potential to cause human DILI. These HCS assays use fluorescent-based markers of cell injury in either human hepatocytes or HepG2 cells. In collaboration with Evotec, an independent contract lab, these two companies also independently evaluated larval zebrafish as an early-stage in vivo screen for hepatotoxicity in independently conducted, blinded assessments. Details about this model species, the need for bioanalysis, and, specifically, the outcome of the phenotypic-based zebrafish screens are presented. Comparing outcomes in zebrafish against both HCS assays suggests an enhanced detection for hepatotoxicants of most DILI concern when used in combination with each other, based on the U.S. Food and Drug Administration DILI classification list. PMID:22242931

  11. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
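
    A minimal sketch of a third-order-polynomial limiting gain of the kind described, one that passes small inputs nearly unchanged and flattens out at the motion limit, is shown below. The particular constraints used to choose the coefficients (zero slope at the maximum expected input) are an assumption for illustration, not the report's tuned design.

```python
# Cubic scaling of a motion cue so the output stays within the simulator limit.
def cubic_gain(x: float, x_max: float, y_limit: float) -> float:
    """Scale an aircraft cue x so the output never exceeds y_limit."""
    a = 1.5 * y_limit / x_max            # linear term
    b = -0.5 * y_limit / x_max ** 3      # cubic term; y(x_max) = y_limit, y'(x_max) = 0
    x = max(-x_max, min(x_max, x))       # clamp inputs beyond the design range
    return a * x + b * x ** 3
```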

  12. Development of Speckle Interferometry Algorithm and System

    SciTech Connect

    Shamsir, A. A. M.; Jafri, M. Z. M.; Lim, H. S.

    2011-05-25

    Electronic speckle pattern interferometry (ESPI) is a whole-field, non-destructive measurement method widely used in industry, for example for the detection of defects on metal bodies, the detection of defects in integrated circuits in digital electronic components, and in the preservation of priceless artwork. In this research field, the method is widely used to develop algorithms and new laboratory setups for implementing speckle pattern interferometry. In speckle interferometry, an optically rough test surface is illuminated with an expanded laser beam, creating a laser speckle pattern in the space surrounding the illuminated region. The speckle pattern is optically mixed with a second coherent light field that is either another speckle pattern or a smooth light field. This produces an interferometric speckle pattern that is detected by a sensor to record the change of the speckle pattern due to the applied force. In this project, an experimental setup of ESPI is proposed to analyze a stainless steel plate using the 632.8 nm (red) wavelength of light.

  13. Development of Speckle Interferometry Algorithm and System

    NASA Astrophysics Data System (ADS)

    Shamsir, A. A. M.; Jafri, M. Z. M.; Lim, H. S.

    2011-05-01

    Electronic speckle pattern interferometry (ESPI) is a whole-field, non-destructive measurement method widely used in industry, for example for the detection of defects on metal bodies, the detection of defects in integrated circuits in digital electronic components, and in the preservation of priceless artwork. In this research field, the method is widely used to develop algorithms and new laboratory setups for implementing speckle pattern interferometry. In speckle interferometry, an optically rough test surface is illuminated with an expanded laser beam, creating a laser speckle pattern in the space surrounding the illuminated region. The speckle pattern is optically mixed with a second coherent light field that is either another speckle pattern or a smooth light field. This produces an interferometric speckle pattern that is detected by a sensor to record the change of the speckle pattern due to the applied force. In this project, an experimental setup of ESPI is proposed to analyze a stainless steel plate using the 632.8 nm (red) wavelength of light.

  14. Connected-Health Algorithm: Development and Evaluation.

    PubMed

    Vlahu-Gjorgievska, Elena; Koceski, Saso; Kulev, Igor; Trajkovik, Vladimir

    2016-04-01

    Nowadays, there is growing interest in the adoption of novel ICT technologies in the field of medical monitoring and personal health care systems. This paper proposes the design of a connected-health algorithm inspired by the social computing paradigm. The purpose of the algorithm is to recommend a specific activity that will improve the user's health, based on the user's health condition and on knowledge derived from the history of that user and of users with similar profiles. The algorithm could give users greater confidence in choosing physical activities that will improve their health. The proposed algorithm has been experimentally validated using real data collected from a community of 1000 active users. The results showed that a recommended physical activity that contributed to a weight loss of at least 0.5 kg is found in the first half of the ordered list of recommendations generated by the algorithm with probability > 0.6 at the 1% level of significance. PMID:26922593
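
    The sketch below illustrates the general "users similar to me" recommendation idea with cosine similarity over simple profile vectors and a ranking of activities by the average weight loss they produced for the most similar users. The field names, similarity measure, and ranking rule are assumptions, not the paper's algorithm.

```python
# Similar-user activity recommendation sketch (illustrative only).
import numpy as np

def recommend(profile, history, k=10):
    """
    profile: 1-D array of the target user's attributes (age, weight, ...).
    history: list of (attributes, activity, weight_loss_kg) tuples for other users.
    Returns activities ranked by mean weight loss among the k most similar users.
    """
    attrs = np.array([h[0] for h in history], dtype=float)
    sims = attrs @ profile / (np.linalg.norm(attrs, axis=1) *
                              np.linalg.norm(profile) + 1e-12)
    nearest = np.argsort(sims)[-k:]
    scores = {}
    for i in nearest:
        _, activity, loss = history[i]
        scores.setdefault(activity, []).append(loss)
    return sorted(((np.mean(v), a) for a, v in scores.items()), reverse=True)
```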

  15. A Developed ESPRIT Algorithm for DOA Estimation

    NASA Astrophysics Data System (ADS)

    Fayad, Youssef; Wang, Caiyun; Cao, Qunsheng; Hafez, Alaa El-Din Sayed

    2015-05-01

    A novel algorithm for estimating the direction of arrival (DOA) of a target has been developed, with the aim of increasing estimation accuracy and decreasing calculation cost. It introduces time and space multiresolution into the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method (TS-ESPRIT) to realize a subspace approach that decreases errors caused by the model's nonlinearity. The efficacy of the proposed algorithm is verified using Monte Carlo simulation, and the DOA estimation accuracy is evaluated against the closed-form Cramér-Rao bound (CRB), which reveals that the proposed algorithm's estimates are better than those of the normal ESPRIT methods, enhancing estimator performance.
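
    For orientation, a standard single-resolution ESPRIT estimate for a uniform linear array is sketched below; it shows the subspace-rotation idea the paper builds on but is not the proposed TS-ESPRIT algorithm.

```python
# Standard ESPRIT DOA estimation for a uniform linear array (illustrative).
import numpy as np

def esprit_doa(X: np.ndarray, n_sources: int, d_over_lambda: float = 0.5):
    """X: snapshot matrix, shape (n_sensors, n_snapshots). Returns angles in degrees."""
    R = X @ X.conj().T / X.shape[1]                 # sample covariance
    eigval, eigvec = np.linalg.eigh(R)
    Us = eigvec[:, -n_sources:]                     # signal subspace
    # rotational invariance between the two overlapping subarrays
    Psi = np.linalg.pinv(Us[:-1]) @ Us[1:]
    phases = np.angle(np.linalg.eigvals(Psi))
    return np.degrees(np.arcsin(phases / (2 * np.pi * d_over_lambda)))
```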

  16. Developing an Enhanced Lightning Jump Algorithm for Operational Use

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Overall Goals: 1. Build on the lightning jump framework set through previous studies. 2. Understand what typically occurs in nonsevere convection with respect to increases in lightning. 3. Ultimately develop a lightning jump algorithm for use on the Geostationary Lightning Mapper (GLM). 4. Lightning jump algorithm configurations were developed (2σ, 3σ, Threshold 10, and Threshold 8). 5. Algorithms were tested on a population of 47 nonsevere and 38 severe thunderstorms. Results indicate that the 2σ algorithm performed best over the entire thunderstorm sample set, with a POD of 87%, a FAR of 35%, a CSI of 59%, and an HSS of 75%.
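
    A minimal sketch of the 2σ jump idea, flagging a jump when the current rate of change of the total flash rate exceeds the recent mean rate of change by more than two standard deviations, is given below. The 2-minute cadence and history window length are assumptions for illustration, not the exact operational configuration.

```python
# 2-sigma lightning jump detection sketch (illustrative configuration).
import numpy as np

def lightning_jump_2sigma(flash_rates, dt_min=2.0, history=5):
    """flash_rates: total flash rate (flashes/min) sampled every dt_min minutes."""
    rates = np.asarray(flash_rates, dtype=float)
    dfrdt = np.diff(rates) / dt_min                 # rate of change of flash rate
    jumps = np.zeros_like(dfrdt, dtype=bool)
    for i in range(history, dfrdt.size):
        mu = dfrdt[i - history:i].mean()
        sigma = dfrdt[i - history:i].std()
        jumps[i] = sigma > 0 and dfrdt[i] > mu + 2.0 * sigma
    return jumps
```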

  17. Development of a two wheeled self balancing robot with speech recognition and navigation algorithm

    NASA Astrophysics Data System (ADS)

    Rahman, Md. Muhaimin; Ashik-E-Rasul, Haq, Nowab. Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh

    2016-07-01

    This paper discusses the modeling, construction, and development of a navigation algorithm for a two-wheeled self-balancing mobile robot in an enclosure. We discuss the design of two main controller algorithms, namely PID algorithms, on the robot model. Simulation is performed in the SIMULINK environment. The controller is developed primarily for self-balancing of the robot and also for its positioning. For navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot position. The navigation system needs to be calibrated before the navigation process starts. Almost all of the earlier template matching algorithms that can be found in the open literature can only trace the robot, but the proposed algorithm can also locate the position of other objects in the enclosure, such as furniture and tables. This enables the robot to know the exact location of every stationary object in the enclosure. Moreover, some additional features, such as speech recognition and object detection, are added. For object detection, the single-board computer Raspberry Pi is used. The system is programmed to analyze images captured via the camera, which are then processed through background subtraction, followed by active noise reduction.
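
    A minimal sketch of the kind of discrete PID loop used for self-balancing (driving the measured tilt angle toward zero) is given below; the gains and time step are placeholder values, not the tuned parameters of the robot described in the paper.

```python
# Discrete PID controller sketch for a self-balancing robot.
class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# example (placeholder gains): command motor effort from the measured tilt angle
# balance = PID(kp=25.0, ki=1.5, kd=0.8, dt=0.01)
# effort = balance.update(setpoint=0.0, measurement=tilt_angle_rad)
```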

  18. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  19. Development and application of multispectral algorithms for defect apple inspection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This research developed and evaluated a multispectral algorithm derived from a hyperspectral line-scan imaging system equipped with an electron-multiplying charge-coupled-device camera and an imaging spectrograph for the detection of defective Red Delicious apples. The algorithm utilized the fluo...

  20. SSME structural computer program development: BOPACE theoretical manual, addendum. [algorithms

    NASA Technical Reports Server (NTRS)

    1975-01-01

    An algorithm developed and incorporated into BOPACE for improving the convergence and accuracy of the inelastic stress-strain calculations is discussed. The implementation of separation of strains in the residual-force iterative procedure is defined. The elastic-plastic quantities used in the strain-space algorithm are defined and compared with previous quantities.

  1. Advances in fracture algorithm development in GRIM

    NASA Astrophysics Data System (ADS)

    Cullis, I.; Church, P.; Greenwood, P.; Huntington-Thresher, W.; Reynolds, M.

    2003-09-01

    The numerical treatment of fracture processes has long been a major challenge in any hydrocode, and has been particularly acute in Eulerian hydrocodes. This is due to the difficulty of establishing a consistent process for treating failure and post-failure behaviour, which is complicated by advection, mixed-cell, and interface issues, particularly after failure. This alone increases the complexity of incorporating and validating a failure model compared to a Lagrange hydrocode, where the numerical treatment is much simpler. This paper outlines recent significant progress in the incorporation of fracture models in GRIM and the advection of damage across cell boundaries within the mesh. This has allowed a much more robust treatment of fracture in an Eulerian frame of reference and has greatly expanded the scope of tractable dynamic fracture scenarios. The progress has been possible due to a careful integration of the fracture algorithm within the numerical integration scheme to maintain a consistent representation of the physics. The paper describes various applications, which demonstrate the robustness and efficiency of the scheme, and highlights some of the future challenges.

  2. Development and Evaluation of Algorithms for Breath Alcohol Screening

    PubMed Central

    Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael

    2016-01-01

    Breath alcohol screening is important for traffic safety, access control, and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper focuses on algorithms for the determination of breath alcohol concentration in diluted breath samples, using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting, and personalization to reduce estimation errors. Evaluation has been performed using data from a previously conducted human study. It is concluded that these features in combination will significantly reduce the random error compared to the signal averaging algorithm taken alone. PMID:27043576
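
    A minimal sketch of the dilution-compensation idea, scaling the averaged alcohol signal by the ratio of a nominal end-expiratory CO2 level to the CO2 measured in the diluted sample, is given below. The nominal CO2 value and the simple averaging are assumptions for illustration only, not the authors' calibrated algorithm.

```python
# CO2-compensated breath alcohol estimate sketch (illustrative only).
import numpy as np

ALVEOLAR_CO2_PERCENT = 4.8   # assumed nominal end-expiratory CO2 level

def estimate_brac(alcohol_signal, co2_signal):
    """Return a dilution-compensated breath alcohol estimate from raw sensor samples."""
    alcohol = np.mean(alcohol_signal)            # simple signal averaging
    co2 = np.mean(co2_signal)
    if co2 <= 0:
        raise ValueError("no detectable CO2 in sample")
    return alcohol * (ALVEOLAR_CO2_PERCENT / co2)
```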

  3. Development and Evaluation of Algorithms for Breath Alcohol Screening.

    PubMed

    Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael

    2016-01-01

    Breath alcohol screening is important for traffic safety, access control, and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper focuses on algorithms for the determination of breath alcohol concentration in diluted breath samples, using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting, and personalization to reduce estimation errors. Evaluation has been performed using data from a previously conducted human study. It is concluded that these features in combination will significantly reduce the random error compared to the signal averaging algorithm taken alone. PMID:27043576

  4. Algorithm development for Maxwell's equations for computational electromagnetism

    NASA Technical Reports Server (NTRS)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.
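
    For orientation only, the sketch below shows an explicit 1-D Yee-style time-domain update with central differences. The algorithm described in the abstract is implicit (alternating direction implicit) and finite-volume in curvilinear coordinates, which this normalized toy loop does not reproduce.

```python
# Explicit 1-D FDTD update (normalized units), shown only as a contrast to the
# implicit ADI finite-volume scheme described in the abstract.
import numpy as np

def fdtd_1d(steps: int = 400, nx: int = 200):
    ez = np.zeros(nx)          # electric field on integer grid points
    hy = np.zeros(nx - 1)      # magnetic field on staggered half-cells
    for n in range(steps):
        hy += np.diff(ez)                                # central-difference curl E
        ez[1:-1] += np.diff(hy)                          # central-difference curl H
        ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source
    return ez
```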

  5. JPSS Cryosphere Algorithms: Integration and Testing in Algorithm Development Library (ADL)

    NASA Astrophysics Data System (ADS)

    Tsidulko, M.; Mahoney, R. L.; Meade, P.; Baldwin, D.; Tschudi, M. A.; Das, B.; Mikles, V. J.; Chen, W.; Tang, Y.; Sprietzer, K.; Zhao, Y.; Wolf, W.; Key, J.

    2014-12-01

    JPSS is a next-generation satellite system planned for launch in 2017. The satellites will carry a suite of sensors that are already on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The NOAA/NESDIS/STAR Algorithm Integration Team (AIT) works within the Algorithm Development Library (ADL) framework, which mimics the operational JPSS Interface Data Processing Segment (IDPS). The AIT contributes to the development, integration, and testing of scientific algorithms employed in the IDPS. This presentation discusses cryosphere-related activities performed in the ADL. The addition of a new ancillary data set, the NOAA Global Multisensor Automated Snow/Ice data (GMASI), together with the required ADL code modifications, is described. The preliminary impact of GMASI on the gridded Snow/Ice product is estimated. Several modifications to the Ice Age algorithm, which mis-classifies ice type for certain areas/time periods, are tested in the ADL. Sensitivity runs for daytime, nighttime, and the terminator zone are performed and presented. Comparisons between the original and modified versions of the Ice Age algorithm are also presented.

  6. Development and Comparison of Warfarin Dosing Algorithms in Stroke Patients

    PubMed Central

    Cho, Sun-Mi; Lee, Kyung-Yul; Choi, Jong Rak

    2016-01-01

    Purpose The genes for cytochrome P450 2C9 (CYP2C9) and vitamin K epoxide reductase complex subunit 1 (VKORC1) have been identified as important genetic determinants of warfarin dosing and have been studied. We developed a warfarin dosing algorithm for Korean patients with stroke and compared the accuracy of warfarin dose prediction algorithms based on pharmacogenetics. Materials and Methods A total of 101 patients on a stable maintenance dose of warfarin were enrolled. The warfarin dosing algorithm was developed using multiple linear regression analysis. The performance of all the algorithms was characterized by the coefficient of determination, determined by linear regression, and by the mean percent deviation of predicted doses from the actual dose. In addition, we compared the performance of the algorithms using the percentage of predicted doses falling within ±20% of clinically observed doses and by dividing the patients into a low-dose group (≤3 mg/day), an intermediate-dose group (3–7 mg/day), and a high-dose group (≥7 mg/day). Results The newly developed algorithm includes the variables age, body weight, and CYP2C9 and VKORC1 genotype. Our algorithm accounted for 51% of the variation in the warfarin stable dose, and performed best in predicting doses within 20% of the actual dose and in the intermediate-dose group. Conclusion Our warfarin dosing algorithm may be useful for Korean patients with stroke. Further studies to elucidate the clinical utility of genotype-guided dosing and to find additional genetic associations are necessary. PMID:26996562
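
    A minimal sketch of fitting such a dosing model by ordinary least squares on age, body weight, and genotype indicator variables is given below. The log-dose response and the 0/1/2 variant coding are assumptions for illustration; the paper's actual coefficients are not reproduced.

```python
# Multiple linear regression sketch for a pharmacogenetic warfarin dose model.
import numpy as np

def fit_dose_model(age, weight, cyp2c9_variant, vkorc1_variant, dose_mg_day):
    """Each argument is a 1-D array over patients; variant counts coded 0/1/2."""
    X = np.column_stack([np.ones_like(age, dtype=float),
                         age, weight, cyp2c9_variant, vkorc1_variant])
    y = np.log(dose_mg_day)                          # model log of the stable dose
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta                                      # intercept and coefficients

def predict_dose(beta, age, weight, cyp2c9_variant, vkorc1_variant):
    x = np.array([1.0, age, weight, cyp2c9_variant, vkorc1_variant])
    return float(np.exp(x @ beta))
```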

  7. Infrared Algorithm Development for Ocean Observations with EOS/MODIS

    NASA Technical Reports Server (NTRS)

    Brown, Otis B.

    1997-01-01

    Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared measurements. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, development of experimental instrumentation, and participation in MODIS (project) related activities. Activities in this contract period have focused on radiative transfer modeling, evaluation of atmospheric correction methodologies, undertake field campaigns, analysis of field data, and participation in MODIS meetings.

  8. System development of the Screwworm Eradication Data System (SEDS) algorithm

    NASA Technical Reports Server (NTRS)

    Arp, G.; Forsberg, F.; Giddings, L.; Phinney, D.

    1976-01-01

    The use of remotely sensed data is reported in the eradication of the screwworm and in the study of the role of the weather in the activity and development of the screwworm fly. As a result, the Screwworm Eradication Data System (SEDS) algorithm was developed.

  9. Infrared algorithm development for ocean observations with EOS/MODIS

    NASA Technical Reports Server (NTRS)

    Brown, Otis B.

    1994-01-01

    Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared retrievals. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, and participation in MODIS (project) related activities. Efforts in this contract period have focused on radiative transfer modeling and evaluation of atmospheric path radiance effects on SST estimation, exploration of involvement in ongoing field studies, evaluation of new computer networking strategies, and objective analysis approaches.

  10. On the development of protein pKa calculation algorithms

    SciTech Connect

    Carstensen, Tommy; Farrell, Damien; Huang, Yong; Baker, Nathan A.; Nielsen, Jens E.

    2011-12-01

    Protein pKa calculation algorithms are typically developed to reproduce experimental pKa values and provide us with a better understanding of the fundamental importance of electrostatics for protein structure and function. However, the approximations and adjustable parameters employed in almost all pKa calculation methods mean that there is a risk that pKa calculation algorithms are 'over-fitted' to the available datasets, and that these methods therefore do not model protein physics realistically. We employ simulations of the protein pKa calculation algorithm development process to show that careful optimization procedures and non-biased experimental datasets must be applied to ensure a realistic description of the underlying physical terms. We furthermore investigate the effect of experimental noise and find a significant effect on the pKa calculation algorithm optimization landscape. Finally, we comment on strategies for ensuring the physical realism of protein pKa calculation algorithms and we assess the overall state of the field with a view to predicting future directions of development.

  11. Development and Application of a Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.

  12. Decision making algorithm for development strategy of information systems

    NASA Astrophysics Data System (ADS)

    Derman, Galyna Y.; Nikitenko, Olena D.; Kotyra, Andrzej; Bazarova, Madina; Kassymkhanova, Dana

    2015-12-01

    The paper presents a decision-making algorithm for selecting the development strategy of an information system. The development process is planned taking into account the internal and external factors of the enterprise that affect the development prospects of both the information system and the enterprise as a whole, as well as the initial state of the system. The total risk is the criterion for selecting the strategy; it is calculated from statistical and fuzzy data on the system's parameters, which are summarized by means of an uncertainty function. Software implementing the decision-making algorithm for choosing the development strategy of an information system is also developed.
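
    The selection criterion this record describes, choosing the strategy with the lowest total risk computed from statistical and fuzzy parameter data, can be illustrated with a minimal sketch. The strategies, membership bounds, and weights below are hypothetical, not taken from the paper.

```python
# Minimal sketch: choose the strategy with the lowest total risk, where each
# parameter contributes a weighted risk derived from fuzzy/statistical data.
# All names and numbers are hypothetical.

def fuzzy_risk(value, low, high):
    """Ramp-style membership: risk 0 below `low`, 1 above `high`."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

strategies = {
    "incremental_upgrade": {"cost_overrun": 0.2, "downtime_h": 4, "staff_load": 0.3},
    "full_replacement":    {"cost_overrun": 0.6, "downtime_h": 24, "staff_load": 0.8},
    "outsourced_system":   {"cost_overrun": 0.4, "downtime_h": 8, "staff_load": 0.2},
}
weights = {"cost_overrun": 0.5, "downtime_h": 0.3, "staff_load": 0.2}
bounds = {"cost_overrun": (0.1, 0.7), "downtime_h": (2, 30), "staff_load": (0.1, 0.9)}

def total_risk(params):
    return sum(weights[k] * fuzzy_risk(v, *bounds[k]) for k, v in params.items())

best = min(strategies, key=lambda s: total_risk(strategies[s]))
for name, params in strategies.items():
    print(f"{name}: total risk = {total_risk(params):.3f}")
print("selected strategy:", best)
```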

  13. REVIEW ARTICLE: EIT reconstruction algorithms: pitfalls, challenges and recent developments

    NASA Astrophysics Data System (ADS)

    Lionheart, William R. B.

    2004-02-01

    We review developments, issues and challenges in electrical impedance tomography (EIT) for the 4th Conference on Biomedical Applications of Electrical Impedance Tomography, held at Manchester in 2003. We focus on the necessity for three-dimensional data collection and reconstruction, efficient solution of the forward problem, and both present and future reconstruction algorithms. We also suggest common pitfalls or 'inverse crimes' to avoid.

  14. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    NASA Astrophysics Data System (ADS)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    The Algorithm Development Library (ADL) is a framework that mimics the operational IDPS (Interface Data Processing Segment) system currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), to be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from an ADL emulation of the future IDPS system incorporating all the suggested algorithm updates with the current official processing results, using qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  15. Development, Comparisons and Evaluation of Aerosol Retrieval Algorithms

    NASA Astrophysics Data System (ADS)

    de Leeuw, G.; Holzer-Popp, T.; Aerosol-cci Team

    2011-12-01

    The Climate Change Initiative (cci) of the European Space Agency (ESA) has brought together a team of European aerosol retrieval groups working on the development and improvement of aerosol retrieval algorithms. The goal of this cooperation is the development of methods to provide the best possible information on climate and climate change based on satellite observations. To achieve this, algorithms are characterized in detail as regards the retrieval approaches, the aerosol models used in each algorithm, cloud detection and surface treatment. A round-robin intercomparison of results from the various participating algorithms serves to identify the best modules or combinations of modules for each sensor. Annual global datasets including their uncertainties will then be produced and validated. The project builds on 9 existing algorithms to produce spectral aerosol optical depth (AOD and Ångström exponent) as well as other aerosol information; two instruments are included to provide the absorbing aerosol index (AAI) and stratospheric aerosol information. The algorithms included are: - 3 for ATSR (ORAC developed by RAL / Oxford University, ADV developed by FMI, and the SU algorithm developed by Swansea University) - 2 for MERIS (BAER by Bremen University and the ESA standard handled by HYGEOS) - 1 for POLDER over ocean (LOA) - 1 for synergetic retrieval (SYNAER by DLR) - 1 for OMI retrieval of the absorbing aerosol index with averaging kernel information (KNMI) - 1 for GOMOS stratospheric extinction profile retrieval (BIRA) The first seven algorithms aim at the retrieval of the AOD. However, each of the algorithms differs in its approach, even among algorithms working with the same instrument such as ATSR or MERIS. To analyse the strengths and weaknesses of each algorithm several tests are made. The starting point for comparison and measurement of improvements is a retrieval run for 1 month, September 2008. The data from the same month are subsequently used for

  16. Developing and Implementing the Data Mining Algorithms in RAVEN

    SciTech Connect

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian

    2015-09-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. It is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data, and post-processing and analyzing such data might, in some cases, take longer than the initial software runtime. Data mining algorithms and methods help in recognizing and understanding patterns in the data, and thus in discovering knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is analyzing the large number of scenarios generated. Data mining techniques are typically used to better organize and understand the data, i.e. to recognize patterns in it. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms and on the application of these algorithms to different databases.

  17. Development of microwave rainfall retrieval algorithm for climate applications

    NASA Astrophysics Data System (ADS)

    KIM, J. H.; Shin, D. B.

    2014-12-01

    With satellite datasets accumulated over decades, satellite-based data can contribute to sustained climate applications. Level-3 products from microwave sensors for climate applications can be obtained from several algorithms. For example, the Microwave Emission brightness Temperature Histogram (METH) algorithm produces level-3 rainfall directly, whereas the Goddard profiling (GPROF) algorithm first generates instantaneous rainfall and then a temporal and spatial averaging process leads to level-3 products. The rainfall algorithm developed in this study follows a similar approach of averaging instantaneous rainfall. However, the algorithm is designed to produce instantaneous rainfall at an optimal resolution that shows reduced non-linearity in the brightness temperature (TB)-rain rate (R) relation. This resolution tends to make effective use of emission channels, whose footprints are relatively larger than those of scattering channels. The algorithm is mainly composed of a-priori databases (DBs) and a Bayesian inversion module. The DB contains massive numbers of pairs of simulated microwave TBs and rain rates, obtained from WRF (version 3.4) and RTTOV (version 11.1) simulations. To improve the accuracy and efficiency of the retrieval process, a data mining technique is additionally considered: the entire DB is classified into eight types based on Köppen climate classification criteria using reanalysis data. Among these sub-DBs, the one that presents the most similar physical characteristics is selected by considering the thermodynamics of the input data. When the Bayesian inversion is applied to the selected DB, instantaneous rain rates at 6-hour intervals are retrieved. The retrieved monthly mean rainfalls are statistically compared with CMAP and GPCP.
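
    A minimal sketch of the Bayesian inversion step this record describes is given below: the retrieved rain rate is a likelihood-weighted average of the rain rates in an a-priori database of simulated brightness temperatures. The database, channel sensitivities, and observation error are synthetic and assume a diagonal error covariance, so the sketch stands in for the general technique rather than the study's actual DB and climate classification scheme.

```python
# Minimal sketch of a Bayesian rain-rate retrieval against an a-priori database
# of simulated brightness temperatures (TBs). Synthetic data; a diagonal error
# covariance is assumed for simplicity.
import numpy as np

rng = np.random.default_rng(1)
n_db, n_chan = 5000, 4

# Synthetic a-priori database: rain rates (mm/h) and associated simulated TBs (K)
db_rain = rng.gamma(shape=1.5, scale=2.0, size=n_db)
db_tb = (280.0 - np.outer(db_rain, np.array([3.0, 2.0, 1.2, 0.6]))
         + rng.normal(0, 1.0, (n_db, n_chan)))

obs_tb = np.array([265.0, 270.0, 274.0, 277.0])   # "observed" TBs (K)
sigma = 2.0                                       # assumed channel noise (K)

# Gaussian likelihood of each database entry given the observation
chi2 = np.sum((db_tb - obs_tb) ** 2, axis=1) / sigma ** 2
weights = np.exp(-0.5 * (chi2 - chi2.min()))      # shift for numerical stability
weights /= weights.sum()

retrieved_rain = np.sum(weights * db_rain)
print(f"retrieved rain rate: {retrieved_rain:.2f} mm/h")
```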

  18. Oscillation Detection Algorithm Development Summary Report and Test Plan

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang

    2009-10-03

    Small signal stability problems are one of the major threats to grid stability and reliability in California and the western U.S. power grid. An unstable oscillatory mode can cause large-amplitude oscillations and may result in system breakup and large-scale blackouts. There have been several incidents of system-wide oscillations. Of them, the most notable is the August 10, 1996 western system breakup produced as a result of undamped system-wide oscillations. There is a great need for real-time monitoring of small-signal oscillations in the system. In power systems, a small-signal oscillation is the result of poor electromechanical damping. Considerable understanding and literature have been developed on the small-signal stability problem over the past 50+ years. These studies have been mainly based on a linearized system model and eigenvalue analysis of its characteristic matrix. However, its practical feasibility is greatly limited as power system models have been found inadequate in describing real-time operating conditions. Significant efforts have been devoted to monitoring system oscillatory behaviors from real-time measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision time-synchronized data needed for estimating oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to estimate system oscillation modes and their damping. Low damping indicates potential system stability issues. Oscillation alarms can be issued when the power system is lightly damped. A good oscillation alarm tool can provide time for operators to take remedial action and reduce the probability of a system breakup as a result of a light damping condition. Real-time oscillation monitoring requires ModeMeter algorithms to have the capability to work with various kinds of measurements: disturbance data (ringdown signals), noise probing data, and ambient data. Several measurement
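
    As an illustration of measurement-based mode estimation of the kind a ModeMeter performs (not the report's actual algorithm), the sketch below estimates the frequency and damping of a synthetic single-mode ringdown signal by combining an FFT frequency estimate with a log-envelope fit obtained via the Hilbert transform.

```python
# Illustrative sketch: estimate frequency and damping of a ringdown signal.
# Not the ModeMeter algorithm itself; a synthetic single-mode signal is assumed.
import numpy as np
from scipy.signal import hilbert

fs = 30.0                       # phasor reporting rate (samples/s)
t = np.arange(0, 20, 1 / fs)
f0, zeta = 0.7, 0.05            # true mode: 0.7 Hz, 5% damping ratio
sigma = 2 * np.pi * f0 * zeta   # decay rate
signal = np.exp(-sigma * t) * np.cos(2 * np.pi * f0 * t) + 0.01 * np.random.randn(t.size)

# Frequency from the FFT peak
spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
f_est = freqs[np.argmax(spec[1:]) + 1]

# Decay rate from a linear fit to the log of the analytic-signal envelope
envelope = np.abs(hilbert(signal))
mask = envelope > 0.05 * envelope.max()           # ignore the noise floor
slope, _ = np.polyfit(t[mask], np.log(envelope[mask]), 1)
sigma_est = -slope
zeta_est = sigma_est / np.sqrt(sigma_est ** 2 + (2 * np.pi * f_est) ** 2)

print(f"estimated mode: {f_est:.2f} Hz, damping ratio {zeta_est:.1%}")
```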

  19. Development of a biomimetic robotic fish and its control algorithm.

    PubMed

    Yu, Junzhi; Tan, Min; Wang, Shuo; Chen, Erkui

    2004-08-01

    This paper is concerned with the design of a robotic fish and its motion control algorithms. A radio-controlled, four-link biomimetic robotic fish is developed using a flexible posterior body and an oscillating foil as a propeller. The swimming speed of the robotic fish is adjusted by modulating the joints' oscillating frequency, and its orientation is tuned by different joint deflections. Since the motion control of a robotic fish involves both the hydrodynamics of the fluid environment and the dynamics of the robot, it is very difficult to establish a precise mathematical model employing purely analytical methods. Therefore, the fish's motion control task is decomposed into two control systems. The online speed control implements a hybrid control strategy and a proportional-integral-derivative (PID) control algorithm. The orientation control system is based on a fuzzy logic controller. In our experiments, a point-to-point (PTP) control algorithm is implemented and an overhead vision system is adopted to provide real-time visual feedback. The experimental results confirm the effectiveness of the proposed algorithms. PMID:15462446
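
    A minimal sketch of the speed-control loop described in this record is shown below: a discrete PID controller adjusts the tail-oscillation frequency to reach a target swimming speed, with a crude first-order model standing in for the fish's hydrodynamics. The gains and the surrogate plant are purely illustrative.

```python
# Minimal sketch of PID speed control for a robotic fish: the controller tunes
# the oscillation frequency of the tail joints; the plant below is a crude
# first-order stand-in for the real hydrodynamics. All gains are illustrative.

def simulate(target_speed=0.30, dt=0.05, steps=400,
             kp=4.0, ki=1.5, kd=0.2):
    speed = 0.0              # current swimming speed (m/s)
    freq = 0.0               # commanded oscillation frequency (Hz)
    integral, prev_err = 0.0, 0.0
    for _ in range(steps):
        err = target_speed - speed
        integral += err * dt
        derivative = (err - prev_err) / dt
        prev_err = err

        freq = kp * err + ki * integral + kd * derivative
        freq = max(0.0, min(freq, 3.0))          # actuator limits

        # First-order surrogate plant: steady-state speed ~ 0.15 m/s per Hz
        speed += dt * (0.15 * freq - speed) / 0.8
    return speed, freq

final_speed, final_freq = simulate()
print(f"final speed {final_speed:.3f} m/s at {final_freq:.2f} Hz")
```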

  20. SMMR Simulator radiative transfer calibration model. 2: Algorithm development

    NASA Technical Reports Server (NTRS)

    Link, S.; Calhoon, C.; Krupp, B.

    1980-01-01

    Passive microwave measurements performed from Earth orbit can be used to provide global data on a wide range of geophysical and meteorological phenomena. A Scanning Multichannel Microwave Radiometer (SMMR) is being flown on the Nimbus-G satellite. The SMMR Simulator duplicates the frequency bands utilized in the spacecraft instruments through an amalgam of radiometer systems. The algorithm developed utilizes data from the fall 1978 NASA CV-990 Nimbus-G underflight test series and subsequent laboratory testing.

  1. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  2. Development of an Inverse Algorithm for Resonance Inspection

    SciTech Connect

    Lai, Canhai; Xu, Wei; Sun, Xin

    2012-10-01

    Resonance inspection (RI), which employs the shift in natural frequency spectra between good and anomalous part populations to detect defects, is a non-destructive evaluation (NDE) technique with many advantages compared to other contemporary NDE methods, such as low inspection cost, high testing speed, and broad applicability to structures with complex geometry. It has already been widely used in the automobile industry for quality inspection of safety-critical parts. Unlike some conventionally used NDE methods, however, current RI technology is unable to provide details, i.e. the location, dimension, or type, of the flaws in discrepant parts. This limitation severely hinders its widespread application and further development. In this study, an inverse RI algorithm based on a maximum correlation function is proposed to quantify the location and size of flaws in a discrepant part. Dog-bone-shaped stainless steel samples with and without controlled flaws are used for algorithm development and validation. The results show that multiple flaws can be accurately pinpointed using the algorithms developed, and that the prediction accuracy decreases with increasing flaw numbers and decreasing distance between flaws.
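
    The matching idea behind a maximum-correlation inverse step can be sketched as follows (with entirely synthetic frequency-shift signatures, not the study's measured spectra): the measured resonance-frequency shifts are correlated against a library of simulated flaw signatures to pick a location, and a least-squares scale factor estimates the flaw size.

```python
# Minimal sketch of signature matching by maximum correlation: compare measured
# resonance-frequency shifts against a library of simulated flaw signatures.
# Signatures, flaw locations, and sizes are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_modes = 12
modes = np.linspace(0, np.pi, n_modes)

# Synthetic unit signatures: relative frequency-shift pattern per mm of flaw size
signatures = {
    "mid-span": -0.001 * np.abs(np.sin(modes)),
    "near-end": -0.001 * np.abs(np.cos(modes)),
}

# "Measured" shifts: a 4 mm mid-span flaw plus measurement noise
measured = 4.0 * signatures["mid-span"] + rng.normal(0, 2e-4, n_modes)

def correlation(a, b):
    return np.corrcoef(a, b)[0, 1]

# Location from maximum correlation, size from a least-squares scale factor
best_loc = max(signatures, key=lambda loc: correlation(measured, signatures[loc]))
s = signatures[best_loc]
size_mm = float(np.dot(measured, s) / np.dot(s, s))
print(f"flaw hypothesis: location={best_loc}, size ~ {size_mm:.1f} mm")
```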

  3. Earlier snowmelt and warming lead to earlier but not necessarily more plant growth.

    PubMed

    Livensperger, Carolyn; Steltzer, Heidi; Darrouzet-Nardi, Anthony; Sullivan, Patrick F; Wallenstein, Matthew; Weintraub, Michael N

    2016-01-01

    Climate change over the past ∼50 years has resulted in earlier occurrence of plant life-cycle events for many species. Across temperate, boreal and polar latitudes, earlier seasonal warming is considered the key mechanism leading to earlier leaf expansion and growth. Yet, in seasonally snow-covered ecosystems, the timing of spring plant growth may also be cued by snowmelt, which may occur earlier in a warmer climate. Multiple environmental cues protect plants from growing too early, but to understand how climate change will alter the timing and magnitude of plant growth, experiments need to independently manipulate temperature and snowmelt. Here, we demonstrate that altered seasonality through experimental warming and earlier snowmelt led to earlier plant growth, but the aboveground production response varied among plant functional groups. Earlier snowmelt without warming led to early leaf emergence, but often slowed the rate of leaf expansion and had limited effects on aboveground production. Experimental warming alone had small and inconsistent effects on aboveground phenology, while the effect of the combined treatment resembled that of early snowmelt alone. Experimental warming led to greater aboveground production among the graminoids, limited changes among deciduous shrubs and decreased production in one of the dominant evergreen shrubs. As a result, we predict that early onset of the growing season may favour early growing plant species, even those that do not shift the timing of leaf expansion. PMID:27075181

  4. Earlier snowmelt and warming lead to earlier but not necessarily more plant growth

    PubMed Central

    Livensperger, Carolyn; Steltzer, Heidi; Darrouzet-Nardi, Anthony; Sullivan, Patrick F.; Wallenstein, Matthew; Weintraub, Michael N.

    2016-01-01

    Climate change over the past ∼50 years has resulted in earlier occurrence of plant life-cycle events for many species. Across temperate, boreal and polar latitudes, earlier seasonal warming is considered the key mechanism leading to earlier leaf expansion and growth. Yet, in seasonally snow-covered ecosystems, the timing of spring plant growth may also be cued by snowmelt, which may occur earlier in a warmer climate. Multiple environmental cues protect plants from growing too early, but to understand how climate change will alter the timing and magnitude of plant growth, experiments need to independently manipulate temperature and snowmelt. Here, we demonstrate that altered seasonality through experimental warming and earlier snowmelt led to earlier plant growth, but the aboveground production response varied among plant functional groups. Earlier snowmelt without warming led to early leaf emergence, but often slowed the rate of leaf expansion and had limited effects on aboveground production. Experimental warming alone had small and inconsistent effects on aboveground phenology, while the effect of the combined treatment resembled that of early snowmelt alone. Experimental warming led to greater aboveground production among the graminoids, limited changes among deciduous shrubs and decreased production in one of the dominant evergreen shrubs. As a result, we predict that early onset of the growing season may favour early growing plant species, even those that do not shift the timing of leaf expansion. PMID:27075181

  5. The development of a whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Kay, F. J.

    1973-01-01

    The whole-body algorithm is envisioned as a mathematical model that utilizes human physiology to simulate the behavior of vital body systems. The objective of this model is to determine the response of selected body parameters within these systems to various input perturbations, or stresses. Perturbations of interest are exercise, chemical unbalances, gravitational changes and other abnormal environmental conditions. This model provides for a study of man's physiological response in various space applications, underwater applications, normal and abnormal workloads and environments, and the functioning of the system with physical impairments or decay of functioning components. Many methods or approaches to the development of a whole-body algorithm are considered. Of foremost concern is the determination of the subsystems to be included, the detail of the subsystems and the interaction between the subsystems.

  6. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.

  7. The development of algorithms in electrical impedance computerized tomography.

    PubMed

    Shie, J R; Li, C J; Lin, J T

    2000-01-01

    Electrical Impedance Computerized Tomography (EICT) is an imaging method that reconstructs the impedance distribution inside a domain from currents injected at the boundary and displays the impedance contrast ratio as an image. This paper concentrates on developing two algorithms to enhance the quality of the conductivity image: the "Fine-Mesh Conversion Method" and the "Sub-Domain EICT Method". The "Fine-Mesh Conversion Method" is a numerical calibration process that finds a coarse-mesh impedance network behaving like a fine-mesh network in the sense of giving similar voltages under the same current excitations. The "Sub-Domain EICT Method" solves a higher-resolution EICT at the cost of a lower-resolution EICT by combining the "Fine-Mesh Conversion Method" with a Fuzzy Logic Inference System (FLIS) classifier. PMID:10834231

  8. Development of clustering algorithms for Compressed Baryonic Matter experiment

    NASA Astrophysics Data System (ADS)

    Kozlov, G. E.; Ivanov, V. V.; Lebedev, A. A.; Vassiliev, Yu. O.

    2015-05-01

    A clustering problem for the coordinate detectors in the Compressed Baryonic Matter (CBM) experiment is discussed. Because of the high interaction rate and huge datasets to be dealt with, clustering algorithms are required to be fast and efficient and capable of processing events with high track multiplicity. At present there are two different approaches to the problem. In the first one each fired pad bears information about its charge, while in the second one a pad can or cannot be fired, thus rendering the separation of overlapping clusters a difficult task. To deal with the latter, two different clustering algorithms were developed, integrated into the CBMROOT software environment, and tested with various types of simulated events. Both of them are found to be highly efficient and accurate.
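
    For the binary (fired / not fired) readout case mentioned in this record, a minimal flood-fill clustering sketch is shown below: neighbouring fired pads are grouped into clusters and each cluster's centre of gravity is reported. The pad plane and hits are synthetic, and the real CBMROOT algorithms handle charge information and overlapping clusters that this sketch ignores.

```python
# Minimal sketch: group neighbouring fired pads into clusters by flood fill and
# report each cluster's centre of gravity. The pad plane and hits are synthetic.
from collections import deque

fired = {(10, 10), (10, 11), (11, 10),          # cluster 1
         (20, 5), (21, 5), (21, 6), (22, 6),    # cluster 2
         (3, 30)}                               # isolated pad

def neighbours(pad):
    x, y = pad
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                yield (x + dx, y + dy)

def cluster(fired_pads):
    remaining, clusters = set(fired_pads), []
    while remaining:
        seed = remaining.pop()
        queue, members = deque([seed]), [seed]
        while queue:
            for nb in neighbours(queue.popleft()):
                if nb in remaining:
                    remaining.remove(nb)
                    queue.append(nb)
                    members.append(nb)
        clusters.append(members)
    return clusters

for members in cluster(fired):
    cx = sum(p[0] for p in members) / len(members)
    cy = sum(p[1] for p in members) / len(members)
    print(f"cluster of {len(members)} pads, centre of gravity ({cx:.1f}, {cy:.1f})")
```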

  9. Algorithm for automatic forced spirometry quality assessment: technological developments.

    PubMed

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community. PMID:25551213

  10. Algorithm for Automatic Forced Spirometry Quality Assessment: Technological Developments

    PubMed Central

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community. PMID:25551213

  11. Collaborative workbench for cyberinfrastructure to accelerate science algorithm development

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Maskey, M.; Kuo, K.; Lynnes, C.

    2013-12-01

    There are significant untapped resources for information and knowledge creation within the Earth Science community in the form of data, algorithms, services, analysis workflows or scripts, and the related knowledge about these resources. Despite the huge growth in social networking and collaboration platforms, these resources often reside on an investigator's workstation or laboratory and are rarely shared. A major reason for this is that there are very few scientific collaboration platforms, and those that exist typically require the use of a new set of analysis tools and paradigms to leverage the shared infrastructure. As a result, adoption of these collaborative platforms for science research is inhibited by the high cost to an individual scientist of switching from his or her own familiar environment and set of tools to a new environment and tool set. This presentation will describe an ongoing project developing an Earth Science Collaborative Workbench (CWB). The CWB approach will eliminate this barrier by augmenting a scientist's current research environment and tool set to allow him or her to easily share diverse data and algorithms. The CWB will leverage evolving technologies such as commodity computing and social networking to design an architecture for scalable collaboration that will support the emerging vision of an Earth Science Collaboratory. The CWB is being implemented on the robust and open source Eclipse framework and will be compatible with widely used scientific analysis tools such as IDL. The myScience Catalog built into CWB will capture and track metadata and provenance about data and algorithms for the researchers in a non-intrusive manner with minimal overhead. Seamless interfaces to multiple Cloud services will support sharing algorithms, data, and analysis results, as well as access to storage and computer resources. A Community Catalog will track the use of shared science artifacts and manage collaborations among researchers.

  12. Development of the DPR algorithms for GPM science construction

    NASA Astrophysics Data System (ADS)

    Oki, R.; Shimizu, S.; Kubota, T.; Yoshida, N.; Kachi, M.; Iguchi, T.

    2009-04-01

    The Global Precipitation Measurement (GPM) mission is an international satellite mission for understanding the distribution of global precipitation. It started as a follow-on and expanded mission of the Tropical Rainfall Measuring Mission (TRMM) project. The three-dimensional measurement of precipitation will be achieved by the Dual-frequency Precipitation Radar (DPR) aboard the GPM core satellite. The DPR, which is being developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), consists of two radars: a Ku-band precipitation radar at 13.6 GHz (KuPR) and a Ka-band radar at 35.55 GHz (KaPR). The DPR is expected to advance precipitation science by expanding the coverage of observations to higher latitudes than those of the TRMM PR, measuring snow and light rain with the KaPR, and providing drop size distribution information based on the differential attenuation of echoes at the two frequencies. Because the GPM core satellite, like TRMM, is in a non-sun-synchronous orbit, we can derive information on the diurnal cycle of precipitation over the mid-latitudes in addition to the Tropics. JAXA will promote and contribute to this advance of science through the development of the DPR algorithms. We are developing synthetic DPR Level 1 data from experimental data of the TRMM PR. Moreover, we are trying to validate the algorithms physically by using data sets synthesized from a cloud resolving model of the Japan Meteorological Agency and the satellite radar simulation algorithm of the NICT.

  13. Leadership development in the age of the algorithm.

    PubMed

    Buckingham, Marcus

    2012-06-01

    By now we expect personalized content--it's routinely served up by online retailers and news services, for example. But the typical leadership development program still takes a formulaic, one-size-fits-all approach. And it rarely happens that an excellent technique can be effectively transferred from one leader to all others. Someone trying to adopt a practice from a leader with a different style usually seems stilted and off--a Franken-leader. Breakthrough work at Hilton Hotels and other organizations shows how companies can use an algorithmic model to deliver training tips uniquely suited to each individual's style. It's a five-step process: First, a company must choose a tool with which to identify each person's leadership type. Second, it should assess its best leaders, and third, it should interview them about their techniques. Fourth, it should use its algorithmic model to feed tips drawn from those techniques to developing leaders of the same type. And fifth, it should make the system dynamically intelligent, with user reactions sharpening the content and targeting of tips. The power of this kind of system--highly customized, based on peer-to-peer sharing, and continually evolving--will soon overturn the generic model of leadership development. And such systems will inevitably break through any one organization, until somewhere in the cloud the best leadership tips from all over are gathered, sorted, and distributed according to which ones suit which people best. PMID:22741421

  14. The development of solution algorithms for compressible flows

    NASA Astrophysics Data System (ADS)

    Slack, David Christopher

    Three main topics were examined. The first is the development and comparison of time integration schemes on 2-D unstructured meshes. Both explicit and implicit time-integration schemes are presented. Cell centered and cell vertex finite volume upwind schemes using Roe's approximate Riemann solver are developed. The second topic involves an interactive adaptive remeshing algorithm which uses a frontal grid generator and is compared to a single grid calculation. The final topic examined is the capabilities developed for a structured 3-D code called GASP. The capabilities include: generalized chemistry and thermodynamic modeling, space marching, memory management through the use of binary C I/O, and algebraic and two equation eddy viscosity turbulence modeling. Results are given for a Mach 1.7 3-D analytic forebody, a Mach 1.38 axisymmetric nozzle with hydrogen-air combustion, a Mach 14.15 deg ramp, and Mach 0.3 viscous flow over a flat plate.

  15. Raindrop Size Distribution Observation for GPM/DPR algorithm development

    NASA Astrophysics Data System (ADS)

    Nakagawa, Katsuhiro; Hanado, Hiroshi; Nishikawa, Masanori; Nakamura, Kenji; Kaneko, Yuki; Kawamura, Seiji; Iwai, Hironori; Minda, Haruya; Oki, Riko

    2013-04-01

    In order to evaluate and improve the accuracy of rainfall intensity from space-borne radars (TRMM/PR and GPM/DPR), it is important to estimate the rain attenuation, namely the k-Z relationship (k is the specific attenuation, Z is the radar reflectivity), correctly. The National Institute of Information and Communications Technology (NICT) developed a mobile precipitation observation system for the dual Ka-band radar field campaign for GPM/DPR algorithm development. The precipitation measurement instruments are installed on the roof of a container. The instruments installed for raindrop size distribution (DSD) measurements are a 2-Dimensional Video Disdrometer (2DVD), a Joss-type disdrometer, and a laser optical disdrometer (Parsivel). The 2DVD and Parsivel can measure not only raindrop size distributions but also ice and snow size distributions. Observations using the mobile precipitation observation system were performed on Okinawa Island, in Tsukuba, on the slope of Mt. Fuji, in Nagaoka, and in Sapporo, Japan. Using the DSD data observed in these different regions, the characteristics of the DSD itself are analyzed and the k-Z relationship is estimated for evaluation and improvement of the TRMM/PR and GPM/DPR algorithms.
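
    The k-Z relationship mentioned in this record is commonly approximated as a power law, k = aZ^b. The sketch below fits a and b by linear regression in log-log space; the (k, Z) pairs are synthetic stand-ins for the DSD-derived values from the field campaigns.

```python
# Minimal sketch: fit a power law k = a * Z^b (k: specific attenuation in dB/km,
# Z: reflectivity in mm^6 m^-3) by regression in log-log space. Synthetic data
# stand in for the DSD-derived (k, Z) pairs from the field campaigns.
import numpy as np

rng = np.random.default_rng(3)
a_true, b_true = 3.0e-4, 0.78

Z = 10 ** rng.uniform(1.5, 4.5, 300)                  # mm^6 m^-3
k = a_true * Z ** b_true * rng.lognormal(0.0, 0.15, Z.size)

# Linear fit of log k = log a + b log Z
b_fit, log_a_fit = np.polyfit(np.log10(Z), np.log10(k), 1)
a_fit = 10 ** log_a_fit

print(f"fitted k-Z relation: k = {a_fit:.2e} * Z^{b_fit:.2f}")
```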

  16. SAR data exploitation: computational technology enabling SAR ATR algorithm development

    NASA Astrophysics Data System (ADS)

    Majumder, Uttam K.; Casteel, Curtis H., Jr.; Buxa, Peter; Minardi, Michael J.; Zelnio, Edmund G.; Nehrbass, John W.

    2007-04-01

    A fundamental issue with synthetic aperture radar (SAR) application development is data processing and exploitation in real-time or near real-time. The power of high performance computing (HPC) clusters, FPGA, and the IBM Cell processor presents new algorithm development possibilities that have not been fully leveraged. In this paper, we will illustrate the capability of SAR data exploitation which was impractical over the last decade due to computing limitations. We can envision that SAR imagery encompassing city size coverage at extremely high levels of fidelity could be processed at near-real time using the above technologies to empower the warfighter with access to critical information for the war on terror, homeland defense, as well as urban warfare.

  17. Development of Topological Correction Algorithms for ADCP Multibeam Bathymetry Measurements

    NASA Astrophysics Data System (ADS)

    Yang, Sung-Kee; Kim, Dong-Su; Kim, Soo-Jeong; Jung, Woo-Yul

    2013-04-01

    Acoustic Doppler Current Profilers (ADCPs) are increasingly popular in the river research and management communities, being primarily used for estimation of stream flows. ADCPs' capabilities, however, entail additional features that are not fully explored, such as morphologic representation of a river or reservoir bed based upon multi-beam depth measurements. In addition to flow velocity, ADCP measurements include river bathymetry information through the depth measurements acquired in individual 4 or 5 beams at a given oblique angle. Such sounding capability indicates that multi-beam ADCPs can be utilized as efficient depth sounders, more capable than conventional single-beam echo-sounders. The paper introduces the post-processing algorithms required to deal with raw ADCP bathymetry measurements, including the following aspects: a) correcting the individual beam depths for tilt (pitch and roll); b) filtering outliers using SMART filters; c) transforming the corrected depths into geographical coordinates by UTM conversion; d) tagging the beam detection locations with the concurrent GPS information; and e) spatial representation in a GIS package. The developed algorithms are applied to the ADCP bathymetric dataset acquired from Han-Cheon on Jeju Island to validate their applicability.
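
    Step a) of the processing chain above, correcting individual beam depths for pitch and roll, can be sketched by rotating each beam's unit direction vector by the measured tilt angles and projecting the along-beam range onto the vertical. The 20-degree beam angle and tilt values below are illustrative, not taken from the Han-Cheon dataset.

```python
# Minimal sketch of tilt correction for ADCP beam depths: rotate each beam's
# direction vector by the measured pitch and roll, then take the vertical
# component of the along-beam range. Beam angle and tilts are illustrative.
import numpy as np

def beam_directions(beam_angle_deg=20.0):
    """Unit vectors of a 4-beam janus configuration in instrument coordinates."""
    a = np.radians(beam_angle_deg)
    return np.array([[ np.sin(a), 0, -np.cos(a)],
                     [-np.sin(a), 0, -np.cos(a)],
                     [0,  np.sin(a), -np.cos(a)],
                     [0, -np.sin(a), -np.cos(a)]])

def tilt_matrix(pitch_deg, roll_deg):
    p, r = np.radians(pitch_deg), np.radians(roll_deg)
    rot_pitch = np.array([[1, 0, 0],
                          [0, np.cos(p), -np.sin(p)],
                          [0, np.sin(p),  np.cos(p)]])
    rot_roll = np.array([[ np.cos(r), 0, np.sin(r)],
                         [0, 1, 0],
                         [-np.sin(r), 0, np.cos(r)]])
    return rot_pitch @ rot_roll

slant_ranges = np.array([5.2, 5.4, 5.1, 5.3])          # along-beam ranges (m)
rot = tilt_matrix(pitch_deg=4.0, roll_deg=-2.5)
dirs_earth = beam_directions() @ rot.T                  # beam vectors in earth frame

depths = -slant_ranges * dirs_earth[:, 2]               # vertical depth per beam
print("tilt-corrected beam depths (m):", np.round(depths, 2))
```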

  18. Advanced three-dimensional Eulerian hydrodynamic algorithm development

    SciTech Connect

    Rider, W.J.; Kothe, D.B.; Mosso, S.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The purpose of this project is to investigate, implement, and evaluate algorithms that have high potential for improving the robustness, fidelity and accuracy of three-dimensional Eulerian hydrodynamic simulations. Eulerian computations are necessary to simulate a number of important physical phenomena, ranging from the molding process for metal parts to nuclear weapons safety issues to astrophysical phenomena such as those associated with a Type II supernova. A number of algorithmic issues were explored in the course of this research, including interface/volume tracking, surface physics integration, high resolution integration techniques, multilevel iterative methods, multimaterial hydrodynamics and coupling radiation with hydrodynamics. This project combines core strengths of several Laboratory divisions. The project has high institutional benefit given the renewed emphasis on numerical simulations in Science-Based Stockpile Stewardship and the Accelerated Strategic Computing Initiative and LANL's tactical goals related to high performance computing and simulation.

  19. Stoffenmanager exposure model: development of a quantitative algorithm.

    PubMed

    Tielemans, Erik; Noy, Dook; Schinkel, Jody; Heussen, Henri; Van Der Schaaf, Doeke; West, John; Fransman, Wouter

    2008-08-01

    In The Netherlands, the web-based tool called 'Stoffenmanager' was initially developed to assist small- and medium-sized enterprises to prioritize and control risks of handling chemical products in their workplaces. The aim of the present study was to explore the accuracy of the Stoffenmanager exposure algorithm. This was done by comparing its semi-quantitative exposure rankings for specific substances with exposure measurements collected from several occupational settings to derive a quantitative exposure algorithm. Exposure data were collected using two strategies. First, we conducted seven surveys specifically for validation of the Stoffenmanager. Second, existing occupational exposure data sets were collected from various sources. This resulted in 378 and 320 measurements for solid and liquid scenarios, respectively. The Spearman correlation coefficients between Stoffenmanager scores and exposure measurements appeared to be good for handling solids (r(s) = 0.80, N = 378, P < 0.0001) and liquid scenarios (r(s) = 0.83, N = 320, P < 0.0001). However, the correlation for liquid scenarios appeared to be lower when calculated separately for sets of volatile substances with a vapour pressure >10 Pa (r(s) = 0.56, N = 104, P < 0.0001) and non-volatile substances with a vapour pressure ≤10 Pa (r(s) = 0.53, N = 216, P < 0.0001). The mixed-effect regression models with natural log-transformed Stoffenmanager scores as independent parameter explained a substantial part of the total exposure variability (52% for solid scenarios and 76% for liquid scenarios). Notwithstanding the good correlation, the data show substantial variability in exposure measurements given a certain Stoffenmanager score. The overall performance increases our confidence in the use of the Stoffenmanager as a generic tool for risk assessment. The mixed-effect regression models presented in this paper may be used for assessment of so-called reasonable worst case exposures. This evaluation is
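
    A minimal sketch of the kind of calibration this record describes (not the published model) is shown below: a linear regression of log-transformed exposure measurements on log-transformed Stoffenmanager scores. The data are synthetic, and the mixed-effect structure of the actual analysis is omitted.

```python
# Minimal sketch: regress ln(measured exposure) on ln(Stoffenmanager score) to
# turn a semi-quantitative ranking into a quantitative estimate. Synthetic data;
# the published analysis used mixed-effect models, which are omitted here.
import numpy as np

rng = np.random.default_rng(4)
scores = 10 ** rng.uniform(0, 3, 300)                # semi-quantitative scores
exposure = 0.05 * scores ** 0.9 * rng.lognormal(0, 0.8, scores.size)  # mg/m^3

slope, intercept = np.polyfit(np.log(scores), np.log(exposure), 1)
pred = np.exp(intercept + slope * np.log(scores))

resid = np.log(exposure) - np.log(pred)
explained = 1 - resid.var() / np.log(exposure).var()
print(f"ln(exposure) = {intercept:.2f} + {slope:.2f} * ln(score); "
      f"explained variance = {explained:.2f}")
```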

  20. Development and evaluation of thermal model reduction algorithms for spacecraft

    NASA Astrophysics Data System (ADS)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the topic of the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model for calculation of external heat flows and radiative couplings and into the reduction of the underlying mathematical model. For simplification a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability in reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is of major concern here and restricts the useful application of these methods. Additional model reduction methods have been developed which account for these constraints. The Matrix Reduction method allows the approximation of the differential equation to reference values exactly except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for model reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.

  1. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This was an exploratory study to enhance our understanding of problems involved in developing large scale applications in a heterogeneous distributed environment. It is likely that the large scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data are in different unit systems and the algorithms for integrating in time are different. In addition the code for each application is likely to have been developed on different architectures and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.

  2. Understanding disordered systems through numerical simulation and algorithm development

    NASA Astrophysics Data System (ADS)

    Sweeney, Sean Michael

    Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising
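
    As a toy stand-in for the minimal-spanning-tree calculations described in this record, the sketch below builds the MST of a small 2-D lattice with independent random bond weights using Kruskal's algorithm with union-find; the strong-disorder mapping and percolation-cluster restriction of the thesis are omitted.

```python
# Minimal sketch: build the minimal spanning tree (MST) of a 2-D lattice with
# independent random bond weights using Kruskal's algorithm with union-find.
# A toy stand-in for the MST-on-percolation-cluster study described above.
import random

L = 16                                        # lattice size (L x L sites)
random.seed(0)

def site(x, y):
    return x * L + y

# All nearest-neighbour bonds with uniform random weights
bonds = []
for x in range(L):
    for y in range(L):
        if x + 1 < L:
            bonds.append((random.random(), site(x, y), site(x + 1, y)))
        if y + 1 < L:
            bonds.append((random.random(), site(x, y), site(x, y + 1)))

parent = list(range(L * L))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]         # path compression
        i = parent[i]
    return i

mst_edges, total_weight = [], 0.0
for w, a, b in sorted(bonds):                 # Kruskal: lightest bonds first
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb
        mst_edges.append((a, b))
        total_weight += w

print(f"MST has {len(mst_edges)} edges (expected {L * L - 1}), "
      f"total weight {total_weight:.2f}")
```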

  3. An earlier origin for the Acheulian.

    PubMed

    Lepre, Christopher J; Roche, Hélène; Kent, Dennis V; Harmand, Sonia; Quinn, Rhonda L; Brugal, Jean-Philippe; Texier, Pierre-Jean; Lenoble, Arnaud; Feibel, Craig S

    2011-09-01

    The Acheulian is one of the first defined prehistoric techno-complexes and is characterized by shaped bifacial stone tools. It probably originated in Africa, spreading to Europe and Asia perhaps as early as ∼1 million years (Myr) ago. The origin of the Acheulian is thought to have closely coincided with major changes in human brain evolution, allowing for further technological developments. Nonetheless, the emergence of the Acheulian remains unclear because well-dated sites older than 1.4 Myr ago are scarce. Here we report on the lithic assemblage and geological context for the Kokiselei 4 archaeological site from the Nachukui formation (West Turkana, Kenya) that bears characteristic early Acheulian tools and pushes the first appearance datum for this stone-age technology back to 1.76 Myr ago. Moreover, co-occurrence of Oldowan and Acheulian artefacts at the Kokiselei site complex indicates that the two technologies are not mutually exclusive time-successive components of an evolving cultural lineage, and suggests that the Acheulian was either imported from another location yet to be identified or originated from Oldowan hominins at this vicinity. In either case, the Acheulian did not accompany the first human dispersal from Africa despite being available at the time. This may indicate that multiple groups of hominins distinguished by separate stone-tool-making behaviours and dispersal strategies coexisted in Africa at 1.76 Myr ago. PMID:21886161

  4. Mars Entry Atmospheric Data System Modelling and Algorithm Development

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; OKeefe, Stephen A.; Siemers, Paul; White, Brady; Engelund, Walter C.; Munk, Michelle M.

    2009-01-01

    The Mars Entry Atmospheric Data System (MEADS) is being developed as part of the Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI) project. The MEADS project involves installing an array of seven pressure transducers linked to ports on the MSL forebody to record the surface pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the total pressure, dynamic pressure, Mach number, angle of attack, and angle of sideslip. Secondary objectives are to estimate atmospheric winds by coupling the pressure measurements with the on-board Inertial Measurement Unit (IMU) data. This paper provides details of the algorithm development, MEADS system performance based on calibration, and uncertainty analysis for the aerodynamic and atmospheric quantities of interest. The work presented here is part of the MEDLI performance pre-flight validation and will culminate with processing flight data after Mars entry in 2012.

  5. Algorithm development for Prognostics and Health Management (PHM).

    SciTech Connect

    Swiler, Laura Painton; Campbell, James E.; Doser, Adele Beatrice; Lowder, Kelly S.

    2003-10-01

    This report summarizes the results of a three-year LDRD project on prognostics and health management (PHM). 'Prognostics' refers to the capability to predict the probability of system failure over some future time interval (an alternative definition is the capability to predict the remaining useful life of a system). Prognostics are integrated with health monitoring (through inspections, sensors, etc.) to provide an overall PHM capability that optimizes maintenance actions and results in higher availability at a lower cost. Our goal in this research was to develop PHM tools that could be applied to a wide variety of equipment (repairable, non-repairable, manufacturing, weapons, battlefield equipment, etc.) and require minimal customization to move from one system to the next. Thus, our approach was to develop a toolkit of reusable software objects/components and an architecture for their use. We have developed two software tools: an Evidence Engine and a Consequence Engine. The Evidence Engine integrates information from a variety of sources in order to take into account all the evidence that impacts a prognosis for system health. The Evidence Engine has the capability for feature extraction, trend detection, information fusion through Bayesian Belief Networks (BBN), and estimation of remaining useful life. The Consequence Engine involves algorithms to analyze the consequences of various maintenance actions. The Consequence Engine takes as input a maintenance and use schedule, spares information, and time-to-failure data on components, then generates maintenance and failure events, and evaluates performance measures such as equipment availability, mission capable rate, time to failure, and cost. This report summarizes the capabilities we have developed, describes the approach and architecture of the two engines, and provides examples of their use.
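
    One common PHM building block mentioned in this record, trend detection followed by remaining-useful-life estimation, can be sketched by fitting a linear trend to a degradation feature and extrapolating it to a failure threshold. The feature and threshold below are synthetic; this is not the Evidence Engine itself.

```python
# Minimal sketch of one PHM building block: fit a linear trend to a degradation
# feature and extrapolate to a failure threshold to estimate remaining useful
# life (RUL). Synthetic sensor feature; not the Evidence Engine itself.
import numpy as np

rng = np.random.default_rng(5)
hours = np.arange(0, 500, 10.0)
feature = 0.002 * hours + rng.normal(0, 0.05, hours.size)   # e.g. vibration level
threshold = 1.5                                              # failure criterion

slope, intercept = np.polyfit(hours, feature, 1)
if slope <= 0:
    print("no degradation trend detected")
else:
    t_fail = (threshold - intercept) / slope
    rul = max(0.0, t_fail - hours[-1])
    print(f"estimated threshold crossing at {t_fail:.0f} h, RUL about {rul:.0f} h")
```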

  6. Toward Developing Genetic Algorithms to Aid in Critical Infrastructure Modeling

    SciTech Connect

    Not Available

    2007-05-01

    Today’s society relies upon an array of complex national and international infrastructure networks such as transportation, telecommunication, financial and energy. Understanding these interdependencies is necessary in order to protect our critical infrastructure. The Critical Infrastructure Modeling System, CIMS©, examines the interrelationships between infrastructure networks. CIMS© development is sponsored by the National Security Division at the Idaho National Laboratory (INL) in its ongoing mission for providing critical infrastructure protection and preparedness. A genetic algorithm (GA) is an optimization technique based on Darwin’s theory of evolution. A GA can be coupled with CIMS© to search for optimum ways to protect infrastructure assets. This includes identifying optimum assets to enforce or protect, testing the addition of or change to infrastructure before implementation, or finding the optimum response to an emergency for response planning. This paper describes the addition of a GA to infrastructure modeling for infrastructure planning. It first introduces the CIMS© infrastructure modeling software used as the modeling engine to support the GA. Next, the GA techniques and parameters are defined. Then a test scenario illustrates the integration with CIMS© and the preliminary results.
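
    A minimal sketch of the GA loop this record describes is shown below: bit-string individuals encode which assets to protect under a budget, fitness rewards avoided disruption, and tournament selection, one-point crossover, and mutation evolve the population. All asset values and costs are hypothetical; in the real system the fitness evaluation would come from the CIMS© model.

```python
# Minimal sketch of a genetic algorithm choosing which infrastructure assets to
# protect under a budget. Fitness here is a toy surrogate; in the real system it
# would come from a CIMS(c) simulation. All numbers are hypothetical.
import random

random.seed(1)
criticality = [9, 4, 7, 3, 8, 5, 2, 6]       # disruption avoided if protected
cost        = [5, 2, 4, 1, 6, 3, 1, 4]       # cost to protect each asset
budget = 12
n_assets, pop_size, generations = len(cost), 40, 60

def fitness(genome):
    total_cost = sum(c for g, c in zip(genome, cost) if g)
    if total_cost > budget:
        return -total_cost                    # infeasible: heavy penalty
    return sum(v for g, v in zip(genome, criticality) if g)

def tournament(pop):
    return max(random.sample(pop, 3), key=fitness)

def crossover(a, b):
    point = random.randrange(1, n_assets)
    return a[:point] + b[point:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(n_assets)] for _ in range(pop_size)]
for _ in range(generations):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(pop_size)]

best = max(population, key=fitness)
print("protect assets:", [i for i, g in enumerate(best) if g], "fitness:", fitness(best))
```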

  7. Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Tanis, Fred J.

    1984-01-01

    A series of experiments have been conducted in the Great Lakes designed to evaluate the application of the NIMBUS-7 Coastal Zone Color Scanner (CZCS). Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric algorithm developed compared favorably with existing algorithms and was the only algorithm found to adequately predict the radiance variations in the 670 nm band. The atmospheric correction algorithm developed was designed to extract needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.

  8. Efficient algorithm development of CIS speech processing strategy for cochlear implants.

    PubMed

    Ahmad, Talha J; Ali, Hussnain; Ajaz, Muhammad Asim; Khan, Shoab A

    2009-01-01

    Continuous Interleaved Sampling (CIS) is one of the most useful and widely used speech processing strategies in cochlear implant speech processors. However, realizing the algorithm in hardware is a laborious task due to its high computation cost. Real-time constraints and low-power design demand an optimized realization of the algorithm. This paper proposes two techniques to cut the computation cost of CIS: using polyphase filters and implementing the complete algorithm in the frequency domain. About 70% reduction in computation cost can be achieved by using multi-rate, multistage filters, whereas the computation cost decreases by a factor of five when the whole algorithm is implemented in the frequency domain. The algorithm was evaluated on a laboratory-designed algorithm development and evaluation platform. Algorithm flow diagrams and their computation details are given for comparison. Utilizing the given techniques can remarkably reduce the processor load without any compromise on quality. PMID:19964752
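
    The computational saving from a polyphase structure can be demonstrated in a few lines: filtering at the full rate and then decimating gives the same output as filtering already-decimated polyphase streams, which needs roughly 1/M of the multiply-accumulates. The filter, signal, and decimation factor below are illustrative and unrelated to the paper's filter bank.

      import numpy as np

      rng = np.random.default_rng(2)
      x = rng.standard_normal(4000)          # toy input signal
      h = np.hanning(48); h /= h.sum()       # toy FIR low-pass filter
      M = 8                                  # decimation factor

      # Direct form: filter at the full rate, then keep every M-th output sample.
      y_direct = np.convolve(x, h)[::M]

      # Polyphase form: split h into M sub-filters and convolve each with a decimated stream.
      n_out = len(y_direct)
      y_poly = np.zeros(n_out)
      for p in range(M):
          e_p = h[p::M]                            # p-th polyphase component of h
          idx = np.arange(n_out) * M - p           # sample instants x[nM - p]
          x_p = np.where((idx >= 0) & (idx < len(x)), x[np.clip(idx, 0, len(x) - 1)], 0.0)
          y_poly += np.convolve(x_p, e_p)[:n_out]

      print(np.allclose(y_direct, y_poly))         # True: same output, about 1/M of the work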

  9. Evolutionary Processes in the Development of Errors in Subtraction Algorithms

    ERIC Educational Resources Information Center

    Fernandez, Ricardo Lopez; Garcia, Ana B. Sanchez

    2008-01-01

    The study of errors made in subtraction is a research subject that has been approached from different theoretical premises, each implicating different components of the algorithmic process as triggers of error generation. In the following research an attempt has been made to investigate the typology and nature of errors which occur in subtractions and their evolution…

  10. Item Selection for the Development of Short Forms of Scales Using an Ant Colony Optimization Algorithm

    ERIC Educational Resources Information Center

    Leite, Walter L.; Huang, I-Chan; Marcoulides, George A.

    2008-01-01

    This article presents the use of an ant colony optimization (ACO) algorithm for the development of short forms of scales. An example 22-item short form is developed for the Diabetes-39 scale, a quality-of-life scale for diabetes patients, using a sample of 265 diabetes patients. A simulation study comparing the performance of the ACO algorithm and…
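
    In outline, an ACO-based item selector keeps a pheromone weight per item, samples candidate short forms in proportion to those weights, scores each candidate, and reinforces the items of the best candidate. The sketch below uses an invented per-item quality score as the objective; the study scores candidate short forms with psychometric criteria, which this sketch does not reproduce.

      import numpy as np

      rng = np.random.default_rng(3)
      n_items, short_len = 39, 22
      item_quality = rng.uniform(0.2, 0.9, n_items)   # hypothetical item quality scores

      pheromone = np.ones(n_items)
      best_score, best_form = -np.inf, None
      for iteration in range(200):
          forms, scores = [], []
          for ant in range(20):
              prob = pheromone / pheromone.sum()
              form = rng.choice(n_items, size=short_len, replace=False, p=prob)
              score = item_quality[form].sum()        # toy scale-quality criterion
              forms.append(form); scores.append(score)
              if score > best_score:
                  best_score, best_form = score, form
          pheromone *= 0.9                            # evaporation
          pheromone[forms[int(np.argmax(scores))]] += 0.5   # reinforce the iteration-best form

      print(sorted(best_form.tolist()), round(best_score, 2))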

  11. Development of a Compound Optimization Approach Based on Imperialist Competitive Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Qimei; Yang, Zhihong; Wang, Yong

    In this paper, an improved approach is developed for the imperialist competitive algorithm to achieve greater performance. The Nelder-Mead simplex method is applied to execute alternately with the original procedures of the algorithm. The approach is tested on twelve widely used benchmark functions and is also compared with related studies. It is shown that the proposed approach has a faster convergence rate, better search ability, and higher stability than the original algorithm and other related methods.
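
    The general idea of pairing a population-based global search with Nelder-Mead local refinement can be sketched as below. The population step here is a generic "move followers toward the best solutions" update rather than the full imperialist competitive algorithm, and the benchmark, population sizes, and coefficients are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      def rastrigin(x):  # a standard benchmark function
          return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

      rng = np.random.default_rng(4)
      dim, n_pop, n_best = 5, 40, 4
      pop = rng.uniform(-5.12, 5.12, (n_pop, dim))

      for it in range(50):
          cost = np.array([rastrigin(p) for p in pop])
          order = np.argsort(cost)
          leaders, followers = pop[order[:n_best]], pop[order[n_best:]]
          # Global (assimilation-style) step: followers move toward a randomly chosen leader.
          targets = leaders[rng.integers(0, n_best, len(followers))]
          followers = followers + rng.uniform(0.0, 2.0, (len(followers), 1)) * (targets - followers)
          # Local step: a few Nelder-Mead iterations refine each leader.
          leaders = np.array([
              minimize(rastrigin, lead, method="Nelder-Mead", options={"maxiter": 20}).x
              for lead in leaders])
          pop = np.vstack([leaders, followers])

      best = min(pop, key=rastrigin)
      print(best, rastrigin(best))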

  12. The development of a simplified epithelial tissue phantom for the evaluation of an autofluorescence mitigation algorithm

    NASA Astrophysics Data System (ADS)

    Hou, Vivian W.; Yang, Chenying; Nelson, Leonard Y.; Seibel, Eric J.

    2014-03-01

    Previously we developed an ultrathin, flexible, multimodal scanning fiber endoscope (SFE) for concurrent white light and fluorescence imaging. Autofluorescence (AF) arising from endogenous fluorophores (primarily collagen in the esophagus) acts as a major confounder in fluorescence-aided detection. To address the issue of AF, a real-time mitigation algorithm was developed and has been shown to successfully remove AF during SFE imaging. To test our algorithm, we previously developed flexible, color-matched, synthetic phantoms featuring a homogeneous distribution of collagen. In order to more rigorously test the AF mitigation algorithm, a phantom that better mimics the in-vivo distribution of collagen in tissue was developed.

  13. Scientific Knowledge Suppresses but Does Not Supplant Earlier Intuitions

    ERIC Educational Resources Information Center

    Shtulman, Andrew; Valcarcel, Joshua

    2012-01-01

    When students learn scientific theories that conflict with their earlier, naive theories, what happens to the earlier theories? Are they overwritten or merely suppressed? We investigated this question by devising and implementing a novel speeded-reasoning task. Adults with many years of science education verified two types of statements as quickly…

  14. Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and also descriptions and flowcharts for subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms have a nonlinear gain algorithm implemented that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree-of-freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.
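
    The nonlinear gain described above, scaling each degree-of-freedom input with a third-order polynomial, can be expressed compactly; the coefficients below are illustrative, and in practice each degree of freedom receives its own tuned coefficient set.

      def nonlinear_gain(u, c1=1.0, c3=-0.02):
          """Odd third-order polynomial gain: near-unity for small inputs, progressively
          attenuating large ones. Illustrative coefficients only; real tunings keep the
          polynomial monotone over the expected input range."""
          return c1 * u + c3 * u ** 3

      # A small and a large longitudinal acceleration command (m/s^2): the large one is
      # attenuated proportionally more before being sent to the motion platform.
      print(nonlinear_gain(0.5), nonlinear_gain(4.0))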

  15. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.

  16. Retrieval algorithm development and product validation for TERRA/MOPITT

    NASA Astrophysics Data System (ADS)

    Deeter, M. N.; Martínez-Alonso, S.; Worden, H. M.; Emmons, L. K.; Dean, V.; Mao, D.; Edwards, D. P.; Gille, J. C.

    2014-10-01

    Satellite observations of tropospheric carbon monoxide (CO) are employed in diverse applications including air quality studies, chemical weather forecasting and the characterization of CO emissions through inverse modeling. The TERRA / MOPITT ('Measurements of Pollution in the Troposphere') instrument incorporates a set of gas correlation radiometers to observe CO simultaneously in both a thermal-infrared (TIR) band near 4.7 µm and a near-infrared (NIR) band near 2.3 μm. This multispectral capability is unique to MOPITT. The MOPITT retrieval algorithm for vertical profiles of CO has been refined almost continuously since TERRA was launched at the end of 1999. Retrieval algorithm enhancements are the result of ongoing analyses of instrument performance, improved radiative transfer modeling, and systematic comparisons with correlative data, including in-situ profiles measured from aircraft and products from other satellite instruments. In the following, we describe the methods used to routinely evaluate MOPITT CO profiles. As the satellite instrument with the longest record for CO, methods for assessing the long-term stability are becoming increasingly important.

  17. Design requirements and development of an airborne descent path definition algorithm for time navigation

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering functions are described.

  18. Update on Development of Mesh Generation Algorithms in MeshKit

    SciTech Connect

    Jain, Rajeev; Vanderzee, Evan; Mahadevan, Vijay

    2015-09-30

    MeshKit uses a graph-based design for coding all its meshing algorithms, which includes the Reactor Geometry (and mesh) Generation (RGG) algorithms. This report highlights the developmental updates of all the algorithms, results and future work. Parallel versions of algorithms, documentation and performance results are reported. RGG GUI design was updated to incorporate new features requested by the users; boundary layer generation and parallel RGG support were added to the GUI. Key contributions to the release, upgrade and maintenance of other SIGMA1 libraries (CGM and MOAB) were made. Several fundamental meshing algorithms for creating a robust parallel meshing pipeline in MeshKit are under development. Results and current status of automated, open-source and high quality nuclear reactor assembly mesh generation algorithms such as trimesher, quadmesher, interval matching and multi-sweeper are reported.

  19. Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.

    1997-01-01

    The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Proposed future developments by the authors in cueing algorithms are revealed. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.

  20. Clustering algorithm evaluation and the development of a replacement for procedure 1. [for crop inventories

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Johnson, J. K.

    1979-01-01

    An efficient procedure is developed that clusters data using a completely unsupervised clustering algorithm and then uses labeled pixels either to label the resulting clusters or to perform a stratified estimate using the clusters as strata. Three clustering algorithms, CLASSY, AMOEBA, and ISOCLS, are compared for efficiency. Three stratified estimation schemes and three labeling schemes are also considered and compared.
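
    The cluster-then-label workflow described above can be sketched in a few lines: unsupervised clusters serve as strata, labeled pixels supply a per-stratum crop proportion, and the stratum sizes weight the overall estimate. Here k-means stands in for CLASSY, AMOEBA, or ISOCLS, and all data are synthetic.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(5)
      pixels = rng.normal(size=(5000, 4))                 # hypothetical multitemporal MSS features
      labeled_idx = rng.choice(len(pixels), 200, replace=False)
      labels = (pixels[labeled_idx, 0] > 0).astype(int)   # synthetic labels (1 = wheat, 0 = other)

      # Unsupervised clustering defines the strata.
      strata = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(pixels)

      # Stratified estimate: weight each stratum's labeled wheat proportion by its pixel share.
      estimate = 0.0
      for s in np.unique(strata):
          weight = np.mean(strata == s)
          in_stratum = strata[labeled_idx] == s
          if in_stratum.any():
              estimate += weight * labels[in_stratum].mean()
      print(f"estimated wheat proportion: {estimate:.3f}")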

  1. Development of Online Cognitive and Algorithm Tests as Assessment Tools in Introductory Computer Science Courses

    ERIC Educational Resources Information Center

    Avancena, Aimee Theresa; Nishihara, Akinori; Vergara, John Paul

    2012-01-01

    This paper presents the online cognitive and algorithm tests, which were developed in order to determine if certain cognitive factors and fundamental algorithms correlate with the performance of students in their introductory computer science course. The tests were implemented among Management Information Systems majors from the Philippines and…

  2. Inquiry in Development: Efficiency and Effectiveness of Algorithmic Representations in a Laboratory Situation.

    ERIC Educational Resources Information Center

    Coscarelli, William C.

    This study, part of an instructional development project, explores the effects of three different representations of functional algorithms in an introductory chemistry laboratory. Intact classes were randomly assigned to a flowchart, list, or standard prose representation of the procedures (algorithms). At the completion of 11 laboratory sessions,…

  3. Evaluating Knowledge Structure-Based Adaptive Testing Algorithms and System Development

    ERIC Educational Resources Information Center

    Wu, Huey-Min; Kuo, Bor-Chen; Yang, Jinn-Min

    2012-01-01

    In recent years, many computerized test systems have been developed for diagnosing students' learning profiles. Nevertheless, it remains a challenging issue to find an adaptive testing algorithm to both shorten testing time and precisely diagnose the knowledge status of students. In order to find a suitable algorithm, four adaptive testing…

  4. Geologist's Field Assistant: Developing Image and Spectral Analyses Algorithms for Remote Science Exploration

    NASA Astrophysics Data System (ADS)

    Gulick, V. C.; Morris, R. L.; Bishop, J.; Gazis, P.; Alena, R.; Sierhuis, M.

    2002-03-01

    We are developing science analyses algorithms to interface with a Geologist's Field Assistant device to allow robotic or human remote explorers to better sense their surroundings during limited surface excursions. Our algorithms will interpret spectral and imaging data obtained by various sensors.

  5. Algorithm development for the control design of flexible structures

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.

    1983-01-01

    The critical problems associated with the control of highly damped flexible structures are outlined. The practical problems include: high performance; assembly in space; configuration changes; on-line controller software design; and lack of test data. Underlying all of these problems is the central problem of modeling errors. To justify the expense of a space structure, the performance requirements will necessarily be very severe. On the other hand, the absence of economical tests precludes the availability of reliable data before flight. A design algorithm is offered which: (1) provides damping for a larger number of modes than the optimal attitude controller controls; (2) coordinates the rate feedback design with the attitude control design by use of a similar cost function; and (3) provides model reduction and controller reduction decisions which are systematically connected to the mathematical statement of the control objectives and the disturbance models.

  6. Development of new flux splitting schemes. [computational fluid dynamics algorithms

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1992-01-01

    Maximizing both accuracy and efficiency has been the primary objective in designing a numerical algorithm for computational fluid dynamics (CFD). This is especially important for solutions of complex three dimensional systems of Navier-Stokes equations which often include turbulence modeling and chemistry effects. Recently, upwind schemes have been well received for their capability in resolving discontinuities. With this in mind, presented are two new flux splitting techniques for upwind differencing. The first method is based on High-Order Polynomial Expansions (HOPE) of the mass flux vector. The second new flux splitting is based on the Advection Upwind Splitting Method (AUSM). The calculation of the hypersonic conical flow demonstrates the accuracy of the splitting in resolving the flow in the presence of strong gradients. A second series of tests involving the two dimensional inviscid flow over a NACA 0012 airfoil demonstrates the ability of the AUSM to resolve the shock discontinuity at transonic speed. A third case calculates a series of supersonic flows over a circular cylinder. Finally, the fourth case deals with tests of a two dimensional shock wave/boundary layer interaction.
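
    For reference, a sketch of the split Mach numbers, split pressures, and interface mass flux in the commonly published form of AUSM is given below for a one-dimensional ideal gas; details such as the choice of interface sound speed vary between formulations and may differ from the paper's.

      import numpy as np

      def split_mach(M):
          """AUSM split Mach numbers M+ and M- (polynomial branch when |M|<=1, upwind otherwise)."""
          Mp = np.where(np.abs(M) <= 1.0, 0.25 * (M + 1.0) ** 2, 0.5 * (M + np.abs(M)))
          Mm = np.where(np.abs(M) <= 1.0, -0.25 * (M - 1.0) ** 2, 0.5 * (M - np.abs(M)))
          return Mp, Mm

      def split_pressure(M, p):
          """AUSM split pressures p+ and p- (weighted when subsonic, fully upwinded when supersonic)."""
          pp = np.where(np.abs(M) <= 1.0, 0.25 * p * (M + 1.0) ** 2 * (2.0 - M),
                        np.where(M > 0.0, p, 0.0))
          pm = np.where(np.abs(M) <= 1.0, 0.25 * p * (M - 1.0) ** 2 * (2.0 + M),
                        np.where(M > 0.0, 0.0, p))
          return pp, pm

      def ausm_interface(rho_L, u_L, p_L, a_L, rho_R, u_R, p_R, a_R):
          """Interface Mach number, mass flux, and pressure from left/right cell states."""
          MpL, _ = split_mach(u_L / a_L)
          _, MmR = split_mach(u_R / a_R)
          M_half = MpL + MmR
          a_half = 0.5 * (a_L + a_R)                       # simple average of sound speeds
          mass_flux = a_half * M_half * np.where(M_half > 0.0, rho_L, rho_R)  # upwinded density
          ppL, _ = split_pressure(u_L / a_L, p_L)
          _, pmR = split_pressure(u_R / a_R, p_R)
          return M_half, mass_flux, ppL + pmR

      print(ausm_interface(1.0, 100.0, 1.0e5, 340.0, 0.5, 50.0, 5.0e4, 320.0))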

  7. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    NASA Astrophysics Data System (ADS)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up-tables (LUT) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  8. Development of an automatic identification algorithm for antibiogram analysis.

    PubMed

    Costa, Luan F R; da Silva, Eduardo S; Noronha, Victor T; Vaz-Moreira, Ivone; Nunes, Olga C; Andrade, Marcelino M de

    2015-12-01

    Routinely, diagnostic and microbiology laboratories perform antibiogram analysis, which can present some difficulties leading to misreadings and intra- and inter-reader deviations. The main goal of this work is an Automatic Identification Algorithm (AIA) proposed as a solution to overcome some issues associated with the disc diffusion method. AIA allows automatic scanning of inhibition zones obtained by antibiograms. More than 60 environmental isolates were tested using susceptibility tests which were performed for 12 different antibiotics for a total of 756 readings. Plate images were acquired and classified as standard or oddity. The inhibition zones were measured using the AIA and results were compared with the reference method (human reading), using the weighted kappa index and statistical analysis to evaluate, respectively, inter-reader agreement and the correlation between AIA-based and human-based readings. Agreement was observed in 88% of cases and 89% of the tests showed no difference or a <4 mm difference between AIA and human analysis, exhibiting a correlation index of 0.85 for all images, 0.90 for standards and 0.80 for oddities, with no significant difference between the automatic and manual methods. AIA resolved some reading problems such as overlapping inhibition zones, imperfect microorganism seeding, non-homogeneity of the circumference, partial action of the antimicrobial, and formation of a second halo of inhibition. Furthermore, AIA proved to overcome some of the limitations observed in other automatic methods. Therefore, AIA may be a practical tool for automated reading of antibiograms in diagnostic and microbiology laboratories. PMID:26513468
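
    The agreement statistics mentioned above can be reproduced in outline on hypothetical paired readings: the fraction of readings within the 4 mm tolerance and a linearly weighted kappa on ordinal susceptibility categories. The simulated diameters and the R/I/S breakpoints below are illustrative, not the study's data.

      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      rng = np.random.default_rng(6)
      human = rng.uniform(6.0, 35.0, 756)                  # hypothetical human zone readings (mm)
      aia = human + rng.normal(0.0, 1.5, human.size)       # hypothetical automated readings

      within_tol = np.mean(np.abs(aia - human) < 4.0)      # share of readings within 4 mm

      def categorize(d):  # hypothetical breakpoints: <14 mm resistant, 14-20 intermediate, >=20 susceptible
          return np.digitize(d, bins=[14.0, 20.0])

      kappa = cohen_kappa_score(categorize(human), categorize(aia), weights="linear")
      print(f"within 4 mm: {within_tol:.1%}, linearly weighted kappa: {kappa:.2f}")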

  9. Chemotactic and diffusive migration on a nonuniformly growing domain: numerical algorithm development and applications

    NASA Astrophysics Data System (ADS)

    Simpson, Matthew J.; Landman, Kerry A.; Newgreen, Donald F.

    2006-08-01

    A numerical algorithm to simulate chemotactic and/or diffusive migration on a one-dimensional growing domain is developed. The domain growth can be spatially nonuniform and the growth-derived advection term must be discretised. The hyperbolic terms in the conservation equations associated with chemotactic migration and domain growth are accurately discretised using an explicit central scheme. Generality of the algorithm is maintained using an operator split technique to simulate diffusive migration implicitly. The resulting algorithm is applicable for any combination of diffusive and/or chemotactic migration on a growing domain with a general growth-induced velocity field. The accuracy of the algorithm is demonstrated by testing the results against some simple analytical solutions and in an inter-code comparison. The new algorithm demonstrates that the form of nonuniform growth plays a critical role in determining whether a population of migratory cells is able to overcome the domain growth and fully colonise the domain.
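
    The operator-split idea, advancing the hyperbolic (advective) part explicitly and the diffusive part implicitly within each time step, is sketched below on a fixed uniform 1-D grid with a constant advection velocity. This deliberately omits the nonuniform domain growth and the high-resolution central scheme that the paper handles.

      import numpy as np

      nx, L, D, v = 200, 1.0, 1.0e-3, 0.2   # grid size, domain length, diffusivity, advection speed
      dx = L / nx
      dt = 0.4 * dx / v                     # satisfies the explicit advection CFL condition
      x = np.linspace(0.0, L, nx)
      c = np.exp(-((x - 0.2) / 0.05) ** 2)  # initial cell density profile

      # Backward-Euler diffusion matrix (tridiagonal, simple zero-flux boundaries).
      r = D * dt / dx ** 2
      A = np.eye(nx) * (1.0 + 2.0 * r) - np.eye(nx, k=1) * r - np.eye(nx, k=-1) * r
      A[0, 0] = A[-1, -1] = 1.0 + r

      for step in range(500):
          # Hyperbolic half-step: first-order upwind advection (v > 0).
          c[1:] = c[1:] - v * dt / dx * (c[1:] - c[:-1])
          # Parabolic half-step: implicit diffusion, unconditionally stable.
          c = np.linalg.solve(A, c)
      print(float(c.max()))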

  10. Characterizing interplanetary shocks for development and optimization of an automated solar wind shock detection algorithm

    NASA Astrophysics Data System (ADS)

    Cash, M. D.; Wrobel, J. S.; Cosentino, K. C.; Reinard, A. A.

    2014-06-01

    Human evaluation of solar wind data for interplanetary (IP) shock identification relies on both heuristics and pattern recognition, with the former lending itself to algorithmic representation and automation. Such detection algorithms can potentially alert forecasters of approaching shocks, providing increased warning of subsequent geomagnetic storms. However, capturing shocks with an algorithmic treatment alone is challenging, as past and present work demonstrates. We present a statistical analysis of 209 IP shocks observed at L1, and we use this information to optimize a set of shock identification criteria for use with an automated solar wind shock detection algorithm. In order to specify ranges for the threshold values used in our algorithm, we quantify discontinuities in the solar wind density, velocity, temperature, and magnetic field magnitude by analyzing 8 years of IP shocks detected by the SWEPAM and MAG instruments aboard the ACE spacecraft. Although automatic shock detection algorithms have previously been developed, in this paper we conduct a methodical optimization to refine shock identification criteria and present the optimal performance of this and similar approaches. We compute forecast skill scores for over 10,000 permutations of our shock detection criteria in order to identify the set of threshold values that yield optimal forecast skill scores. We then compare our results to previous automatic shock detection algorithms using a standard data set, and our optimized algorithm shows improvements in the reliability of automated shock detection.
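
    The heuristic being automated can be illustrated with a toy detector that compares upstream and downstream window averages of density, speed, temperature, and field magnitude and flags a candidate shock when every jump ratio exceeds its threshold. The threshold values and synthetic data below are illustrative; the paper's contribution is precisely the systematic optimization of such thresholds.

      import numpy as np

      THRESH = {"density": 1.2, "speed": 1.03, "temperature": 1.3, "bmag": 1.2}  # illustrative

      def detect_shocks(data, window=10):
          """Return indices where all four quantities jump by more than their thresholds between
          a trailing (upstream) and leading (downstream) averaging window."""
          n = len(data["density"])
          hits = []
          for i in range(window, n - window):
              ratios_ok = all(
                  np.mean(data[k][i - window:i]) > 0 and
                  np.mean(data[k][i:i + window]) / np.mean(data[k][i - window:i]) >= thr
                  for k, thr in THRESH.items())
              if ratios_ok:
                  hits.append(i)
          return hits

      # Synthetic 1-minute series with a shock-like step at index 300.
      rng = np.random.default_rng(7)
      base = {"density": 5.0, "speed": 400.0, "temperature": 1.0e5, "bmag": 5.0}
      jump = {"density": 1.8, "speed": 1.1, "temperature": 2.0, "bmag": 1.7}
      data = {}
      for k in THRESH:
          series = np.full(600, base[k])
          series[300:] *= jump[k]
          data[k] = series * rng.normal(1.0, 0.01, 600)
      print(detect_shocks(data))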

  11. MODIS algorithm development and data visualization using ACTS

    NASA Technical Reports Server (NTRS)

    Abbott, Mark R.

    1992-01-01

    The study of the Earth as a system will require the merger of scientific and data resources on a much larger scale than has been done in the past. New methods of scientific research, particularly in the development of geographically dispersed, interdisciplinary teams, are necessary if we are to understand the complexity of the Earth system. Even the planned satellite missions themselves, such as the Earth Observing System, will require much more interaction between researchers and engineers if they are to produce scientifically useful data products. A key component in these activities is the development of flexible, high bandwidth data networks that can be used to move large amounts of data as well as allow researchers to communicate in new ways, such as through video. The capabilities of the Advanced Communications Technology Satellite (ACTS) will allow the development of such networks. The Pathfinder global AVHRR data set and the upcoming SeaWiFS Earthprobe mission would serve as a testbed in which to develop the tools to share data and information among geographically distributed researchers. Our goal is to develop a 'Distributed Research Environment' that can be used as a model for scientific collaboration in the EOS era. The challenge is to unite the advances in telecommunications with the parallel advances in computing and networking.

  12. Deciphering the Minimal Algorithm for Development and Information-genesis

    NASA Astrophysics Data System (ADS)

    Li, Zhiyuan; Tang, Chao; Li, Hao

    During development, cells with identical genomes acquire different fates in a highly organized manner. In order to decipher the principles underlying development, we used C. elegans as the model organism. Based on a large set of microscopy images, we first constructed a "standard worm" in silico: from the single zygotic cell to about the 500-cell stage, the lineage, position, cell-cell contact and gene expression dynamics are quantified for each cell in order to investigate the principles underlying these data. Next, we reverse-engineered the possible gene-gene/cell-cell interaction rules that are capable of running a dynamic model recapitulating the early fate decisions during C. elegans development. We further formulated C. elegans embryogenesis in the language of information genesis. Analysis of the data and model uncovered the global landscape of development in the cell fate space, suggested possible gene regulatory architectures and cell signaling processes, revealed diversity and robustness as the essential trade-offs in development, and demonstrated general strategies in building multicellular organisms.

  13. The development of an algebraic multigrid algorithm for symmetric positive definite linear systems

    SciTech Connect

    Vanek, P.; Mandel, J.; Brezina, M.

    1996-12-31

    An algebraic multigrid algorithm for symmetric, positive definite linear systems is developed based on the concept of prolongation by smoothed aggregation. Coarse levels are generated automatically. We present a set of requirements motivated heuristically by a convergence theory. The algorithm then attempts to satisfy the requirements. Input to the method are the coefficient matrix and zero energy modes, which are determined from nodal coordinates and knowledge of the differential equation. Efficiency of the resulting algorithm is demonstrated by computational results on real world problems from solid elasticity, plate bending, and shells.
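
    A minimal two-level version of smoothed aggregation can be written down directly: aggregate nodes, build the tentative prolongator from the (here constant) zero-energy mode, smooth it with one damped-Jacobi pass, form the Galerkin coarse operator, and iterate a two-level cycle. The 1-D Poisson matrix and the naive aggregation below are illustrative stand-ins for the paper's strength-of-connection aggregation and multilevel hierarchy.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 256
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # SPD test matrix
      b = np.ones(n)

      # Aggregation: three consecutive nodes per aggregate (illustrative only).
      agg = np.arange(n) // 3
      nc = int(agg.max()) + 1
      P0 = sp.csr_matrix((np.ones(n), (np.arange(n), agg)), shape=(n, nc))  # tentative prolongator

      # Smoothed aggregation: one damped-Jacobi smoothing pass applied to P0.
      Dinv = sp.diags(1.0 / A.diagonal())
      omega = 2.0 / 3.0
      P = (sp.identity(n) - omega * Dinv @ A) @ P0
      Ac_solve = spla.factorized((P.T @ A @ P).tocsc())   # Galerkin coarse operator, factorized once

      def jacobi(x, sweeps=2):
          for _ in range(sweeps):
              x = x + omega * (Dinv @ (b - A @ x))
          return x

      x = np.zeros(n)
      for cycle in range(30):                              # two-level V-cycles
          x = jacobi(x)                                    # pre-smoothing
          x = x + P @ Ac_solve(P.T @ (b - A @ x))          # coarse-grid correction
          x = jacobi(x)                                    # post-smoothing
      print(np.linalg.norm(b - A @ x))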

  14. The Development of FPGA-Based Pseudo-Iterative Clustering Algorithms

    NASA Astrophysics Data System (ADS)

    Drueke, Elizabeth; Fisher, Wade; Plucinski, Pawel

    2016-03-01

    The Large Hadron Collider (LHC) in Geneva, Switzerland, is set to undergo major upgrades in 2025 in the form of the High-Luminosity Large Hadron Collider (HL-LHC). In particular, several hardware upgrades are proposed to the ATLAS detector, one of the two general purpose detectors. These hardware upgrades include, but are not limited to, a new hardware-level clustering algorithm, to be performed by a field programmable gate array, or FPGA. In this study, we develop that clustering algorithm and compare the output to a Python-implemented topoclustering algorithm developed at the University of Oregon. Here, we present the agreement between the FPGA output and expected output, with particular attention to the time required by the FPGA to complete the algorithm and other limitations set by the FPGA itself.

  15. Applications of feature selection. [development of classification algorithms for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1976-01-01

    The use of satellite-acquired (LANDSAT) multispectral scanner (MSS) data to conduct an inventory of some crop of economic interest such as wheat over a large geographical area is considered in relation to the development of accurate and efficient algorithms for data classification. The dimension of the measurement space and the computational load for a classification algorithm is increased by the use of multitemporal measurements. Feature selection/combination techniques used to reduce the dimensionality of the problem are described.

  16. Millimeter-wave imaging radiometer data processing and development of water vapor retrieval algorithms

    NASA Technical Reports Server (NTRS)

    Chang, L. Aron

    1995-01-01

    This document describes the current status of Millimeter-wave Imaging Radiometer (MIR) data processing and the technical development of the first version of a water vapor retrieval algorithm. The algorithm is being used by the NASA/GSFC Microwave Sensors Branch, Laboratory for Hydrospheric Processes. It is capable of three-dimensional mapping of moisture fields using microwave data from the airborne MIR sensor and the spaceborne Special Sensor Microwave/T-2 (SSM/T-2) instrument.

  17. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1995-01-01

    An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm was carried out. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. The development of a multi-layer Monte Carlo radiative transfer code that includes polarization by molecular and aerosol scattering and wind-induced sea surface roughness has been completed. Comparison tests with an existing two-layer successive order of scattering code suggest that both codes are capable of producing top-of-atmosphere radiances with errors usually less than 0.1 percent. An initial set of simulations to study the effects of ignoring the polarization of the ocean-atmosphere light field, in both the development of the atmospheric correction algorithm and the generation of the lookup tables used for operation of the algorithm, has been completed. An algorithm was developed that can be used to invert the radiance exiting the top and bottom of the atmosphere to yield the columnar optical properties of the atmospheric aerosol under clear sky conditions over the ocean, for aerosol optical thicknesses as large as 2. The algorithm is capable of retrievals with such large optical thicknesses because all significant orders of multiple scattering are included.

  18. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  19. Development and application of unified algorithms for problems in computational science

    NASA Technical Reports Server (NTRS)

    Shankar, Vijaya; Chakravarthy, Sukumar

    1987-01-01

    A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.

  20. Development of an unbiased cloud detection algorithm for a spaceborne multispectral imager

    NASA Astrophysics Data System (ADS)

    Ishida, Haruma; Nakajima, Takashi Y.

    2009-04-01

    A new concept for cloud detection from observations by multispectral spaceborne imagers is proposed, and an algorithm comprising many pixel-by-pixel threshold tests is developed. Since in nature the thickness of clouds tends to vary continuously and the border between cloud and clear sky is thus vague, it is unrealistic to label pixels as either cloudy or clear sky. Instead, the extraction of ambiguous areas is considered to be useful and informative. We refer to the multiple threshold method employed in the MOD35 algorithm that is used for Moderate Resolution Imaging Spectroradiometer (MODIS) standard data analysis, but drastically reconstruct the structure of the algorithm to meet our aim of sustaining the neutral position. The concept of a clear confidence level, which represents certainty of the clear or cloud condition, is applied to design a neutral cloud detection algorithm that is not biased to either clear or cloudy. The use of the clear confidence level with neutral position also makes our algorithm structure very simple. Several examples of cloud detection from satellite data are tested using our algorithm and are validated by visual inspection and comparison to previous cloud mask data. The results indicate that our algorithm is capable of reasonable discrimination between cloudy and clear-sky areas over ocean with and without Sun glint, forest, and desert, and is able to extract areas with ambiguous cloudiness condition.
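
    The clear-confidence idea can be sketched as follows: each threshold test maps its observable to a confidence in [0, 1] through a ramp between two thresholds, and the per-test confidences are combined (a geometric mean is one common choice), so ambiguous pixels keep an intermediate score instead of being forced into a binary class. The tests, thresholds, and data below are illustrative, not the paper's.

      import numpy as np

      def ramp_confidence(x, low, high):
          """0 below `low` (confidently cloudy), 1 above `high` (confidently clear), linear between."""
          return np.clip((x - low) / (high - low), 0.0, 1.0)

      rng = np.random.default_rng(8)
      reflectance_vis = rng.uniform(0.0, 0.8, (4, 4))   # bright pixels look cloudy
      bt_11um = rng.uniform(240.0, 300.0, (4, 4))       # cold pixels look cloudy

      # Individual tests expressed as confidence that the pixel is CLEAR.
      conf_vis = ramp_confidence(-reflectance_vis, -0.35, -0.15)   # low reflectance -> clear
      conf_bt = ramp_confidence(bt_11um, 265.0, 285.0)             # warm -> clear

      # Combined clear confidence: one confidently cloudy test keeps the product low.
      clear_confidence = np.sqrt(conf_vis * conf_bt)
      print(np.round(clear_confidence, 2))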

  1. Development of Fast Algorithms Using Recursion, Nesting and Iterations for Computational Electromagnetics

    NASA Technical Reports Server (NTRS)

    Chew, W. C.; Song, J. M.; Lu, C. C.; Weedon, W. H.

    1995-01-01

    In the first phase of our work, we have concentrated on laying the foundation to develop fast algorithms, including the use of recursive structure like the recursive aggregate interaction matrix algorithm (RAIMA), the nested equivalence principle algorithm (NEPAL), the ray-propagation fast multipole algorithm (RPFMA), and the multi-level fast multipole algorithm (MLFMA). We have also investigated the use of curvilinear patches to build a basic method of moments code where these acceleration techniques can be used later. In the second phase, which is mainly reported on here, we have concentrated on implementing three-dimensional NEPAL on a massively parallel machine, the Connection Machine CM-5, and have been able to obtain some 3D scattering results. In order to understand the parallelization of codes on the Connection Machine, we have also studied the parallelization of 3D finite-difference time-domain (FDTD) code with PML material absorbing boundary condition (ABC). We found that simple algorithms like the FDTD with material ABC can be parallelized very well allowing us to solve within a minute a problem of over a million nodes. In addition, we have studied the use of the fast multipole method and the ray-propagation fast multipole algorithm to expedite matrix-vector multiplication in a conjugate-gradient solution to integral equations of scattering. We find that these methods are faster than LU decomposition for one incident angle, but are slower than LU decomposition when many incident angles are needed as in the monostatic RCS calculations.

  2. Development of the theory and algorithms for synthesis of reflector antenna systems

    NASA Astrophysics Data System (ADS)

    Oliker, Vladimir

    1995-01-01

    The main objective of this work was research and development of the theory and constructive computational algorithms for synthesis of single and dual reflector antenna systems in geometrical optics approximation. During the contracting period a variety of new analytic techniques and computational algorithms have been developed. In particular, for single and dual reflector antenna systems conditions for solvability of the synthesis equations have been established. Numerical algorithms for computing surface data of the reflectors have been developed and successfully tested. In addition, efficient techniques have been developed for computing radiation patterns produced by reflections/refractions off surfaces with arbitrary geometry. These techniques can be used for geometrical optics analysis of complex geometric structures such as aircrafts. They can also be applied to determine effectively the aperture excitations required to produce specified fields at given observation points. The results have a variety of applications in military, civilian, and commercial sectors.

  3. Development of a stereo analysis algorithm for generating topographic maps using interactive techniques of the MPP

    NASA Technical Reports Server (NTRS)

    Strong, James P.

    1987-01-01

    A local area matching algorithm was developed on the Massively Parallel Processor (MPP). It is an iterative technique that first matches coarse or low resolution areas and at each iteration performs matches of higher resolution. Results so far show that when good matches are possible in the two images, the MPP algorithm matches corresponding areas as well as a human observer. To aid in developing this algorithm, a control or shell program was developed for the MPP that allows interactive experimentation with various parameters and procedures to be used in the matching process. (This would not be possible without the high speed of the MPP). With the system, optimal techniques can be developed for different types of matching problems.

  4. Battery algorithm verification and development using hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    He, Yongsheng; Liu, Wei; Koch, Brain J.

    Battery algorithms play a vital role in hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), extended-range electric vehicles (EREVs), and electric vehicles (EVs). The energy management of hybrid and electric propulsion systems needs to rely on accurate information on the state of the battery in order to determine the optimal electric drive without abusing the battery. In this study, a cell-level hardware-in-the-loop (HIL) system is used to verify and develop state of charge (SOC) and power capability predictions of embedded battery algorithms for various vehicle applications. Two different batteries were selected as representative examples to illustrate the battery algorithm verification and development procedure. One is a lithium-ion battery with a conventional metal oxide cathode, which is a power battery for HEV applications. The other is a lithium-ion battery with an iron phosphate (LiFePO4) cathode, which is an energy battery for applications in PHEVs, EREVs, and EVs. The battery cell HIL testing provided valuable data and critical guidance to evaluate the accuracy of the developed battery algorithms, to accelerate battery algorithm future development and improvement, and to reduce hybrid/electric vehicle system development time and costs.
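
    The simplest state-of-charge algorithm such a cell-level HIL rig can exercise is coulomb counting with an occasional open-circuit-voltage correction at rest; a hedged sketch is given below. The capacity, sample period, and OCV lookup are illustrative and are not the embedded algorithms verified in the study.

      import numpy as np

      CAPACITY_AH = 2.3   # assumed nominal cell capacity
      DT_S = 1.0          # assumed sample period of the HIL current profile

      def ocv_to_soc(ocv):
          """Crude inverse OCV curve for a LiFePO4-like cell (illustrative lookup only)."""
          volts = np.array([3.20, 3.25, 3.28, 3.30, 3.33, 3.40])
          socs = np.array([0.05, 0.20, 0.40, 0.60, 0.80, 0.95])
          return float(np.interp(ocv, volts, socs))

      def update_soc(soc, current_a, at_rest=False, measured_ocv=None):
          """One update: coulomb counting, blended toward an OCV-based estimate when resting."""
          soc = soc - current_a * DT_S / (CAPACITY_AH * 3600.0)   # discharge current positive
          if at_rest and measured_ocv is not None:
              soc = 0.95 * soc + 0.05 * ocv_to_soc(measured_ocv)  # slow correction of drift
          return min(max(soc, 0.0), 1.0)

      # Example: a 10 A discharge pulse for 10 minutes, then a rest with a measured OCV.
      soc = 0.80
      for _ in range(600):
          soc = update_soc(soc, 10.0)
      soc = update_soc(soc, 0.0, at_rest=True, measured_ocv=3.30)
      print(round(soc, 3))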

  5. Prescription stimulant use is associated with earlier onset of psychosis.

    PubMed

    Moran, Lauren V; Masters, Grace A; Pingali, Samira; Cohen, Bruce M; Liebson, Elizabeth; Rajarethinam, R P; Ongur, Dost

    2015-12-01

    A childhood history of attention deficit hyperactivity disorder (ADHD) is common in psychotic disorders, yet prescription stimulants may interact adversely with the physiology of these disorders. Specifically, exposure to stimulants leads to long-term increases in dopamine release. We therefore hypothesized that individuals with psychotic disorders previously exposed to prescription stimulants will have an earlier onset of psychosis. Age of onset of psychosis (AOP) was compared in individuals with and without prior exposure to prescription stimulants while controlling for potential confounding factors. In a sample of 205 patients recruited from an inpatient psychiatric unit, 40% (n = 82) reported use of stimulants prior to the onset of psychosis. Most participants were prescribed stimulants during childhood or adolescence for a diagnosis of ADHD. AOP was significantly earlier in those exposed to stimulants (20.5 vs. 24.6 years stimulants vs. no stimulants, p < 0.001). After controlling for gender, IQ, educational attainment, lifetime history of a cannabis use disorder or other drugs of abuse, and family history of a first-degree relative with psychosis, the association between stimulant exposure and earlier AOP remained significant. There was a significant gender × stimulant interaction with a greater reduction in AOP for females, whereas the smaller effect of stimulant use on AOP in males did not reach statistical significance. In conclusion, individuals with psychotic disorders exposed to prescription stimulants had an earlier onset of psychosis, and this relationship did not appear to be mediated by IQ or cannabis. PMID:26522870

  6. Earlier Guidance Opportunities: Priorities for the 1970's.

    ERIC Educational Resources Information Center

    Nemec, William E., Ed.

    "Earlier Guidance Opportunities (EGO): Priorities for the 1970's" was the topic for this Ohio Elementary Guidance Conference. In his keynote address entitled "New Perspectives on the Guidance of Younger Children - Can We Afford to Delay Vocational Guidance?" Dr. George E. Hill used EGO to say, "Education Gives Opportunity,""Ego's Grow on…

  7. Research promises earlier warning for grapevine canker diseases

    Technology Transfer Automated Retrieval System (TEKTRAN)

    When it comes to detecting and treating vineyards for grapevine canker diseases (also called trunk diseases), like Botryosphaeria dieback (Bot canker), Esca, Eutypa dieback and Phomopsis dieback, the earlier the better, says plant pathologist Kendra Baumgartner, with the USDA’s Agricultural Research...

  8. TIGER: Development of Thermal Gradient Compensation Algorithms and Techniques

    NASA Technical Reports Server (NTRS)

    Hereford, James; Parker, Peter A.; Rhew, Ray D.

    2004-01-01

    In a wind tunnel facility, the direct measurement of forces and moments induced on the model is performed by a force measurement balance. The measurement balance is a precision-machined device that has strain gages at strategic locations to measure the strain (i.e., deformations) due to applied forces and moments. The strain gages convert the strain (and hence the applied force) to an electrical voltage that is measured by external instruments. To address the problem of thermal gradients on the force measurement balance, NASA-LaRC has initiated a research program called TIGER - Thermally-Induced Gradients Effects Research. The ultimate goals of the TIGER program are to: (a) understand the physics of the thermally-induced strain and its subsequent impact on load measurements and (b) develop a robust thermal gradient compensation technique. This paper will discuss the impact of thermal gradients on force measurement balances, specific aspects of the TIGER program (the design of a special-purpose balance, data acquisition and data analysis challenges), and give an overall summary.

  9. Bobcat 2013: a hyperspectral data collection supporting the development and evaluation of spatial-spectral algorithms

    NASA Astrophysics Data System (ADS)

    Kaufman, Jason; Celenk, Mehmet; White, A. K.; Stocker, Alan D.

    2014-06-01

    The amount of hyperspectral imagery (HSI) data currently available is relatively small compared to other imaging modalities, and what is suitable for developing, testing, and evaluating spatial-spectral algorithms is virtually nonexistent. In this work, a significant amount of coincident airborne hyperspectral and high spatial resolution panchromatic imagery that supports the advancement of spatial-spectral feature extraction algorithms was collected to address this need. The imagery was collected in April 2013 for Ohio University by the Civil Air Patrol, with their Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) sensor. The target materials, shapes, and movements throughout the collection area were chosen such that evaluation of change detection algorithms, atmospheric compensation techniques, image fusion methods, and material detection and identification algorithms is possible. This paper describes the collection plan, data acquisition, and initial analysis of the collected imagery.

  10. A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller

    SciTech Connect

    Tapp, P.A.

    1992-04-01

    A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms is discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs.

  12. Development of a fire detection algorithm for the COMS (Communication Ocean and Meteorological Satellite)

    NASA Astrophysics Data System (ADS)

    Kim, Goo; Kim, Dae Sun; Lee, Yang-Won

    2013-10-01

    Forest fires do much damage to our lives in ecological and economic terms. South Korea is probably more liable to suffer from forest fires because mountainous terrain occupies more than half of the land in South Korea. South Korea has recently launched COMS (Communication Ocean and Meteorological Satellite), which is a geostationary satellite. In this paper, we developed a forest fire detection algorithm using COMS data. Generally, forest fire detection algorithms use characteristics of the 4 and 11 micrometer brightness temperatures. Our algorithm additionally uses LST (Land Surface Temperature). We confirmed the results of our fire detection algorithm using statistical data of the Korea Forest Service and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) images. We used data from South Korea on April 1 and 2, 2011 because there were both small and large forest fires at that time. The detection rate was 80% in terms of the frequency of the forest fires and 99% in terms of the damaged area. Considering the number of COMS's channels and its low resolution, this result is a remarkable outcome. To provide users with the results of our algorithm, we developed a smartphone application using JSP (Java Server Pages). This application can work regardless of the smartphone's operating system. This study may be unsuitable for other areas and periods because we used just two days of data. To improve the accuracy of our algorithm, we need analysis using long-term data as future work.
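
    A stripped-down version of the kind of test combination described above, an absolute 4 um brightness-temperature test, a 4 um minus 11 um difference test, and an LST-based background check, is sketched below. The thresholds and the crude scene-mean background are illustrative and are not the COMS operational values or logic.

      import numpy as np

      def detect_fire(bt4, bt11, lst, t4_min=320.0, dt_min=15.0, lst_excess=10.0):
          """Flag pixels that are hot at 4 um, much hotter at 4 um than at 11 um, and well above
          the background land surface temperature. All thresholds are illustrative."""
          background_lst = np.nanmean(lst)   # crude background; real algorithms use a local window
          return (bt4 > t4_min) & (bt4 - bt11 > dt_min) & (bt4 - background_lst > lst_excess)

      # Tiny synthetic scene with one hot pixel.
      bt4 = np.array([[300.0, 302.0], [345.0, 301.0]])
      bt11 = np.array([[295.0, 296.0], [310.0, 296.0]])
      lst = np.array([[298.0, 299.0], [300.0, 298.0]])
      print(detect_fire(bt4, bt11, lst))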

  13. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).

  14. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  15. Development of a rule-based algorithm for rice cultivation mapping using Landsat 8 time series

    NASA Astrophysics Data System (ADS)

    Karydas, Christos G.; Toukiloglou, Pericles; Minakou, Chara; Gitas, Ioannis Z.

    2015-06-01

    In the framework of the ERMES project (FP7 66983), an algorithm for mapping rice cultivation extents using medium-high resolution satellite data was developed. ERMES (An Earth obseRvation Model based RicE information Service) aims to develop a prototype of a downstream service for rice yield modelling based on a combination of Earth Observation and in situ data. The algorithm was designed as a set of rules applied on a time series of Landsat 8 images, acquired throughout the rice cultivation season of 2014 from the plain of Thessaloniki, Greece. The rules rely on the use of spectral indices, such as the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), and the Normalized Seasonal Wetness Index (NSWI), extracted from the Landsat 8 dataset. The algorithm is subdivided into two phases: a) a hard classification phase, resulting in a binary map (rice/no-rice), where pixels are judged according to their performance in all the images of the time series, while index thresholds were defined after a trial and error approach; b) a soft classification phase, resulting in a fuzzy map, by assigning scores to the pixels which passed (as `rice') the first phase. Finally, a user-defined threshold of the fuzzy score will discriminate rice from no-rice pixels in the output map. The algorithm was tested in a subset of Thessaloniki plain against a set of selected field data. The results indicated an overall accuracy of the algorithm higher than 97%. The algorithm was also applied in a study area in Spain (Valencia) and a preliminary test indicated a similar performance, i.e. about 98%. Currently, the algorithm is being modified, so as to map rice extents early in the cultivation season (by the end of June), with a view to contribute more substantially to the rice yield prediction service of ERMES. Both algorithm modes (late and early) are planned to be tested in extra Mediterranean study areas, in Greece, Italy, and Spain.
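
    A minimal sketch of the two-phase rule-based idea is given below, assuming a hard phase built from NDWI (early-season flooding) and NDVI (canopy peak) threshold rules over a time series, followed by a fuzzy score for the passing pixels; the specific rules, thresholds, and the use of NSWI in the actual algorithm are not reproduced.

```python
import numpy as np

def classify_rice(ndvi_series, ndwi_series, ndvi_peak=0.6, ndwi_flood=0.2, fuzzy_cut=0.5):
    """Two-phase rule-based rice mapping over an image time series.
    ndvi_series, ndwi_series: arrays of shape (time, rows, cols).
    Phase 1 (hard): early-season flooding plus a later vegetation peak.
    Phase 2 (soft): a fuzzy score for pixels that passed phase 1.
    Thresholds and the early/late split are illustrative only."""
    flooded = (ndwi_series[:3] > ndwi_flood).any(axis=0)   # early-season flooding rule
    peaked = (ndvi_series[3:] > ndvi_peak).any(axis=0)     # mid-season canopy rule
    hard_mask = flooded & peaked                           # binary rice/no-rice map

    consistent = ((ndwi_series > ndwi_flood) | (ndvi_series > ndvi_peak)).mean(axis=0)
    fuzzy = np.where(hard_mask, consistent, 0.0)           # fuzzy score only where phase 1 passed
    return hard_mask, fuzzy, fuzzy > fuzzy_cut

# Toy series: 6 dates over a 3x3 tile, wet early and greening up later
t = np.linspace(0, 1, 6)[:, None, None]
ndwi = 0.4 - 0.4 * t + np.zeros((6, 3, 3))
ndvi = 0.1 + 0.7 * t + np.zeros((6, 3, 3))
print(classify_rice(ndvi, ndwi)[0])
```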

  16. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  17. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
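
    As a compact illustration of the basic concepts mentioned above, the sketch below implements a generic bit-string genetic algorithm with tournament selection, one-point crossover, and bit-flip mutation; it is a textbook form, not the software tool described in the report.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100,
                      crossover_rate=0.9, mutation_rate=0.01):
    """Minimal generational GA maximizing `fitness` over fixed-length bit strings."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(fitness(ind), ind) for ind in pop]
        def tournament():
            # pick the fittest of three random individuals
            return max(random.sample(scored, 3))[1]
        next_pop = []
        while len(next_pop) < pop_size:
            a, b = tournament()[:], tournament()[:]
            if random.random() < crossover_rate:
                cut = random.randrange(1, n_bits)          # one-point crossover
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                for i in range(n_bits):
                    if random.random() < mutation_rate:    # bit-flip mutation
                        child[i] ^= 1
                next_pop.append(child)
        pop = next_pop[:pop_size]
    return max(pop, key=fitness)

# Example: maximize the number of ones (the classic "one-max" problem)
best = genetic_algorithm(sum)
print(best, sum(best))
```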

  18. Correlation signatures of wet soils and snows. [algorithm development and computer programming

    NASA Technical Reports Server (NTRS)

    Phillips, M. R.

    1972-01-01

    Interpretation, analysis, and development of algorithms have provided the necessary computational programming tools for soil data processing, data handling and analysis. Algorithms that have been developed thus far are adequate and have proven successful for several preliminary and fundamental applications such as software interfacing capabilities, probability distributions, grey level print plotting, contour plotting, isometric data displays, joint probability distributions, boundary mapping, channel registration and ground scene classification. A description of an Earth Resources Flight Data Processor (ERFDP), which handles and processes earth resources data under a user's control, is provided.

  19. Development of the Tensor CT Algorithm for Strain Tomography Using Bragg-edge Neutron Transmission

    NASA Astrophysics Data System (ADS)

    Sato, Hirotaka; Shiota, Yoshinori; Shinohara, Takenao; Kamiyama, Takashi; Ohnuma, Masato; Furusaka, Michihiro; Kiyanagi, Yoshiaki

    The tensor CT algorithm for strain tomography using Bragg-edge neutron transmission spectroscopy is presented. Crystal lattice strain is not a scalar but a tensor, which changes depending on the observation angle. Therefore, since traditional "scalar" CT algorithms cannot be applied to tomography of strain, the development of a "tensor" CT algorithm is needed. Aiming at further developments in the future, we first developed an ML-EM based versatile tensor tomography method using a simple algorithm with small restrictions. The basic concept is to simultaneously reconstruct multiple strain-tensor components (scalar quantities of normal strain and shear strain) existing at a certain position. In the actual CT image reconstruction, it is important to consider the angular dependence of each tensor component. Through simulation studies on axially-symmetric and axially-asymmetric distributions composed of two strain components, and an experimental demonstration using the axially-symmetric VAMAS standard sample, we found some important points for strain-tensor tomography. The angle-dependent back-projection procedure of ML-EM is indispensable for tomography of each tensor component, but this function also causes an image distortion which can average each strain value along each strain direction. Also, we found that optimization of the angle-dependent back-projection procedure is important for further improvements of the tensor CT algorithm.
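
    The abstract gives no equations, but the angular dependence it refers to follows the standard 2-D strain transformation sketched below: the strain measured along a ray at angle theta mixes the normal and shear components, which is why a scalar back-projection cannot be reused unchanged for tensor CT.

```python
import numpy as np

def strain_projection(eps_xx, eps_yy, eps_xy, theta):
    """Strain sampled along a ray at angle theta (radians), per the usual
    in-plane tensor transformation; this is the angle dependence a tensor CT
    forward/back-projection must account for, unlike scalar CT."""
    c, s = np.cos(theta), np.sin(theta)
    return eps_xx * c**2 + eps_yy * s**2 + 2.0 * eps_xy * s * c

# A pixel with purely normal strain looks different at 0 and 90 degrees:
print(strain_projection(1e-3, -0.5e-3, 0.0, 0.0))        # ray along x sees eps_xx
print(strain_projection(1e-3, -0.5e-3, 0.0, np.pi / 2))  # ray along y sees eps_yy
```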

  20. Development of a regional rain retrieval algorithm for exclusive mesoscale convective systems over peninsular India

    NASA Astrophysics Data System (ADS)

    Dutta, Devajyoti; Sharma, Sanjay; Das, Jyotirmay; Gairola, R. M.

    2012-06-01

    The present study emphasizes the development of a region-specific rain retrieval algorithm that takes cloud features into account. Brightness temperatures (Tbs) from various TRMM Microwave Imager (TMI) channels are calibrated with near-surface rain intensity as observed by the TRMM Precipitation Radar. It shows that Tb-R relations during exclusive Mesoscale Convective System (MCS) events have a greater dynamical range compared to combined events of non-MCS and MCS. The increased dynamical range of the Tb-R relations for exclusive-MCS events led to the development of an Artificial Neural Network (ANN) based regional algorithm for rain intensity estimation. Using the exclusive-MCS algorithm, a reasonably good improvement in the accuracy of rain intensity estimation is observed. A case study comparing rain intensity estimates from the exclusive-MCS regional algorithm and the global TRMM 2A12 rain product against a Doppler Weather Radar shows significant improvement in rain intensity estimation by the developed regional algorithm.
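
    A minimal sketch of the Tb-to-rain-rate regression step is shown below using a small multilayer perceptron; the number of channels, the synthetic training data, and the network size are placeholders, not the TMI channel set or ANN configuration used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: TMI-like brightness temperatures (K) for a few
# hypothetical channels, matched to PR-like near-surface rain rates (mm/h).
# Real training would use collocated TMI/PR observations for exclusive-MCS events.
rng = np.random.default_rng(1)
tb = rng.uniform(150.0, 290.0, size=(500, 4))
rain = np.clip(0.3 * (280.0 - tb[:, 3]) + rng.normal(0, 2, 500), 0, None)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(tb, rain)                 # learn the Tb-R relation
print(model.predict(tb[:3]))        # rain-rate estimates for three samples
```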

  1. Development of new two-dosimeter algorithm for effective dose in ICRP Publication 103.

    PubMed

    Kim, Chan Hyeong; Cho, Sungkoo; Jeong, Jong Hwi; Bolch, Wesley E; Reece, Warren D; Poston, John W

    2011-05-01

    The two-dosimeter method, which employs one dosimeter on the chest and the other on the back, determines the effective dose with sufficient accuracy for complex or unknown irradiation geometries. The two-dosimeter method, with a suitable algorithm, neither significantly overestimates (in most cases) nor seriously underestimates the effective dose, not even for extreme exposure geometries. Recently, however, the definition of the effective dose itself was changed in ICRP Publication 103; that is, the organ and tissue configuration employed in calculations of effective dose, along with the related tissue weighting factors, was significantly modified. In the present study, therefore, a two-dosimeter algorithm was developed for the new ICRP 103 definition of effective dose. To that end, first, effective doses and personal dosimeter responses were calculated using the ICRP reference phantoms and the MCNPX code for many incident beam directions. Next, a systematic analysis of the calculated values was performed to determine an optimal algorithm. Finally, the developed algorithm was tested by applying it to beam irradiation geometries specifically selected as extreme exposure geometries, and the results were compared with those for the previous algorithm that had been developed for the effective dose given in ICRP Publication 60. PMID:21451315
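
    The general form of a two-dosimeter estimator is a weighted combination of the chest and back dosimeter readings, as sketched below; the weights shown are generic placeholders, not the coefficients derived in this study for the ICRP 103 effective dose.

```python
def effective_dose_two_dosimeter(h_front, h_back, w_front=0.55, w_back=0.50):
    """Generic two-dosimeter estimator: a weighted sum of the chest (front) and
    back personal dose equivalents. The weights here are placeholders; the study
    derives its own coefficients for the ICRP 103 definition by fitting Monte
    Carlo (MCNPX) results over many incident beam directions."""
    return w_front * h_front + w_back * h_back

# Example: front reading 1.2 mSv, back reading 0.4 mSv
print(effective_dose_two_dosimeter(1.2, 0.4))
```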

  2. Development of algorithms for tsunami detection by High Frequency Radar based on modeling tsunami case studies in the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Grilli, Stéphan; Guérin, Charles-Antoine; Grosdidier, Samuel

    2015-04-01

    Where coastal tsunami hazard is governed by near-field sources, Submarine Mass Failures (SMFs) or earthquakes, tsunami propagation times may be too small for a detection based on deep or shallow water buoys. To offer sufficient warning time, it has been proposed by others to implement early warning systems relying on High Frequency Surface Wave Radar (HFSWR) remote sensing, which provides dense spatial coverage far offshore. A new HFSWR, referred to as STRADIVARIUS, has been recently deployed by Diginext Inc. to cover the "Golfe du Lion" (GDL) in the Western Mediterranean Sea. This radar, which operates at 4.5 MHz, uses a proprietary phase coding technology that allows detection up to 300 km in a bistatic configuration (with a baseline of about 100 km). Although the primary purpose of the radar is vessel detection in relation to homeland security, it can also be used for ocean current monitoring. The current caused by an arriving tsunami will shift the Bragg frequency by a value proportional to a component of its velocity, which can be easily obtained from the Doppler spectrum of the HFSWR signal. Using state of the art tsunami generation and propagation models, we modeled tsunami case studies in the western Mediterranean basin (both seismic and SMFs) and simulated the HFSWR backscattered signal that would be detected for the entire GDL and beyond. Based on simulated HFSWR signal, we developed two types of tsunami detection algorithms: (i) one based on standard Doppler spectra, for which we found that, to be detectable within the environmental and background current noises, the Doppler shift requires tsunami currents to be at least 10-15 cm/s, which typically only occurs on the continental shelf in fairly shallow water; (ii) to allow earlier detection, a second algorithm computes correlations of the HFSWR signals at two distant locations, shifted in time by the tsunami propagation time between these locations (easily computed based on bathymetry). We found that this
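
    A minimal sketch of the second (correlation-based) detection idea is given below: the radial-current time series at two radar cells are compared after shifting one by the expected tsunami travel time between them, so a propagating signal stands out against uncorrelated noise. The signals and lag here are synthetic, and the actual STRADIVARIUS processing is not reproduced.

```python
import numpy as np

def lagged_correlation_detector(signal_a, signal_b, lag_samples):
    """Normalized correlation of two current time series after shifting the
    second by the expected tsunami travel time (in samples, from bathymetry).
    A value near 1 at the expected lag suggests a coherent propagating signal."""
    a = signal_a[:-lag_samples] if lag_samples > 0 else signal_a
    b = signal_b[lag_samples:]
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Synthetic check: the same waveform arrives 20 samples later at the second cell
rng = np.random.default_rng(2)
wave = np.sin(np.linspace(0, 20, 400))
s1 = wave + 0.3 * rng.normal(size=400)
s2 = np.roll(wave, 20) + 0.3 * rng.normal(size=400)
print(lagged_correlation_detector(s1, s2, 20))
```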

  3. Development of algorithms for tsunami detection by High Frequency Radar based on modeling tsunami case studies in the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Grilli, S. T.; Guérin, C. A.; Grosdidier, S.

    2014-12-01

    Where coastal tsunami hazard is governed by near-field sources, Submarine Mass Failures (SMFs) or earthquakes, tsunami propagation times may be too small for a detection based on deep or shallow water buoys. To offer sufficient warning time, it has been proposed by others to implement early warning systems relying on High Frequency Radar (HFR) remote sensing, which provides dense spatial coverage far offshore. A new HFR, referred to as STRADIVARIUS, is being deployed by Diginext Inc. (in Fall 2014), to cover the "Golfe du Lion" (GDL) in the Western Mediterranean Sea. This radar uses a proprietary phase coding technology that allows detection up to 300 km, in a bistatic configuration (for which radar and antennas are separated by about 100 km). Although the primary purpose of the radar is vessel detection in relation to homeland security, the 4.5 MHz HFR will provide a strong backscattered signal for ocean surface waves at the so-called Bragg frequency (here, wavelength of 30 m). The current caused by an arriving tsunami will shift the Bragg frequency, by a value proportional to the current magnitude (projected on the local radar ray direction), which can be easily obtained from the Doppler spectrum of the HFR signal. Using state of the art tsunami generation and propagation models, we modeled tsunami case studies in the western Mediterranean basin (both seismic and SMFs) and simulated the HFR backscattered signal that would be detected for the entire GDL and beyond. Based on simulated HFR signal, we developed two types of tsunami detection algorithms: (i) one based on standard Doppler spectra, for which we found that, to be detectable within the environmental and background current noises, the Doppler shift requires tsunami currents to be at least 10-15 cm/s, which typically only occurs on the continental shelf in fairly shallow water; (ii) to allow earlier detection, a second algorithm computes correlations of the HFR signals at two distant locations, shifted in time

  4. Earlier green-up and spring warming amplification over Europe

    NASA Astrophysics Data System (ADS)

    Ma, Shaoxiu; Pitman, Andy J.; Lorenz, Ruth; Kala, Jatin; Srbinovsky, Jhan

    2016-03-01

    The onset of green-up of plants has advanced in response to climate change. This advance has the potential to affect heat waves via biogeochemical and biophysical processes. Here a climate model was used to investigate only the biophysical feedbacks of earlier green-up on climate, since the biogeochemical feedbacks have already been well addressed. Earlier green-up by 5 to 30 days amplifies spring warming in Europe, especially heat waves, but makes little difference to heat waves in summer. This spring warming is most noticeable within 30 days of advanced green-up and is associated with a decrease in low- and middle-layer clouds and associated increases in downward shortwave and net radiation. We find negligible differences in the Southern Hemisphere and low latitudes of the Northern Hemisphere. Our results provide an estimate of the level of skill necessary in phenology models to avoid introducing biases in climate simulations.

  5. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation.

    PubMed

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstrated algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
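
    As a rough sketch of the geometric first stage of an allocation such as K&K, the snippet below clusters grid cells by position with K-Means and reports the load imbalance that a refinement step (e.g. Kernighan-Lin) would then reduce; it omits the communication-cost terms that the paper optimizes.

```python
import numpy as np
from sklearn.cluster import KMeans

def allocate_subdomains(cell_xy, cell_load, n_nodes):
    """Geometric first pass of subdomain allocation: cluster grid cells by
    position so each computing node gets a spatially compact block (compact
    blocks keep boundary communication low), then report the load imbalance
    a subsequent refinement step would try to reduce."""
    labels = KMeans(n_clusters=n_nodes, n_init=10, random_state=0).fit_predict(cell_xy)
    loads = np.array([cell_load[labels == k].sum() for k in range(n_nodes)])
    imbalance = loads.max() / loads.mean()     # 1.0 means perfectly balanced
    return labels, imbalance

# 1000 random grid cells with uniform cost, split across 4 computing nodes
rng = np.random.default_rng(3)
xy = rng.random((1000, 2))
labels, imbalance = allocate_subdomains(xy, np.ones(1000), 4)
print(imbalance)
```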

  6. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    PubMed Central

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstrated algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical

  7. Utilization of Ancillary Data Sets for SMAP Algorithm Development and Product Generation

    NASA Technical Reports Server (NTRS)

    ONeill, P.; Podest, E.; Njoku, E.

    2011-01-01

    Algorithms being developed for the Soil Moisture Active Passive (SMAP) mission require a variety of both static and ancillary data. The selection of the most appropriate source for each ancillary data parameter is driven by a number of considerations, including accuracy, latency, availability, and consistency across all SMAP products and with SMOS (Soil Moisture Ocean Salinity). It is anticipated that initial selection of all ancillary datasets, which are needed for ongoing algorithm development activities on the SMAP algorithm testbed at JPL, will be completed within the year. These datasets will be updated as new or improved sources become available, and all selections and changes will be documented for the benefit of the user community. Wise choices in ancillary data will help to enable SMAP to provide new global measurements of soil moisture and freeze/thaw state at the targeted accuracy necessary to tackle hydrologically-relevant societal issues.

  8. Development of a multi-objective optimization algorithm using surrogate models for coastal aquifer management

    NASA Astrophysics Data System (ADS)

    Kourakos, George; Mantoglou, Aristotelis

    2013-02-01

    The demand for fresh water in coastal areas and islands can be very high due to increased local needs and tourism. A multi-objective optimization methodology is developed, involving minimization of economic and environmental costs while satisfying water demand. The methodology considers desalinization of pumped water and injection of treated water into the aquifer. Variable density aquifer models are computationally intractable when integrated in optimization algorithms. In order to alleviate this problem, a multi-objective optimization algorithm is developed combining surrogate models based on Modular Neural Networks [MOSA(MNNs)]. The surrogate models are trained adaptively during optimization based on a genetic algorithm. In the crossover step, each pair of parents generates a pool of offspring which are evaluated using the fast surrogate model. Then, the most promising offspring are evaluated using the exact numerical model. This procedure eliminates errors in the Pareto solution due to imprecise predictions of the surrogate model. The method has important advancements compared to previous methods, such as precise evaluation of the Pareto set and alleviation of the propagation of errors due to surrogate model approximations. The method is applied to an aquifer on the Greek island of Santorini. The results show that the new MOSA(MNN) algorithm offers a significant reduction in computational time compared to previous methods (in the case study it requires only 5% of the time required by other methods). Further, the Pareto solution is better than the solution obtained by alternative algorithms.
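
    The crossover-with-prescreening step described above can be sketched as follows: each parent pair produces a pool of offspring ranked by a cheap surrogate, and only the best few are passed to the expensive exact model. The crossover operator and the toy surrogate below are assumptions; the study trains modular neural networks adaptively as the surrogate.

```python
import random

def prescreened_offspring(parent_a, parent_b, surrogate, exact_model,
                          pool_size=10, keep=2):
    """One crossover step of a surrogate-assisted GA: generate a pool of
    candidate offspring, rank them with the cheap surrogate, and evaluate only
    the most promising few with the expensive exact model (here standing in for
    a variable-density aquifer simulation). Uniform crossover for illustration."""
    pool = []
    for _ in range(pool_size):
        child = [a if random.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]
        pool.append(child)
    pool.sort(key=surrogate)                      # minimize the surrogate prediction
    promising = pool[:keep]
    return [(exact_model(c), c) for c in promising]   # exact evaluations only here

# Toy usage: minimize the sum of pumping rates; surrogate = noisy exact model
exact = sum
surrogate = lambda x: sum(x) + random.uniform(-0.1, 0.1)
print(prescreened_offspring([1.0, 2.0, 3.0], [0.5, 2.5, 1.0], surrogate, exact))
```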

  9. Development of a Dynamic Operational Scheduling Algorithm for an Independent Micro-Grid with Renewable Energy

    NASA Astrophysics Data System (ADS)

    Obara, Shin'ya

    A micro-grid with the capacity for sustainable energy is expected to be a distributed energy system with quite a small environmental impact. In an independent micro-grid, “green energy,” which is typically thought of as unstable, can be utilized effectively by introducing a battery. In a previous study, a production-of-electricity prediction algorithm (PAS) for the solar cell was developed. In PAS, a layered neural network is made to learn based on past weather data, and the operation plan of the compound system of a solar cell and other energy systems was examined using this prediction algorithm. In this paper, a dynamic operational scheduling algorithm is developed using a neural network (PAS), which provides predictions of solar cell power output, and a genetic algorithm (GA). We also present a case study analysis in which we use this algorithm to plan the operation of a system that connects nine houses in Sapporo to a micro-grid composed of power equipment and a polycrystalline silicon solar cell. In this work, the relationship between the accuracy of output prediction of the solar cell and the operation plan of the micro-grid was clarified. Moreover, we found that operating the micro-grid according to the plan derived with PAS was far superior, in terms of equipment hours of operation, to operating it using past average weather data.

  10. Developments in the Aerosol Layer Height Retrieval Algorithm for the Copernicus Sentinel-4/UVN Instrument

    NASA Astrophysics Data System (ADS)

    Nanda, Swadhin; Sanders, Abram; Veefkind, Pepijn

    2016-04-01

    The Sentinel-4 mission is a part of the European Commission's Copernicus programme, the goal of which is to provide geo-information to manage environmental assets, and to observe, understand and mitigate the effects of the changing climate. The Sentinel-4/UVN instrument design is motivated by the need to monitor trace gas concentrations and aerosols in the atmosphere from a geostationary orbit. The on-board instrument is a high resolution UV-VIS-NIR (UVN) spectrometer system that provides hourly radiance measurements over Europe and northern Africa with a spatial sampling of 8 km. The main application area of Sentinel-4/UVN is air quality. One of the data products being developed for Sentinel-4/UVN is the Aerosol Layer Height (ALH). The goal is to determine the height of aerosol plumes with a resolution of better than 0.5 - 1 km. The ALH product thus targets aerosol layers in the free troposphere, such as desert dust, volcanic ash and biomass burning plumes. KNMI is tasked with the development of the Aerosol Layer Height (ALH) algorithm. Its heritage is the ALH algorithm developed by Sanders and De Haan (ATBD, 2016) for the TROPOMI instrument on board the Sentinel-5 Precursor mission that is to be launched in June or July 2016 (tentative date). The retrieval algorithm designed so far for the aerosol height product is based on the absorption characteristics of the oxygen-A band (759-770 nm). The algorithm has heritage to the ALH algorithm developed for TROPOMI on the Sentinel 5 precursor satellite. New aspects for Sentinel-4/UVN include the higher spectral resolution (0.116 nm compared to 0.4 nm for TROPOMI) and hourly observation from the geostationary orbit. The algorithm uses optimal estimation to obtain a spectral fit of the reflectance across the absorption band, while assuming a single uniform layer with fixed width to represent the aerosol vertical distribution. The state vector includes, amongst other elements, the height of this layer and its aerosol optical

  11. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    EPA Science Inventory

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, there is increased mobility leading to higher amount of traffic related activity on a global scale. ...

  12. Scheduling language and algorithm development study. Appendix: Study approach and activity summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.

  13. Long term analysis of PALS soil moisture campaign measurements for global soil moisture algorithm development

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An important component of satellite-based soil moisture algorithm development and validation is the comparison of coincident remote sensing and in situ observations that are typically provided by intensive field campaigns. The planned NASA Soil Moisture Active Passive (SMAP) mission has unique requi...

  14. Development of sub-daily erosion and sediment transport algorithms in SWAT

    Technology Transfer Automated Retrieval System (TEKTRAN)

    New Soil and Water Assessment Tool (SWAT) algorithms for simulation of stormwater best management practices (BMPs) such as detention basins, wet ponds, sedimentation filtration ponds, and retention irrigation systems are under development for modeling small/urban watersheds. Modeling stormwater BMPs...

  15. Ocean observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1998-01-01

    Significant accomplishments made during the present reporting period: (1) We expanded our "spectral-matching" algorithm (SMA), for identifying the presence of absorbing aerosols and simultaneously performing atmospheric correction and derivation of the ocean's bio-optical parameters, to the point where it could be added as a subroutine to the MODIS water-leaving radiance algorithm; (2) A modification to the SMA that does not require detailed aerosol models has been developed. This is important as the requirement for realistic aerosol models has been a weakness of the SMA; and (3) We successfully acquired micro pulse lidar data in a Saharan dust outbreak during ACE-2 in the Canary Islands.

  16. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1997-01-01

    The following accomplishments were made during the present reporting period: (1) We expanded our new method, for identifying the presence of absorbing aerosols and simultaneously performing atmospheric correction, to the point where it could be added as a subroutine to the MODIS water-leaving radiance algorithm; (2) We successfully acquired micro pulse lidar (MPL) data at sea during a cruise in February; (3) We developed a water-leaving radiance algorithm module for an approximate correction of the MODIS instrument polarization sensitivity; and (4) We participated in one cruise to the Gulf of Maine, a well known region for mesoscale coccolithophore blooms. We measured coccolithophore abundance, production and optical properties.

  17. Development and benefit analysis of a sector design algorithm for terminal dynamic airspace configuration

    NASA Astrophysics Data System (ADS)

    Sciandra, Vincent

    The National Airspace System (NAS) is the vast network of systems enabling safe and efficient air travel in the United States. It consists of a set of static sectors, each controlled by one or more air traffic controllers. Air traffic control is tasked with ensuring that all flights can depart and arrive on time and in a safe and efficient manner. However, skyrocketing demand will only increase the stress on an already inefficient system, causing massive delays. The current, static configuration of the NAS cannot possibly handle the future demand on the system safely and efficiently, especially since it is projected to triple by 2025. To overcome these issues, the Next Generation of Air Transportation System (NextGen) is being enacted to increase the flexibility of the NAS. A major objective of NextGen is to implement Adaptable Dynamic Airspace Configuration (ADAC) which will dynamically allocate the sectors to best fit the traffic in the area. Dynamically allocating sectors will allow resources such as controllers to be better distributed to meet traffic demands. Currently, most DAC research has involved the en route airspace. This leaves the terminal airspace, which accounts for a large amount of the overall NAS complexity, in need of work. Using a combination of methods used in en route sectorization, this thesis has developed an algorithm for the dynamic allocation of sectors in the terminal airspace. This algorithm will be evaluated using metrics common in the evaluation of dynamic density, which is adapted for the unique challenges of the terminal airspace, and used to measure workload on air traffic controllers. These metrics give a better view of the controller workload than the number of aircraft alone. By comparing the test results with sectors currently used in the NAS using real traffic data, the algorithm-generated sectors can be quantitatively evaluated for improvement of the current sectorizations. This will be accomplished by testing the

  18. Development of advanced WTA (Weapon Target Assignment) algorithms for parallel processing. Final report

    SciTech Connect

    Castanon, D.A.

    1989-10-01

    The objective of weapon-target assignment (WTA) in a ballistic missile defense (BMD) system is to determine how defensive weapons should be assigned to boosters and reentry vehicles in order to maximize the survival of assets belonging to the U.S. and allied countries. The implied optimization problem requires consideration of a large number of potential weapon target assignments in order to select the most effective combination of assignments. The resulting WTA optimization problems are among the most complex encountered in mathematical programming. Indeed, simple versions of the WTA problem have been shown to be NP-complete, implying that the computations required to achieve optimal solutions grow exponentially with the number of weapons and targets considered in the solution. The computational complexity of the WTA problem has motivated the development of heuristic algorithms that are not altogether satisfactory for use in Strategic Defense Systems (SDS). Some special cases of the WTA problem are not NP-complete and can be solved using standard optimization algorithms such as linear programming and maximum-marginal-return algorithms; these algorithms enjoy low computational requirements and therefore have been adopted as heuristics for solving more general WTA problems. However, experimental studies have demonstrated that these heuristic algorithms lead to significantly suboptimal solutions for certain scenarios.
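
    The maximum-marginal-return heuristic mentioned above can be sketched as below: weapons are assigned one at a time to the target offering the largest drop in expected surviving value. The target values and kill probabilities in the example are invented for illustration.

```python
def maximum_marginal_return(target_values, kill_probs, n_weapons):
    """Greedy maximum-marginal-return heuristic for weapon-target assignment:
    repeatedly give the next weapon to the target whose expected surviving
    value drops the most. kill_probs[j] is the single-shot kill probability
    against target j; additional shots multiply its survival probability."""
    survival = [1.0] * len(target_values)     # probability each target survives so far
    assignment = [0] * len(target_values)     # weapons allocated per target
    for _ in range(n_weapons):
        # marginal gain of one more weapon on target j = value * survival * p_kill
        gains = [v * s * p for v, s, p in zip(target_values, survival, kill_probs)]
        j = max(range(len(gains)), key=gains.__getitem__)
        assignment[j] += 1
        survival[j] *= (1.0 - kill_probs[j])
    return assignment

# Three targets, four interceptors
print(maximum_marginal_return([10.0, 6.0, 3.0], [0.7, 0.9, 0.5], 4))
```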

  19. Development of the Landsat Data Continuity Mission Cloud Cover Assessment Algorithms

    USGS Publications Warehouse

    Scaramuzza, Pat; Bouchard, M.A.; Dwyer, J.L.

    2012-01-01

    The upcoming launch of the Operational Land Imager (OLI) will start the next era of the Landsat program. However, the Automated Cloud-Cover Assessment (CCA) (ACCA) algorithm used on Landsat 7 requires a thermal band and is thus not suited for OLI. There will be a thermal instrument on the Landsat Data Continuity Mission (LDCM), the Thermal Infrared Sensor, which may not be available during all OLI collections. This illustrates a need for CCA for LDCM in the absence of thermal data. To research possibilities for full-resolution OLI cloud assessment, a global data set of 207 Landsat 7 scenes with manually generated cloud masks was created. It was used to evaluate the ACCA algorithm, showing that the algorithm correctly classified 79.9% of a standard test subset of 3.95 × 10^9 pixels. The data set was also used to develop and validate two successor algorithms for use with OLI data: one derived from an off-the-shelf machine learning package and one based on ACCA but enhanced by a simple neural network. These comprehensive CCA algorithms were shown to correctly classify pixels as cloudy or clear 88.5% and 89.7% of the time, respectively.

  20. Advanced synthetic image generation models and their application to multi/hyperspectral algorithm development

    NASA Astrophysics Data System (ADS)

    Schott, John R.; Brown, Scott D.; Raqueno, Rolando V.; Gross, Harry N.; Robinson, Gary

    1999-01-01

    The need for robust image data sets for algorithm development and testing has prompted the consideration of synthetic imagery as a supplement to real imagery. The unique ability of synthetic image generation (SIG) tools to supply per-pixel truth allows algorithm writers to test difficult scenarios that would require expensive collection and instrumentation efforts. In addition, SIG data products can supply the user with `actual' truth measurements of the entire image area that are not subject to measurement error thereby allowing the user to more accurately evaluate the performance of their algorithm. Advanced algorithms place a high demand on synthetic imagery to reproduce both the spectro-radiometric and spatial character observed in real imagery. This paper describes a synthetic image generation model that strives to include the radiometric processes that affect spectral image formation and capture. In particular, it addresses recent advances in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The model is capable of simultaneously generating imagery from a wide range of sensors allowing it to generate daylight, low-light-level and thermal image inputs for broadband, multi- and hyper-spectral exploitation algorithms.

  1. Volumetric visualization algorithm development for an FPGA-based custom computing machine

    NASA Astrophysics Data System (ADS)

    Sallinen, Sami J.; Alakuijala, Jyrki; Helminen, Hannu; Laitinen, Joakim

    1998-05-01

    Rendering volumetric medical images is a burdensome computational task for contemporary computers due to the large size of the data sets. Custom designed reconfigurable hardware could considerably speed up volume visualization if an algorithm suitable for the platform is used. We present an algorithm and speedup techniques for visualizing volumetric medical CT and MR images with a custom-computing machine based on a Field Programmable Gate Array (FPGA). We also present simulated performance results of the proposed algorithm calculated with a software implementation running on a desktop PC. Our algorithm is capable of generating perspective projection renderings of single and multiple isosurfaces with transparency, simulated X-ray images, and Maximum Intensity Projections (MIP). Although more speedup techniques exist for parallel projection than for perspective projection, we have constrained ourselves to perspective viewing, because of its importance in the field of radiotherapy. The algorithm we have developed is based on ray casting, and the rendering is sped up by three different methods: shading speedup by gradient precalculation, a new generalized version of Ray-Acceleration by Distance Coding (RADC), and background ray elimination by speculative ray selection.

  2. Algorithm development for automated outlier detection and background noise reduction during NIR spectroscopic data processing

    NASA Astrophysics Data System (ADS)

    Abookasis, David; Workman, Jerome J.

    2011-09-01

    This study describes a hybrid processing algorithm for use during calibration/validation of near-infrared spectroscopic signals, based on a spectral cross-correlation and filtering process combined with partial least squares (PLS) regression analysis. In the first step of the algorithm, exceptional signals (outliers) are detected and removed based on spectral correlation criteria we have developed. Then, signal filtering based on direct orthogonal signal correction (DOSC) is applied, before the data are used in the PLS model, to filter out background variance. After outlier screening and DOSC treatment, a PLS calibration model matrix is formed. Once this matrix has been built, it is used to predict the concentration of the unknown samples. Common statistics such as the standard error of cross-validation, mean relative error, and coefficient of determination were computed to assess the fitting ability of the algorithm. Algorithm performance was tested on several hundred blood samples prepared at different hematocrit and glucose levels using blood materials from thirteen healthy human volunteers. During measurements, these samples were subjected to variations in temperature, flow rate, and sample pathlength. Experimental results highlight the potential, applicability, and effectiveness of the proposed algorithm in terms of low error of prediction, high sensitivity and specificity, and few false negative (Type II error) samples.
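
    A minimal sketch of the outlier-screening-then-PLS pipeline is given below, with the screening reduced to a simple correlation-against-the-mean-spectrum criterion; the study's actual cross-correlation criterion and the DOSC filtering step are not reproduced, and the data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def screen_outliers(spectra, threshold=0.95):
    """Flag spectra whose correlation with the mean spectrum falls below a
    threshold; a stand-in for the spectral cross-correlation criterion."""
    mean_spec = spectra.mean(axis=0)
    corr = np.array([np.corrcoef(s, mean_spec)[0, 1] for s in spectra])
    return corr >= threshold

# Synthetic calibration set: 50 spectra, 200 wavelengths, one corrupted sample
rng = np.random.default_rng(4)
base = np.sin(np.linspace(0, 6, 200))
spectra = base + 0.05 * rng.normal(size=(50, 200))
spectra[7] += 2.0 * rng.normal(size=200)                   # simulated outlier
glucose = spectra[:, 50] * 10 + rng.normal(0, 0.1, 50)     # toy reference values

keep = screen_outliers(spectra)                            # outlier removal step
pls = PLSRegression(n_components=3).fit(spectra[keep], glucose[keep])
print(keep.sum(), pls.predict(spectra[keep][:3]).ravel())  # 49 kept, 3 predictions
```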

  3. An Improved Greedy Search Algorithm for the Development of a Phonetically Rich Speech Corpus

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-Song; Nakamura, Satoshi

    An efficient way to develop large scale speech corpora is to collect phonetically rich ones that have high coverage of phonetic contextual units. The sentence set, usually called the minimum set, should have a small text size in order to reduce the collection cost. It can be selected by a greedy search algorithm from a large mother text corpus. With the inclusion of more and more phonetic contextual effects, the number of different phonetic contextual units increases dramatically, making the search a non-trivial issue. In order to improve the search efficiency, we previously proposed a so-called least-to-most-ordered greedy search based on the conventional algorithms. This paper evaluates these algorithms in order to show their different characteristics. The experimental results showed that the least-to-most-ordered methods successfully achieved smaller objective sets at significantly less computation time, when compared with the conventional ones. This algorithm has already been applied to the development of a number of speech corpora, including a large scale phonetically rich Chinese speech corpus, ATRPTH, which played an important role in developing our multi-language translation system.
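
    The conventional greedy selection that the least-to-most ordering accelerates looks roughly like the sketch below: repeatedly add the sentence that covers the most not-yet-covered units. The unit extractor here (character bigrams) is a stand-in for phonetic contextual units such as triphones; the least-to-most speedup itself is not shown.

```python
def greedy_minimum_set(sentences, units_of):
    """Plain greedy cover: repeatedly pick the sentence adding the most
    not-yet-covered units until every unit in the corpus is covered."""
    remaining = set()
    for s in sentences:
        remaining |= units_of(s)
    selected, covered = [], set()
    while covered != remaining:
        best = max(sentences, key=lambda s: len(units_of(s) - covered))
        gain = units_of(best) - covered
        if not gain:                 # nothing new can be added
            break
        selected.append(best)
        covered |= gain
    return selected

# Toy corpus where the "units" are just character bigrams
bigrams = lambda s: {s[i:i + 2] for i in range(len(s) - 1)}
corpus = ["speech corpus design", "rich phonetic coverage", "greedy search works"]
print(greedy_minimum_set(corpus, bigrams))
```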

  4. Development of a scheduling algorithm and GUI for autonomous satellite missions

    NASA Astrophysics Data System (ADS)

    Baek, Seung-woo; Han, Sun-mi; Cho, Kyeum-rae; Lee, Dae-woo; Yang, Jang-sik; Bainum, Peter M.; Kim, Hae-dong

    2011-04-01

    In this paper, a scheduling optimization algorithm is developed and verified for autonomous satellite mission operations. As satellite control and operational techniques continue to develop, satellite missions become more complicated and the overall quantity of tasks within the missions also increases. These changes require more specific consideration and a huge amount of computational resources for scheduling the satellite missions. In addition, there is a certain level of repetition in satellite mission scheduling activities, and hence it is highly recommended that the operation manager carefully consider and build an appropriate strategy for performing the operations autonomously. A good strategy to adopt is to develop scheduling optimization algorithms, because it is difficult for humans to consider the many mission parameters and constraints simultaneously. In this paper, a new genetic algorithm is applied to simulations of an actual satellite mission scheduling problem, and an appropriate GUI design is considered for autonomous satellite mission operation. It is expected that the scheduling optimization algorithm and the GUI can improve the overall efficiency of practical satellite mission operations.

  5. SPHERES as Formation Flight Algorithm Development and Validation Testbed: Current Progress and Beyond

    NASA Technical Reports Server (NTRS)

    Kong, Edmund M.; Saenz-Otero, Alvar; Nolet, Simon; Berkovitz, Dustin S.; Miller, David W.; Sell, Steve W.

    2004-01-01

    The MIT-SSL SPHERES testbed provides a facility for the development of algorithms necessary for the success of Distributed Satellite Systems (DSS). The initial development contemplated formation flight and docking control algorithms; SPHERES now supports the study of metrology, control, autonomy, artificial intelligence, and communications algorithms and their effects on DSS projects. To support this wide range of topics, the SPHERES design contemplated the need to support multiple researchers, as echoed from both the hardware and software designs. The SPHERES operational plan further facilitates the development of algorithms by multiple researchers, while the operational locations incrementally increase the ability of the tests to operate in a representative environment. In this paper, an overview of the SPHERES testbed is first presented. The SPHERES testbed serves as a model of the design philosophies that allow for the various researches being carried out on such a facility. The implementation of these philosophies are further highlighted in the three different programs that are currently scheduled for testing onboard the International Space Station (ISS) and three that are proposed for a re-flight mission: Mass Property Identification, Autonomous Rendezvous and Docking, TPF Multiple Spacecraft Formation Flight in the first flight and Precision Optical Pointing, Tethered Formation Flight and Mars Orbit Sample Retrieval for the re-flight mission.

  6. Millimeter-Wave Imaging Radiometer (MIR) Data Processing and Development of Water Vapor Retrieval Algorithms

    NASA Technical Reports Server (NTRS)

    Chang, L. Aron

    1998-01-01

    This document describes the final report of the Millimeter-wave Imaging Radiometer (MIR) Data Processing and Development of Water Vapor Retrieval Algorithms. Volumes of radiometric data have been collected using airborne MIR measurements during a series of field experiments since May 1992. Calibrated brightness temperature data in MIR channels are now available for studies of various hydrological parameters of the atmosphere and Earth's surface. Water vapor retrieval algorithms using multichannel MIR data input are developed for the profiling of atmospheric humidity. The retrieval algorithms are also extended to do three-dimensional mapping of moisture field using continuous observation provided by airborne sensor MIR or spaceborne sensor SSM/T-2. Validation studies for water vapor retrieval are carried out through the intercomparison of collocated and concurrent measurements using different instruments including lidars and radiosondes. The developed MIR water vapor retrieval algorithm is capable of humidity profiling under meteorological conditions ranging from clear column to moderately cloudy sky. Simulative water vapor retrieval studies using extended microwave channels near 183 and 557 GHz strong absorption lines indicate feasibility of humidity profiling to layers in the upper troposphere and improve the overall vertical resolution through the atmosphere.

  7. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    NASA Technical Reports Server (NTRS)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience only ran on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to be able to access. The current code works in the Unity game engine, which does have cross-platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project was unable to be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.

  8. Developments of global greenhouse gas retrieval algorithm using Aerosol information from GOSAT-CAI

    NASA Astrophysics Data System (ADS)

    Kim, Woogyung; kim, Jhoon; Jung, Yeonjin; lee, Hanlim; Boesch, Hartmut

    2014-05-01

    Human activities have increased the atmospheric CO2 concentration since the beginning of the Industrial Revolution, with the concentration exceeding 400 ppm at the Mauna Loa observatory for the first time (IPCC, 2007). However, our current knowledge of the carbon cycle is still insufficient due to a lack of observations. Satellite measurement is one of the most effective approaches to improve the accuracy of carbon source and sink estimates by monitoring global CO2 distributions with high spatio-temporal resolution (Rayner and O'Brien, 2001; Houweling et al., 2004). To date, GOSAT has provided valuable information for observing the global CO2 trend, extending our understanding of CO2 and supporting preparation for future satellite missions. However, due to its physical limitations, GOSAT CO2 retrieval results have low spatial resolution and cannot cover a wide area. Another obstacle to GOSAT CO2 retrieval is low data availability, mainly due to contamination by clouds and aerosols. Especially in East Asia, one of the most important aerosol source areas, successful retrievals are difficult to obtain due to high aerosol concentrations. The main purpose of this study is to improve the data availability of GOSAT CO2 retrieval. In this study, the current state of CO2 retrieval algorithm development is introduced and preliminary results are shown. The algorithm is based on the optimal estimation method and utilizes VLIDORT, the vector discrete ordinate radiative transfer model. This prototype algorithm, developed from various combinations of state vectors to find accurate CO2 concentrations, shows reasonable results. In particular, the aerosol retrieval algorithm using GOSAT-CAI measurements, which provide aerosol information for the same area as the GOSAT-FTS measurements, is utilized as input to the CO2 retrieval. Other CO2 retrieval algorithms use chemical transport model results or climatologically expected values as aerosol information, which is the main reason for low data availability. With

  9. Earlier vegetation green-up has reduced spring dust storms.

    PubMed

    Fan, Bihang; Guo, Li; Li, Ning; Chen, Jin; Lin, Henry; Zhang, Xiaoyang; Shen, Miaogen; Rao, Yuhan; Wang, Cong; Ma, Lei

    2014-01-01

    The observed decline of spring dust storms in Northeast Asia since the 1950s has been attributed to surface wind stilling. However, spring vegetation growth could also restrain dust storms through accumulating aboveground biomass and increasing surface roughness. To investigate the impacts of vegetation spring growth on dust storms, we examine the relationships between recorded spring dust storm outbreaks and satellite-derived vegetation green-up date in Inner Mongolia, Northern China from 1982 to 2008. We find a significant dampening effect of advanced vegetation growth on spring dust storms (r = 0.49, p = 0.01), with a one-day earlier green-up date corresponding to a decrease in annual spring dust storm outbreaks by 3%. Moreover, the higher correlation (r = 0.55, p < 0.01) between green-up date and dust storm outbreak ratio (the ratio of dust storm outbreaks to times of strong wind events) indicates that such effect is independent of changes in surface wind. Spatially, a negative correlation is detected between areas with advanced green-up dates and regional annual spring dust storms (r = -0.49, p = 0.01). This new insight is valuable for understanding dust storm dynamics under the changing climate. Our findings suggest that dust storms in Inner Mongolia will be further mitigated by the projected earlier vegetation green-up in the warming world. PMID:25343265

  10. Changes toward earlier streamflow timing across western North America

    USGS Publications Warehouse

    Stewart, I.T.; Cayan, D.R.; Dettinger, M.D.

    2005-01-01

    The highly variable timing of streamflow in snowmelt-dominated basins across western North America is an important consequence, and indicator, of climate fluctuations. Changes in the timing of snowmelt-derived streamflow from 1948 to 2002 were investigated in a network of 302 western North America gauges by examining the center of mass for flow, spring pulse onset dates, and seasonal fractional flows through trend and principal component analyses. Statistical analysis of the streamflow timing measures with Pacific climate indicators identified local and key large-scale processes that govern the regionally coherent parts of the changes and their relative importance. Widespread and regionally coherent trends toward earlier onsets of springtime snowmelt and streamflow have taken place across most of western North America, affecting an area that is much larger than previously recognized. These timing changes have resulted in increasing fractions of annual flow occurring earlier in the water year by 1-4 weeks. The immediate (or proximal) forcings for the spatially coherent parts of the year-to-year fluctuations and longer-term trends of streamflow timing have been higher winter and spring temperatures. Although these temperature changes are partly controlled by the decadal-scale Pacific climate mode [Pacific decadal oscillation (PDO)], a separate and significant part of the variance is associated with a springtime warming trend that spans the PDO phases. ?? 2005 American Meteorological Society.

  11. Development of a real-time model based safety monitoring algorithm for the SSME

    NASA Astrophysics Data System (ADS)

    Norman, A. M.; Maram, J.; Coleman, P.; D'Valentine, M.; Steffens, A.

    1992-07-01

    A safety monitoring system for the SSME incorporating a real time model of the engine has been developed for LeRC as a task of the LeRC Life Prediction for Rocket Engines contract, NAS3-25884. This paper describes the development of the algorithm and model to date, their capabilities and limitations, results of simulation tests, lessons learned, and the plans for implementation and test of the system.

  12. Developing a synergy algorithm for land surface temperature: the SEN4LST project

    NASA Astrophysics Data System (ADS)

    Sobrino, Jose A.; Jimenez, Juan C.; Ghent, Darren J.

    2013-04-01

    Land surface temperature (LST) is one of the key parameters in the physics of land-surface processes on regional and global scales, combining the results of all surface-atmosphere interactions and energy fluxes between the surface and the atmosphere. An adequate characterization of the LST distribution and its temporal evolution requires measurements with detailed spatial and temporal frequencies. With the advent of the Sentinel-2 (S2) and Sentinel-3 (S3) series of satellites, a unique opportunity exists to go beyond the current state of the art of single-instrument algorithms. The Synergistic Use of The Sentinel Missions For Estimating And Monitoring Land Surface Temperature (SEN4LST) project aims at developing techniques to fully utilize the synergy between S2 and S3 instruments in order to improve LST retrievals. In the framework of the SEN4LST project, three LST retrieval algorithms were proposed using the thermal infrared bands of the Sea and Land Surface Temperature Radiometer (SLSTR) instrument on board the S3 platform: split-window (SW), dual-angle (DA) and a combined algorithm using both split-window and dual-angle techniques (SW-DA). One of the objectives of the project is to select the best algorithm to generate LST products from the synergy between S2/S3 instruments. In this sense, validation is a critical step in the selection process for the best performing candidate algorithm. A unique match-up database constructed at the University of Leicester (UoL) of in situ observations from over twenty ground stations and corresponding brightness temperature (BT) and LST match-ups from multi-sensor overpasses is utilised for validating the candidate algorithms. Furthermore, their performance is also evaluated against the standard ESA LST product and the enhanced offline UoL LST product. In addition, a simulation dataset is constructed using 17 synthetic images of LST and the radiative transfer model MODTRAN run under 66 different atmospheric conditions. Each candidate LST
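
    A generic split-window form is sketched below to show the kind of SW candidate algorithm being compared: LST is estimated from the two thermal brightness temperatures near 11 and 12 micrometers plus an emissivity correction. The structure follows widely published split-window algorithms; the coefficients are placeholders, not the SEN4LST/SLSTR values.

```python
def split_window_lst(t11, t12, emissivity, a0=0.268, a1=1.378, a2=0.183,
                     a3=54.3, a4=-2.238):
    """Generic split-window estimate of LST (K) from brightness temperatures
    t11 and t12 (K) near 11 and 12 um. The brightness-temperature difference
    corrects for atmospheric water vapor absorption and the (1 - emissivity)
    term for surface emissivity; all coefficients here are placeholders."""
    dt = t11 - t12
    return t11 + a0 + a1 * dt + a2 * dt**2 + (a3 + a4 * dt) * (1.0 - emissivity)

# Example: a moderately moist atmosphere over a vegetated surface
print(split_window_lst(295.0, 293.5, 0.98))
```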

  13. Development of sensor-based nitrogen recommendation algorithms for cereal crops

    NASA Astrophysics Data System (ADS)

    Asebedo, Antonio Ray

    Nitrogen (N) management is one of the most recognizable components of farming both within and outside the world of agriculture. Interest over the past decade has greatly increased in improving N management systems in corn (Zea mays) and winter wheat (Triticum aestivum) to achieve high nitrogen use efficiency (NUE), high yield, and environmental sustainability. Nine winter wheat experiments were conducted across seven locations from 2011 through 2013. The objectives of this study were to evaluate the impacts of fall-winter, Feekes 4, Feekes 7, and Feekes 9 N applications on winter wheat grain yield, grain protein, and total grain N uptake. Nitrogen treatments were applied as single or split applications in the fall-winter, and top-dressed in the spring at Feekes 4, Feekes 7, and Feekes 9 with applied N rates ranging from 0 to 134 kg ha-1. Results indicate that Feekes 7 and 9 N applications provide more optimal combinations of grain yield, grain protein levels, and fertilizer N recovered in the grain when compared to comparable rates of N applied in the fall-winter or at Feekes 4. Winter wheat N management studies from 2006 through 2013 were utilized to develop sensor-based N recommendation algorithms for winter wheat in Kansas. Algorithm RosieKat v2.6 was designed for multiple N application strategies and utilizes N reference strips for establishing N response potential. Algorithm NRS v1.5 addresses single top-dress N applications and does not require an N reference strip. In 2013, field validations of both algorithms were conducted at eight locations across Kansas. Results show algorithm RosieKat v2.6 consistently provided highly efficient N recommendations for improving NUE while achieving high grain yield and grain protein. Without the use of the N reference strip, NRS v1.5 performed statistically equal to the KSU soil test N recommendation with regard to grain yield but with lower applied N rates. Six corn N fertigation experiments were conducted at KSU irrigated experiment fields from 2012
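
    As a rough illustration of how a reference-strip-based recommendation can work, the sketch below computes a response index from canopy sensor readings and scales an N rate against a cap. It is a generic sketch only, not the RosieKat v2.6 or NRS v1.5 logic; the 134 kg ha-1 cap is borrowed from the rate range quoted above and the scaling is an assumption.

        def n_recommendation(ndvi_field, ndvi_ref_strip, n_max=134.0):
            """Top-dress N rate (kg ha-1) from canopy sensor readings; the
            reference reading comes from a non-N-limiting reference strip."""
            # Response index: expected crop response to additional N.
            response_index = max(ndvi_ref_strip / ndvi_field, 1.0)
            # No expected response -> no N; large response -> approach the cap.
            return round(min(n_max, n_max * (1.0 - 1.0 / response_index)), 1)

        print(n_recommendation(ndvi_field=0.55, ndvi_ref_strip=0.70))   # about 28.7 kg ha-1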

  14. Review and Analysis of Algorithmic Approaches Developed for Prognostics on CMAPSS Dataset

    NASA Technical Reports Server (NTRS)

    Ramasso, Emmanuel; Saxena, Abhinav

    2014-01-01

    Benchmarking of prognostic algorithms has been challenging due to limited availability of common datasets suitable for prognostics. In an attempt to alleviate this problem several benchmarking datasets have been collected by NASA's prognostic center of excellence and made available to the Prognostics and Health Management (PHM) community to allow evaluation and comparison of prognostics algorithms. Among those datasets are five C-MAPSS datasets that have been extremely popular due to their unique characteristics making them suitable for prognostics. The C-MAPSS datasets pose several challenges that have been tackled by different methods in the PHM literature. In particular, management of high variability due to sensor noise, effects of operating conditions, and presence of multiple simultaneous fault modes are some factors that have great impact on the generalization capabilities of prognostics algorithms. More than 70 publications have used the C-MAPSS datasets for developing data-driven prognostic algorithms. The C-MAPSS datasets are also shown to be well-suited for development of new machine learning and pattern recognition tools for several key preprocessing steps such as feature extraction and selection, failure mode assessment, operating conditions assessment, health status estimation, uncertainty management, and prognostics performance evaluation. This paper summarizes a comprehensive literature review of publications using C-MAPSS datasets and provides guidelines and references to further usage of these datasets in a manner that allows clear and consistent comparison between different approaches.

  15. jClustering, an open framework for the development of 4D clustering algorithms.

    PubMed

    Mateos-Pérez, José María; García-Villalba, Carmen; Pascau, Javier; Desco, Manuel; Vaquero, Juan J

    2013-01-01

    We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary. PMID:23990913

  16. DEVELOPMENT OF PROCESSING ALGORITHMS FOR OUTLIERS AND MISSING VALUES IN CONSTANT OBSERVATION DATA OF TRAFFIC VOLUMES

    NASA Astrophysics Data System (ADS)

    Hashimoto, Hiroyoshi; Kawano, Tomohiko; Momma, Toshiyuki; Uesaka, Katsumi

    The Ministry of Land, Infrastructure, Transport and Tourism of Japan intends to make maximum use of vehicle detectors installed on national roads around the country and to efficiently gather traffic volume data from wide areas by estimating traffic volumes within adjacent road sections based on the constant observation data obtained from the vehicle detectors. Efficient processing of outliers and missing values in constant observation data is needed in this process. Focusing on the processing of outlier and missing values, the authors have developed a series of algorithms to calculate hourly traffic volumes in which a required accuracy is secured based on measurement data obtained from vehicle detectors. The algorithms have been put to practical use. The main characteristic of these algorithms is that they use data accumulated in the past as well as data from constant observation devices in adjacent road sections. This paper describes the contents of the developed algorithms and clarifies their accuracy using actual observation data and by making comparisons with other methods.
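
    A minimal sketch of the underlying idea, estimating a questionable or missing hourly volume from an adjacent section using historically accumulated paired observations; the median-ratio estimator and the 30% tolerance are illustrative assumptions, not the published algorithms.

        def estimate_from_adjacent(target_history, adjacent_history, adjacent_now):
            """Estimate a section's hourly volume from an adjacent section, using
            the median historical ratio between the two sections."""
            ratios = sorted(t / a for t, a in zip(target_history, adjacent_history)
                            if t is not None and a)
            return adjacent_now * ratios[len(ratios) // 2]

        def clean_value(observed, estimate, tolerance=0.3):
            """Keep the observed volume if plausible, otherwise use the estimate."""
            if observed is None or abs(observed - estimate) > tolerance * estimate:
                return estimate                     # treated as missing or outlier
            return observed

        past_target   = [420, 455, 400, 480, 430]   # same hour on previous days
        past_adjacent = [600, 640, 580, 700, 610]
        est = estimate_from_adjacent(past_target, past_adjacent, adjacent_now=650)
        print(clean_value(observed=2500, estimate=est))   # outlier replaced by the estimate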

  17. Application of custom tools and algorithms to the development of terrain and target models

    NASA Astrophysics Data System (ADS)

    Wilkosz, Aaron; Williams, Bryan L.; Motz, Steve

    2003-09-01

    In this paper we give a high-level discussion outlining methodologies and techniques employed in generating high fidelity terrain and target models. We present the current state of our IR signature development efforts, cover custom tools and algorithms, and discuss future plans. We outline the steps required to derive IR terrain and target signature models, and provide some details about algorithms developed to classify aerial imagery. In addition, we discuss our tool used to apply IR signature data to tactical vehicle models. We discuss how we process the empirical IR data of target vehicles, apply it to target models, and generate target signature models that correlate with the measured calibrated IR data. The developed characterization databases and target models are used in digital simulations by various customers within the US Army Aviation and Missile Command (AMCOM).

  18. Space-based Doppler lidar sampling strategies: Algorithm development and simulated observation experiments

    NASA Technical Reports Server (NTRS)

    Emmitt, G. D.; Wood, S. A.; Morris, M.

    1990-01-01

    Lidar Atmospheric Wind Sounder (LAWS) Simulation Models (LSM) were developed to evaluate the potential impact of global wind observations on the basic understanding of the Earth's atmosphere and on the predictive skills of current forecast models (GCM and regional scale). Fully integrated top to bottom LAWS Simulation Models for global and regional scale simulations were developed. The algorithm development incorporated the effects of aerosols, water vapor, clouds, terrain, and atmospheric turbulence into the models. Other additions include a new satellite orbiter, signal processor, line of sight uncertainty model, new Multi-Paired Algorithm and wind error analysis code. An atmospheric wind field library containing control fields, meteorological fields, phenomena fields, and new European Center for Medium Range Weather Forecasting (ECMWF) data was also added. The LSM was used to address some key LAWS issues and trades such as accuracy and interpretation of LAWS information, data density, signal strength, cloud obscuration, and temporal data resolution.

  19. Trend of earlier spring in central Europe continued

    NASA Astrophysics Data System (ADS)

    Ungersböck, Markus; Jurkovic, Anita; Koch, Elisabeth; Lipa, Wolfgang; Scheifinger, Helfried; Zach-Hermann, Susanne

    2013-04-01

    Modern phenology is the study of the timing of recurring biological events in the animal and plant world, the causes of their timing with regard to biotic and abiotic forces, and the interrelation among phases of the same or different species. The relationship between phenology and climate explains the importance of plant phenology for climate change studies. Plants require light, water, oxygen, mineral nutrients and a suitable temperature to grow. In temperate zones the seasonal life cycle of plants is primarily controlled by temperature and day length. Higher spring air temperatures are resulting in an earlier onset of the phenological spring in temperate and cool climates. On the other hand, changes in phenology due to climate change have an impact on the climate system itself. Vegetation is a dynamic factor in the earth-climate system and has positive and negative feedback mechanisms to the biogeochemical and biogeophysical fluxes to the atmosphere. Since the mid-1980s spring has arrived earlier in Europe and autumn has shifted toward the end of the year, resulting in a longer vegetation period. The advancement of spring can be clearly attributed to temperature increase in the months prior to leaf unfolding and flowering; the timing of autumn is more complex and cannot easily be attributed to one or a few parameters. To demonstrate that the observed advancement of spring since the mid-1980s continued in 2001 to 2010, and that the delay of fall and the lengthening of the growing season are confirmed in the last decade, we picked out several indicator plants from the PEP725 database www.pep725.eu. The PEP725 database collects data from different European network operators and thus offers a unique compilation of phenological observations; the database is regularly updated. The data follow the same classification scheme, the so-called BBCH coding system, so they can be compared. Lilac Syringa vulgaris, birch Betula pendula, beech Fagus and horse chestnut Aesculus

  20. Earlier wine-grape ripening driven by climatic warming and drying and management practices

    NASA Astrophysics Data System (ADS)

    Webb, L. B.; Whetton, P. H.; Bhend, J.; Darbyshire, R.; Briggs, P. R.; Barlow, E. W. R.

    2012-04-01

    Trends in phenological phases associated with climate change are widely reported--yet attribution remains rare. Attribution research in biological systems is critical in assisting stakeholders to develop adaptation strategies, particularly if human factors may be exacerbating impacts. Detailed, quantified attribution helps to effectively target adaptation strategies, and counters recent tendencies to overattribute phenological trends to climate shifts. Wine grapes have been ripening earlier in Australia in recent years, often with undesirable impacts. Attribution analysis of detected trends in wine-grape maturity, using time series of up to 64 years in duration, indicates that two climate variables--warming and declines in soil water content--are driving a major portion of this ripening trend. Crop-yield reductions and evolving management practices have probably also contributed to earlier ripening. Potential adaptation options are identified, as some drivers of the trend to earlier maturity can be manipulated through directed management initiatives, such as managing soil moisture and crop yield.

  1. Reducing older driver motor vehicle collisions via earlier cataract surgery.

    PubMed

    Mennemeyer, Stephen T; Owsley, Cynthia; McGwin, Gerald

    2013-12-01

    Older adults who undergo cataract extraction have roughly half the rate of motor vehicle collision (MVC) involvement per mile driven compared to cataract patients who do not elect cataract surgery. Currently in the U.S., most insurers do not allow payment for cataract surgery based upon the findings of a vision exam unless accompanied by an individual's complaint of visual difficulties that seriously interfere with driving or other daily activities and individuals themselves may be slow or reluctant to complain and seek relief. As a consequence, surgery tends to occur after significant vision problems have emerged. We hypothesize that a proactive policy encouraging cataract surgery earlier for a lesser level of complaint would significantly reduce MVCs among older drivers. We used a Monte Carlo model to simulate the MVC experience of the U.S. population from age 60 to 89 under alternative protocols for the timing of cataract surgery which we call "Current Practice" (CP) and "Earlier Surgery" (ES). Our base model finds, from a societal perspective with undiscounted 2010 dollars, that switching to ES from CP reduces by about 21% the average number of MVCs, fatalities, and MVC cost per person. The net effect on total cost - all MVC costs plus cataract surgery expenditures - is a reduction of about 16%. Quality Adjusted Life Years would increase by about 5%. From the perspective of payers for healthcare, the switch would increase cataract surgery expenditure for ages 65+ by about 8% and for ages 60-64 by about 47% but these expenditures are substantially offset after age 65 by reductions in the medical and emergency services component of MVC cost. Similar results occur with discounting at 3% and with various sensitivity analyses. We conclude that a policy of ES would significantly reduce MVCs and their associated consequences. PMID:23369786

  2. Development of a polarimetric radar based hydrometeor classification algorithm for winter precipitation

    NASA Astrophysics Data System (ADS)

    Thompson, Elizabeth Jennifer

    The nation-wide WSR-88D radar network is currently being upgraded with dual-polarization technology. While many convective, warm-season fuzzy-logic hydrometeor classification algorithms based on this new suite of radar variables and temperature have been refined, less progress has been made thus far in developing hydrometeor classification algorithms for winter precipitation. Unlike previous studies, the focus of this work is to exploit the discriminatory power of polarimetric variables to distinguish the most common precipitation types found in winter storms without the use of temperature as an additional variable. For the first time, detailed electromagnetic scattering calculations for plates, dendrites, dry aggregated snowflakes, rain, freezing rain, and sleet are conducted at X-, C-, and S-band wavelengths. These physics-based results are used to determine the characteristic radar variable ranges associated with each precipitation type. A variable weighting system was also implemented in the algorithm's decision process to capitalize on the strengths of specific dual-polarimetric variables to discriminate between certain classes of hydrometeors, such as wet snow to indicate the melting layer. This algorithm was tested on observations during three different winter storms in Colorado and Oklahoma with the dual-wavelength X- and S-band CSU-CHILL, C-band OU-PRIME, and X-band CASA IP1 polarimetric radars. The algorithm showed success at all three frequencies, but was slightly more reliable at X-band because of the algorithm's strong dependence on KDP. While plates were rarely distinguished from dendrites, the latter were satisfactorily differentiated from dry aggregated snowflakes and wet snow. Sleet and freezing rain could not be distinguished from rain or light rain based on polarimetric variables alone. However, high-resolution radar observations illustrated the refreezing process of raindrops into ice pellets, which has been documented before but not yet
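
    The fuzzy-logic structure described above can be sketched generically as weighted membership functions over the polarimetric variables; the classes, membership ranges, and weights below are placeholders for illustration only, not the values derived from the scattering calculations in this work.

        def trap(x, a, b, c, d):
            """Trapezoidal membership function rising over [a, b] and falling over [c, d]."""
            if x <= a or x >= d:
                return 0.0
            if b <= x <= c:
                return 1.0
            return (x - a) / (b - a) if x < b else (d - x) / (d - c)

        # Placeholder (a, b, c, d) ranges per class and variable (ZDR in dB, KDP in deg/km).
        CLASSES = {
            "dry aggregated snow": {"zdr": (-0.5, 0.0, 0.5, 1.0), "kdp": (-0.2, 0.0, 0.2, 0.4)},
            "dendrites":           {"zdr": (0.5, 1.0, 3.0, 4.0),  "kdp": (0.0, 0.1, 0.5, 0.8)},
            "rain":                {"zdr": (0.0, 0.5, 2.0, 3.0),  "kdp": (0.0, 0.2, 1.0, 2.0)},
        }
        WEIGHTS = {"zdr": 1.0, "kdp": 1.5}   # e.g. weighting KDP more heavily at X band

        def classify(obs):
            scores = {}
            for name, membership in CLASSES.items():
                weighted = sum(WEIGHTS[v] * trap(obs[v], *membership[v]) for v in membership)
                scores[name] = weighted / sum(WEIGHTS[v] for v in membership)
            return max(scores, key=scores.get)

        print(classify({"zdr": 0.2, "kdp": 0.05}))   # -> 'dry aggregated snow' with these ranges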

  3. Development of Outlier detection Algorithm Applicable to a Korean Surge-Gauge

    NASA Astrophysics Data System (ADS)

    Lee, Jun-Whan; Park, Sun-Cheon; Lee, Won-Jin; Lee, Duk Kee

    2016-04-01

    The Korea Meteorological Administration (KMA) operates a surge gauge (aerial ultrasonic type) at Ulleung-do to monitor tsunamis, and the National Institute of Meteorological Sciences (NIMS), KMA, is developing a tsunami detection and observation system using this surge gauge. Outliers caused by transmission problems and by extreme events that temporarily change the water level are among the most common problems in tsunami detection. Unlike a spike, multipoint outliers are difficult to detect clearly. Most previous studies used statistical measures or signal processing methods such as wavelet transforms and filters to detect multipoint outliers, and used a continuous dataset. However, as the focus has moved to near real-time operation with a dataset that contains gaps, these methods are no longer tenable. In this study, we developed an outlier detection algorithm applicable to the Ulleung-do surge gauge, where both multipoint outliers and missing data exist. Although only 9-point data and two arithmetic operations (plus and minus) are used, because of the newly developed keeping method the algorithm is not only simple and fast but also effective on a non-continuous dataset. We calibrated 17 thresholds and conducted performance tests using three months of data from the Ulleung-do surge gauge. The results show that the newly developed despiking algorithm performs reliably in alleviating the outlier detection problem.
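
    A minimal illustration of window-based despiking on a series with gaps; the median comparison and the 0.5 m threshold here are assumptions for illustration and do not reproduce the 9-point plus/minus algorithm itself.

        def despike(series, window=9, threshold=0.5):
            """Flag values that deviate from the median of their surrounding window.
            `series` may contain None where observations are missing."""
            half = window // 2
            flags = [False] * len(series)
            for i, value in enumerate(series):
                if value is None:
                    continue
                neighbours = [series[j]
                              for j in range(max(0, i - half), min(len(series), i + half + 1))
                              if j != i and series[j] is not None]
                if len(neighbours) < 3:
                    continue                          # not enough context to judge
                neighbours.sort()
                median = neighbours[len(neighbours) // 2]
                flags[i] = abs(value - median) > threshold
            return flags

        level = [1.2, 1.3, None, 1.2, 4.8, 4.9, 1.3, 1.2, 1.3]   # two-point outlier, one gap
        print(despike(level))   # flags only the 4.8/4.9 pair and tolerates the gap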

  4. Sensitivity of cloud retrieval statistics to algorithm choices: Lessons learned from MODIS product development

    NASA Astrophysics Data System (ADS)

    Platnick, Steven; Ackerman, Steven; King, Michael; Zhang, Zhibo; Wind, Galina

    2013-04-01

    Cloud detection algorithms search for measurement signatures that differentiate a cloud-contaminated or "not-clear" pixel from the clear-sky background. These signatures can be spectral, textural or temporal in nature. The magnitude of the difference between the cloud and the background must exceed a threshold value for the pixel to be classified as having a not-clear FOV. All detection algorithms employ multiple tests ranging across some portion of the solar reflectance and/or infrared spectrum. However, a cloud is not a single, uniform object, but rather has a distribution of optical thickness and morphology. As a result, problems can arise when the distributions of cloud and clear-sky background characteristics overlap, making some test results indeterminate and/or leading to some amount of detection misclassification. Further, imager cloud retrieval statistics are highly sensitive to how a pixel identified as not-clear by a cloud mask is determined to be useful for cloud-top and optical retrievals based on 1-D radiative models. This presentation provides an overview of the different 'choices' algorithm developers make in cloud detection algorithms and their impact on regional and global cloud amounts and fractional coverage, cloud type and property distributions. Lessons learned over the course of the MODIS cloud product development history are discussed. As an example, we will focus on the 1km MODIS Collection 5 cloud optical retrieval algorithm (product MOD06/MYD06 for Terra and Aqua, respectively) which removed pixels associated with cloud edges as defined by immediate adjacency to clear FOV MODIS cloud mask (MOD35/MYD35) pixels as well as ocean pixels with partly cloudy elements in the 250m MODIS cloud mask - part of the so-called Clear Sky Restoral algorithm. The Collection 6 algorithm attempts retrievals for these two types of partly cloudy pixel populations, but allows a user to isolate or filter out the populations. Retrieval sensitivities for these

  5. Dataset exploited for the development and validation of automated cyanobacteria quantification algorithm, ACQUA.

    PubMed

    Gandola, Emanuele; Antonioli, Manuela; Traficante, Alessio; Franceschini, Simone; Scardi, Michele; Congestri, Roberta

    2016-09-01

    The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that host filamentous cyanobacteria strains in different environments. The presented data sets were used to estimate abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA ("ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning" (Gandola et al., 2016) [1]). This strategy was used to assess the algorithm performance and to set up the denoising algorithm. Abundance and total length estimations were used for software development; to this aim, we evaluated the efficiency of the statistical tools and mathematical algorithms described here. Image convolution with the Sobel filter was chosen to remove background signals from the input images; spline curves and the least squares method were then used to parameterize detected filaments and to recombine crossing and interrupted sections, allowing precise abundance estimations and morphometric measurements. PMID:27500194
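
    A minimal sketch of the Sobel-based denoising step described above; the normalisation and the 0.2 threshold are illustrative assumptions, not the ACQUA settings.

        import numpy as np
        from scipy import ndimage

        def filament_edge_mask(image, threshold=0.2):
            """Sobel gradient magnitude scaled to [0, 1]; pixels above `threshold`
            are kept as candidate filament edges while smooth background is dropped."""
            img = np.asarray(image, dtype=float)
            gx = ndimage.sobel(img, axis=0)
            gy = ndimage.sobel(img, axis=1)
            magnitude = np.hypot(gx, gy)
            peak = magnitude.max()
            if peak > 0:
                magnitude /= peak
            return magnitude > threshold

        # The resulting mask would then feed the spline fitting and least-squares
        # recombination of crossing and interrupted filament sections.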

  6. A Focus Group on Dental Pain Complaints with General Medical Practitioners: Developing a Treatment Algorithm

    PubMed Central

    Carter, Geoff; Abbey, Robyn

    2016-01-01

    Objective. The differential diagnosis of pain in the mouth can be challenging for general medical practitioners (GMPs) as many different dental problems can present with similar signs and symptoms. This study aimed to create a treatment algorithm for GMPs to effectively and appropriately refer the patients and prescribe antibiotics. Design. The study design is comprised of qualitative focus group discussions. Setting and Subjects. Groups of GMPs within the Gold Coast and Brisbane urban and city regions. Outcome Measures. Content thematically analysed and treatment algorithm developed. Results. There were 5 focus groups with 8-9 participants per group. Addressing whether antibiotics should be given to patients with dental pain was considered very important to GMPs to prevent overtreatment and creating antibiotic resistance. Many practitioners were unsure of what the different forms of dental pains represent. 90% of the practitioners involved agreed that the treatment algorithm was useful to daily practice. Conclusion. Common dental complaints and infections are seldom surgical emergencies but can result in prolonged appointments for those GMPs who do not regularly deal with these issues. The treatment algorithm for referral processes and prescriptions was deemed easily downloadable and simple to interpret and detailed but succinct enough for clinical use by GMPs. PMID:27462469

  7. A Focus Group on Dental Pain Complaints with General Medical Practitioners: Developing a Treatment Algorithm.

    PubMed

    Carter, Ava Elizabeth; Carter, Geoff; Abbey, Robyn

    2016-01-01

    Objective. The differential diagnosis of pain in the mouth can be challenging for general medical practitioners (GMPs) as many different dental problems can present with similar signs and symptoms. This study aimed to create a treatment algorithm for GMPs to effectively and appropriately refer the patients and prescribe antibiotics. Design. The study design is comprised of qualitative focus group discussions. Setting and Subjects. Groups of GMPs within the Gold Coast and Brisbane urban and city regions. Outcome Measures. Content thematically analysed and treatment algorithm developed. Results. There were 5 focus groups with 8-9 participants per group. Addressing whether antibiotics should be given to patients with dental pain was considered very important to GMPs to prevent overtreatment and creating antibiotic resistance. Many practitioners were unsure of what the different forms of dental pains represent. 90% of the practitioners involved agreed that the treatment algorithm was useful to daily practice. Conclusion. Common dental complaints and infections are seldom surgical emergencies but can result in prolonged appointments for those GMPs who do not regularly deal with these issues. The treatment algorithm for referral processes and prescriptions was deemed easily downloadable and simple to interpret and detailed but succinct enough for clinical use by GMPs. PMID:27462469

  8. Development of Algorithms for Control of Humidity in Plant Growth Chambers

    NASA Technical Reports Server (NTRS)

    Costello, Thomas A.

    2003-01-01

    Algorithms were developed to control humidity in plant growth chambers used for research on bioregenerative life support at Kennedy Space Center. The algorithms used the computed water vapor pressure (based on measured air temperature and relative humidity) as the process variable, with time-proportioned outputs to operate the humidifier and de-humidifier. Algorithms were based upon proportional-integral-differential (PID) and Fuzzy Logic schemes and were implemented using I/O Control software (OPTO-22) to define and download the control logic to an autonomous programmable logic controller (PLC, ultimate ethernet brain and assorted input-output modules, OPTO-22), which performed the monitoring and control logic processing, as well as the physical control of the devices that effected the targeted environment in the chamber. During limited testing, the PLCs successfully implemented the intended control schemes and attained a control resolution for humidity of less than 1%. The algorithms have potential to be used not only with autonomous PLCs but also within network-based supervisory control programs. This report documents unique control features that were implemented within the OPTO-22 framework and makes recommendations regarding future uses of the hardware and software for biological research by NASA.
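
    A minimal sketch of the control idea: compute the vapor-pressure process variable from temperature and relative humidity (Magnus approximation), run a PID loop on it, and convert the output into time-proportioned humidifier/de-humidifier on-times. The gains and the 10 s output period are placeholders, not the values used in the chambers.

        import math

        def vapor_pressure_kpa(temp_c, rh_percent):
            """Actual water vapor pressure (kPa) from air temperature (deg C) and
            relative humidity (%), using the Magnus approximation for saturation."""
            e_sat = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))
            return e_sat * rh_percent / 100.0

        class PID:
            """Minimal PID controller; gains are illustrative placeholders."""
            def __init__(self, kp=2.0, ki=0.1, kd=0.0):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.integral = 0.0
                self.prev_error = None
            def update(self, setpoint, measured, dt):
                error = setpoint - measured
                self.integral += error * dt
                derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        def time_proportioned_output(control, period_s=10.0):
            """Convert the control signal into on-times within one output period
            (positive -> humidify, negative -> de-humidify)."""
            fraction = max(-1.0, min(1.0, control))
            if fraction >= 0:
                return {"humidifier_s": fraction * period_s, "dehumidifier_s": 0.0}
            return {"humidifier_s": 0.0, "dehumidifier_s": -fraction * period_s}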

  9. Development and application of efficient pathway enumeration algorithms for metabolic engineering applications.

    PubMed

    Liu, F; Vilaça, P; Rocha, I; Rocha, M

    2015-02-01

    Metabolic Engineering (ME) aims to design microbial cell factories towards the production of valuable compounds. In this endeavor, one important task relates to the search for the most suitable heterologous pathway(s) to add to the selected host. Different algorithms have been developed in the past towards this goal, following distinct approaches spanning constraint-based modeling, graph-based methods and knowledge-based systems based on chemical rules. While some of these methods search for pathways optimizing specific objective functions, here the focus will be on methods that address the enumeration of pathways that are able to convert a set of source compounds into desired targets and their posterior evaluation according to different criteria. Two pathway enumeration algorithms based on (hyper)graph-based representations are selected as the most promising ones and are analyzed in more detail: the Solution Structure Generation and the Find Path algorithms. Their capabilities and limitations are evaluated when designing novel heterologous pathways, by applying these methods on three case studies of synthetic ME related to the production of non-native compounds in E. coli and S. cerevisiae: 1-butanol, curcumin and vanillin. Some targeted improvements are implemented, extending both methods to address limitations identified that impair their scalability, improving their ability to extract potential pathways over large-scale databases. In all case-studies, the algorithms were able to find already described pathways for the production of the target compounds, but also alternative pathways that can represent novel ME solutions after further evaluation. PMID:25580014
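
    A toy sketch of pathway enumeration over a reaction network, to make the idea concrete: breadth-first search over sets of available compounds. The network, the length bound, and the representation are invented for illustration; this is not the Solution Structure Generation or Find Path algorithm itself.

        from collections import deque

        # Toy network: each reaction consumes a set of compounds and produces a set.
        REACTIONS = {
            "r1": ({"A"}, {"B"}),
            "r2": ({"B"}, {"C"}),
            "r3": ({"A"}, {"C"}),
            "r4": ({"C"}, {"target"}),
        }

        def enumerate_pathways(sources, target, max_len=4):
            """Return lists of reaction ids that convert `sources` into `target`."""
            results = []
            queue = deque([(frozenset(sources), [])])
            while queue:
                available, path = queue.popleft()
                if target in available:
                    results.append(path)
                    continue
                if len(path) >= max_len:
                    continue
                for rid, (substrates, products) in REACTIONS.items():
                    if rid not in path and substrates <= available:
                        queue.append((available | products, path + [rid]))
            return results

        print(enumerate_pathways({"A"}, "target"))
        # includes e.g. ['r3', 'r4'] and ['r1', 'r2', 'r4'], plus redundant variants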

  10. Development of an Aircraft Approach and Departure Atmospheric Profile Generation Algorithm

    NASA Technical Reports Server (NTRS)

    Buck, Bill K.; Velotas, Steven G.; Rutishauser, David K. (Technical Monitor)

    2004-01-01

    In support of the NASA Virtual Airspace Modeling and Simulation (VAMS) project, an effort was initiated to develop and test techniques for extracting meteorological data from landing and departing aircraft, and for building altitude-based profiles for key meteorological parameters from these data. The generated atmospheric profiles will be used as inputs to NASA's Aircraft Vortex Spacing System (AVOLSS) Prediction Algorithm (APA) for benefits and trade analysis. A Wake Vortex Advisory System (WakeVAS) is being developed to apply weather and wake prediction and sensing technologies with procedures to reduce current wake separation criteria when safe and appropriate to increase airport operational efficiency. The purpose of this report is to document the initial theory and design of the Aircraft Approach and Departure Atmospheric Profile Generation Algorithm.

  11. Forecasting of the development of professional medical equipment engineering based on neuro-fuzzy algorithms

    NASA Astrophysics Data System (ADS)

    Vaganova, E. V.; Syryamkin, M. V.

    2015-11-01

    The purpose of the research is the development of evolutionary algorithms for assessing promising scientific directions. The main attention of the present study is paid to evaluating foresight possibilities for the identification of technological peaks and emerging technologies in professional medical equipment engineering in Russia and worldwide, on the basis of intellectual property items and neural network modeling. An automated information system has been developed, consisting of modules that implement various classification methods to improve forecast accuracy, together with an algorithm for constructing a neuro-fuzzy decision tree. According to the study results, modern trends in this field will focus on personalized smart devices, telemedicine, bio-monitoring, and «e-Health» and «m-Health» technologies.

  12. Algorithm developments for the Euler equations with calculations of transonic flows

    NASA Technical Reports Server (NTRS)

    Goorjian, Peter M.

    1987-01-01

    A new algorithm has been developed for the Euler equations that uses flux vector splitting in combination with the concept of rotating the coordinate system to the local streamwise direction. Flux vector biasing is applied along the local streamwise direction and central differencing is used transverse to the flow direction. The flux vector biasing is switched from upwind for supersonic flow to downwind-biased for subsonic flow. This switching is based on the Mach number; hence the proper domain of dependence is used in the supersonic regions and the switching occurs across shock waves. The theoretical basis and the development of the formulas for flux vector splitting are presented. Then several one-dimensional calculations are presented of steady and unsteady transonic flows, which demonstrate the stability and accuracy of the algorithm. Finally results are shown for unsteady transonic flow over an airfoil. The pressure coefficient plots show sharp transonic shock profiles, and the Mach contour plots show smoothly varying contours.
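
    For background, the eigenvalue-based (Steger-Warming-type) form of flux vector splitting that such schemes build on can be written as below; the Mach-number-switched flux biasing described above is a variant of this idea rather than this exact splitting.

        F(U) = A\,U, \qquad A = \frac{\partial F}{\partial U} = R\,\Lambda\,R^{-1},
        \qquad \Lambda^{\pm} = \tfrac{1}{2}\,(\Lambda \pm |\Lambda|),
        \qquad F^{\pm} = R\,\Lambda^{\pm}R^{-1}U, \qquad F = F^{+} + F^{-}

    Here R holds the eigenvectors of the flux Jacobian, the homogeneity F(U) = A U holds for the Euler equations with an ideal gas, and the split fluxes F^{+} and F^{-} are differenced with backward and forward (upwind) stencils, respectively.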

  13. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1997-01-01

    Significant accomplishments made during the present reporting period are as follows: (1) We developed a new method for identifying the presence of absorbing aerosols and, simultaneously, performing atmospheric correction. The algorithm consists of optimizing the match between the top-of-atmosphere radiance spectrum and the result of models of both the ocean and aerosol optical properties; (2) We developed an algorithm for providing an accurate computation of the diffuse transmittance of the atmosphere given an aerosol model. A module for inclusion into the MODIS atmospheric-correction algorithm was completed; (3) We acquired reflectance data for oceanic whitecaps during a cruise on the RV Ka'imimoana in the Tropical Pacific (Manzanillo, Mexico to Honolulu, Hawaii). The reflectance spectrum of whitecaps was found to be similar to that for breaking waves in the surf zone measured by Frouin, Schwindling and Deschamps, however, the drop in augmented reflectance from 670 to 860 nm was not as great, and the magnitude of the augmented reflectance was significantly less than expected; and (4) We developed a method for the approximate correction for the effects of the MODIS polarization sensitivity. The correction, however, requires adequate characterization of the polarization sensitivity of MODIS prior to launch.

  14. Developing an Algorithm to Identify History of Cancer Using Electronic Medical Records

    PubMed Central

    Clarke, Christina L.; Feigelson, Heather S.

    2016-01-01

    Introduction/Objective: The objective of this study was to develop an algorithm to identify Kaiser Permanente Colorado (KPCO) members with a history of cancer. Background: Tumor registries are used with high precision to identify incident cancer, but are not designed to capture prevalent cancer within a population. We sought to identify a cohort of adults with no history of cancer, and thus, we could not rely solely on the tumor registry. Methods: We included all KPCO members between the ages of 40–75 years who were continuously enrolled during 2013 (N=201,787). Data from the tumor registry, chemotherapy files, inpatient and outpatient claims were used to create an algorithm to identify members with a high likelihood of cancer. We validated the algorithm using chart review and calculated sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) for occurrence of cancer. Findings: The final version of the algorithm achieved a sensitivity of 100 percent and specificity of 84.6 percent for identifying cancer. If we relied on the tumor registry alone, 47 percent of those with a history of cancer would have been missed. Discussion: Using the tumor registry alone to identify a cohort of patients with prior cancer is not sufficient. In the final version of the algorithm, the sensitivity and PPV were improved when a diagnosis code for cancer was required to accompany oncology visits or chemotherapy administration. Conclusion: Electronic medical record (EMR) data can be used effectively in combination with data from the tumor registry to identify health plan members with a history of cancer. PMID:27195308
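
    The validation arithmetic is the standard 2x2 calculation; the sketch below uses hypothetical chart-review counts chosen only so that the sensitivity and specificity match the figures quoted above.

        def validation_metrics(tp, fp, fn, tn):
            """Sensitivity, specificity, PPV and NPV against a chart-review reference."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv":         tp / (tp + fp),
                "npv":         tn / (tn + fn),
            }

        # Hypothetical counts for illustration only (not the KPCO chart-review numbers):
        print(validation_metrics(tp=120, fp=22, fn=0, tn=121))
        # sensitivity 1.00, specificity ~0.846, ppv ~0.845, npv 1.00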

  15. Earlier North American Monsoon Onset in a Warmer World?

    NASA Astrophysics Data System (ADS)

    Rauscher, S. A.; Seth, A.; Ringler, T.; Rojas, M.; Liebmann, B.

    2009-12-01

    Analysis of twenty-first-century projections indicates substantial drying over the American Southwest and the potential for “Dust Bowl” conditions to be the norm by the middle of the century. Closer examination of monthly precipitation data from the CMIP3 models indicates that the annual cycle is actually amplified over the North American Monsoon (NAMS) region, with drier conditions during the winter and an increase in monsoon rains during the later part of the rainy season. Importantly, the projected decrease in winter precipitation extends into the spring season, suggesting a delayed onset of the NAMS. Consistent thermodynamic changes, including a decrease in low-level relative humidity and an increase in the vertical gradient of moist static energy, accompany this spring precipitation decrease. Here we examine daily precipitation data from the CMIP3 archive to determine if this reduced spring precipitation represents a true delay in the NAMS onset. We further analyze the hydrological cycle over the NAMS region in several of the CMIP3 models, focusing on changes in net moisture divergence, surface evaporation, and soil moisture in order to fully understand how the hydrological cycle will change in the future based on the CMIP3 simulations, and how these changes may be translated into the timing and intensity of the NAMS. The combination of a delayed NAMS onset and earlier and reduced snowmelt runoff in the western US could substantially change the availability of water resources over the NAMS region.

  16. Poorest countries experience earlier anthropogenic emergence of daily temperature extremes

    NASA Astrophysics Data System (ADS)

    Harrington, Luke J.; Frame, David J.; Fischer, Erich M.; Hawkins, Ed; Joshi, Manoj; Jones, Chris D.

    2016-05-01

    Understanding how the emergence of the anthropogenic warming signal from the noise of internal variability translates to changes in extreme event occurrence is of crucial societal importance. By utilising simulations of cumulative carbon dioxide (CO2) emissions and temperature changes from eleven earth system models, we demonstrate that the inherently lower internal variability found at tropical latitudes results in large increases in the frequency of extreme daily temperatures (exceedances of the 99.9th percentile derived from pre-industrial climate simulations) occurring much earlier than for mid-to-high latitude regions. Most of the world’s poorest people live at low latitudes, when considering 2010 GDP-PPP per capita; conversely the wealthiest population quintile disproportionately inhabit more variable mid-latitude climates. Consequently, the fraction of the global population in the lowest socio-economic quintile is exposed to substantially more frequent daily temperature extremes after much lower increases in both mean global warming and cumulative CO2 emissions.

  17. Development and evaluation of an articulated registration algorithm for human skeleton registration

    NASA Astrophysics Data System (ADS)

    Yip, Stephen; Perk, Timothy; Jeraj, Robert

    2014-03-01

    Accurate registration over multiple scans is necessary to assess treatment response of bone diseases (e.g. metastatic bone lesions). This study aimed to develop and evaluate an articulated registration algorithm for whole-body skeleton registration in human patients. In articulated registration, whole-body skeletons are registered by auto-segmenting them into individual bones using atlas-based segmentation, and then rigidly aligning them. Sixteen patients (weight = 80-117 kg, height = 168-191 cm) with advanced prostate cancer underwent pre- and mid-treatment PET/CT scans over a course of cancer therapy. Skeletons were extracted from the CT images by thresholding (HU>150). Skeletons were registered using the articulated, rigid, and deformable registration algorithms to account for position and postural variability between scans. The inter-observer agreement in the atlas creation, the agreement between the manually and atlas-based segmented bones, and the registration performances of all three registration algorithms were all assessed using the Dice similarity index—DSIobserver, DSIatlas, and DSIregister. The Hausdorff distance (dHausdorff) of the registered skeletons was also used for registration evaluation. Nearly negligible inter-observer variability was found in the bone atlas creation, as DSIobserver was 96 ± 2%. Atlas-based and manually segmented bones were in excellent agreement, with DSIatlas of 90 ± 3%. Articulated (DSIregister = 75 ± 2%, dHausdorff = 0.37 ± 0.08 cm) and deformable registration algorithms (DSIregister = 77 ± 3%, dHausdorff = 0.34 ± 0.08 cm) considerably outperformed the rigid registration algorithm (DSIregister = 59 ± 9%, dHausdorff = 0.69 ± 0.20 cm) in the skeleton registration, as the rigid registration algorithm failed to capture the skeleton flexibility in the joints. Despite superior skeleton registration performance, the deformable registration algorithm failed to preserve the local rigidity of bones as over 60% of the
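
    For reference, the Dice similarity index used above compares two binary masks (for example, the same bone in the reference and registered scans); a minimal computation looks like this.

        import numpy as np

        def dice_similarity(mask_a, mask_b):
            """Dice similarity index between two binary masks."""
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            intersection = np.logical_and(a, b).sum()
            total = a.sum() + b.sum()
            return 2.0 * intersection / total if total else 1.0

        a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
        b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True
        print(dice_similarity(a, b))   # 0.64 for two 5x5 squares offset by one pixel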

  18. Development of algorithms for building inventory compilation through remote sensing and statistical inferencing

    NASA Astrophysics Data System (ADS)

    Sarabandi, Pooya

    Building inventories are one of the core components of disaster vulnerability and loss estimation models, and as such, play a key role in providing decision support for risk assessment, disaster management and emergency response efforts. In many parts of the world, comprehensive building inventories suitable for use in catastrophe models cannot be found. Furthermore, there are serious shortcomings in the existing building inventories, including incomplete or out-dated information on critical attributes as well as missing or erroneous values for attributes. In this dissertation a set of methodologies for updating spatial and geometric information of buildings from single and multiple high-resolution optical satellite images is presented. Basic concepts, terminologies and fundamentals of 3-D terrain modeling from satellite images are first introduced. Different sensor projection models are then presented and sources of optical noise such as lens distortions are discussed. An algorithm for extracting height and creating 3-D building models from a single high-resolution satellite image is formulated. The proposed algorithm is a semi-automated supervised method capable of extracting attributes such as longitude, latitude, height, square footage, perimeter, irregularity index, etc. The associated errors due to the interactive nature of the algorithm are quantified and solutions for minimizing the human-induced errors are proposed. The height extraction algorithm is validated against independent survey data and results are presented. The validation results show that an average height modeling accuracy of 1.5% can be achieved using this algorithm. Furthermore, the concept of cross-sensor data fusion for the purpose of 3-D scene reconstruction using quasi-stereo images is developed in this dissertation. The developed algorithm utilizes two or more single satellite images acquired from different sensors and provides the means to construct 3-D building models in a more

  19. Development of Deterministic Disaggregation Algorithm for Remotely Sensed Soil Moisture Products

    NASA Astrophysics Data System (ADS)

    Shin, Y.; Mohanty, B. P.

    2011-12-01

    Soil moisture near the land surface and in the subsurface profile is an important issue for hydrology, agronomy, and meteorology. Soil moisture data are limited in their spatial and temporal coverage. To date, mostly point-scale soil moisture measurements are available to represent regional scales. Remote sensing (RS) can be an alternative to direct measurement. However, the availability of RS datasets has a limitation due to the scale discrepancy between the RS resolution and the local scale. A number of studies have been conducted to develop downscaling/disaggregation algorithms for extracting fine-scaled soil moisture within a remote sensing product using stochastic methods. The stochastic downscaling/disaggregation schemes provide only the soil texture information and sub-area fractions contained in a RS pixel, meaning that their specific locations are not identified. Thus, we developed a deterministic disaggregation algorithm (DDA) with a genetic algorithm (GA), adapting the inverse method for extracting/searching soil textures and the specific locations of sub-pixels within a RS soil moisture product, under numerical experiments and field validations. This approach performs quite well in disaggregating/recognizing the soil textures and their specific locations within a RS soil moisture footprint compared to the results of the stochastic method. On the basis of these findings, we suggest that the DDA can be useful for improving the availability of RS products.

  20. Development of an algorithm to predict comfort of wheelchair fit based on clinical measures

    PubMed Central

    Kon, Keisuke; Hayakawa, Yasuyuki; Shimizu, Shingo; Nosaka, Toshiya; Tsuruga, Takeshi; Matsubara, Hiroyuki; Nomura, Tomohiro; Murahara, Shin; Haruna, Hirokazu; Ino, Takumi; Inagaki, Jun; Kobayashi, Toshiki

    2015-01-01

    [Purpose] The purpose of this study was to develop an algorithm to predict the comfort of a subject seated in a wheelchair, based on common clinical measurements and without depending on verbal communication. [Subjects] Twenty healthy males (mean age: 21.5 ± 2 years; height: 171 ± 4.3 cm; weight: 56 ± 12.3 kg) participated in this study. [Methods] Each experimental session lasted for 60 min. The clinical measurements were obtained under 4 conditions (good posture, with and without a cushion; bad posture, with and without a cushion). Multiple regression analysis was performed to determine the relationship between a visual analogue scale and exercise physiology parameters (respiratory and metabolic), autonomic nervous parameters (heart rate, blood pressure, and salivary amylase level), and 3D-coordinate posture parameters (good or bad posture). [Results] For the equation (algorithm) to predict the visual analogue scale score, the adjusted multiple correlation coefficient was 0.72, the residual standard deviation was 1.2, and the prediction error was 12%. [Conclusion] The algorithm developed in this study could predict the comfort of a healthy male seated in a wheelchair with 72% accuracy. PMID:26504299

  1. Development of an algorithm to predict comfort of wheelchair fit based on clinical measures.

    PubMed

    Kon, Keisuke; Hayakawa, Yasuyuki; Shimizu, Shingo; Nosaka, Toshiya; Tsuruga, Takeshi; Matsubara, Hiroyuki; Nomura, Tomohiro; Murahara, Shin; Haruna, Hirokazu; Ino, Takumi; Inagaki, Jun; Kobayashi, Toshiki

    2015-09-01

    [Purpose] The purpose of this study was to develop an algorithm to predict the comfort of a subject seated in a wheelchair, based on common clinical measurements and without depending on verbal communication. [Subjects] Twenty healthy males (mean age: 21.5 ± 2 years; height: 171 ± 4.3 cm; weight: 56 ± 12.3 kg) participated in this study. [Methods] Each experimental session lasted for 60 min. The clinical measurements were obtained under 4 conditions (good posture, with and without a cushion; bad posture, with and without a cushion). Multiple regression analysis was performed to determine the relationship between a visual analogue scale and exercise physiology parameters (respiratory and metabolic), autonomic nervous parameters (heart rate, blood pressure, and salivary amylase level), and 3D-coordinate posture parameters (good or bad posture). [Results] For the equation (algorithm) to predict the visual analogue scale score, the adjusted multiple correlation coefficient was 0.72, the residual standard deviation was 1.2, and the prediction error was 12%. [Conclusion] The algorithm developed in this study could predict the comfort of a healthy male seated in a wheelchair with 72% accuracy. PMID:26504299
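
    A minimal sketch of the regression step on synthetic data: the predictors, their values, and the resulting coefficients are invented for illustration, and only the model form (ordinary least squares with an adjusted coefficient of determination) mirrors the study.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 40
        # Hypothetical predictors: heart rate, systolic blood pressure, salivary
        # amylase, and a posture flag (1 = good, 0 = bad). All values are synthetic.
        X = np.column_stack([
            rng.normal(70, 8, n),
            rng.normal(118, 10, n),
            rng.normal(30, 12, n),
            rng.integers(0, 2, n),
        ])
        vas = 5 + 0.03 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 1, n)   # synthetic comfort score

        A = np.column_stack([np.ones(n), X])              # add an intercept column
        coef, *_ = np.linalg.lstsq(A, vas, rcond=None)    # ordinary least squares fit
        pred = A @ coef
        ss_res = np.sum((vas - pred) ** 2)
        ss_tot = np.sum((vas - vas.mean()) ** 2)
        r2 = 1 - ss_res / ss_tot
        adj_r2 = 1 - (1 - r2) * (n - 1) / (n - A.shape[1])
        print(coef, adj_r2)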

  2. Development of Analytical Algorithm for the Performance Analysis of Power Train System of an Electric Vehicle

    NASA Astrophysics Data System (ADS)

    Kim, Chul-Ho; Lee, Kee-Man; Lee, Sang-Heon

    Power train system design is one of the key R&D areas in the development process of a new automobile, because an optimum engine size with an adaptable power transmission that can meet the design requirements of the new vehicle can be obtained through the system design. Especially for electric vehicle design, a very reliable design algorithm for the power train system is required for energy efficiency. In this study, an analytical simulation algorithm is developed to estimate the driving performance of a designed power train system of an electric vehicle. The principal theory behind the simulation algorithm is conservation of energy, combined with several analytical and experimental data such as rolling resistance, aerodynamic drag, and the mechanical efficiency of the power transmission. From the analytical calculation results, the running resistance of a designed vehicle is obtained as the operating conditions of the vehicle, such as road inclination angle and vehicle speed, change. The tractive performance of the model vehicle with a given power train system is also calculated at each gear ratio of the transmission. Through analysis of these two calculation results, running resistance and tractive performance, the driving performance of a designed electric vehicle is estimated and used to evaluate the suitability of the designed power train system for the vehicle.
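
    The energy-balance terms named above are standard road-load relations; the sketch below uses placeholder vehicle parameters (mass, rolling-resistance coefficient, drag coefficient, frontal area, drivetrain efficiency are illustrative, not values from the study).

        import math

        def running_resistance_n(mass_kg, speed_mps, grade_rad,
                                 c_rr=0.012, c_d=0.30, frontal_area_m2=2.2, rho=1.2):
            """Total road load (N): rolling + aerodynamic + gradient resistance."""
            g = 9.81
            rolling = c_rr * mass_kg * g * math.cos(grade_rad)
            aero = 0.5 * rho * c_d * frontal_area_m2 * speed_mps ** 2
            grade = mass_kg * g * math.sin(grade_rad)
            return rolling + aero + grade

        def tractive_force_n(motor_torque_nm, gear_ratio, wheel_radius_m, drivetrain_eff=0.9):
            """Force at the wheels for a given motor torque and overall gear ratio."""
            return motor_torque_nm * gear_ratio * drivetrain_eff / wheel_radius_m

        # The vehicle holds a given speed on a given grade when the tractive force
        # at the selected gear ratio is at least the running resistance.
        print(running_resistance_n(mass_kg=1500, speed_mps=27.8, grade_rad=math.radians(2)))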

  3. Hybrid Neural-Network: Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics Developed and Demonstrated

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2002-01-01

    As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.

  4. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Conboy, Barbara (Technical Monitor)

    1999-01-01

    This separation has been logical thus far; however, as launch of AM-1 approaches, it must be recognized that many of these activities will shift emphasis from algorithm development to validation. For example, the second, third, and fifth bullets will become almost totally validation-focussed activities in the post-launch era, providing the core of our experimental validation effort. Work under the first bullet will continue into the post-launch time frame, driven in part by algorithm deficiencies revealed as a result of validation activities. Prior to the start of the 1999 fiscal year (FY99) we were requested to prepare a brief plan for our FY99 activities. This plan is included as Appendix 1. The present report describes the progress made on our planned activities.

  5. Development of a block Lanczos algorithm for free vibration analysis of spinning structures

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.; Lawson, C. L.

    1988-01-01

    This paper is concerned with the development of an efficient eigenproblem solution algorithm and an associated computer program for the economical solution of the free vibration problem of complex practical spinning structural systems. Thus, a detailed description of a newly developed block Lanczos procedure is presented in this paper that employs only real numbers in all relevant computations and also fully exploits sparsity of associated matrices. The procedure is capable of computing multiple roots and proves to be most efficient compared to other existing similar techniques.
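
    For context, the standard symmetric block Lanczos recurrence that such a solver builds on is shown below, with Q_j denoting n-by-p blocks with orthonormal columns; the paper adapts this machinery to the spinning-structure eigenproblem using only real arithmetic and sparse matrices.

        A\,Q_j = Q_{j-1}\,B_{j-1}^{T} + Q_j\,A_j + Q_{j+1}\,B_j, \qquad A_j = Q_j^{T} A\,Q_j

    where Q_{j+1} B_j is the QR factorization of the residual R_j = A Q_j - Q_j A_j - Q_{j-1} B_{j-1}^{T}. The eigenvalues of the block-tridiagonal matrix assembled from the A_j (diagonal) and B_j (off-diagonal) blocks approximate those of A, and the block structure is what allows multiple and clustered roots to be captured.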

  6. Status of GCOM-W1/AMSR2 development, algorithms, and products

    NASA Astrophysics Data System (ADS)

    Maeda, Takashi; Imaoka, Keiji; Kachi, Misako; Fujii, Hideyuki; Shibata, Akira; Naoki, Kazuhiro; Kasahara, Marehito; Ito, Norimasa; Nakagawa, Keizo; Oki, Taikan

    2011-11-01

    The Global Change Observation Mission (GCOM) consists of two polar orbiting satellite observing systems, GCOM-W (Water) and GCOM-C (Climate), and three generations to achieve global and long-term monitoring of the Earth. GCOM-W1 is the first satellite of the GCOM-W series and is scheduled to be launched in Japanese fiscal year 2011. The Advanced Microwave Scanning Radiometer-2 (AMSR2) will be the mission instrument of GCOM-W1. AMSR2 will extend the observations of the currently ongoing AMSR-E on the EOS Aqua platform. Development of GCOM-W1 and AMSR2 is progressing on schedule. Proto-flight testing (PFT) of AMSR2 was completed and the instrument was delivered to the GCOM-W1 satellite system. Currently, the GCOM-W1 system is under PFT at Tsukuba Space Center until summer 2011, before shipment to the launch site, Tanegashima Space Center. Development of retrieval algorithms has also been progressing with the collaboration of the principal investigators. Based on the algorithm comparison results, at-launch standard algorithms were selected and implemented into the processing system. These algorithms will be validated and updated during the initial calibration and validation phase. As an instrument calibration activity, a deep space calibration maneuver is planned during the initial checkout phase to confirm the consistency of cold sky calibration and intra-scan biases. Maintaining and expanding the validation sites are also ongoing activities. A flux tower with observing instruments will be installed in the Murray-Darling basin in Australia, where validation of other soil moisture instruments (e.g., SMOS and SMAP) is planned.

  7. Developments of global greenhouse gas retrieval algorithm based on Optimal Estimation Method

    NASA Astrophysics Data System (ADS)

    Kim, W. V.; Kim, J.; Lee, H.; Jung, Y.; Boesch, H.

    2013-12-01

    After the industrial revolution, the atmospheric carbon dioxide concentration increased drastically over the last 250 years. It is still increasing, and more than 400 ppm of carbon dioxide was measured at the Mauna Loa observatory for the first time, a value considered an important milestone. Therefore, understanding the sources, emission, transport and sinks of global carbon dioxide is unprecedentedly important. Currently, the Total Carbon Column Observing Network (TCCON) is operated to observe CO2 concentrations with ground-based instruments. However, the number of sites is small and they are concentrated in Europe and North America. Remote sensing of CO2 could compensate for those limitations. The Greenhouse Gases Observing SATellite (GOSAT), which was launched in 2009, is measuring the column density of CO2, and other satellites are planned for launch in a few years. GOSAT provides valuable measurement data, but its low spatial resolution and the poor success rate of retrieval due to aerosol and cloud contamination force the results to cover less than half of the globe. To improve data availability, accurate aerosol information is necessary, especially for the East Asia region, where the aerosol concentration is higher than in other regions. As a first step, we are developing a CO2 retrieval algorithm based on the optimal estimation method with VLIDORT, a vector discrete-ordinate radiative transfer model. A prototype algorithm, developed by testing various combinations of state vectors to find the best combination, shows appropriate results and good agreement with TCCON measurements. To reduce the calculation cost, low-stream interpolation is applied to the model simulation, and the simulation time is drastically reduced. In further work, the GOSAT CO2 retrieval algorithm will be combined with the accurate GOSAT-CAI aerosol retrieval algorithm to obtain more accurate results, especially for East Asia.
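
    The iteration at the core of an optimal-estimation retrieval of this kind is the standard Gauss-Newton form (after Rodgers); the specific state vector, prior, and forward model (here VLIDORT) are what the algorithm development described above defines.

        \hat{x}_{i+1} = x_a + \left(K_i^{T} S_\epsilon^{-1} K_i + S_a^{-1}\right)^{-1}
                        K_i^{T} S_\epsilon^{-1}\left[\,y - F(x_i) + K_i\,(x_i - x_a)\,\right]

    where x is the state vector (for example the CO2 column or profile plus auxiliary parameters), x_a and S_a are the prior state and its covariance, y the measured radiances, F the forward model, K_i the Jacobian evaluated at x_i, and S_\epsilon the measurement error covariance.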

  8. The development of line-scan image recognition algorithms for the detection of frass on mature tomatoes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this research, a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at two wavebands, 664 nm and 690 nm, for co...

  9. Algorithm development and verification of UASCM for multi-dimension and multi-group neutron kinetics model

    SciTech Connect

    Si, S.

    2012-07-01

    The Universal Algorithm of the Stiffness Confinement Method (UASCM) has been developed for multi-dimensional, multi-group neutron kinetics models based on either transport or diffusion equations. Numerical experiments with the transport theory code MGSNM and the diffusion theory code MGNEM have demonstrated that the algorithm has sufficient accuracy and stability. (authors)

  10. Development and evaluation of collision warning/collision avoidance algorithms using an errable driver model

    NASA Astrophysics Data System (ADS)

    Yang, Hsin-Hsiang; Peng, Huei

    2010-12-01

    Collision warning/collision avoidance (CW/CA) systems must be designed to work seamlessly with a human driver, providing warning or control actions when the driver's response (or lack of one) is deemed inappropriate. The effectiveness of CW/CA systems working with a human driver needs to be evaluated thoroughly because of legal/liability and other (e.g. traffic flow) concerns. CW/CA systems tuned only under open-loop manoeuvres have frequently been found to work unsatisfactorily with a human in the loop. However, tuning CW/CA systems with human drivers present is slow and non-repeatable. Driver models, if constructed and used properly, can capture human/control interactions and accelerate the CW/CA development process. Design and evaluation methods for CW/CA algorithms can be categorised into three approaches: scenario-based, performance-based and human-centred. The strengths and weaknesses of these approaches are discussed in this paper, and a humanised errable driver model is introduced to improve the development process. The errable driver model used in this paper emulates a human driver's functions and can generate both nominal (error-free) and devious (with error) behaviours. The car-following data used for developing and validating the model were obtained from a large-scale naturalistic driving database. Three error-inducing behaviours were introduced: human perceptual limitation, time delay and distraction. By including these error-inducing behaviours, rear-end collisions with a lead vehicle were found to occur at a probability similar to traffic accident statistics in the USA. This driver model is then used to evaluate the performance of several existing CW/CA algorithms. Finally, a new CW/CA algorithm is developed based on this errable driver model.
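    The abstract does not spell out the warning logic of the CW/CA algorithms being evaluated. As a point of reference, the sketch below shows a common time-to-collision/headway warning rule of the kind often used as a CW baseline; it is a generic illustration with purely assumed threshold values, not the paper's algorithm.

```python
def collision_warning(range_m, range_rate_mps, ego_speed_mps,
                      ttc_threshold_s=3.0, headway_threshold_s=0.8):
    """Baseline collision-warning rule (illustrative only).

    range_m        : distance to the lead vehicle [m]
    range_rate_mps : rate of change of the gap, negative when closing [m/s]
    ego_speed_mps  : speed of the following vehicle [m/s]
    Returns True when a warning should be issued.
    """
    # Time to collision is only defined while the gap is closing
    ttc = range_m / -range_rate_mps if range_rate_mps < 0 else float("inf")
    headway = range_m / max(ego_speed_mps, 0.1)
    return ttc < ttc_threshold_s or headway < headway_threshold_s

print(collision_warning(range_m=20.0, range_rate_mps=-8.0, ego_speed_mps=25.0))  # True: TTC = 2.5 s
print(collision_warning(range_m=60.0, range_rate_mps=-5.0, ego_speed_mps=25.0))  # False: TTC = 12 s, headway = 2.4 s
```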

  11. GLASS daytime all-wave net radiation product: Algorithm development and preliminary validation

    DOE PAGESBeta

    Jiang, Bo; Liang, Shunlin; Ma, Han; Zhang, Xiaotong; Xiao, Zhiqiang; Zhao, Xiang; Jia, Kun; Yao, Yunjun; Jia, Aolin

    2016-03-09

    Mapping surface all-wave net radiation (Rn) is critically needed for various applications. Several existing Rn products from numerical models and satellite observations have coarse spatial resolutions and their accuracies may not meet the requirements of land applications. In this study, we develop the Global LAnd Surface Satellite (GLASS) daytime Rn product at a 5 km spatial resolution. Its algorithm for converting shortwave radiation to all-wave net radiation using the Multivariate Adaptive Regression Splines (MARS) model is determined after comparison with three other algorithms. The validation of the GLASS Rn product based on high-quality in situ measurements in the United States shows a coefficient of determination value of 0.879, an average root mean square error value of 31.61 W m-2, and an average bias of 17.59 W m-2. Furthermore, we also compare our product/algorithm with another satellite product (CERES-SYN) and two reanalysis products (MERRA and JRA55), and find that the accuracy of the much higher spatial resolution GLASS Rn product is satisfactory. The GLASS Rn product from 2000 to the present is operational and freely available to the public.

  12. Development of algorithms for understanding the temporal and spatial variability of the earth's radiation balance

    NASA Technical Reports Server (NTRS)

    Brooks, D. R.; Harrison, E. F.; Minnis, P.; Suttles, J. T.; Kandel, R. S.

    1986-01-01

    A brief description is given of how temporal and spatial variability in the earth's radiative behavior influences the goals of satellite radiation monitoring systems and how some previous systems have addressed the existing problems. Then, results of some simulations of radiation budget monitoring missions are presented. These studies led to the design of the Earth Radiation Budget Experiment (ERBE). A description is given of the temporal and spatial averaging algorithms developed for the ERBE data analysis. These algorithms are intended primarily to produce monthly averages of the net radiant exitance on regional, zonal, and global scales and to provide insight into the regional diurnal variability of radiative parameters such as albedo and long-wave radiant exitance. The algorithms are applied to scanner and nonscanner data for up to three satellites. Modeling of daily shortwave albedo and radiant exitance with satellite sampling that is insufficient to fully account for changing meteorology is discussed in detail. Studies performed during the ERBE mission and software design are reviewed. These studies provide quantitative estimates of the effects of temporally sparse and biased sampling on inferred diurnal and regional radiative parameters. Other topics covered include long-wave diurnal modeling, extraction of a regional monthly net clear-sky radiation budget, the statistical significance of observed diurnal variability, quality control of the analysis, and proposals for validating the results of ERBE time and space averaging.

  13. Development of an Algorithm Suite for MODIS and VIIRS Cloud Data Record Continuity

    NASA Astrophysics Data System (ADS)

    Platnick, S. E.; Holz, R.; Heidinger, A. K.; Ackerman, S. A.; Meyer, K.; Frey, R.; Wind, G.; Amarasinghe, N.

    2014-12-01

    The launch of Suomi NPP in the fall of 2011 began the next generation of the U.S. operational polar orbiting environmental observations. Similar to MODIS, the VIIRS imager provides visible through IR observations at moderate spatial resolution with a 1330 LT equatorial crossing consistent with MODIS on the Aqua platform. However, unlike MODIS, VIIRS lacks key water vapor and CO2 absorbing channels used by the MODIS cloud algorithms for high cloud detection and cloud-top property retrievals (including emissivity), as well as multilayer cloud detection. In addition, there is a significant change in the spectral location of the 2.1 μm shortwave-infrared channel used by MODIS for cloud microphysical retrievals. The climate science community will face an interruption in the continuity of key global cloud data sets once the NASA EOS Terra and Aqua sensors cease operation. Given the instrument differences between MODIS EOS and VIIRS S-NPP/JPSS, we discuss methods for merging the 14+ year MODIS observational record with VIIRS/CrIS observations in order to generate cloud climate data record continuity across the observing systems. The main approach used by our team was to develop a cloud retrieval algorithm suite that is applied only to the common MODIS and VIIRS spectral channels. The suite uses heritage algorithms that produce the existing MODIS cloud mask (MOD35), MODIS cloud optical and microphysical properties (MOD06), and NOAA AWG/CLAVR-x cloud-top property products. Global monthly results from this hybrid algorithm suite (referred to as MODAWG) will be shown. Collocated CALIPSO comparisons will be shown that can independently evaluate inter-instrument product consistency for a subset of the MODAWG datasets.

  14. Development of a new time domain-based algorithm for train detection and axle counting

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Meli, E.; Pugi, L.

    2015-12-01

    This paper presents an innovative train detection algorithm, able to perform train localisation and, at the same time, to estimate the train's speed, its crossing times at a fixed point of the track and its axle count. The proposed solution uses the same approach to evaluate all these quantities, starting from the knowledge of generic track inputs directly measured on the track (for example, the vertical forces on the sleepers, the rail deformation and the rail stress). More particularly, all the inputs are processed through cross-correlation operations to extract the required information in terms of speed, crossing time instants and axle count. This approach has the advantage of being simpler and less invasive than standard approaches (it requires less equipment) and represents a more reliable and robust solution against numerical noise because it exploits the whole shape of the input signal and not only the peak values. A suitable and accurate multibody model of a railway vehicle and flexible track has also been developed by the authors to test the algorithm when experimental data are not available and, in general, under any operating conditions (fundamental to verifying the algorithm's accuracy and robustness). The railway vehicle chosen as benchmark is the Manchester Wagon, modelled in the Adams VI-Rail environment. The physical model of the flexible track has been implemented in the Matlab and Comsol Multiphysics environments. A simulation campaign has been performed to verify the performance and the robustness of the proposed algorithm, and the results are quite promising. The research has been carried out in cooperation with Ansaldo STS and ECM Spa.
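    The core operation described above is a cross-correlation between signals measured at two points along the track. The sketch below illustrates how the lag of the cross-correlation peak yields the train speed; the sensor spacing, sampling rate and synthetic pulse signals are assumptions for demonstration, not data from the paper.

```python
import numpy as np

def estimate_speed(sig_a, sig_b, sensor_spacing_m, fs_hz):
    """Estimate train speed from the cross-correlation lag between two track sensors."""
    corr = np.correlate(sig_b - sig_b.mean(), sig_a - sig_a.mean(), mode="full")
    lag_samples = corr.argmax() - (len(sig_a) - 1)   # positive when sensor B lags sensor A
    return sensor_spacing_m / (lag_samples / fs_hz)

# Synthetic example: the same axle pulse reaches sensor B 0.5 s after sensor A
fs = 1000.0
t = np.arange(0, 5, 1 / fs)
pulse_a = np.exp(-((t - 1.0) / 0.05) ** 2)           # pulse seen at sensor A
pulse_b = np.exp(-((t - 1.5) / 0.05) ** 2)           # same pulse, delayed, at sensor B
print(estimate_speed(pulse_a, pulse_b, sensor_spacing_m=10.0, fs_hz=fs))  # approximately 20 m/s
```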

  15. Development of Bio-Optical Algorithms for Geostationary Ocean Color Imager

    NASA Astrophysics Data System (ADS)

    Ryu, J.; Moon, J.; Min, J.; Palanisamy, S.; Han, H.; Ahn, Y.

    2007-12-01

    GOCI, the first Geostationary Ocean Color Imager, will be operated in a staring-frame capture mode onboard the Communication, Ocean and Meteorological Satellite (COMS), tentatively scheduled for launch in 2008. The mission concept includes eight visible-to-near-infrared bands, 0.5 km pixel resolution, and a coverage region of 2,500 × 2,500 km centered on Korea. GOCI is expected to provide SeaWiFS-quality observations for a single study area with an imaging interval of 1 hour from 10 am to 5 pm. Within the GOCI swath area, the optical properties of the East Sea (typical of Case-I water) and of the Yellow Sea and East China Sea (typical of Case-II water) are investigated. To develop the GOCI bio-optical algorithms for these optically more complex waters, it is necessary to study and understand the optical properties around the Korean Sea. Radiometric measurements were made using WETLabs AC-S, TriOS RAMSES ACC/ARC, and ASD FieldSpec Pro Dual VNIR Spectroradiometer instruments. Seawater samples were collected concurrently with the radiometric measurements at about 300 points around the Korean Sea from 1998 to 2007. The absorption coefficients were determined using a Perkin-Elmer Lambda 19 dual-beam spectrophotometer. We analyzed the absorption coefficients of seawater constituents such as phytoplankton, Suspended Sediment (SS) and Dissolved Organic Matter (DOM). Two kinds of chlorophyll algorithms were developed, using statistical regression and a fluorescence-based technique, considering the bio-optical properties of Case-II waters. Fluorescence measurements were related to in situ Chl-a concentrations to obtain the Flu(681), Flu(688) and Flu(area) algorithms, which were compared with those from standard spectral ratios of the remote sensing reflectance. The single-band algorithm for suspended sediment is derived from the relationship between Rrs(555) and in situ SS concentration. CDOM is estimated from its absorption spectrum and the spectral slope centered at the 440 nm wavelength. These standard algorithms will be
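    The abstract refers to empirical chlorophyll algorithms built from spectral band ratios of remote sensing reflectance. The snippet below shows the generic form of such a band-ratio polynomial regression; the band choice and the coefficient values are placeholders that would normally be fitted to in situ match-ups, not the GOCI coefficients.

```python
import numpy as np

def chl_band_ratio(rrs_blue, rrs_green, coeffs=(0.3, -2.5, 1.2, -0.8)):
    """Generic empirical band-ratio chlorophyll algorithm (illustrative coefficients).

    Chl-a = 10 ** (a0 + a1*X + a2*X**2 + a3*X**3), with X = log10(Rrs(blue)/Rrs(green)).
    """
    x = np.log10(np.asarray(rrs_blue) / np.asarray(rrs_green))
    log_chl = sum(c * x ** i for i, c in enumerate(coeffs))
    return 10.0 ** log_chl

# In practice the coefficients are obtained by regressing log10(Chl-a) from water
# samples against log10 band ratios measured at the same stations.
print(chl_band_ratio(0.004, 0.006))
```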

  16. Development of an algorithm for automatic detection and rating of squeak and rattle events

    NASA Astrophysics Data System (ADS)

    Chandrika, Unnikrishnan Kuttan; Kim, Jay H.

    2010-10-01

    A new algorithm for automatic detection and rating of squeak and rattle (S&R) events was developed. The algorithm utilizes the perceived transient loudness (PTL), which approximates the human perception of a transient noise. At first, instantaneous specific loudness time histories are calculated over the 1-24 bark range by applying the analytic wavelet transform and Zwicker loudness transform to the recorded noise. Transient specific loudness time histories are then obtained by removing estimated contributions of the background noise from the instantaneous specific loudness time histories. These transient specific loudness time histories are summed to obtain the transient loudness time history. Finally, the PTL time history is obtained by applying Glasberg and Moore temporal integration to the transient loudness time history. Detection of S&R events utilizes the PTL time history obtained by summing only the 18-24 bark components, to take advantage of the high signal-to-noise ratio in the high frequency range. An S&R event is identified when the value of the PTL time history exceeds the detection threshold pre-determined by a jury test. The maximum value of the PTL time history is used for rating of S&R events. Another jury test showed that the method performs much better if the PTL time history obtained by summing all frequency components is used. Therefore, rating of S&R events utilizes this modified PTL time history. Two additional jury tests were conducted to validate the developed detection and rating methods. The algorithm developed in this work will enable automatic detection and rating of S&R events with good accuracy and minimal possibility of false alarm.

  17. Development of a Smart Release Algorithm for Mid-Air Separation of Parachute Test Articles

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is currently developing an autonomous method to separate a capsule-shaped parachute test vehicle from an air-drop platform for use in the test program to develop and validate the parachute system for the Orion spacecraft. The CPAS project seeks to perform air-drop tests of an Orion-like boilerplate capsule. Delivery of the boilerplate capsule to the test condition has proven to be a critical and complicated task. In the current concept, the boilerplate vehicle is extracted from an aircraft on top of a Type V pallet and then separated from the pallet in mid-air. The attitude of the vehicles at separation is critical to avoiding re-contact and successfully deploying the boilerplate into a heatshield-down orientation. Neither the pallet nor the boilerplate has an active control system. However, the attitude of the mated vehicle as a function of time is somewhat predictable. CPAS engineers have designed an avionics system to monitor the attitude of the mated vehicle as it is extracted from the aircraft and command a release when the desired conditions are met. The algorithm includes contingency capabilities designed to release the test vehicle before undesirable orientations occur. The algorithm was verified with simulation and ground testing. The pre-flight development and testing is discussed and limitations of ground testing are noted. The CPAS project performed a series of three drop tests as a proof-of-concept of the release technique. These tests helped to refine the attitude instrumentation and software algorithm to be used on future tests. The drop tests are described in detail and the evolution of the release system with each test is described.

  18. Development of Algorithms and Error Analyses for the Short Baseline Lightning Detection and Ranging System

    NASA Technical Reports Server (NTRS)

    Starr, Stanley O.

    1998-01-01

    NASA, at the John F. Kennedy Space Center (KSC), developed and operates a unique high-precision lightning location system to provide lightning-related weather warnings. These warnings are used to stop lightning-sensitive operations such as space vehicle launches and ground operations where equipment and personnel are at risk. The data are provided to the Range Weather Operations (45th Weather Squadron, U.S. Air Force), where they are used with other meteorological data to issue weather advisories and warnings for Cape Canaveral Air Station and KSC operations. This system, called Lightning Detection and Ranging (LDAR), provides users with a graphical display in three dimensions of 66 megahertz radio frequency events generated by lightning processes. The locations of these events provide a sound basis for the prediction of lightning hazards. This document provides the basis for the design approach and data analysis for a system of radio frequency receivers to provide azimuth and elevation data for lightning pulses detected simultaneously by the LDAR system. The intent is for this direction-finding system to correct and augment the data provided by LDAR and, thereby, increase the rate of valid data and to correct or discard any invalid data. This document develops the necessary equations and algorithms, identifies sources of systematic errors and means to correct them, and analyzes the algorithms for random error. This data analysis approach is not found in the existing literature and was developed to facilitate the operation of this Short Baseline LDAR (SBLDAR). These algorithms may also be useful for other direction-finding systems using radio pulses or ultrasonic pulse data.

  19. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    PubMed Central

    2010-01-01

    Background Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a starting point for modelers to

  20. Development of model-based fault diagnosis algorithms for MASCOTTE cryogenic test bench

    NASA Astrophysics Data System (ADS)

    Iannetti, A.; Marzat, J.; Piet-Lahanier, H.; Ordonneau, G.; Vingert, L.

    2014-12-01

    This article describes the ongoing results of a fault diagnosis benchmark for a cryogenic rocket engine demonstrator. The benchmark consists of applying classical model-based fault diagnosis methods to monitor the status of the cooling circuit of the MASCOTTE cryogenic bench. The algorithms developed are validated on real data from the last 2014 firing campaign (ATAC campaign). The objective of this demonstration is to find practical diagnosis alternatives to classical redlines, providing more flexible means of data exploitation in real time and for post-processing.
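    Model-based fault diagnosis of the kind referred to here typically compares measured signals with a model prediction and tests the residual against a threshold. The sketch below illustrates that generic residual test; the model, measurement record, noise level and threshold are all assumptions for demonstration, not the MASCOTTE implementation.

```python
import numpy as np

def residual_fault_test(measured, predicted, sigma_nominal, threshold_sigma=5.0):
    """Flag samples whose model residual exceeds a multiple of the nominal sensor noise."""
    residual = np.asarray(measured) - np.asarray(predicted)
    return np.abs(residual) > threshold_sigma * sigma_nominal

# Toy cooling-circuit record: a 2 K bias appears at sample 120 (hypothetical numbers)
t = np.arange(200)
model_temp = 90.0 + 0.01 * t                                   # nominal model prediction [K]
sensor_temp = model_temp + np.random.default_rng(1).normal(0, 0.05, t.size)
sensor_temp[120:] += 2.0                                       # injected fault
flags = residual_fault_test(sensor_temp, model_temp, sigma_nominal=0.05)
print("samples flagged:", int(flags.sum()), "- first at index", int(np.argmax(flags)))
```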

  1. Multidisciplinary Design, Analysis, and Optimization Tool Development using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2008-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center to automate analysis and design process by leveraging existing tools such as NASTRAN, ZAERO and CFD codes to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but faces many challenges in large-scale, real-world application. This paper describes current approaches, recent results, and challenges for MDAO as demonstrated by our experience with the Ikhana fire pod design.
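    As a point of reference for the genetic-algorithm machinery described above, the sketch below shows a minimal real-coded GA (tournament selection, blend crossover, Gaussian mutation) minimising a toy objective. It is a generic illustration in Python, not the NASA MDAO tool, and every hyperparameter is an arbitrary assumption.

```python
import random

def genetic_minimize(objective, n_vars, bounds=(-5.0, 5.0), pop_size=40,
                     generations=100, mutation_sigma=0.2, seed=0):
    """Minimal real-coded genetic algorithm for unconstrained minimisation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = sorted(pop, key=objective)[:2]          # elitism: keep the two best designs
        while len(new_pop) < pop_size:
            # Tournament selection of two parents
            p1 = min(rng.sample(pop, 3), key=objective)
            p2 = min(rng.sample(pop, 3), key=objective)
            # Blend crossover followed by Gaussian mutation, clipped to the bounds
            child = [min(hi, max(lo, (a + b) / 2 + rng.gauss(0, mutation_sigma)))
                     for a, b in zip(p1, p2)]
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=objective)

# Toy objective: sphere function, whose minimum is at the origin
best = genetic_minimize(lambda x: sum(v * v for v in x), n_vars=3)
print([round(v, 3) for v in best])
```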

  2. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate analysis and design process by leveraging existing tools to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.

  3. Development and evaluation of a predictive algorithm for telerobotic task complexity

    NASA Technical Reports Server (NTRS)

    Gernhardt, M. L.; Hunter, R. C.; Hedgecock, J. C.; Stephenson, A. G.

    1993-01-01

    There is a wide range of complexity in the various telerobotic servicing tasks performed in subsea, space, and hazardous material handling environments. Experience with telerobotic servicing has evolved into a knowledge base used to design tasks to be 'telerobot friendly.' This knowledge base generally resides in a small group of people. Written documentation and requirements are limited in conveying this knowledge base to serviceable equipment designers and are subject to misinterpretation. A mathematical model of task complexity based on measurable task parameters and telerobot performance characteristics would be a valuable tool to designers and operational planners. Oceaneering Space Systems and TRW have performed an independent research and development project to develop such a tool for telerobotic orbital replacement unit (ORU) exchange. This algorithm was developed to predict an ORU exchange degree of difficulty rating (based on the Cooper-Harper rating used to assess piloted operations). It is based on measurable parameters of the ORU, attachment receptacle and quantifiable telerobotic performance characteristics (e.g., link length, joint ranges, positional accuracy, tool lengths, number of cameras, and locations). The resulting algorithm can be used to predict task complexity as the ORU parameters, receptacle parameters, and telerobotic characteristics are varied.

  4. Developing a data element repository to support EHR-driven phenotype algorithm authoring and execution.

    PubMed

    Jiang, Guoqian; Kiefer, Richard C; Rasmussen, Luke V; Solbrig, Harold R; Mo, Huan; Pacheco, Jennifer A; Xu, Jie; Montague, Enid; Thompson, William K; Denny, Joshua C; Chute, Christopher G; Pathak, Jyotishman

    2016-08-01

    The Quality Data Model (QDM) is an information model developed by the National Quality Forum for representing electronic health record (EHR)-based electronic clinical quality measures (eCQMs). In conjunction with the HL7 Health Quality Measures Format (HQMF), QDM contains core elements that make it a promising model for representing EHR-driven phenotype algorithms for clinical research. However, the current QDM specification is available only as descriptive documents suitable for human readability and interpretation, but not for machine consumption. The objective of the present study is to develop and evaluate a data element repository (DER) for providing machine-readable QDM data element service APIs to support phenotype algorithm authoring and execution. We used the ISO/IEC 11179 metadata standard to capture the structure for each data element, and leveraged Semantic Web technologies to facilitate semantic representation of these metadata. We observed there are a number of underspecified areas in the QDM, including the lack of model constraints and pre-defined value sets. We propose a harmonization with the models developed in HL7 Fast Healthcare Interoperability Resources (FHIR) and Clinical Information Modeling Initiatives (CIMI) to enhance the QDM specification and enable the extensibility and better coverage of the DER. We also compared the DER with the existing QDM implementation utilized within the Measure Authoring Tool (MAT) to demonstrate the scalability and extensibility of our DER-based approach. PMID:27392645

  5. Estimating aquifer recharge in Mission River watershed, Texas: model development and calibration using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Uddameri, V.; Kuchanur, M.

    2007-01-01

    Soil moisture balance studies provide a convenient approach to estimate aquifer recharge when only limited site-specific data are available. A monthly mass-balance approach has been utilized in this study to estimate recharge in a small watershed in the coastal bend of South Texas. The developed lumped parameter model employs four adjustable parameters to calibrate model predicted stream runoff to observations at a gaging station. A new procedure was developed to correctly capture the intermittent nature of rainfall. The total monthly rainfall was assigned to a single-equivalent storm whose duration was obtained via calibration. A total of four calibrations were carried out using an evolutionary computing technique called genetic algorithms as well as the conventional gradient descent (GD) technique. Ordinary least squares and the heteroscedastic maximum likelihood error (HMLE) based objective functions were evaluated as part of this study as well. While the genetic algorithm based calibrations were relatively better in capturing the peak runoff events, the GD based calibration did slightly better in capturing the low flow events. Treating the Box-Cox exponent in the HMLE function as a calibration parameter did not yield better estimates and the study corroborates the suggestion made in the literature of fixing this exponent at 0.3. The model outputs were compared against available information and results indicate that the developed modeling approach provides a conservative estimate of recharge.

  6. Development of a Spatially-Selective, Nonlinear Refinement Algorithm for Thermal-Hydraulic Safety Analysis

    NASA Astrophysics Data System (ADS)

    Lloyd, Lewis John

    This work focused on developing a novel method for solving the nonlinear partial differential equations associated with thermal-hydraulic safety analysis software. Traditional methods involve solving large systems of nonlinear equations. One class of methods linearizes the nonlinear equations and attempts to minimize the nonlinear truncation error with timestep size selection. These linearized methods are characterized by low computational cost but reduced accuracy. Another class resolves those nonlinearities by using an iterative nonlinear refinement technique. However, these iterative methods are computationally expensive when multiple iterates are required to resolve the nonlinearities. These two paradigms stand at the opposite ends of a spectrum, and the middle ground had yet to be investigated. This research sought to find that middle ground, a balance between the competing incentives of computational cost and accuracy, by creating a hybrid method: a spatially-selective, nonlinear refinement (SNR) algorithm. As part of this work, the two-phase, three-field software COBRA was converted from a linearized semi-implicit solver to a nonlinearly convergent solver; an operator-based scaling that provides a physically meaningful convergence measure was developed and implemented; and the SNR algorithm was developed to enable a subdomain of the simulation to be subjected to multiple nonlinear iterates while maintaining global consistency. By selecting those areas of the computational domain where nonlinearities are expected to be high and subjecting only them to multiple nonlinear iterations, the accuracy of the nonlinear solver may be obtained without its associated computational cost.

  7. Development of Elevation and Relief Databases for ICESat-2/ATLAS Receiver Algorithms

    NASA Astrophysics Data System (ADS)

    Leigh, H. W.; Magruder, L. A.; Carabajal, C. C.; Saba, J. L.; Urban, T. J.; Mcgarry, J.; Schutz, B. E.

    2013-12-01

    The Advanced Topographic Laser Altimeter System (ATLAS) is planned to launch onboard NASA's ICESat-2 spacecraft in 2016. ATLAS operates at a wavelength of 532 nm with a laser repeat rate of 10 kHz and 6 individual laser footprints. The satellite will be in a 500 km, 91-day repeat ground track orbit at an inclination of 92°. A set of onboard Receiver Algorithms has been developed to reduce the data volume and data rate to acceptable levels while still transmitting the relevant ranging data. The onboard algorithms limit the data volume by distinguishing between surface returns and background noise and selecting a small vertical region around the surface return to be included in telemetry. The algorithms make use of signal processing techniques, along with three databases, the Digital Elevation Model (DEM), the Digital Relief Map (DRM), and the Surface Reference Mask (SRM), to find the signal and determine the appropriate dynamic range of vertical data surrounding the surface for downlink. The DEM provides software-based range gating for ATLAS. This approach allows the algorithm to limit the surface signal search to the vertical region between minimum and maximum elevations provided by the DEM (plus some margin to account for uncertainties). The DEM is constructed in a nested, three-tiered grid to account for a hardware constraint limiting the maximum vertical range to 6 km. The DRM is used to select the vertical width of the telemetry band around the surface return. The DRM contains global values of relief calculated along 140 m and 700 m ground track segments consistent with a 92° orbit. The DRM must contain the maximum value of relief seen in any given area, but must be as close to truth as possible as the DRM directly affects data volume. The SRM, which has been developed independently from the DEM and DRM, is used to set parameters within the algorithm and select telemetry bands for downlink. Both the DEM and DRM are constructed from publicly available digital

  8. Development of image reconstruction algorithms for fluorescence diffuse optical tomography using total light approach

    NASA Astrophysics Data System (ADS)

    Okawa, S.; Yamamoto, H.; Miwa, Y.; Yamada, Y.

    2011-07-01

    Fluorescence diffuse optical tomography (FDOT) based on the total light approach is developed. Continuous-wave light is used for excitation in this system. The reconstruction algorithm is based on the total light approach, which reconstructs the absorption coefficients increased by the fluorophore. Additionally, we propose noise reduction using the algebraic reconstruction technique (ART) incorporating the truncated singular value decomposition (TSVD). Numerical and phantom experiments show that the developed system successfully reconstructs the fluorophore concentration in biological media, and that ART with TSVD alleviates the influence of noise. An in vivo experiment demonstrated that the developed FDOT system localized the fluorescent agent concentrated in a cancer transplanted into a mouse kidney.
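    The reconstruction machinery named above combines the algebraic reconstruction technique (ART) with truncated singular value decomposition (TSVD). The sketch below shows a generic Kaczmarz-style ART solver and a TSVD-regularised solve on a toy ill-conditioned system, purely to illustrate the two building blocks; how they are combined in the paper is not reproduced here, and the matrix sizes are arbitrary assumptions.

```python
import numpy as np

def art_kaczmarz(A, b, n_sweeps=50, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz row-action iterations) for A x = b."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a_i = A[i]
            x += relax * (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x

def tsvd_solve(A, b, k):
    """Truncated-SVD regularised solution keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Toy ill-conditioned example (illustrative sizes, not a real FDOT sensitivity matrix)
rng = np.random.default_rng(2)
A = rng.normal(size=(30, 20)) @ np.diag(np.logspace(0, -6, 20)) @ rng.normal(size=(20, 20))
x_true = rng.normal(size=20)
b = A @ x_true + rng.normal(scale=1e-3, size=30)
print("ART residual :", np.linalg.norm(A @ art_kaczmarz(A, b) - b))
print("TSVD residual:", np.linalg.norm(A @ tsvd_solve(A, b, k=8) - b))
```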

  9. An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework

    NASA Astrophysics Data System (ADS)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong

    2016-07-01

    This paper describes the first part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of the aerosol optical properties, in which the major principal components (PCs) of surface reflectance are assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and the weighting coefficients for each PC of surface reflectance are added into the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of the Jacobians are validated against finite-difference calculations with relative error less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for the idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can yield the reconstruction of spectral surface reflectance with errors less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and root-mean-square error of less than 0.003), which suggests self-consistency in the

  10. Development of ocean color algorithms for estimating chlorophyll-a concentrations and inherent optical properties using gene expression programming (GEP).

    PubMed

    Chang, Chih-Hua

    2015-03-01

    This paper proposes new inversion algorithms for the estimation of Chlorophyll-a concentration (Chla) and the ocean's inherent optical properties (IOPs) from measurements of remote sensing reflectance (Rrs). With in situ data from the NASA bio-optical marine algorithm data set (NOMAD), inversion algorithms were developed by the novel gene expression programming (GEP) approach, which creates, manipulates and selects the most appropriate tree-structured functions based on evolutionary computing. The limitations and validity of the proposed algorithms are evaluated with simulated Rrs spectra with respect to NOMAD and a closure test for IOPs obtained at a single reference wavelength. The application of the GEP-derived algorithms is validated against in situ, synthetic and satellite match-up data sets compiled by NASA and the International Ocean Color Coordinate Group (IOCCG). The new algorithms provide Chla and IOP retrievals comparable to those derived by other state-of-the-art regression approaches and those obtained with semi- and quasi-analytical algorithms, respectively. In practice, there are no significant differences between the GEP, support vector regression and multilayer perceptron models in terms of overall performance. The GEP-derived algorithms are successfully applied in processing images taken by the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), generating Chla and IOP maps that show better detail of developing algal blooms and give more information on the distribution of water constituents between different water bodies. PMID:25836776

  11. Developing algorithms for predicting protein-protein interactions of homology modeled proteins.

    SciTech Connect

    Martin, Shawn Bryan; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Roe, Diana C.

    2006-01-01

    The goal of this project was to examine the protein-protein docking problem, especially as it relates to homology-based structures, identify the key bottlenecks in current software tools, and evaluate and prototype new algorithms that may be developed to improve these bottlenecks. This report describes the current challenges in the protein-protein docking problem: correctly predicting the binding site for the protein-protein interaction and correctly placing the sidechains. Two different and complementary approaches are taken that can help with the protein-protein docking problem. The first approach is to predict interaction sites prior to docking, and uses bioinformatics studies of protein-protein interactions to predict these interaction sites. The second approach is to improve validation of predicted complexes after docking, and uses an improved scoring function for evaluating proposed docked poses, incorporating a solvation term. This scoring function demonstrates significant improvement over current state-of-the-art functions. Initial studies on both these approaches are promising, and argue for full development of these algorithms.

  12. Development of optimization model for sputtering process parameter based on gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.

    2016-07-01

    In the RF magnetron sputtering process, the desired layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film does not reach its intended level, the experiments have to be repeated until the desired quality is met. This research proposes the Gravitational Search Algorithm (GSA) as an optimization model to reduce the time and cost spent in thin film fabrication. The optimization model's engine has been developed using Java. The model is based on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters: RF power, deposition time, oxygen flow rate and substrate temperature. The results have turned out to be promising, and it can be concluded that the performance of the model is satisfactory for this parameter optimization problem. Future work could compare GSA with other nature-based algorithms and test them with various sets of data.

  13. Remote Sensing of Ocean Color in the Arctic: Algorithm Development and Comparative Validation. Chapter 9

    NASA Technical Reports Server (NTRS)

    Cota, Glenn F.

    2001-01-01

    The overall goal of this effort is to acquire a large bio-optical database, encompassing most environmental variability in the Arctic, to develop algorithms for phytoplankton biomass and production and other optically active constituents. A large suite of bio-optical and biogeochemical observations have been collected in a variety of high latitude ecosystems at different seasons. The Ocean Research Consortium of the Arctic (ORCA) is a collaborative effort between G.F. Cota of Old Dominion University (ODU), W.G. Harrison and T. Platt of the Bedford Institute of Oceanography (BIO), S. Sathyendranath of Dalhousie University and S. Saitoh of Hokkaido University. ORCA has now conducted 12 cruises and collected over 500 in-water optical profiles plus a variety of ancillary data. Observational suites typically include apparent optical properties (AOPs), inherent optical properties (IOPs), and a variety of ancillary observations including sun photometry, biogeochemical profiles, and productivity measurements. All quality-assured data have been submitted to NASA's SeaWIFS Bio-Optical Archive and Storage System (SeaBASS) data archive. Our algorithm development efforts address most of the potential bio-optical data products for the Sea-Viewing Wide Field-of-view Sensor (SeaWiFS), Moderate Resolution Imaging Spectroradiometer (MODIS), and GLI, and provide validation for specific areas of concern, i.e., high latitudes and coastal waters.

  14. Three Rings, Three Enrichment Activities, Three Decades Earlier

    ERIC Educational Resources Information Center

    Gubbins, E. Jean

    2010-01-01

    Three decades ago, Dr. Joseph S. Renzulli reflected on the state of the art in gifted and talented programs. His work in schools and his evaluation of existing programs made it patently clear it was time to review existing definitions of gifted and talented students and to develop defensible programs for the gifted and talented. The result was the…

  15. Late Entrants Leave School Earlier: Evidence from Mozambique

    ERIC Educational Resources Information Center

    Wils, Annababette

    2004-01-01

    Late school entry prevails in many developing countries, and a brief international comparison suggests it has a general negative impact on school retention rates. Yet this widespread phenomenon has received little attention. This essay investigates late school entry in one of the larger countries in sub-Saharan Africa, Mozambique, for which data…

  16. Developing a Moving-Solid Algorithm for Simulating Tsunamis Induced by Rock Sliding

    NASA Astrophysics Data System (ADS)

    Chuang, M.; Wu, T.; Huang, C.; Wang, C.; Chu, C.; Chen, M.

    2012-12-01

    Landslide-generated tsunamis are among the most devastating natural hazards. However, the involvement of a moving obstacle and the dynamic free-surface motion make the numerical simulation a difficult task. To describe the fluid motion, we use a modified two-step projection method to decouple the velocity and pressure fields, together with a 3D LES turbulence model. The free-surface movement is tracked by the volume of fluid (VOF) method (Wu, 2004). To describe the effect of the moving obstacle on the fluid, a new moving-solid algorithm (MSA) is developed. We combine ideas from the immersed boundary method (IBM) and the partial-cell treatment (PCT) to specify the contact velocity on the solid face and to represent the obstacle's blocking effect, respectively. Using the concept of the IBM, the cell-center and cell-face velocities can be specified arbitrarily. Because the solid obstacle moves on a fixed grid, its boundary seldom coincides with the cell faces, which makes it inappropriate to assign the solid boundary velocity directly to the cell faces. To overcome this problem, the PCT is adopted. Using this algorithm, the solid surface conceptually coincides with the cell faces, and the cell-face velocity can be specified as the obstacle velocity. The advantage of this algorithm is a stable pressure field, which is extremely important for coupling with a force-balancing model describing the solid motion. The model is therefore able to simulate incompressible, high-speed fluid motion. To describe the solid motion, the Discrete Element Method (DEM) is adopted. The new-time solid movement is predicted and decomposed into translation and rotation based on Newton's equations and Euler's equations, respectively. The details of the moving-solid algorithm are presented in this paper. The model is then applied to studying rock-slide-generated tsunamis. The results are validated with the laboratory data (Liu and Wu, 2005

  17. Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm

    NASA Astrophysics Data System (ADS)

    Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.

    2014-11-01

    A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network consisting of all its nodes such that the sum of the weights of its edges is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool using the well-known Prim's algorithm to construct the minimum spanning tree of a connected, undirected and weighted road network. The algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve complex network MST problems easily, efficiently and effectively. Selecting an appropriate algorithm is essential; otherwise it is hard to obtain an optimal result. In the case of a road transportation network, it is essential to find optimal results by considering all the necessary points based on a cost factor (time or distance). This paper addresses the MST problem of a road network by finding its minimum span while considering all the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems and location-allocation problems. In this study we have therefore developed a customized GIS tool, using a Python script in ArcGIS software, to solve the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. The tool has a number of advantages: users do not need deep knowledge of the subject, as the tool is user-friendly and allows access to varied information adapted to their needs. This GIS tool for MST can be applied to a nationwide plan in India called Prime Minister Gram Sadak Yojana, which aims to provide optimal all-weather road connectivity to unconnected villages (points). This tool is also useful for constructing highways or railways spanning several
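    For reference, the core of Prim's algorithm operating on a weight (adjacency) matrix, as described above, can be written compactly as in the sketch below; the small five-junction weight matrix is a made-up example for demonstration, not the Dehradun road network.

```python
import heapq

def prim_mst(weights):
    """Prim's algorithm on a symmetric weight (adjacency) matrix.

    weights[i][j] is the edge cost between nodes i and j, or None if there is no direct edge.
    Returns (list of MST edges as (u, v, cost), total cost).
    """
    n = len(weights)
    visited = [False] * n
    edges, total = [], 0.0
    heap = [(0.0, 0, -1)]                     # (cost, node, parent); grow the tree from node 0
    while heap:
        cost, u, parent = heapq.heappop(heap)
        if visited[u]:
            continue
        visited[u] = True
        if parent >= 0:
            edges.append((parent, u, cost))
            total += cost
        for v in range(n):
            if weights[u][v] is not None and not visited[v]:
                heapq.heappush(heap, (weights[u][v], v, u))
    return edges, total

# Hypothetical 5-junction road network (distances in km); None means no direct road
W = [
    [None, 2.0, None, 6.0, None],
    [2.0, None, 3.0, 8.0, 5.0],
    [None, 3.0, None, None, 7.0],
    [6.0, 8.0, None, None, 9.0],
    [None, 5.0, 7.0, 9.0, None],
]
print(prim_mst(W))   # MST edges and the minimum total road length
```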

  18. Simple Algorithms for Distributed Leader Election in Anonymous Synchronous Rings and Complete Networks Inspired by Neural Development in Fruit Flies.

    PubMed

    Xu, Lei; Jeavons, Peter

    2015-11-01

    Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally. PMID:26173905
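    The abstract states the messaging restrictions (one-bit broadcasts, silence versus arrival detection) but does not reproduce the algorithms themselves. The simulation below sketches one plausible probabilistic scheme for a complete network under those restrictions, purely as an illustration of the model; it is an assumed scheme, not the authors' algorithm.

```python
import random

def elect_leader(n_nodes, p=0.5, seed=0, max_rounds=1000):
    """Simulate a simple probabilistic leader election in a synchronous complete network.

    In each round every remaining candidate broadcasts a one-bit message with probability p.
    A candidate that stayed silent in a round where at least one node broadcast withdraws;
    rounds repeat until a single candidate remains.
    """
    rng = random.Random(seed)
    candidates = set(range(n_nodes))
    rounds = 0
    while len(candidates) > 1 and rounds < max_rounds:
        rounds += 1
        speakers = {c for c in candidates if rng.random() < p}
        if speakers:                 # silent candidates hear a message and withdraw
            candidates = speakers
    leader = candidates.pop() if len(candidates) == 1 else None
    return leader, rounds

leader, rounds = elect_leader(64)
print(f"leader {leader} elected after {rounds} rounds")
```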

  19. Endometriosis in Adolescence: Practical Rules for an Earlier Diagnosis.

    PubMed

    Zannoni, Letizia; Forno, Simona Del; Paradisi, Roberto; Seracchioli, Renato

    2016-09-01

    Dysmenorrhea, cyclic pelvic pain, and acyclic pelvic pain are common in adolescent girls, and at least 10% of these girls are at risk for subsequent development of endometriosis. In this article we highlight practical tips for the management of dysmenorrhea and chronic pelvic pain and how to diagnose endometriosis as early as possible and detect patients at risk for developing the disease in the future. We suggest five practical rules for managing adolescents with dysmenorrhea and chronic pelvic pain: (1) Never underestimate the pain; (2) Always consider endometriosis as a possible cause of severe cyclic pain; (3) Obtain a detailed and accurate history before performing clinical evaluation and pelvic sonography; (4) Treat the pain with hormonal therapies (combined oral contraceptives or progestogen-only pill) and analgesics (acetaminophen and nonsteroidal anti-inflammatory drugs); and (5) Plan frequent follow-up visits to re-evaluate the patient. [Pediatr Ann. 2016;45(9):e332-e335.]. PMID:27622918

  20. Development of region processing algorithm for HSTAMIDS: status and field test results

    NASA Astrophysics Data System (ADS)

    Ngan, Peter; Burke, Sean; Cresci, Roger; Wilson, Joseph N.; Gader, Paul; Ho, K. C.; Bartosz, Elizabeth; Duvoisin, Herbert

    2007-04-01

    The Region Processing Algorithm (RPA) has been developed by the Office of the Army Humanitarian Demining Research and Development (HD R&D) Program as part of improvements for the AN/PSS-14. The effort was a collaboration between the HD R&D Program, L-3 Communication CyTerra Corporation, University of Florida, Duke University and University of Missouri. RPA has been integrated into and implemented in a real-time AN/PSS-14. The subject unit was used to collect data and tested for its performance at three Army test sites within the United States of America. This paper describes the status of the technology and its recent test results.

  1. Collaboration on Development and Validation of the AMSR-E Snow Water Equivalent Algorithm

    NASA Technical Reports Server (NTRS)

    Armstrong, Richard L.

    2000-01-01

    The National Snow and Ice Data Center (NSIDC) has produced a global SMMR and SSM/I Level 3 Brightness Temperature data set in the Equal Area Scalable Earth (EASE) Grid for the period 1978 to 2000. Processing of current data is ongoing. The EASE-Grid passive microwave data sets are appropriate for algorithm development and validation prior to the launch of AMSR-E. Having the lower frequency channels of SMMR (6.6 and 10.7 GHz) and the higher frequency channels of SSM/I (85.5 GHz) in the same format will facilitate the preliminary development of applications which could potentially make use of similar frequencies from AMSR-E (6.9, 10.7, 89.0 GHz).

  2. Development and validation of a spike detection and classification algorithm aimed at implementation on hardware devices.

    PubMed

    Biffi, E; Ghezzi, D; Pedrocchi, A; Ferrigno, G

    2010-01-01

    Neurons cultured in vitro on MicroElectrode Array (MEA) devices connect to each other, forming a network. To study electrophysiological activity and long-term plasticity effects, long-period recordings and spike-sorting methods are needed. Therefore, on-line, real-time analysis, optimization of memory use and improvement of the data transmission rate become necessary. We developed an algorithm for amplitude-threshold spike detection, whose performance was verified with (a) statistical analysis on both simulated and real signals and (b) Big-O notation. Moreover, we developed a PCA-hierarchical classifier, evaluated on simulated and real signals. Finally, we proposed a spike detection hardware design on FPGA, whose feasibility was verified in terms of CLB count, memory occupation and timing requirements; once realized, it will be able to execute on-line detection and real-time waveform analysis, reducing data storage problems. PMID:20300592
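    The detection stage described above is an amplitude-threshold detector. The sketch below shows the usual form of such a detector, where the threshold is a multiple of a robust noise estimate and a refractory window prevents double counting; the multiplier, window length and synthetic trace are assumptions, not the paper's settings.

```python
import numpy as np

def detect_spikes(signal, fs_hz, k=5.0, refractory_ms=1.0):
    """Amplitude-threshold spike detection with a robust (median-based) noise estimate."""
    sigma = np.median(np.abs(signal)) / 0.6745            # robust estimate of the noise std
    threshold = k * sigma
    refractory = int(refractory_ms * 1e-3 * fs_hz)
    spikes, last = [], -refractory
    for i, v in enumerate(np.abs(signal)):
        if v > threshold and i - last >= refractory:
            spikes.append(i)
            last = i
    return spikes

# Synthetic trace: unit-variance noise plus three large spikes
fs = 25000.0
rng = np.random.default_rng(3)
trace = rng.normal(0.0, 1.0, int(0.1 * fs))
for idx in (500, 1200, 2000):
    trace[idx] += 12.0
print(detect_spikes(trace, fs))   # expected to report indices near 500, 1200 and 2000
```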

  3. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1995-01-01

    Several significant accomplishments were made during the present reporting period. (1) Initial simulations to understand the applicability of the MODerate Resolution Imaging Spectrometer (MODIS) 1380 nm band for removing the effects of stratospheric aerosols and thin cirrus clouds were completed using a model for an aged volcanic aerosol. The results suggest that very simple procedures requiring no a priori knowledge of the optical properties of the stratospheric aerosol may be as effective as complex procedures requiring full knowledge of the aerosol properties, except the concentration which is estimated from the reflectance at 1380 nm. The limitations of this conclusion will be examined in the next reporting period; (2) The lookup tables employed in the implementation of the atmospheric correction algorithm have been modified in several ways intended to improve the accuracy and/or speed of processing. These have been delivered to R. Evans for implementation into the MODIS prototype processing algorithm for testing; (3) A method was developed for removal of the effects of the O2 'A' absorption band from SeaWiFS band 7 (745-785 nm). This is important in that SeaWiFS imagery will be used as a test data set for the MODIS atmospheric correction algorithm over the oceans; and (4) Construction of a radiometer, and associated deployment boom, for studying the spectral reflectance of oceanic whitecaps at sea was completed. The system was successfully tested on a cruise off Hawaii on which whitecaps were plentiful during October-November. This data set is now under analysis.

  4. Development and Implementation of a Hardware In-the-Loop Test Bed for Unmanned Aerial Vehicle Control Algorithms

    NASA Technical Reports Server (NTRS)

    Nyangweso, Emmanuel; Bole, Brian

    2014-01-01

    Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.

  5. Chest dynamics asymmetry facilitates earlier detection of pneumothorax.

    PubMed

    Waisman, D; Landesberg, A; Kohn, S; Faingersh, A; Klotzman, I C; Gover, A; Kessel, I; Rotschild, A

    2016-02-01

    Pneumothorax is usually diagnosed when signs of life-threatening tension pneumothorax develop. The case report describes novel data derived from miniature superficial sensors that continuously monitored the amplitude and symmetry of the chest wall tidal displacement (TDi) in a premature infant that suffered from pneumothorax. Off-line analysis of the TDi revealed slowly progressing asymmetric ventilation that could be detected 38 min before the diagnosis was made. The TDi provides novel and valuable information that can assist in early detection and decision making. PMID:26814803

  6. Molecular analysis of biomarkers for earlier cancer detection

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Hong N.; Gao, Jinsong; Jeffers, Robert B.; Logan, Brad; Wen, Z. Julia

    2000-04-01

    Single-molecule phosphorescence immunoassay microscopy was developed and applied for high-throughput screening of tumor markers at the single-molecule (SM) level with no need of separation processes. The screening of individual analyte in a mixture was based upon distinguished diffusion images of single molecules associated with its size and mass. As a working example, several molecular forms of serum prostate- specific antigens (PSA) were labeled with Ru(bpy)32+-NHS-ester and labeled PSA-free and PSA-complex were distinguished based upon their SM diffusion images using this SM microscopy. The bound and unbound PSA-free with its monoclonal antibody (MAB) were also detected using this SM microscopy. A novel solution-phase quantitative electro chemiluminescence (ECL) immunoassay was developed to measure affinity constants of PSA with its antibody and diffusion coefficients of labeled PSA. The ECL immunoassay was able to detect PSA at 1.7 pg/mL. Diffusion of labeled PSA-free and PSA-complex measured by ECL and SM microscopy was consistent demonstrating that distinguished SM diffusion images could be used to screen multiple analytes in a complex mixture. This also implied the possibility of real- time monitoring of kinetics of binding reactions using such SM microscopy.

  7. Toward Developing an Unbiased Scoring Algorithm for "NASA" and Similar Ranking Tasks.

    ERIC Educational Resources Information Center

    Lane, Irving M.; And Others

    1981-01-01

    Presents both logical and empirical evidence to illustrate that the conventional scoring algorithm for ranking tasks significantly underestimates the initial level of group ability and that Slevin's alternative scoring algorithm significantly overestimates the initial level of ability. Presents a modification of Slevin's algorithm which authors…

  8. The settling dynamics of flocculating mud-sand mixtures: Part 1—Empirical algorithm development

    NASA Astrophysics Data System (ADS)

    Manning, Andrew James; Baugh, John V.; Spearman, Jeremy R.; Pidduck, Emma L.; Whitehouse, Richard J. S.

    2011-03-01

    , and in most cases produced excessive over-estimations in MSF. The reason for these predictive errors was that this hybrid approach still treated mud and sand separately. This is potentially reasonable if the sediments are segregated and non-interactive, but appears to be unacceptable when the mud and sand are flocculating via an interactive matrix. The MSSV empirical model may be regarded as a `first stage' approximation for scientists and engineers either wishing to investigate mixed-sediment flocculation and its depositional characteristics in a quantifiable framework, or simulate mixed-sediment settling in a numerical sediment transport model where flocculation is occurring. The preliminary assessment concluded that in general when all the SPM and shear stress range data were combined, the net result indicated that the new mixed-sediment settling velocity empirical model was only in error by -3 to -6.7% across the experimental mud:sand mixture ratios. Tuning of the algorithm coefficients is required for the accurate prediction of depositional rates in a specific estuary, as was demonstrated by the algorithm calibration using data from Portsmouth Harbour. The development of a more physics-based model, which captures the essential features of the empirical MSSV model, would be more universally applicable.

  9. A Prototype Hail Detection Algorithm and Hail Climatology Developed with the Advanced Microwave Sounding Unit (AMSU)

    NASA Technical Reports Server (NTRS)

    Ferraro, Ralph; Beauchamp, James; Cecil, Dan; Heymsfeld, Gerald

    2015-01-01

    In previous studies published in the open literature, a strong relationship between the occurrence of hail and the microwave brightness temperatures (primarily at 37 and 85 GHz) was documented. These studies were performed with the Nimbus-7 SMMR, the TRMM Microwave Imager (TMI) and, most recently, the Aqua AMSR-E sensor. This led to climatologies of hail frequency from TMI and AMSR-E; however, limitations include the geographical domain of the TMI sensor (35°S to 35°N) and the overpass time of the Aqua satellite (1:30 am/pm local time), both of which prevent accurate mapping of hail events over the global domain and the full diurnal cycle. Nonetheless, these studies presented exciting, new applications for passive microwave sensors. Since 1998, NOAA and EUMETSAT have been operating the AMSU-A/B and the MHS on several operational satellites: NOAA-15 through NOAA-19; MetOp-A and -B. With multiple satellites in operation since 2000, the AMSU/MHS sensors provide near-global coverage every 4 hours, thus offering much greater spatial and temporal sampling than TRMM or AMSR-E. With similar observation frequencies near 30 and 85 GHz and additionally three at the 183 GHz water vapor band, the potential exists to detect strong convection associated with severe storms on a more comprehensive time and space scale. In this study, we develop a prototype AMSU-based hail detection algorithm through the use of collocated satellite and surface hail reports over the continental U.S. for a 12-year period (2000-2011). Compared with the surface observations, the algorithm detects approximately 40 percent of hail occurrences. The simple threshold algorithm is then used to generate a hail climatology based on all available AMSU observations during 2000-11, stratified in several ways, including total hail occurrence by month (March through September), total annual, and over the diurnal cycle. Independent comparisons are made with similar data sets derived from other
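
    The sketch below illustrates the flavor of a simple threshold screen on scattering-depressed brightness temperatures; the channel choices and threshold values are placeholders, not the thresholds derived in this study.

      # Minimal sketch of a threshold-style screen in the spirit of the abstract: flag a
      # footprint as possible hail when high-frequency brightness temperatures are strongly
      # depressed by ice scattering. The threshold values here are placeholders, not the
      # values derived in the study.
      def possible_hail(tb_89_ghz_k, tb_183_7_ghz_k, tb89_max_k=220.0, tb183_max_k=230.0):
          """Return True when both channels fall below the (hypothetical) scattering thresholds."""
          return tb_89_ghz_k < tb89_max_k and tb_183_7_ghz_k < tb183_max_k

      print(possible_hail(205.0, 210.0))  # True  -> candidate hail footprint
      print(possible_hail(260.0, 255.0))  # False -> unlikely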

  10. Description of ALARMA: the alarm algorithm developed for the Nuclear Car Wash

    SciTech Connect

    Luu, T; Biltoft, P; Church, J; Descalle, M; Hall, J; Manatt, D; Mauger, J; Norman, E; Petersen, D; Pruet, J; Prussin, S; Slaughter, D

    2006-11-28

    The goal of any alarm algorithm should be to provide the necessary tools to derive confidence limits on whether fissile material is present in a cargo container. It should be able to extract these limits from (usually) noisy and/or weak data while maintaining a false alarm rate (FAR) that is economically suitable for port operations. It should also be able to perform its analysis within a reasonably short amount of time (i.e. {approx} seconds). To achieve this, it is essential that the algorithm be able to identify and subtract any interference signature that might otherwise be confused with a fissile signature. Lastly, the algorithm itself should be user-intuitive and user-friendly so that port operators with little or no experience with detection algorithms may use it with relative ease. In support of the Nuclear Car Wash project at Lawrence Livermore Laboratory, we have developed an alarm algorithm that satisfies the above requirements. The description of this alarm algorithm, dubbed ALARMA, is the purpose of this technical report. The experimental setup of the nuclear car wash has been well documented [1, 2, 3]. The presence of fissile materials is inferred by examining the {beta}-delayed gamma spectrum induced after a brief neutron irradiation of cargo, particularly in the high-energy region above approximately 2.5 MeV. In this region naturally occurring gamma rays are virtually non-existent. Thermal-neutron induced fission of {sup 235}U and {sup 239}Pu, on the other hand, leaves a unique {beta}-delayed spectrum [4]. This spectrum comes from decays of fission products having half-lives as large as 30 seconds, many of which have high Q-values. Since high-energy photons penetrate matter more freely, it is natural to look for unique fissile signatures in this energy region after neutron irradiation. The goal of this interrogation procedure is a 95% success rate of detection of as little as 5 kilograms of fissile material while retaining
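
    As a generic counting-statistics illustration (not ALARMA itself), the sketch below raises an alarm when the high-energy count after irradiation is improbably large given the expected background, at a significance chosen to match a target false-alarm rate; the counts and FAR value are invented.

      # Generic counting-statistics sketch (not ALARMA): decide whether the high-energy
      # (>2.5 MeV) gamma count after irradiation exceeds what background alone would
      # explain, at a significance chosen to meet a target false-alarm rate.
      from scipy.stats import poisson

      def alarm(observed_counts, expected_background, false_alarm_rate=1e-3):
          """Alarm when P(N >= observed | background only) falls below the target FAR."""
          p_value = poisson.sf(observed_counts - 1, expected_background)
          return p_value < false_alarm_rate, p_value

      print(alarm(45, 20.0))   # strongly above background -> alarm
      print(alarm(24, 20.0))   # consistent with background -> no alarm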

  11. Microphysical particle properties derived from inversion algorithms developed in the framework of EARLINET

    NASA Astrophysics Data System (ADS)

    Müller, D.; Böckmann, C.; Kolgotin, A.; Schneidenbach, L.; Chemyakin, E.; Rosemann, J.; Znak, P.; Romanov, A.

    2015-12-01

    We present a summary on the current status of two inversion algorithms that are used in EARLINET for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on manually controlled inversion of optical data which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithms allow us to derive particle effective radius, and volume and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies in which light-absorption needs to be known with high accuracy. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high and low absorbing aerosols. We discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work on the basis of a few exemplary simulations with synthetic optical data. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested aerosol scenarios that are considered highly unlikely, e.g., the lidar ratios fall outside the commonly accepted range of values measured with Raman lidar, even though the underlying microphysical particle properties are not uncommon. The goal of this part of the study is to test robustness of the algorithms toward their ability to identify aerosol types that have not been measured so far, but cannot be ruled out based on our current knowledge of

  12. White Light Modeling, Algorithm Development, and Validation on the Micro-arcsecond Metrology Testbed

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Regher, Martin; Shen, Tsae Pyng

    2004-01-01

    The Space Interferometry Mission (SIM), scheduled for launch in early 2010, is an optical interferometer that will perform narrow angle and global wide angle astrometry with unprecedented accuracy, providing differential position accuracies of 1 µas, and 4 µas global accuracies in position, proper motion and parallax. The astrometric observations of the SIM instrument are performed via delay measurements provided by three Michelson-type, white light interferometers. Two 'guide' interferometers acquire fringes on bright guide stars in order to make highly precise measurements of variations in spacecraft attitude, while the third interferometer performs the science measurement. SIM derives its performance from a combination of precise fringe measurements of the interfered starlight (a few ten-thousandths of a wave) and very precise (tens of picometers) relative distance measurements made between a set of fiducials. The focus of the present paper is on the development and analysis of algorithms for accurate white light estimation, and on validating some of these algorithms on the MicroArcsecond Testbed.

  13. MODIS calibration algorithm improvements developed for Collection 6 Level-1B

    NASA Astrophysics Data System (ADS)

    Wenny, Brian N.; Sun, Junqiang; Xiong, Xiaoxiong; Wu, Aisheng; Chen, Hongda; Angal, Amit; Choi, Taeyoung; Chen, Na; Madhavan, Sriharsha; Geng, Xu; Kuyper, James; Tan, Liqin

    2010-09-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) has been operating on both the Terra and Aqua spacecraft for over 10.5 and 8 years, respectively. Over 40 science products are generated routinely from MODIS Earth images and used extensively by the global science community for a wide variety of land, ocean, and atmosphere applications. Over the mission lifetime, several versions of the MODIS data set have been in use as the calibration and data processing algorithms evolved. Currently Version 5 MODIS data is the baseline Level-1B calibrated science product. The MODIS Characterization Support Team (MCST), with input from the MODIS Science Team, developed and delivered a number of improvements and enhancements to the calibration algorithms, Level-1B processing code and Look-up Tables for the Version 6 Level-1B MODIS data. Version 6 implements a number of changes in the calibration methodology for both the Reflective Solar Bands (RSB) and Thermal Emissive Bands (TEB). This paper describes the improvements introduced in Collection 6 to the RSB and TEB calibration and detector Quality Assurance (QA) handling.

  14. Development of an algorithm for production of inactivated arbovirus antigens in cell culture

    PubMed Central

    Goodman, C.H.; Russell, B.J.; Velez, J.O.; Laven, J.J.; Nicholson, W.L; Bagarozzi, D.A.; Moon, J.L.; Bedi, K.; Johnson, B.W.

    2015-01-01

    Arboviruses are medically important pathogens that cause human disease ranging from a mild fever to encephalitis. Laboratory diagnosis is essential to differentiate arbovirus infections from other pathogens with similar clinical manifestations. The Arboviral Diseases Branch (ADB) reference laboratory at the CDC Division of Vector-Borne Diseases (DVBD) produces reference antigens used in serological assays such as the virus-specific immunoglobulin M antibody-capture enzyme-linked immunosorbent assay (MAC-ELISA). Antigen production in cell culture has largely replaced the use of suckling mice; however, the methods are not directly transferable. The development of a cell culture antigen production algorithm for nine arboviruses from the three main arbovirus families, Flaviviridae, Togaviridae, and Bunyaviridae, is described here. Virus cell culture growth and harvest conditions were optimized, inactivation methods were evaluated, and concentration procedures were compared for each virus. Antigen performance was evaluated by the MAC-ELISA at each step of the procedure. The antigen production algorithm is a framework for standardization of methodology and quality control; however, a single antigen production protocol was not applicable to all arboviruses and needed to be optimized for each virus. PMID:25102428

  15. Development of an algorithm for production of inactivated arbovirus antigens in cell culture.

    PubMed

    Goodman, C H; Russell, B J; Velez, J O; Laven, J J; Nicholson, W L; Bagarozzi, D A; Moon, J L; Bedi, K; Johnson, B W

    2014-11-01

    Arboviruses are medically important pathogens that cause human disease ranging from a mild fever to encephalitis. Laboratory diagnosis is essential to differentiate arbovirus infections from other pathogens with similar clinical manifestations. The Arboviral Diseases Branch (ADB) reference laboratory at the CDC Division of Vector-Borne Diseases (DVBD) produces reference antigens used in serological assays such as the virus-specific immunoglobulin M antibody-capture enzyme-linked immunosorbent assay (MAC-ELISA). Antigen production in cell culture has largely replaced the use of suckling mice; however, the methods are not directly transferable. The development of a cell culture antigen production algorithm for nine arboviruses from the three main arbovirus families, Flaviviridae, Togaviridae, and Bunyaviridae, is described here. Virus cell culture growth and harvest conditions were optimized, inactivation methods were evaluated, and concentration procedures were compared for each virus. Antigen performance was evaluated by the MAC-ELISA at each step of the procedure. The antigen production algorithm is a framework for standardization of methodology and quality control; however, a single antigen production protocol was not applicable to all arboviruses and needed to be optimized for each virus. PMID:25102428

  16. Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms

    SciTech Connect

    Birkholz, Adam B.; Schlegel, H. Bernhard

    2015-12-28

    The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster.
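
    The core idea, minimizing a discretized line integral of the gradient norm over a path written as endpoints plus a smooth basis expansion, can be illustrated compactly on the Mueller-Brown surface. The sketch below is not the authors' implementation; the sine basis, point count, and optimizer settings are choices made only for this example.

      # Compact numerical illustration of the VRE idea: write the path as a straight line
      # between two minima plus a sine-basis expansion, discretize the line integral of the
      # gradient norm, and minimize it over the expansion coefficients with a generic
      # optimizer. Uses the standard Mueller-Brown surface; not the authors' code.
      import numpy as np
      from scipy.optimize import minimize

      A = np.array([-200.0, -100.0, -170.0, 15.0])
      a = np.array([-1.0, -1.0, -6.5, 0.7])
      b = np.array([0.0, 0.0, 11.0, 0.6])
      c = np.array([-10.0, -10.0, -6.5, 0.7])
      x0 = np.array([1.0, 0.0, -0.5, -1.0])
      y0 = np.array([0.0, 0.5, 1.5, 1.0])

      def grad_norm(x, y):
          dx, dy = x - x0, y - y0
          e = A * np.exp(a * dx**2 + b * dx * dy + c * dy**2)
          gx = np.sum(e * (2 * a * dx + b * dy))
          gy = np.sum(e * (b * dx + 2 * c * dy))
          return np.hypot(gx, gy)

      R = np.array([-0.558, 1.442])   # one Mueller-Brown minimum (reactant)
      P = np.array([0.623, 0.028])    # another minimum (product)
      n_basis, n_pts = 4, 60
      t = np.linspace(0.0, 1.0, n_pts)

      def path(coeffs):
          """Straight line R->P plus sine terms that vanish at both endpoints."""
          pts = np.outer(1 - t, R) + np.outer(t, P)
          cx, cy = coeffs[:n_basis], coeffs[n_basis:]
          for j in range(n_basis):
              s = np.sin((j + 1) * np.pi * t)
              pts[:, 0] += cx[j] * s
              pts[:, 1] += cy[j] * s
          return pts

      def vre(coeffs):
          pts = path(coeffs)
          seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths
          g = np.array([grad_norm(x, y) for x, y in pts])      # |grad V| along the path
          return np.sum(0.5 * (g[:-1] + g[1:]) * seg)          # trapezoidal line integral

      res = minimize(vre, np.zeros(2 * n_basis), method="Nelder-Mead",
                     options={"maxiter": 5000, "xatol": 1e-4, "fatol": 1e-4})
      print("optimized VRE:", round(res.fun, 2))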

  17. Development of TIF based figuring algorithm for deterministic pitch tool polishing

    NASA Astrophysics Data System (ADS)

    Yi, Hyun-Su; Kim, Sug-Whan; Yang, Ho-Soon; Lee, Yun-Woo

    2007-12-01

    Pitch is perhaps the oldest material used for optical polishing, leaving superior surface texture, and has been used widely on the optics shop floor. However, because of the unpredictable controllability of its removal characteristics, pitch tool polishing has rarely been analysed quantitatively, and many optics shops rely heavily on the optician's "feel" even today. In order to bring a degree of process controllability to pitch tool polishing, we added motorized tool motions to the conventional Draper type polishing machine and modelled the tool path in absolute machine coordinates. We then produced a number of Tool Influence Functions (TIFs), both from an analytical model and from a series of experimental polishing runs using the pitch tool. The theoretical TIFs agreed well with the experimental TIFs, with a profile accuracy of 79% in terms of shape. A surface figuring algorithm was then developed in-house utilizing both theoretical and experimental TIFs. We are currently undertaking a series of trial figuring experiments to prove the performance of the polishing algorithm, and the early results indicate that highly deterministic material removal control with the pitch tool can be achieved to a certain level of form error. The machine renovation, TIF theory and experimental confirmation, and figuring simulation results are reported, together with implications for deterministic polishing.
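
    The removal model underlying TIF-based figuring, predicted removal as the convolution of a dwell-time map with the TIF, can be sketched as follows; the Gaussian TIF and all numbers are assumptions for illustration, not the in-house algorithm described above.

      # Minimal sketch of the removal model behind TIF-based figuring: predicted removal is
      # the 2-D convolution of a dwell-time map with the tool influence function. A Gaussian
      # TIF is assumed here purely for illustration.
      import numpy as np
      from scipy.signal import fftconvolve

      def gaussian_tif(size_px, peak_removal_nm_s, sigma_px):
          """Hypothetical TIF: removal rate (nm/s) per pixel while the tool dwells there."""
          ax = np.arange(size_px) - size_px // 2
          xx, yy = np.meshgrid(ax, ax)
          return peak_removal_nm_s * np.exp(-(xx**2 + yy**2) / (2.0 * sigma_px**2))

      def predicted_removal(dwell_s, tif):
          """Removal map (nm) = dwell-time map (s) convolved with the TIF (nm/s)."""
          return fftconvolve(dwell_s, tif, mode="same")

      # Example: dwell longer over a bump in the centre of a 64x64 surface map.
      dwell = np.zeros((64, 64))
      dwell[28:36, 28:36] = 5.0                  # seconds of dwell per pixel
      removal = predicted_removal(dwell, gaussian_tif(15, 2.0, 3.0))
      print("peak predicted removal (nm):", round(float(removal.max()), 1))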

  18. Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms.

    PubMed

    Birkholz, Adam B; Schlegel, H Bernhard

    2015-12-28

    The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster. PMID:26723645

  19. Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms

    NASA Astrophysics Data System (ADS)

    Birkholz, Adam B.; Schlegel, H. Bernhard

    2015-12-01

    The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster.

  20. Optimal HIV testing and earlier care: the way forward in Europe.

    PubMed

    Coenen, T; Lundgren, J; Lazarus, J V; Matic, S

    2008-07-01

    The articles in this supplement were developed from a recent pan-European conference entitled 'HIV in Europe 2007: Working together for optimal testing and earlier care', which took place on 26-27 November in Brussels, Belgium. The conference, organized by a multidisciplinary group of experts representing advocacy, clinical and policy areas of the HIV field, was convened in an effort to gain a common understanding on the role of HIV testing and counselling in optimizing diagnosis and the need for earlier care. Key topics discussed at the conference and described in the following articles include: current barriers to HIV testing across Europe, trends in the epidemiology of HIV in the region, problems associated with undiagnosed infection and the psychosocial barriers impacting on testing. The supplement also provides a summary of the World Health Organization's recommendations for HIV testing in Europe and an outline of an indicator disease-guided approach to HIV testing proposed by a committee of experts from the European AIDS Clinical Society (EACS). We hope that consideration of the issues discussed in this supplement will help to shift the HIV field closer towards our ultimate goal: provision of optimal HIV testing and earlier care across the whole of the European region. PMID:18557862

  1. Development of a neonate lung reconstruction algorithm using a wavelet AMG and estimated boundary form.

    PubMed

    Bayford, R; Kantartzis, P; Tizzard, A; Yerworth, R; Liatsis, P; Demosthenous, A

    2008-06-01

    Objective, non-invasive measures of lung maturity and development, oxygen requirements and lung function, suitable for use in small, unsedated infants, are urgently required to define the nature and severity of persisting lung disease, and to identify risk factors for developing chronic lung problems. Disorders of lung growth, maturation and control of breathing are among the most important problems faced by neonatologists. At present, no system exists in intensive care units for continuous monitoring of neonatal lung function to reduce the risk of chronic lung disease in infancy. We are in the process of developing a new integrated electrical impedance tomography (EIT) system, based on wearable technology, that integrates measures of the boundary diameter derived from the boundary form of neonates into the reconstruction algorithm. In principle, this approach could provide a reduction of image artefacts in the reconstructed image associated with incorrect boundary form assumptions. In this paper, we investigate the accuracy of the boundary form that would be required to minimize artefacts in the reconstruction for neonatal lung function. The number of data points needed to create the required boundary form is automatically determined using genetic algorithms. The approach presented in this paper is to assess the quality of the reconstruction obtained using different approximations to the ideal boundary form. We also investigate the use of a wavelet algebraic multi-grid (WAMG) preconditioner to reduce the reconstruction computation requirements. Results are presented that demonstrate that a full 3D model is required to minimize artefacts in the reconstructed image, and that describe the implementation of a WAMG preconditioner for EIT. PMID:18544799

  2. Now, the Taller Die Earlier: The Curse of Cancer.

    PubMed

    Sohn, Kitae

    2016-06-01

    This study estimates the relationship between height and mortality. Individuals in the National Health Interview Survey 1986, a nationally representative U.S. sample, are linked to death certificate data until December 31, 2006. We analyze this relationship in 14,440 men and 16,390 women aged 25+. We employ the Cox proportional hazards model, controlling for birthday and education. An additional inch increase in height is related to a hazard ratio of death from all causes that is 2.2% higher for men and 2.5% higher for women. The findings are robust to changing survival distributions, and further analyses indicate that the figures are lower bounds. This relationship is mainly driven by the positive relationship between height and development of cancer. An additional inch increase in height is related to a hazard ratio of death from malignant neoplasms that is 7.1% higher for men and 5.7% higher for women. In contrast to the negative relationship between height and mortality in the past, this relationship is now positive. This demonstrates the success and accessibility of medical technology in treating patients with many acute and chronic diseases other than cancer. PMID:25991828

  3. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  4. Development of a Low-Lift Chiller Controller and Simplified Precooling Control Algorithm - Final Report

    SciTech Connect

    Gayeski, N.; Armstrong, Peter; Alvira, M.; Gagne, J.; Katipamula, Srinivas

    2011-11-30

    KGS Buildings LLC (KGS) and Pacific Northwest National Laboratory (PNNL) have developed a simplified control algorithm and prototype low-lift chiller controller suitable for model-predictive control in a demonstration project of low-lift cooling. Low-lift cooling is a highly efficient cooling strategy conceived to enable low or net-zero energy buildings. A low-lift cooling system consists of a high efficiency low-lift chiller, radiant cooling, thermal storage, and model-predictive control to pre-cool thermal storage overnight on an optimal cooling rate trajectory. We call the properly integrated and controlled combination of these elements a low-lift cooling system (LLCS). This document is the final report for that project.

  5. Development of Web-Based Menu Planning Support System and its Solution Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Kashima, Tomoko; Matsumoto, Shimpei; Ishii, Hiroaki

    2009-10-01

    Recently, lifestyle-related diseases have become an object of public concern, and at the same time people have become more health conscious. We assume that insufficient circulation of knowledge about dietary habits is an essential factor contributing to lifestyle-related diseases. This paper focuses on everyday meals, which are close to our daily life, and proposes a well-balanced menu planning system as a preventive measure against lifestyle-related diseases. The system is developed with a Web-based frontend and provides multi-user services and menu information sharing capabilities similar to social networking services (SNS). The system is implemented on a Web server running Apache (HTTP server software), MySQL (database management system), and PHP (scripting language for dynamic Web pages). For menu planning, a genetic algorithm is applied by formulating the problem as multidimensional 0-1 integer programming.
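
    A toy version of the 0-1 formulation is sketched below: each gene selects or skips a candidate dish, and a genetic algorithm searches for a menu near a calorie target. The dish list, target, and GA settings are invented, and the sketch is not the deployed system.

      # Toy sketch of a genetic algorithm over a 0-1 menu selection problem: each gene
      # includes or excludes a dish, and fitness rewards menus close to a calorie target.
      import random

      dishes = [("rice", 250), ("miso soup", 60), ("grilled fish", 180),
                ("salad", 90), ("tofu", 110), ("fruit", 80), ("tempura", 320)]
      TARGET_KCAL = 600

      def fitness(genome):
          kcal = sum(k for (gene, (_, k)) in zip(genome, dishes) if gene)
          return -abs(kcal - TARGET_KCAL)            # closer to the target is better

      def evolve(pop_size=40, generations=200, p_mut=0.1):
          pop = [[random.randint(0, 1) for _ in dishes] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)
              parents = pop[: pop_size // 2]          # keep the better half
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, len(dishes))        # one-point crossover
                  child = a[:cut] + b[cut:]
                  child = [1 - g if random.random() < p_mut else g for g in child]
                  children.append(child)
              pop = parents + children
          return max(pop, key=fitness)

      best = evolve()
      print([name for (gene, (name, _)) in zip(best, dishes) if gene])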

  6. Development of algorithms for detection of mechanical injury on white mushrooms (Agaricus bisporus) using hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Gowen, A. A.; O'Donnell, C. P.

    2009-05-01

    White mushrooms were subjected to mechanical injury by controlled shaking in a plastic box at 400 rpm for different times (0, 60, 120, 300 and 600 s). Immediately after shaking, hyperspectral images were obtained using two pushbroom line-scanning hyperspectral imaging instruments, one operating in the wavelength range of 400 - 1000 nm with spectroscopic resolution of 5 nm, the other operating in the wavelength range of 950 - 1700 nm with spectroscopic resolution of 7 nm. Different spectral and spatial pretreatments were investigated to reduce the effect of sample curvature on hyperspectral data. Algorithms based on Chemometric techniques (Principal Component Analysis and Partial Least Squares Discriminant Analysis) and image processing methods (masking, thresholding, morphological operations) were developed for pixel classification in hyperspectral images. In addition, correlation analysis, spectral angle mapping and scaled difference of sample spectra were investigated and compared with the chemometric approaches.
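
    One of the chemometric steps named above, projecting unfolded hyperspectral pixels onto principal components and thresholding a score image, can be sketched as follows on synthetic data; this is illustrative only and not the authors' classification pipeline.

      # Illustrative sketch of a PCA-based pixel classification step: unfold the hypercube to
      # a pixels-by-bands matrix, project onto principal components, and threshold the score
      # image. Data here are synthetic, not mushroom images.
      import numpy as np
      from sklearn.decomposition import PCA

      rows, cols, bands = 50, 50, 120
      cube = np.random.rand(rows, cols, bands)          # stand-in for a hyperspectral image
      cube[20:30, 20:30, 60:] += 0.5                    # synthetic "damaged" patch

      X = cube.reshape(-1, bands)                       # unfold: (pixels, bands)
      scores = PCA(n_components=2).fit_transform(X)     # first two principal components
      score_img = scores[:, 0].reshape(rows, cols)      # score image of PC1

      # Flag pixels whose PC1 score deviates strongly from the image mean.
      damaged_mask = np.abs(score_img - score_img.mean()) > 2 * score_img.std()
      print("pixels flagged:", int(damaged_mask.sum()))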

  7. Development of Great Lakes algorithms for the Nimbus-G coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Tanis, F. J.; Lyzenga, D. R.

    1981-01-01

    A series of experiments in the Great Lakes designed to evaluate the application of the Nimbus G satellite Coastal Zone Color Scanner (CZCS) were conducted. Absorption and scattering measurement data were reduced to obtain a preliminary optical model for the Great Lakes. Available optical models were used in turn to calculate subsurface reflectances for expected concentrations of chlorophyll-a pigment and suspended minerals. Multiple nonlinear regression techniques were used to derive CZCS water quality prediction equations from Great Lakes simulation data. An existing atmospheric model was combined with a water model to provide the necessary simulation data for evaluation of the preliminary CZCS algorithms. A CZCS scanner model was developed which accounts for image distorting scanner and satellite motions. This model was used in turn to generate mapping polynomials that define the transformation from the original image to one configured in a polyconic projection. Four computer programs (FORTRAN IV) for image transformation are presented.

  8. Developing Image Processing Meta-Algorithms with Data Mining of Multiple Metrics

    PubMed Central

    Cunha, Alexandre; Toga, A. W.; Parker, D. Stott

    2014-01-01

    People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation. PMID:24653748
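
    A small sketch of the general idea follows: score each candidate result with several metrics and mine the resulting table, here simply by average rank, to select the most consistent output. The metrics, data, and ranking rule are chosen only for illustration and are not the paper's meta-algorithm.

      # Sketch of mining multiple metric values over candidate image-processing results:
      # compute several similarity measures against a reference and pick the result with
      # the best average rank across metrics.
      import numpy as np

      def mse(a, b):
          return float(np.mean((a - b) ** 2))

      def ncc(a, b):
          a0, b0 = a - a.mean(), b - b.mean()
          return float(np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0) + 1e-12))

      reference = np.random.rand(64, 64)
      candidates = {"result_A": reference + 0.01 * np.random.rand(64, 64),
                    "result_B": reference + 0.10 * np.random.rand(64, 64),
                    "result_C": np.random.rand(64, 64)}

      # Build the metric table; lower MSE is better, higher NCC is better.
      table = {name: (mse(img, reference), ncc(img, reference)) for name, img in candidates.items()}
      rank_mse = sorted(table, key=lambda n: table[n][0])
      rank_ncc = sorted(table, key=lambda n: -table[n][1])
      avg_rank = {n: (rank_mse.index(n) + rank_ncc.index(n)) / 2.0 for n in table}
      print("best by average rank:", min(avg_rank, key=avg_rank.get))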

  9. Development of a new metal artifact reduction algorithm by using an edge preserving method for CBCT imaging

    NASA Astrophysics Data System (ADS)

    Kim, Juhye; Nam, Haewon; Lee, Rena

    2015-07-01

    In CT (computed tomography) images, metal materials such as tooth supplements or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm by using an edge-preserving filter and the MATLAB program (Mathworks, version R2012a). The proposed algorithm consists of six steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and new image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added into the Shepp-Logan phantom to produce metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. After these data had been acquired, the proposed algorithm was applied, and the results were compared with the original image (with metal artifacts, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were done. Compared with the original image with metal artifacts and with the image corrected by using linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifact. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
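
    The segmentation, forward-projection, and interpolation core of a typical MAR pipeline can be sketched as below on a synthetic phantom (requires scikit-image); the edge-preserving filtering step that distinguishes the proposed method is omitted, and the threshold and geometry are assumptions, so this is not the code evaluated in the paper.

      # Sketch of the segmentation / forward-projection / interpolation steps common to MAR
      # pipelines, on a synthetic phantom. The edge-preserving smoothing step of the proposed
      # algorithm is not included here.
      import numpy as np
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon

      image = shepp_logan_phantom()
      image[200:210, 200:210] = 5.0                         # insert a bright "metal" block
      theta = np.linspace(0.0, 180.0, 180, endpoint=False)
      sino = radon(image, theta=theta)

      metal_mask = image > 2.0                              # 1) segment metal in image space
      metal_trace = radon(metal_mask.astype(float), theta=theta) > 0   # 2) forward-project the mask

      corrected = sino.copy()                               # 3) interpolate across the metal trace
      for j in range(sino.shape[1]):
          bad = metal_trace[:, j]
          if bad.any():
              rows = np.arange(sino.shape[0])
              corrected[bad, j] = np.interp(rows[bad], rows[~bad], sino[~bad, j])

      recon = iradon(corrected, theta=theta)                # 4) reconstruct the corrected sinogram
      print("reconstructed image shape:", recon.shape)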

  10. Algorithm and code development for unsteady three-dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru

    1993-01-01

    In the last two decades, there have been extensive developments in computational aerodynamics, which constitutes a major part of the general area of computational fluid dynamics. Such developments are essential to advance the understanding of the physics of complex flows, to complement expensive wind-tunnel tests, and to reduce the overall design cost of an aircraft, particularly in the area of aeroelasticity. Aeroelasticity plays an important role in the design and development of aircraft, particularly modern aircraft, which tend to be more flexible. Several phenomena that can be dangerous and limit the performance of an aircraft occur because of the interaction of the flow with flexible components. For example, an aircraft with highly swept wings may experience vortex-induced aeroelastic oscillations. Also, undesirable aeroelastic phenomena due to the presence and movement of shock waves occur in the transonic range. Aeroelastically critical phenomena, such as a low transonic flutter speed, have been known to occur through limited wind-tunnel tests and flight tests. Aeroelastic tests require extensive cost and risk. An aeroelastic wind-tunnel experiment is an order of magnitude more expensive than a parallel experiment involving only aerodynamics. By complementing the wind-tunnel experiments with numerical simulations the overall cost of the development of aircraft can be considerably reduced. In order to accurately compute aeroelastic phenomenon it is necessary to solve the unsteady Euler/Navier-Stokes equations simultaneously with the structural equations of motion. These equations accurately describe the flow phenomena for aeroelastic applications. At Ames a code, ENSAERO, is being developed for computing the unsteady aerodynamics and aeroelasticity of aircraft and it solves the Euler/Navier-Stokes equations. The purpose of this contract is to continue the algorithm enhancements of ENSAERO and to apply the code to complicated geometries. During the last year

  11. Development of a new genetic algorithm to solve the feedstock scheduling problem in an anaerobic digester

    NASA Astrophysics Data System (ADS)

    Cram, Ana Catalina

    As worldwide environmental awareness grows, alternative sources of energy have become important to mitigate climate change. Biogas in particular reduces greenhouse gas emissions that contribute to global warming and has the potential to provide 25% of the annual demand for natural gas in the U.S. In 2011, 55,000 metric tons of methane emissions were reduced and 301 metric tons of carbon dioxide emissions were avoided through the use of biogas alone. Biogas is produced by anaerobic digestion through the fermentation of organic material. It is mainly composed of methane, with a concentration ranging from 50 to 80%. Carbon dioxide accounts for 20 to 50%, along with small amounts of hydrogen, carbon monoxide and nitrogen. Biogas production systems are anaerobic digestion facilities, and the optimal operation of an anaerobic digester requires the scheduling of all batches from multiple feedstocks during a specific time horizon. The availability times, biomass quantities, biogas production rates and storage decay rates must all be taken into account for maximal biogas production to be achieved during the planning horizon. Little work has been done to optimize the scheduling of different types of feedstock in anaerobic digestion facilities to maximize the total biogas produced by these systems. Therefore, in the present thesis, a new genetic algorithm is developed to obtain the optimal sequence in which different feedstocks will be processed and the optimal time to allocate to each feedstock in the digester, with the objective of maximizing the production of biogas considering different types of feedstocks, arrival times and decay rates. Moreover, all batches need to be processed in the digester within a specified time, with the restriction that only one batch can be processed at a time. The developed algorithm is applied to 3 different examples and a comparison with results obtained in previous studies is presented.

  12. Developing Multiple Diverse Potential Designs for Heat Transfer Utilizing Graph Based Evolutionary Algorithms

    SciTech Connect

    David J. Muth Jr.

    2006-09-01

    This paper examines the use of graph based evolutionary algorithms (GBEAs) to find multiple acceptable solutions for heat transfer in engineering systems during the optimization process. GBEAs are a type of evolutionary algorithm (EA) in which a topology, or geography, is imposed on an evolving population of solutions. The rates at which solutions can spread within the population are controlled by the choice of topology. As in nature, geography can be used to develop and sustain diversity within the solution population. Altering the choice of graph can create a more or less diverse population of potential solutions. The choice of graph can also affect the convergence rate for the EA and the number of mating events required for convergence. The engineering system examined in this paper is a biomass fueled cookstove used in developing nations for household cooking. In this cookstove wood is combusted in a small combustion chamber and the resulting hot gases are utilized to heat the stove’s cooking surface. The spatial temperature profile of the cooking surface is determined by a series of baffles that direct the flow of hot gases. The optimization goal is to find baffle configurations that provide an even temperature distribution on the cooking surface. Often in engineering, the goal of optimization is not to find the single optimum solution but rather to identify a number of good solutions that can be used as a starting point for detailed engineering design. Because of this, a key aspect of evolutionary optimization is the diversity of the solutions found. The key conclusion in this paper is that GBEAs can be used to create the multiple good solutions needed to support engineering design.
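
    The defining mechanism of a GBEA, mating restricted to graph neighbours, can be shown on a toy one-max problem with a ring topology; the sketch below is illustrative only and is unrelated to the cookstove model.

      # Minimal sketch of the defining mechanism of a graph based EA: each individual sits on
      # a node of a graph (a ring here) and may only mate with its graph neighbours, which
      # slows the spread of genes and helps preserve diversity. Toy one-max objective.
      import random

      N, LENGTH, GENERATIONS = 30, 20, 200
      pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(N)]
      fit = lambda g: sum(g)

      for _ in range(GENERATIONS):
          i = random.randrange(N)                        # pick a node on the ring
          j = random.choice([(i - 1) % N, (i + 1) % N])  # mate only with a ring neighbour
          cut = random.randrange(1, LENGTH)
          child = pop[i][:cut] + pop[j][cut:]            # one-point crossover
          k = random.randrange(LENGTH)                   # point mutation
          child[k] = 1 - child[k]
          # replace the weaker of the two parents if the child is at least as fit
          weaker = i if fit(pop[i]) <= fit(pop[j]) else j
          if fit(child) >= fit(pop[weaker]):
              pop[weaker] = child

      print("best fitness:", max(fit(g) for g in pop), "of", LENGTH)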

  13. Developing a modified SEBAL algorithm that is responsive to advection by using limited weather data

    NASA Astrophysics Data System (ADS)

    Mkhwanazi, Mcebisi

    The use of Remote Sensing ET algorithms in water management, especially for agricultural purposes is increasing, and there are more models being introduced. The Surface Energy Balance Algorithm for Land (SEBAL) and its variant, Mapping Evapotranspiration with Internalized Calibration (METRIC) are some of the models that are being widely used. While SEBAL has several advantages over other RS models, including that it does not require prior knowledge of soil, crop and other ground details, it has the downside of underestimating evapotranspiration (ET) on days when there is advection, which may be in most cases in arid and semi-arid areas. METRIC, however has been modified to be able to account for advection, but in doing so it requires hourly weather data. In most developing countries, while accurate estimates of ET are required, the weather data necessary to use METRIC may not be available. This research therefore was meant to develop a modified version of SEBAL that would require minimal weather data that may be available in these areas, and still estimate ET accurately. The data that were used to develop this model were minimum and maximum temperatures, wind data, preferably the run of wind in the afternoon, and wet bulb temperature. These were used to quantify the advected energy that would increase ET in the field. This was a two-step process; the first was developing the model for standard conditions, which was described as a healthy cover of alfalfa, 40-60 cm tall and not short of water. Under standard conditions, when estimated ET using modified SEBAL was compared with lysimeter-measured ET, the modified SEBAL model had a Mean Bias Error (MBE) of 2.2 % compared to -17.1 % from the original SEBAL. The Root Mean Square Error (RMSE) was lower for the modified SEBAL model at 10.9 % compared to 25.1 % for the original SEBAL. The modified SEBAL model, developed on an alfalfa field in Rocky Ford, was then tested on other crops; beans and wheat. It was also tested on

  14. Development of a Near-Real Time Hail Damage Swath Identification Algorithm for Vegetation

    NASA Technical Reports Server (NTRS)

    Bell, Jordan R.; Molthan, Andrew L.; Schultz, Lori A.; McGrath, Kevin M.; Burks, Jason E.

    2015-01-01

    The Midwest is home to one of the world's largest agricultural growing regions. During the period from late May through early September, and with irrigation and seasonal rainfall, these crops are able to reach full maturity. Using moderate- to high-resolution remote sensors, the vegetation can be monitored using the red and near-infrared wavelengths. These wavelengths allow for the calculation of vegetation indices, such as the Normalized Difference Vegetation Index (NDVI). Vegetation growth and greenness in this region evolve uniformly as the growing season progresses. However, one of the biggest threats to Midwest vegetation during this period is thunderstorms that bring large hail and damaging winds. Hail and wind damage to crops can be very expensive to crop growers, and damage can be spread over long swaths associated with the tracks of the damaging storms. Damage to the vegetation can be apparent in remotely sensed imagery and is visible from space: changes occur slowly over time as the crops wilt after light damage, or appear more readily if the storms strip material from the crops or destroy them completely. Previous work on identifying these hail damage swaths used manual interpretation of moderate- and higher-resolution satellite imagery. With the development of an automated, near-real-time hail damage swath identification algorithm, detection can be improved and more damage indicators can be created in a faster and more efficient way. The automated detection of hail damage swaths will examine short-term, large changes in the vegetation by differencing near-real-time eight-day NDVI composites and comparing them to post-storm imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard Terra and Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard Suomi NPP. In addition, land surface temperatures from these instruments will be examined as
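
    The core change computation can be sketched as an NDVI difference with a drop threshold, as below; the reflectance arrays and the 0.2 threshold are synthetic stand-ins, not the operational composites or a calibrated threshold.

      # Sketch of the NDVI-differencing step behind a hail damage swath detector: compute
      # NDVI before and after a storm and flag pixels whose NDVI dropped by more than a
      # chosen amount. All values here are synthetic.
      import numpy as np

      def ndvi(nir, red):
          return (nir - red) / (nir + red + 1e-12)

      red_pre,  nir_pre  = np.full((100, 100), 0.05), np.full((100, 100), 0.45)
      red_post, nir_post = red_pre.copy(), nir_pre.copy()
      nir_post[40:60, 10:90] = 0.20                      # synthetic damage swath: NIR collapses

      drop = ndvi(nir_pre, red_pre) - ndvi(nir_post, red_post)
      damage_mask = drop > 0.2                           # hypothetical change threshold
      print("pixels flagged as possible hail damage:", int(damage_mask.sum()))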

  15. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Telban and Cardullo have developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees of freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm, which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to the offset of the centroid of the cockpit relative to the center of rotation alone. Results provided in this report suggest improved performance of the motion cueing algorithm.
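
    The kinematic relation that motivates filtering at the pilot's station rather than at the platform centroid is a_head = a_centroid + alpha x r + omega x (omega x r), where r is the head offset from the centroid; the sketch below evaluates it for invented numbers and is not the NASA implementation.

      # Rigid-body relation for the acceleration sensed at the pilot's head given the
      # acceleration at the platform centroid, angular rate, and angular acceleration.
      import numpy as np

      def acceleration_at_head(a_centroid, omega, alpha, r_head):
          """All vectors in the body frame; omega and alpha are angular rate and acceleration."""
          return (np.asarray(a_centroid)
                  + np.cross(alpha, r_head)
                  + np.cross(omega, np.cross(omega, r_head)))

      a_c   = [0.0, 0.0, -9.81]        # acceleration sensed at the centroid (m/s^2)
      omega = [0.0, 0.2, 0.0]          # pitch rate (rad/s)
      alpha = [0.0, 0.5, 0.0]          # pitch acceleration (rad/s^2)
      r     = [1.2, 0.0, -0.8]         # assumed pilot head offset from the centroid (m)
      print(acceleration_at_head(a_c, omega, alpha, r))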

  16. Development of the algorithm for life for the search for extraterrestrial life

    NASA Astrophysics Data System (ADS)

    Kolb, Vera M.

    2013-09-01

    We first introduce a concept of algorithms in a form which is useful to astrobiology. We follow Dennett's description of algorithms, which he has used to introduce the idea that evolution takes place via natural selection in an algorithmic process. We then bring up various examples and principles of evolution, including inventive evolution for the biosynthesis of secondary metabolites, and propose them as candidates for constituting evolutionary algorithms. Finally, we discuss philosophy papers of Rescher about extraterrestrials and their science and attempt to extract from them some generalized principles for the search for extraterrestrial life.

  17. Development of the Tardivo Algorithm to Predict Amputation Risk of Diabetic Foot

    PubMed Central

    Tardivo, João Paulo; Baptista, Maurício S.; Correa, João Antonio; Adami, Fernando; Pinhal, Maria Aparecida Silva

    2015-01-01

    Diabetes is a chronic disease that affects almost 19% of the elderly population in Brazil and similar percentages around the world. Amputation of lower limbs in diabetic patients who present foot complications is a common occurrence, with a significant reduction of life quality and heavy costs on the health system. Unfortunately, there is no easy protocol to define the conditions that should be considered to proceed to amputation. The main objective of the present study is to create a simple prognostic score to evaluate the diabetic foot, which is called the Tardivo Algorithm. Calculation of the score is based on three main factors: Wagner classification, signs of peripheral arterial disease (PAD), which is evaluated by using the Peripheral Arterial Disease Classification, and the location of ulcers. The final score is obtained by multiplying the values of the individual factors. Patients with good peripheral vascularization received a value of 1, while clinical signs of ischemia received a value of 2 (PAD 2). Ulcer location was defined as forefoot, midfoot and hindfoot. The conservative treatment used in patients with scores below 12 was based on a recently developed Photodynamic Therapy (PDT) protocol; 85.5% of these patients presented a good outcome and avoided amputation. The results showed that scores of 12 or higher represented a significantly higher probability of amputation (odds ratio and logistic regression, 95% CI 12.2–1886.5). The Tardivo Algorithm is a simple prognostic score for the diabetic foot, easily accessible by physicians. It helps to determine the amputation risk and the best treatment, whether it is conservative or surgical management. PMID:26281044
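
    Following the description above, the score is the product of the Wagner grade, a PAD factor (1 for good perfusion, 2 for clinical ischemia), and an ulcer-location value; the numeric location weights and the example patient in the sketch below are assumptions made for illustration, not values taken from the study.

      # Direct sketch of the score construction described in the abstract. The Wagner grade
      # and PAD factor follow the text; the numeric weights for ulcer location are assumed
      # here for illustration only.
      LOCATION_WEIGHT = {"forefoot": 1, "midfoot": 2, "hindfoot": 3}   # assumed weights

      def tardivo_score(wagner_grade, ischemia_present, ulcer_location):
          pad_factor = 2 if ischemia_present else 1
          return wagner_grade * pad_factor * LOCATION_WEIGHT[ulcer_location]

      score = tardivo_score(wagner_grade=3, ischemia_present=True, ulcer_location="midfoot")
      print(score, "-> higher amputation risk" if score >= 12 else "-> conservative treatment candidate")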

  18. The development of a near-real time hail damage swath identification algorithm for vegetation

    NASA Astrophysics Data System (ADS)

    Bell, Jordan R.

    The central United States is primarily covered in agricultural lands with a growing season that peaks during the same time as the region's climatological maximum for severe weather. These severe thunderstorms can bring large hail that can cause extensive areas of crop damage, which can be difficult to survey from the ground. Satellite remote sensing can help with the identification of these damaged areas. This study examined three techniques for identifying damage using satellite imagery that could be used in the development of a near-real time algorithm formulated for the detection of damage to agriculture caused by hail. The three techniques: a short term Normalized Difference Vegetation Index (NDVI) change product, a modified Vegetation Health Index (mVHI) that incorporates both NDVI and land surface temperature (LST), and a feature detection technique based on NDVI and LST anomalies were tested on a single training case and five case studies. Skill scores were computed for each of the techniques during the training case and each case study. Among the best-performing case studies, the probability of detection (POD) for the techniques ranged from 0.527 - 0.742. Greater skill was noted for environments that occurred later in the growing season over areas where the land cover was consistently one or two types of uniform vegetation. The techniques struggled in environments where the land cover was not able to provide uniform vegetation, resulting in POD of 0.067 - 0.223. The feature detection technique was selected to be used for the near-real-time algorithm, based on the consistent performance throughout the entire growing season.

  19. Development and validation of a simple algorithm for initiation of CPAP in neonates with respiratory distress in Malawi

    PubMed Central

    Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth

    2015-01-01

    Background Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress, in developing countries including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory Distress and Y=Yes) CPAP, a simple algorithm to identify neonates with respiratory distress who would benefit from CPAP. Objective To validate the TRY CPAP algorithm for neonates with respiratory distress in a low-resource setting. Methods We constructed an algorithm using a combination of vital signs, tone and birth weight to determine the need for CPAP in neonates with respiratory distress. Neonates admitted to the neonatal ward of Queen Elizabeth Central Hospital, in Blantyre, Malawi, were assessed in a prospective, cross-sectional study. Nurses and paediatricians-in-training assessed neonates to determine whether they required CPAP using the TRY CPAP algorithm. To establish the accuracy of the TRY CPAP algorithm in evaluating the need for CPAP, their assessment was compared with the decision of a neonatologist blinded to the TRY CPAP algorithm findings. Results 325 neonates were evaluated over a 2-month period; 13% were deemed to require CPAP by the neonatologist. The inter-rater reliability with the algorithm was 0.90 for nurses and 0.97 for paediatricians-in-training using the neonatologist's assessment as the reference standard. Conclusions The TRY CPAP algorithm has the potential to be a simple and reliable tool to assist nurses and clinicians in identifying neonates who require treatment with CPAP in low-resource settings. PMID:25877290

  1. Day Care Babies Catch Stomach Bugs Earlier, but Get Fewer Later

    MedlinePlus

    https://medlineplus.gov/news/fullstory_158513.html -- TUESDAY, April 26, 2016 (HealthDay News) -- Babies in day care catch their first stomach bug earlier than ...

  2. Ice surface temperature retrieval from AVHRR, ATSR, and passive microwave satellite data: Algorithm development and application

    NASA Technical Reports Server (NTRS)

    Key, Jeff; Maslanik, James; Steffen, Konrad

    1994-01-01

    One essential parameter used in the estimation of radiative and turbulent heat fluxes from satellite data is surface temperature. Sea and land surface temperature (SST and LST) retrieval algorithms that utilize the thermal infrared portion of the spectrum have been developed, with the degree of success dependent primarily upon the variability of the surface and atmospheric characteristics. However, little effort has been directed to the retrieval of the sea ice surface temperature (IST) in the Arctic and Antarctic pack ice or the ice sheet surface temperature over Antarctica and Greenland. The reason is not one of methodology, but rather our limited knowledge of atmospheric temperature, humidity, and aerosol vertical, spatial and temporal distributions, the microphysical properties of polar clouds, and the spectral characteristics of snow, ice, and water surfaces. Over the open ocean the surface is warm, dark, and relatively homogeneous. This makes SST retrieval, including cloud clearing, a fairly straightforward task. Over the ice, however, the surface within a single satellite pixel is likely to be highly heterogeneous, a mixture of ice of various thicknesses, open water, and snow cover in the case of sea ice. Additionally, the Arctic is cloudy - very cloudy - with typical cloud cover amounts ranging from 60-90 percent. There are few observations of cloud cover amounts over Antarctica. The goal of this research is to increase our knowledge of surface temperature patterns and magnitudes in both polar regions, by examining existing data and improving our ability to use satellite data as a monitoring tool. Four instruments are of interest in this study: the AVHRR, ATSR, SMMR, and SSM/I. Our objectives are as follows. Refine the existing AVHRR retrieval algorithm defined in Key and Haefliger (1992; hereafter KH92) and applied elsewhere. Develop a method for IST retrieval from ATSR data similar to the one used for SST. Further investigate the possibility of estimating
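
    Split-window retrievals of this kind typically regress surface temperature on the 11 um brightness temperature, the 11-12 um channel difference, and a scan-angle term; the sketch below shows that generic form with placeholder coefficients, not the KH92 ice surface temperature coefficients.

      # Generic split-window sketch of a surface temperature regression from two thermal
      # channels (e.g. AVHRR channels 4 and 5). Coefficients a..d are placeholders for
      # illustration only.
      import math

      def split_window_ist(t11_k, t12_k, sensor_zenith_deg, a=1.0, b=1.5, c=0.5, d=-2.0):
          """IST = d + a*T11 + b*(T11 - T12) + c*(T11 - T12)*(sec(theta) - 1), placeholder a..d."""
          s = 1.0 / math.cos(math.radians(sensor_zenith_deg)) - 1.0
          dt = t11_k - t12_k
          return d + a * t11_k + b * dt + c * dt * s

      print(round(split_window_ist(250.0, 249.2, 30.0), 2), "K")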

  3. Calibration and algorithm development for estimation of nitrogen in wheat crop using tractor mounted N-sensor.

    PubMed

    Singh, Manjeet; Kumar, Rajneesh; Sharma, Ankit; Singh, Bhupinder; Thind, S K

    2015-01-01

    The experiment was planned to investigate the tractor mounted N-sensor (Make Yara International) to predict nitrogen (N) for wheat crop under different nitrogen levels. It was observed that, for tractor mounted N-sensor, spectrometers can scan about 32% of total area of crop under consideration. An algorithm was developed using a linear relationship between sensor sufficiency index (SIsensor) and SISPAD to calculate the Napp as a function of SISPAD. There was a strong correlation among sensor attributes (sensor value, sensor biomass, and sensor NDVI) and different N-levels. It was concluded that tillering stage is most prominent stage to predict crop yield as compared to the other stages by using sensor attributes. The algorithms developed for tillering and booting stages are useful for the prediction of N-application rates for wheat crop. N-application rates predicted by algorithm developed and sensor value were almost the same for plots with different levels of N applied. PMID:25811039
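
    The two-step logic described, relating the sensor sufficiency index to the SPAD sufficiency index by a linear fit and then mapping the sufficiency deficit to an N rate, can be sketched as follows; every coefficient below is a placeholder rather than a value calibrated in the study.

      # Hedged sketch of sufficiency-index-based N recommendation: (1) assumed linear
      # relation between sensor SI and SPAD SI, (2) more N where the crop falls further
      # below the well-fertilized reference (SI = 1). All coefficients are placeholders.
      def si_spad_from_sensor(si_sensor, slope=1.05, intercept=-0.03):
          """Assumed linear relation SI_SPAD = slope * SI_sensor + intercept."""
          return slope * si_sensor + intercept

      def n_application_kg_ha(si_spad, n_max=120.0):
          """Scale the recommendation with the sufficiency deficit, saturating at n_max."""
          deficit = max(0.0, 1.0 - si_spad)
          return min(n_max, n_max * deficit / 0.3)

      si_sensor = 0.85                                   # plot NDVI relative to a reference strip
      print(round(n_application_kg_ha(si_spad_from_sensor(si_sensor)), 1), "kg N/ha")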

  4. Calibration and Algorithm Development for Estimation of Nitrogen in Wheat Crop Using Tractor Mounted N-Sensor

    PubMed Central

    Singh, Manjeet; Kumar, Rajneesh; Sharma, Ankit; Singh, Bhupinder; Thind, S. K.

    2015-01-01

    The experiment was planned to investigate the tractor-mounted N-sensor (make: Yara International) for predicting nitrogen (N) in wheat crop under different nitrogen levels. It was observed that, for the tractor-mounted N-sensor, the spectrometers can scan about 32% of the total crop area under consideration. An algorithm was developed using a linear relationship between the sensor sufficiency index (SIsensor) and SISPAD to calculate the N application rate (Napp) as a function of SISPAD. There was a strong correlation between the sensor attributes (sensor value, sensor biomass, and sensor NDVI) and the different N-levels. It was concluded that the tillering stage is the most suitable stage, compared with the other stages, for predicting crop yield from sensor attributes. The algorithms developed for the tillering and booting stages are useful for predicting N-application rates for wheat crop. N-application rates predicted by the developed algorithm and by the sensor value were almost the same for plots with different levels of N applied. PMID:25811039

  5. Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records

    PubMed Central

    MacRae, J; Darlow, B; McBain, L; Jones, O; Stubbe, M; Turner, N; Dowell, A

    2015-01-01

    Objective: To develop a natural language processing software inference algorithm to classify the content of primary care consultations using electronic health record Big Data, and subsequently to test the algorithm's ability to estimate the prevalence and burden of childhood respiratory illness in primary care. Design: Algorithm development and validation study. To classify consultations, the algorithm is designed to interrogate the clinical narrative entered as free text, the diagnostic (Read) codes created and the medications prescribed on the day of the consultation. Setting: Thirty-six consenting primary care practices from a mixed urban and semirural region of New Zealand. Three independent sets of 1200 child consultation records were randomly extracted from a data set of all general practitioner consultations in participating practices between 1 January 2008 and 31 December 2013 for children under 18 years of age (n=754 242). Each consultation record within these sets was independently classified by two expert clinicians as respiratory or non-respiratory, and subclassified according to respiratory diagnostic categories to create three ‘gold standard’ sets of classified records. These three gold standard record sets were used to train, test and validate the algorithm. Outcome measures: Sensitivity, specificity, positive predictive value and F-measure were calculated to illustrate the algorithm's ability to replicate the judgements of expert clinicians within the 1200-record gold standard validation set. Results: The algorithm was able to identify respiratory consultations in the 1200-record validation set with a sensitivity of 0.72 (95% CI 0.67 to 0.78) and a specificity of 0.95 (95% CI 0.93 to 0.98). The positive predictive value of algorithm respiratory classification was 0.93 (95% CI 0.89 to 0.97). The positive predictive value of the algorithm classifying consultations as being related to specific respiratory diagnostic categories ranged from 0.68 (95% CI 0.40 to 1.00; other
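
    For reference, the reported outcome measures can be computed from a confusion matrix as in this short sketch (the counts below are illustrative only, not the study's data):

        def classification_metrics(tp, fp, tn, fn):
            """Sensitivity, specificity, positive predictive value and F-measure
            from the confusion counts used to compare an algorithm with a
            clinician-classified gold standard."""
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            ppv = tp / (tp + fp)
            f_measure = 2 * ppv * sensitivity / (ppv + sensitivity)
            return sensitivity, specificity, ppv, f_measure

        # Illustrative counts only
        print(classification_metrics(tp=260, fp=20, tn=880, fn=40))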

  6. Development of fast line scanning imaging algorithm for diseased chicken detection

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Chao, Kuanglin; Chen, Yud-Ren; Kim, Moon S.

    2005-11-01

    A hyperspectral line-scan imaging system for automated inspection of wholesome and diseased chickens was developed and demonstrated. The hyperspectral imaging system consisted of an electron-multiplying charge-coupled-device (EMCCD) camera and an imaging spectrograph. The system used a spectrograph to collect spectral measurements across a pixel-wide vertical linear field of view through which moving chicken carcasses passed. After a series of image calibration procedures, the hyperspectral line-scan images were collected for chickens on a laboratory simulated processing line. From spectral analysis, four key wavebands for differentiating between wholesome and systemically diseased chickens were selected: 413 nm, 472 nm, 515 nm, and 546 nm, and a reference waveband, 622 nm. The ratio of relative reflectance between each key wavelength and the reference wavelength was calculated as an image feature. A fuzzy logic-based algorithm utilizing the key wavebands was developed to identify individual pixels on the chicken surface exhibiting symptoms of systemic disease. Two differentiation methods were built to successfully differentiate 72 systemically diseased chickens from 65 wholesome chickens.
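
    A short sketch of the band-ratio image feature described above, assuming a hypothetical reflectance cube with a known band-to-wavelength mapping; the fuzzy classification step itself is not reproduced here:

        import numpy as np

        KEY_NM = [413, 472, 515, 546]
        REF_NM = 622

        def band_ratio_features(cube, wavelengths_nm):
            """Ratio of relative reflectance at each key waveband to the 622 nm
            reference band, computed per pixel."""
            wl = list(wavelengths_nm)
            ref = cube[:, :, wl.index(REF_NM)].astype(float)
            return np.stack([cube[:, :, wl.index(k)] / ref for k in KEY_NM], axis=-1)

        cube = np.random.rand(4, 4, 5) + 0.1                      # toy data
        features = band_ratio_features(cube, [413, 472, 515, 546, 622])
        print(features.shape)                                      # (4, 4, 4)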

  7. Development and verification of algorithms for spacecraft formation flight using the SPHERES testbed: application to TPF

    NASA Astrophysics Data System (ADS)

    Kong, Edmund M.; Hilstad, Mark O.; Nolet, Simon; Miller, David W.

    2004-10-01

    The MIT Space Systems Laboratory and Payload Systems Inc. have developed the SPHERES testbed for NASA and DARPA as a risk-tolerant medium for the development and maturation of spacecraft formation flight and docking algorithms. The testbed, which is designed to operate both onboard the International Space Station and on the ground, provides researchers with a unique long-term, replenishable, and upgradeable platform for the validation of high-risk control and autonomy technologies critical to the operation of distributed spacecraft missions such as the proposed formation flying interferometer version of Terrestrial Planet Finder (TPF). In November 2003, a subset of the key TPF-like maneuvers was performed onboard NASA's KC-135 microgravity facility, followed by 2-D demonstrations of two- and three-spacecraft maneuvers at the Marshall Space Flight Center (MSFC) in June 2004. Due to the short experiment duration, only elements of a TPF lost-in-space maneuver were implemented and validated. The longer experiment time at the MSFC flat-floor facility allows more elaborate maneuvers such as array spin-up/spin-down, array resizing, and array rotation to be tested, but in a less representative environment. The results obtained from these experiments are presented, together with the basic estimator and control building blocks used in them.

  8. Development of algorithms and approximations for rapid operational air quality modelling

    NASA Astrophysics Data System (ADS)

    Barrett, Steven R. H.; Britter, Rex E.

    In regulatory and public health contexts the long-term average pollutant concentration in the vicinity of a source is frequently of interest. Well-developed modelling tools such as AERMOD and ADMS are able to generate time-series air quality estimates of considerable accuracy, applying an up-to-date understanding of atmospheric boundary layer behaviour. However, such models incur a significant computational cost with runtimes of hours to days. These approaches are often acceptable when considering a single industrial complex, but for widespread policy analyses the computational cost rapidly becomes intractable. In this paper we present some mathematical techniques and algorithmic approaches that can make air quality estimates several orders of magnitude faster. We show that, for long-term average concentrations, lateral dispersion need not be accounted for explicitly. This is applied to a simple reference case of a ground-level point source in a neutral boundary layer. A scaling law is also developed for the area in exceedance of a regulatory limit value.
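
    The paper's own scaling law is not reproduced in the abstract; as a rough illustration of why lateral dispersion drops out of long-term averages, the sketch below uses the textbook sector-average form for a ground-level point source, with a hypothetical neutral-class sigma-z power law rather than the authors' parameterization:

        import math

        def sector_average_conc(q_g_s, x_m, u_m_s, wind_freq, sector_rad, a=0.06, b=0.92):
            """Long-term sector-averaged concentration (g/m^3) for a ground-level
            point source: lateral spread is replaced by uniform smearing of the
            plume over the sector arc x*sector_rad. sigma_z = a*x**b is a
            hypothetical neutral-class power law."""
            sigma_z = a * x_m ** b
            return math.sqrt(2.0 / math.pi) * q_g_s * wind_freq / (
                u_m_s * sigma_z * x_m * sector_rad)

        # 1 g/s source, receptor 500 m downwind, 5 m/s wind, 20% of hours in a 30-degree sector
        print(sector_average_conc(1.0, 500.0, 5.0, 0.2, math.radians(30.0)))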

  9. Examination of a genetic algorithm for the application in high-throughput downstream process development.

    PubMed

    Treier, Katrin; Berg, Annette; Diederich, Patrick; Lang, Katharina; Osberghaus, Anna; Dismer, Florian; Hubbuch, Jürgen

    2012-10-01

    Compared to traditional strategies, application of high-throughput experiments combined with optimization methods can potentially speed up downstream process development and increase our understanding of processes. In contrast to the method of Design of Experiments in combination with response surface analysis (RSA), optimization approaches like genetic algorithms (GAs) can be applied to identify optimal parameter settings in multidimensional optimization tasks. In this article, the performance of a GA was investigated using parameter settings applicable to high-throughput downstream process development. The influence of population size, the design of the initial generation and selection pressure on the optimization results was studied. To mimic typical experimental data, four mathematical functions were used for an in silico evaluation. The influence of GA parameters was minor on landscapes with only one optimum. On landscapes with several optima, parameters had a significant impact on GA performance and success in finding the global optimum. Premature convergence increased as the number of parameters and noise increased. RSA was shown to be comparable or superior for simple systems and low to moderate noise. For complex systems or high noise levels, RSA failed, while GA optimization represented a robust tool for process optimization. Finally, the effect of different objective functions is illustrated for a refolding optimization of lysozyme. PMID:22700464
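
    A minimal real-coded GA sketch exposing the parameters the study varies (population size, the design of the initial generation, and selection pressure via tournament size); the operators and toy landscape are illustrative, not the study's implementation:

        import random

        def genetic_algorithm(fitness, bounds, pop_size=30, generations=50,
                              tournament_k=3, mut_sigma=0.1, seed=0):
            """Minimal real-coded GA (minimization). pop_size, the uniform random
            initial generation and tournament size (selection pressure) are the
            kinds of parameters whose influence is studied above."""
            rng = random.Random(seed)
            pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
            for _ in range(generations):
                new_pop = sorted(pop, key=fitness)[:2]                 # elitism
                while len(new_pop) < pop_size:
                    parents = [min(rng.sample(pop, tournament_k), key=fitness)
                               for _ in range(2)]
                    child = [rng.choice(pair) for pair in zip(*parents)]   # uniform crossover
                    child = [min(max(g + rng.gauss(0.0, mut_sigma), lo), hi)
                             for g, (lo, hi) in zip(child, bounds)]        # Gaussian mutation
                    new_pop.append(child)
                pop = new_pop
            return min(pop, key=fitness)

        # Toy two-parameter landscape with a single optimum at (1, 2)
        best = genetic_algorithm(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2,
                                 bounds=[(-5, 5), (-5, 5)])
        print(best)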

  10. Fundamental analysis and algorithms for development of a mobile fast-scan lateral migration radiography system

    NASA Astrophysics Data System (ADS)

    Su, Zhong

    developed and applied to acquired images to eliminate the undesirable effects. The images acquired with this system also have the general characteristics of LMR images: (1) displacement of object image center from the true object center exists for subsurface objects in the collimated detector images; (2) shadowing effects occur for objects that protrude above the scanned surface; (3) scanned objects with air volumes present greater contrast in the acquired images than those without air volumes. Image processing and object recognition algorithms are developed and applied to the LMR images to enhance the image quality, to remove surface clutter, and to obtain depth information of subsurface objects. The physical analysis of the x-ray beam rotating collimator and the development of the corresponding mobile fast-scan LMR system and its image acquisition and processing algorithms show that LMR is a proven technique for fast, mobile object surface and subsurface examination.

  11. Development of an Innovative Algorithm for Aerodynamics-Structure Interaction Using Lattice Boltzmann Method

    NASA Technical Reports Server (NTRS)

    Mei, Ren-Wei; Shyy, Wei; Yu, Da-Zhi; Luo, Li-Shi; Rudy, David (Technical Monitor)

    2001-01-01

    The lattice Boltzmann equation (LBE) is a kinetic formulation which offers an alternative computational method capable of solving fluid dynamics for various systems. Major advantages of the method are that the solution for the particle distribution functions is explicit and easy to implement, and that the algorithm parallelizes naturally. In this final report, we summarize the work accomplished in the past three years. Since most of the work has been published, the technical details can be found in the literature; only a brief summary is provided here. In this project, a second-order accurate treatment of boundary conditions in the LBE method is developed for curved boundaries and tested successfully in various 2-D and 3-D configurations. To evaluate the aerodynamic force on a body in the context of the LBE method, several force evaluation schemes have been investigated. A simple momentum exchange method is shown to give reliable and accurate values for the force on a body in both 2-D and 3-D cases. Various 3-D LBE models have been assessed in terms of efficiency, accuracy, and robustness. In general, accurate 3-D results can be obtained using LBE methods. The 3-D 19-bit model is found to be the best one among the 15-bit, 19-bit, and 27-bit LBE models. To achieve the desired grid resolution and to accommodate the far-field boundary conditions in aerodynamics computations, a multi-block LBE method is developed by dividing the flow field into various blocks, each having constant lattice spacing. Substantial contribution to the LBE method is also made through the development of a new, generalized lattice Boltzmann equation constructed in the moment space in order to improve the computational stability, detailed theoretical analysis on the stability, dispersion, and dissipation characteristics of the LBE method, and computational studies of high Reynolds number flows with singular gradients. Finally, a finite difference-based lattice Boltzmann method is
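
    A sketch of the simple momentum-exchange force evaluation mentioned above for a resting body on a D2Q9 lattice; the link bookkeeping (which fluid-node directions cross the boundary) is assumed to be supplied by the caller, and the example data are toy values:

        import numpy as np

        # D2Q9 lattice velocities
        E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]])

        def momentum_exchange_force(f_post, cut_links):
            """Momentum-exchange force on a resting solid body (D2Q9 sketch).
            f_post: post-collision distributions, shape (nx, ny, 9).
            cut_links: iterable of (ix, iy, i) fluid-node/direction pairs whose
            link crosses the boundary. With simple bounce-back at a stationary
            wall, each link transfers 2*f_i*e_i of momentum per time step."""
            force = np.zeros(2)
            for ix, iy, i in cut_links:
                force += 2.0 * f_post[ix, iy, i] * E[i]
            return force

        # Toy example: one boundary link pointing in +x
        f = np.full((4, 4, 9), 1.0 / 9.0)
        print(momentum_exchange_force(f, [(1, 2, 1)]))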

  12. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1994-01-01

    During CY 1994 there are five objectives under this task: (1) investigate the effects of stratospheric aerosol on the proposed correction algorithm, and investigate the use of the 1380 nm MODIS band to remove the stratospheric aerosol perturbation; (2) investigate the effect of vertical structure in aerosol concentration and type on the behavior of the proposed correction algorithm; (3) investigate the effects of polarization on the accuracy of the algorithm; (4) improve the accuracy and speed of the existing algorithm; and (5) investigate removal of the O2 'A' absorption band at 762 nm from the 765 nm SeaWiFS band so the latter can be used in atmospheric correction of SeaWiFS. The importance of this to MODIS is that SeaWiFS data will be used extensively to test and improve the MODIS algorithm. Thus it is essential that the O2 absorption be adequately dealt with for SeaWiFS.

  13. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy.

    PubMed

    Zhong, Hualiang; Adams, Jeffrey; Glide-Hurst, Carri; Zhang, Hualin; Li, Haisen; Chetty, Indrin J

    2016-01-01

    Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in the clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor-driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass- and energy-congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms to within 1.8% at the center of the tumor volume and within 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well with the TLD measurements, with differences <2.5% at the four measured locations. When the VelocityAI DVF was used, the difference increased to up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements, which is slightly over the 5% threshold for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung tissues, supporting

  14. Utilization of Ancillary Data Sets for Conceptual SMAP Mission Algorithm Development and Product Generation

    NASA Technical Reports Server (NTRS)

    O'Neill, P.; Podest, E.

    2011-01-01

    The planned Soil Moisture Active Passive (SMAP) mission is one of the first Earth observation satellites being developed by NASA in response to the National Research Council's Decadal Survey, Earth Science and Applications from Space: National Imperatives for the Next Decade and Beyond [1]. Scheduled to launch late in 2014, the proposed SMAP mission would provide high resolution and frequent revisit global mapping of soil moisture and freeze/thaw state, utilizing enhanced Radio Frequency Interference (RFI) mitigation approaches to collect new measurements of the hydrological condition of the Earth's surface. The SMAP instrument design incorporates an L-band radar (3 km) and an L band radiometer (40 km) sharing a single 6-meter rotating mesh antenna to provide measurements of soil moisture and landscape freeze/thaw state [2]. These observations would (1) improve our understanding of linkages between the Earth's water, energy, and carbon cycles, (2) benefit many application areas including numerical weather and climate prediction, flood and drought monitoring, agricultural productivity, human health, and national security, (3) help to address priority questions on climate change, and (4) potentially provide continuity with brightness temperature and soil moisture measurements from ESA's SMOS (Soil Moisture Ocean Salinity) and NASA's Aquarius missions. In the planned SMAP mission prelaunch time frame, baseline algorithms are being developed for generating (1) soil moisture products both from radiometer measurements on a 36 km grid and from combined radar/radiometer measurements on a 9 km grid, and (2) freeze/thaw products from radar measurements on a 3 km grid. These retrieval algorithms need a variety of global ancillary data, both static and dynamic, to run the retrieval models, constrain the retrievals, and provide flags for indicating retrieval quality. The choice of which ancillary dataset to use for a particular SMAP product would be based on a number of factors

  15. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy

    PubMed Central

    Zhong, Hualiang; Adams, Jeffrey; Glide-Hurst, Carri; Zhang, Hualin; Li, Haisen; Chetty, Indrin J.

    2016-01-01

    Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in the clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor-driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass- and energy-congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms to within 1.8% at the center of the tumor volume and within 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well with the TLD measurements, with differences <2.5% at the four measured locations. When the VelocityAI DVF was used, the difference increased to up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements, which is slightly over the 5% threshold for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung tissues, supporting

  16. Development and verification of an analytical algorithm to predict absorbed dose distributions in ocular proton therapy using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Koch, Nicholas C.; Newhauser, Wayne D.

    2010-02-01

    Proton beam radiotherapy is an effective and non-invasive treatment for uveal melanoma. Recent research efforts have focused on improving the dosimetric accuracy of treatment planning and overcoming the present limitation of relative analytical dose calculations. Monte Carlo algorithms have been shown to accurately predict dose per monitor unit (D/MU) values, but this has yet to be shown for analytical algorithms dedicated to ocular proton therapy, which are typically less computationally expensive than Monte Carlo algorithms. The objective of this study was to determine if an analytical method could predict absolute dose distributions and D/MU values for a variety of treatment fields like those used in ocular proton therapy. To accomplish this objective, we used a previously validated Monte Carlo model of an ocular nozzle to develop an analytical algorithm to predict three-dimensional distributions of D/MU values from pristine Bragg peaks and therapeutically useful spread-out Bragg peaks (SOBPs). Results demonstrated generally good agreement between the analytical and Monte Carlo absolute dose calculations. While agreement in the proximal region decreased for beams with less penetrating Bragg peaks compared with the open-beam condition, the difference was shown to be largely attributable to edge-scattered protons. A method for including this effect in any future analytical algorithm was proposed. Comparisons of D/MU values showed typical agreement to within 0.5%. We conclude that analytical algorithms can be employed to accurately predict absolute proton dose distributions delivered by an ocular nozzle.

  17. Changing the seasonality of an Arctic tundra ecosystem: earlier snowmelt and warmer temperatures

    NASA Astrophysics Data System (ADS)

    Steltzer, H.; Weintraub, M. N.; Darrouzet-Nardi, A.; Melle, C.; Segal, A.; Sullivan, P.; Landry, C.; Wallenstein, M. D.

    2010-12-01

    In the Arctic and around the world, earlier plant growth is an indication that warmer temperatures or other global changes are changing the seasonality of the Earth’s ecosystems. To determine how changes in seasonality affect plant life histories and biogeochemical cycles in tussock tundra, we established a factorial experiment that includes two approaches to changing the seasonality of this ecosystem. In early May, we placed radiation absorbing fabric on the snow surface to accelerate the timing of snowmelt. We monitored the rate of snowmelt over a 10 day period and removed the fabric on the 10th day when the accelerated plots were 80% snowfree. Instrument arrays placed in the plots collected daily data that characterize an increase in energy absorption in these snowfree areas over the 4 day period prior to when control areas were snowfree. In addition, when the plots became snowfree we placed open-top-chambers in areas with and without accelerated snowmelt. The chambers increased air temperatures especially during mid-day early in the growing season. The instrument arrays included light sensors to monitor the plant community life history by observing surface greenness. Our results suggest that the plant community initiated growth earlier when snowmelt occurred earlier and that warming speeded the development of the plant canopy. However, plant species’ life history responses to these changes in seasonality were variable. Experimental alteration of the timing of plant life history events will provide a useful tool to examine controls on the seasonality of biogeochemical processes, such as nutrient availability to plants and nutrient limitation of decomposition. Accelerated snowmelt and warmer temperatures in tussock tundra, AK.

  18. Development and Validation of a Portable Platform for Deploying Decision-Support Algorithms in Prehospital Settings

    PubMed Central

    Reisner, A. T.; Khitrov, M. Y.; Chen, L.; Blood, A.; Wilkins, K.; Doyle, W.; Wilcox, S.; Denison, T.; Reifman, J.

    2013-01-01

    Background: Advanced decision-support capabilities for prehospital trauma care may prove effective at improving patient care. Such functionality would be possible if an analysis platform were connected to a transport vital-signs monitor. In practice, there are technical challenges to implementing such a system. Not only must each individual component be reliable, but, in addition, the connectivity between components must be reliable. Objective: We describe the development, validation, and deployment of the Automated Processing of Physiologic Registry for Assessment of Injury Severity (APPRAISE) platform, intended to serve as a test bed to help evaluate the performance of decision-support algorithms in a prehospital environment. Methods: We describe the hardware selected and the software implemented, and the procedures used for laboratory and field testing. Results: The APPRAISE platform met performance goals in both laboratory testing (using a vital-sign data simulator) and initial field testing. After its field testing, the platform has been in use on Boston MedFlight air ambulances since February of 2010. Conclusion: These experiences may prove informative to other technology developers and to healthcare stakeholders seeking to invest in connected electronic systems for prehospital as well as in-hospital use. Our experiences illustrate two sets of important questions: are the individual components reliable (e.g., physical integrity, power, core functionality, and end-user interaction) and is the connectivity between components reliable (e.g., communication protocols and the metadata necessary for data interpretation)? While all potential operational issues cannot be fully anticipated and eliminated during development, thoughtful design and phased testing steps can reduce, if not eliminate, technical surprises. PMID:24155791

  19. Algorithm development for the retrieval of coastal water constituents from satellite Modular Optoelectronic Scanner images

    NASA Astrophysics Data System (ADS)

    Hetscher, Matthias; Krawczyk, Harald; Neumann, Andreas; Walzel, Thomas; Zimmermann, Gerhard

    1997-10-01

    DLR's imaging spectrometer, the Modular Optoelectronic Scanner (MOS), on the Indian remote sensing satellite IRS-P3 has been in orbit since March 1996. MOS consists of two spectrometers: a narrow-band spectrometer around 760 nm for the retrieval of atmospheric parameters, and a second one in the VIS/NIR region with an additional line camera at 1.6 micrometers. The instrument was especially designed for remote sensing of coastal-zone water and the determination and distinction of its constituents. MOS was developed and manufactured at the Institute of Space Sensor Technology (ISST) and launched in a joint effort with the Indian Space Research Organization (ISRO). The high spectral resolution of MOS offers the possibility of using differences in the spectral signatures of remote sensing targets for the quantitative determination of geophysical parameters. At ISST, a linear estimator to derive water constituents and aerosol optical thickness has been developed, exploiting Principal Component Inversion (PCI) of modeled top-of-atmosphere and experimental radiance data sets. The estimator yields a set of weighting coefficients for each measurement band, depending on the geophysical situation. Because systematic misinterpretation arises when the model does not adequately represent the real situation, further development involves the parallel improvement of the water models used and recalibration with in-situ data. The paper presents results of applying the algorithm to selected test sites along European coasts. It shows the improvement in the estimated water constituents obtained by using region-specific model parameters. Derived maps of chlorophyll-like pigments, sediments, and aerosol optical thickness are presented.
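
    A minimal sketch of applying a PCI-style linear estimator, assuming hypothetical weighting coefficients; the actual coefficients come from the principal component inversion of modeled and experimental radiances and are not reproduced here:

        import numpy as np

        def linear_estimator(radiances, weights, offsets):
            """Apply a linear estimator: each geophysical parameter is a weighted
            sum of the measured band radiances plus an offset.
            radiances: (n_bands,) or (n_pixels, n_bands); weights: (n_params, n_bands);
            offsets: (n_params,)."""
            return np.atleast_2d(radiances) @ np.asarray(weights).T + np.asarray(offsets)

        # Toy example: 2 parameters (e.g. chlorophyll-like pigment, sediment) from 4 bands
        L = np.array([0.031, 0.024, 0.018, 0.010])   # top-of-atmosphere radiances (arbitrary units)
        W = np.random.rand(2, 4) * 0.5               # hypothetical weighting coefficients
        b = np.array([0.10, 0.05])
        print(linear_estimator(L, W, b))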

  20. Development of an algorithm for heartbeats detection and classification in Holter records based on temporal and morphological features

    NASA Astrophysics Data System (ADS)

    García, A.; Romano, H.; Laciar, E.; Correa, R.

    2011-12-01

    In this work, a detection and classification algorithm for heartbeat analysis in Holter records was developed. First, a QRS complex detector was implemented and the temporal and morphological characteristics of the detected beats were extracted. A feature vector was built from these characteristics; this vector is the input to the classification module, which is based on discriminant analysis. The beats were classified into three groups: Premature Ventricular Contraction (PVC), Atrial Premature Contraction (APC) and Normal Beat (NB). These beat categories represent the most important groups handled by commercial Holter systems. The developed algorithms were evaluated on 76 ECG records from two validated open-access databases, the MIT-BIH Arrhythmia Database and the MIT-BIH Supraventricular Arrhythmia Database. A total of 166343 beats were detected and analyzed; the QRS detection algorithm provides a sensitivity of 99.69% and a positive predictive value of 99.84%. The classification stage gives sensitivities of 97.17% for NB, 97.67% for PVC and 92.78% for APC.
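
    A highly simplified QRS-detection sketch (squared derivative plus peak picking with a refractory period); real Holter detectors such as the one described add band-pass filtering, adaptive thresholds, and morphological features:

        import numpy as np
        from scipy.signal import find_peaks

        def detect_qrs(ecg, fs):
            """Very simplified QRS detector: the squared derivative emphasises the
            sharp QRS slopes, and peaks are then picked with a physiological
            refractory period (~250 ms). A sketch only, not the paper's detector."""
            energy = np.gradient(ecg) ** 2
            threshold = energy.mean() + 2.0 * energy.std()
            peaks, _ = find_peaks(energy, height=threshold, distance=int(0.25 * fs))
            return peaks

        # Toy signal: 1 Hz "beats" (spikes) on noise, sampled at 360 Hz as in the MIT-BIH records
        fs = 360
        ecg = 0.05 * np.random.randn(10 * fs)
        ecg[::fs] += 1.0
        print(detect_qrs(ecg, fs))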

  1. Development of an algorithm to meaningfully interpret patterns in street-level methane concentrations

    NASA Astrophysics Data System (ADS)

    von Fischer, Joseph; Salo, Jessica; Griebenow, Claire; Bischak, Linde; Cooley, Daniel; Ham, Jay; Schumacher, Russ

    2013-04-01

    Methane (CH4) is an important greenhouse gas that has 70x greater heat forcing per molecule than CO2 over its ~10 year atmospheric residence time. Given this short residence time, there has been a surge of interest in mitigating anthropogenic CH4 sources because they will have a more immediate effect on warming rates. Recent observations of CH4 concentrations around the city of Boston reveal that natural gas distribution systems can have a very large number of leaks. However, there are a number of conceptual and practical challenges associated with interpretation of CH4 data gathered by car at the street level. In this presentation, we detail our efforts to develop an "algorithm" or set of standard practices for interpreting these patterns based on our own findings. At the most basic, we have evaluated approaches for vehicle driving patterns and management of the raw data. We also identify techniques for evaluating data quality and discerning when elevated CH4 may be due to other vehicles (e.g., CNG-powered city buses). We then compare methods for identifying "peaks" in CH4 concentration, and we discuss several approaches for relating concentration, space and wind data to emission rates. Finally, we provide some considerations for how the data from individual peaks might be aggregated to larger spatial scales.
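
    One simple way to flag "peaks" of the kind discussed is sketched below, using a running-median background and a fixed excess threshold; the window length and the 0.10 ppm excess are illustrative choices, not the values adopted by the authors:

        import numpy as np

        def find_ch4_peaks(conc_ppm, window=151, excess_ppm=0.10):
            """Flag candidate leak peaks in a mobile CH4 record: estimate a slowly
            varying urban background with a running median, then mark samples that
            exceed it by more than excess_ppm."""
            conc = np.asarray(conc_ppm, dtype=float)
            half = window // 2
            padded = np.pad(conc, half, mode="edge")
            background = np.array([np.median(padded[i:i + window])
                                   for i in range(conc.size)])
            return np.flatnonzero(conc - background > excess_ppm), background

        # Toy drive: 2.0 ppm background with two short enhancements
        x = np.full(1000, 2.0) + 0.01 * np.random.randn(1000)
        x[300:310] += 0.5
        x[700:705] += 1.5
        idx, bg = find_ch4_peaks(x)
        print(idx)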

  2. Development and Implementation of Image-based Algorithms for Measurement of Deformations in Material Testing

    PubMed Central

    Barazzetti, Luigi; Scaioni, Marco

    2010-01-01

    This paper presents the development and implementation of three image-based methods used to detect and measure the displacements of a vast number of points in the case of laboratory testing on construction materials. Starting from the needs of structural engineers, three ad hoc tools for crack measurement in fibre-reinforced specimens and 2D or 3D deformation analysis through digital images were implemented and tested. These tools make use of advanced image processing algorithms and can integrate or even substitute some traditional sensors employed today in most laboratories. In addition, the automation provided by the implemented software, the limited cost of the instruments and the possibility to operate with an indefinite number of points offer new and more extensive analysis in the field of material testing. Several comparisons with other traditional sensors widely adopted inside most laboratories were carried out in order to demonstrate the accuracy of the implemented software. Implementation details, simulations and real applications are reported and discussed in this paper. PMID:22163612

  3. High-order derivative spectroscopy for selecting spectral regions and channels for remote sensing algorithm development

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R., Jr.

    1999-12-01

    A remote sensing reflectance model, which describes the transfer of irradiant light within a plant canopy or water column, has previously been used to simulate the nadir-viewing reflectance of vegetation canopies and leaves under solar or artificial illumination, as well as the water surface reflectance. Wavelength-dependent features such as canopy reflectance, leaf absorption and canopy bottom reflectance, as well as water absorption and water bottom reflectance, have been used to simulate or generate synthetic canopy and water surface reflectance signatures. This paper describes how derivative spectroscopy can be utilized to invert the synthetic or modeled as well as measured reflectance signatures, with the goal of selecting the optimal spectral channels or regions for these environmental media. Specifically, in this paper synthetic and measured reflectance signatures are used for selecting vegetative dysfunction variables for different plant species. The measured reflectance signatures, as well as the model-derived or synthetic signatures, are processed using extremely fast higher-order derivative processing techniques which filter the spectra and automatically select the optimal channels for direct algorithm application. The higher-order derivative filtering technique makes use of a translating and dilating derivative spectroscopy signal processing (TDDS-SPR) approach based upon remote sensing science and radiative transfer theory. Thus the technique described, unlike other signal processing techniques being developed for hyperspectral signatures and associated imagery, is based upon radiative transfer theory instead of statistical or purely mathematical operations such as wavelets.
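
    As a simple stand-in for the higher-order derivative processing described (the TDDS operator itself is not reproduced here), derivative spectra can be obtained with a Savitzky-Golay filter, assuming uniform band spacing:

        import numpy as np
        from scipy.signal import savgol_filter

        def reflectance_derivatives(reflectance, max_order=4, window=11, poly=5):
            """Higher-order derivative spectra of a reflectance signature computed
            with a Savitzky-Golay filter, which smooths and differentiates in one
            pass. This is a generic substitute, not the paper's TDDS operator."""
            return {k: savgol_filter(reflectance, window, poly, deriv=k)
                    for k in range(1, max_order + 1)}

        # Toy signature: a broad absorption feature on a sloping background
        wl = np.linspace(400, 900, 251)
        refl = 0.4 + 0.0002 * (wl - 400) - 0.1 * np.exp(-((wl - 680) ** 2) / (2 * 15 ** 2))
        derivs = reflectance_derivatives(refl)
        print({k: int(np.argmax(np.abs(d))) for k, d in derivs.items()})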

  4. Development of Pressurized Water Reactor Integrated Safety Analysis Methodology Using Multilevel Coupling Algorithm

    SciTech Connect

    Ziabletsev, Dmitri; Avramova, Maria; Ivanov, Kostadin

    2004-11-15

    The subchannel code COBRA-TF has been introduced for the evaluation of thermal margins at the local, pin-by-pin level in a pressurized water reactor. The coupling of COBRA-TF with TRAC-PF1/NEM is performed by passing axial and radial thermal-hydraulic boundary conditions and relative pin-power profiles from TRAC to COBRA-TF, the latter obtained with the pin power reconstruction model of the nodal expansion method (NEM). An efficient algorithm for coupling the subchannel code COBRA-TF with TRAC-PF1/NEM in the parallel virtual machine environment was developed, addressing the issues of time synchronization, data exchange, spatial overlays, and coupled convergence. Local feedback modeling at the pin level was implemented in COBRA-TF, which enabled updating the local form functions and recalculating the pin powers in TRAC-PF1/NEM after obtaining the local feedback parameters. The coupled TRAC-PF1/NEM/COBRA-TF code system was tested on the rod ejection accident and main steam line break benchmark problems. In both problems, the local results are closer to the corresponding critical limits than before the introduction of the multilevel coupling. This indicates that assembly-averaged results tend to underestimate the accident consequences in terms of local safety margins. The capability of local safety evaluation, performed simultaneously (online) with coupled global three-dimensional neutron kinetics/thermal-hydraulic calculations, is introduced and tested. The obtained results demonstrate the importance of the current work.

  5. Development of Variational Guiding Center Algorithms for Parallel Calculations in Experimental Magnetic Equilibria

    SciTech Connect

    Ellison, C. Leland; Finn, J. M.; Qin, H.; Tang, William M.

    2014-10-01

    Structure-preserving algorithms obtained via discrete variational principles exhibit strong promise for the calculation of guiding center test particle trajectories. The non-canonical Hamiltonian structure of the guiding center equations forms a novel and challenging context for geometric integration. To demonstrate the practical relevance of these methods, a prototypical variational midpoint algorithm is applied to an experimental magnetic equilibrium. The stability characteristics, conservation properties, and implementation requirements associated with the variational algorithms are addressed. Furthermore, computational run time is reduced for large numbers of particles by parallelizing the calculation on GPU hardware.

  6. Development, analysis, and testing of robust nonlinear guidance algorithms for space applications

    NASA Astrophysics Data System (ADS)

    Wibben, Daniel R.

    This work focuses on the analysis and application of various nonlinear, autonomous guidance algorithms that utilize sliding mode control to guarantee system stability and robustness. While the basis for the algorithms has previously been proposed, past efforts barely scratched the surface of the theoretical details and implications of these algorithms. Of the three algorithms that are the subject of this research, two are directly derived from optimal control theory and augmented using sliding mode control. Analysis of the derivation of these algorithms has shown that they are two different representations of the same result, one of which uses a simple error state model (Delta r/Deltav) and the other uses definitions of the zero-effort miss and zero-effort velocity (ZEM/ZEV) values. By investigating the dynamics of the defined sliding surfaces and their impact on the overall system, many implications have been deduced regarding the behavior of these systems which are noted to feature time-varying sliding modes. A formal finite time stability analysis has also been performed to theoretically demonstrate that the algorithms globally stabilize the system in finite time in the presence of perturbations and unmodeled dynamics. The third algorithm that has been subject to analysis is derived from a direct application of higher-order sliding mode control and Lyapunov stability analysis without consideration of optimal control theory and has been named the Multiple Sliding Surface Guidance (MSSG). Via use of reinforcement learning methods an optimal set of gains has been found that make the guidance perform similarly to an open-loop optimal solution. Careful side-by-side inspection of the MSSG and Optimal Sliding Guidance (OSG) algorithms has shown some striking similarities. A detailed comparison of the algorithms has demonstrated that though they are nearly indistinguishable at first glance, there are some key differences between the two algorithms and they are indeed

  7. Development of adaptive noise reduction filter algorithm for pediatric body images in a multi-detector CT

    NASA Astrophysics Data System (ADS)

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Okita, Izumi; Ninomiya, Yuuji; Tomoshige, Yukihiro; Kurokawa, Takehiro; Ono, Yutaka; Nakamura, Yuko; Suzuki, Masayuki

    2008-03-01

    Recently, several kinds of post-processing image filters that reduce the noise of computed tomography (CT) images have been proposed. However, these image filters are mostly designed for adults. Because they are not very effective in small (< 20 cm) display fields of view (FOV), we cannot use them for pediatric body images (e.g., premature babies and infants). We have developed a new noise reduction filter algorithm for pediatric body CT images. This algorithm is based on 3D post-processing in which the output pixel values are calculated by nonlinear interpolation in the z-direction on the original volumetric data sets. The algorithm does not require in-plane (axial-plane) processing, so the in-plane spatial resolution does not change. In phantom studies, our algorithm reduced the standard deviation (SD) of noise by up to 40% without affecting the spatial resolution in the x-y plane or along the z-axis, and improved the CNR by up to 30%. This newly developed filter algorithm should be useful for diagnosis and radiation dose reduction in pediatric body CT.
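
    A sketch of a z-only adaptive filter in the same spirit: output voxels are weighted means of neighbouring slices with similarity-dependent weights, so in-plane pixels are never mixed; the kernel below is illustrative, not the authors' interpolation scheme:

        import numpy as np

        def z_adaptive_filter(volume, z_radius=2, sigma_hu=20.0):
            """Noise reduction acting only along z: each output voxel is a weighted
            mean of its neighbours in adjacent slices, with weights shrinking for
            large HU differences (a 1-D bilateral-style kernel). Axial (x-y)
            resolution is untouched because no in-plane mixing occurs."""
            vol = volume.astype(float)
            nz = vol.shape[0]
            out = np.zeros_like(vol)
            for z in range(nz):
                zs = range(max(0, z - z_radius), min(nz, z + z_radius + 1))
                weights = [np.exp(-((vol[k] - vol[z]) ** 2) / (2 * sigma_hu ** 2)) for k in zs]
                wsum = np.sum(weights, axis=0)
                out[z] = np.sum([w * vol[k] for w, k in zip(weights, zs)], axis=0) / wsum
            return out

        # Toy volume: 20 slices of 64x64 noisy "tissue"
        vol = 40.0 + 10.0 * np.random.randn(20, 64, 64)
        print(vol.std(), z_adaptive_filter(vol).std())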

  8. Development of layout split algorithms and printability evaluation for double patterning technology

    NASA Astrophysics Data System (ADS)

    Chiou, Tsann-Bim; Socha, Robert; Chen, Hong; Chen, Luoqi; Hsu, Stephen; Nikolsky, Peter; van Oosten, Anton; Chen, Alek C.

    2008-03-01

    When using the most advanced water-based immersion scanner at the 32 nm half-pitch node, imaging would require a k1 below the theoretical limit of 0.25. If EUV technology is not ready for mass production, double patterning technology (DPT) is one of the solutions to bridge the gap between wet ArF and EUV platforms. DPT implies a patterning process with two photolithography/etching steps. As a result, the critical pitch is effectively relaxed by a factor of 2, which means the k1 value can increase by a factor of 2. Because patterns printed by two separate patterning steps are superimposed, overlay capability, in addition to imaging capability, contributes to critical dimension uniformity (CDU). Wafer throughput as well as cost is a concern because of the increased number of process steps. Therefore, the imaging, overlay, and throughput performance of a scanner must be improved in order to implement DPT cost-effectively. In addition, DPT requires innovative software to evenly split the patterns into two layers for the full chip. Although current electronic design automation (EDA) tools can split the pattern through abundant geometry-manipulation functions, these functions are insufficient. A rigorous pattern split requires more DPT-specific functions, such as tagging/grouping critical features with two colors (and hence two layers), controlling the coloring sequence, correcting the printing error at stitching boundaries, dealing with color conflicts, increasing the coloring accuracy, considering full-chip feasibility, etc. Therefore, in this paper we address these issues by demonstrating a newly developed DPT pattern-split algorithm using a rule-based method. This method has the strong advantage of very fast processing speed, so a full-chip DPT pattern split is practical. After the pattern split, all of the color conflicts are highlighted. Some of the color conflicts can be resolved by aggressive model-based methods, while the un
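
    A minimal sketch of rule-based layer assignment viewed as two-colouring of a conflict graph, with odd cycles reported as colour conflicts; the conflict-extraction step (distance checks against the single-exposure pitch limit) is assumed to have been done upstream:

        from collections import deque

        def split_layout(features, conflicts):
            """Assign each feature to one of two exposures by BFS two-colouring of
            the conflict graph (an edge joins features closer than the pitch limit).
            Odd cycles cannot be two-coloured and are returned as colour conflicts."""
            colour = {f: None for f in features}
            adj = {f: set() for f in features}
            for a, b in conflicts:
                adj[a].add(b)
                adj[b].add(a)
            conflict_pairs = set()
            for start in features:
                if colour[start] is not None:
                    continue
                colour[start] = 0
                queue = deque([start])
                while queue:
                    u = queue.popleft()
                    for v in adj[u]:
                        if colour[v] is None:
                            colour[v] = 1 - colour[u]
                            queue.append(v)
                        elif colour[v] == colour[u]:
                            conflict_pairs.add(frozenset((u, v)))
            return colour, [tuple(p) for p in conflict_pairs]

        # Toy example: a triangle of conflicts cannot be split into two masks
        print(split_layout(["A", "B", "C"], [("A", "B"), ("B", "C"), ("C", "A")]))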

  9. The performance and development for the Inner Detector Trigger algorithms at ATLAS

    NASA Astrophysics Data System (ADS)

    Penc, Ondrej

    2015-05-01

    A redesign of the tracking algorithms for the ATLAS trigger for LHC's Run 2 starting in 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves.

  10. Development of effluent removal prediction model efficiency in septic sludge treatment plant through clonal selection algorithm.

    PubMed

    Ting, Sie Chun; Ismail, A R; Malek, M A

    2013-11-15

    This study aims at developing a novel effluent removal management tool for septic sludge treatment plants (SSTP) using a clonal selection algorithm (CSA). The proposed CSA articulates the idea of utilizing an artificial immune system (AIS) to identify the behaviour of the SSTP, that is, using sequencing batch reactor (SBR) technology for the treatment processes. The novelty of this study is the development of a predictive SSTP model for effluent discharge modelled on the human immune system. Septic sludge from individual septic tanks and package plants is desludged and treated in an SSTP before the wastewater is discharged into a waterway. The Borneo Island of Sarawak is selected as the case study. Currently, there are only two SSTPs in Sarawak, namely the Matang SSTP and the Sibu SSTP, and both use SBR technology. Monthly effluent discharges from 2007 to 2011 at the Matang SSTP are used in this study. Cross-validation is performed using data from the Sibu SSTP from April 2011 to July 2012. Both chemical oxygen demand (COD) and total suspended solids (TSS) in the effluent were analysed in this study. The model was validated and tested before forecasting the future effluent performance. The CSA-based SSTP model was simulated using MATLAB 7.10. The root mean square error (RMSE), mean absolute percentage error (MAPE), and correlation coefficient (R) were used as performance indexes. In this study, it was found that the proposed prediction model was successful up to 84 months for the COD and 109 months for the TSS. In conclusion, the proposed CSA-based SSTP prediction model is indeed beneficial as an engineering tool to forecast the long-run performance of the SSTP and, in turn, helps prevent future damage to the environmental balance in other towns in Sarawak. PMID:23968912
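
    For reference, the three performance indexes can be computed as in this short sketch (the example values are illustrative, not the Matang or Sibu data):

        import numpy as np

        def performance_indexes(observed, predicted):
            """RMSE, MAPE (%) and correlation coefficient R, the indexes used to
            judge the effluent prediction model."""
            obs = np.asarray(observed, dtype=float)
            pred = np.asarray(predicted, dtype=float)
            rmse = np.sqrt(np.mean((obs - pred) ** 2))
            mape = 100.0 * np.mean(np.abs((obs - pred) / obs))
            r = np.corrcoef(obs, pred)[0, 1]
            return rmse, mape, r

        # Illustrative monthly COD values (mg/L)
        print(performance_indexes([52, 61, 47, 58], [55, 59, 50, 57]))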

  11. The Development of Several Electromagnetic Monitoring Strategies and Algorithms for Validating Pre-Earthquake Electromagnetic Signals

    NASA Astrophysics Data System (ADS)

    Bleier, T. E.; Dunson, J. C.; Roth, S.; Mueller, S.; Lindholm, C.; Heraud, J. A.

    2012-12-01

    QuakeFinder, a private research group in California, reports on the development of a 100+ station network consisting of 3-axis induction magnetometers, and air conductivity sensors to collect and characterize pre-seismic electromagnetic (EM) signals. These signals are combined with daily Infra Red signals collected from the GOES weather satellite infrared (IR) instrument to compare and correlate with the ground EM signals, both from actual earthquakes and boulder stressing experiments. This presentation describes the efforts QuakeFinder has undertaken to automatically detect these pulse patterns using their historical data as a reference, and to develop other discriminative algorithms that can be used with air conductivity sensors, and IR instruments from the GOES satellites. The overall big picture results of the QuakeFinder experiment are presented. In 2007, QuakeFinder discovered the occurrence of strong uni-polar pulses in their magnetometer coil data that increased in tempo dramatically prior to the M5.1 earthquake at Alum Rock, California. Suggestions that these pulses might have been lightning or power-line arcing did not fit with the data actually recorded as was reported in Bleier [2009]. Then a second earthquake occurred near the same site on January 7, 2010 as was reported in Dunson [2011], and the pattern of pulse count increases before the earthquake occurred similarly to the 2007 event. There were fewer pulses, and the magnitude of them was decreased, both consistent with the fact that the earthquake was smaller (M4.0 vs M5.4) and farther away (7Km vs 2km). At the same time similar effects were observed at the QuakeFinder Tacna, Peru site before the May 5th, 2010 M6.2 earthquake and a cluster of several M4-5 earthquakes.

  12. A preliminary report on the development of MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-07-01

    We describe three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. We present a tensor class for manipulating tensors which allows for tensor multiplication and 'matricization.' We have further added two classes for representing tensors in decomposed format: cp_tensor and tucker_tensor. We demonstrate the use of these classes by implementing several algorithms that have appeared in the literature.
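
    The report's classes are MATLAB; as an illustrative NumPy stand-in, matricization (mode-n unfolding) and the tensor-times-matrix product it enables can be sketched as follows:

        import numpy as np

        def matricize(tensor, mode):
            """Mode-n matricization (unfolding): mode-n fibres become the columns of
            a matrix with tensor.shape[mode] rows."""
            return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1))

        def mode_n_product(tensor, matrix, mode):
            """Tensor-times-matrix along the given mode, implemented via matricization."""
            unfolded = matrix @ matricize(tensor, mode)
            other_dims = [s for i, s in enumerate(tensor.shape) if i != mode]
            return np.moveaxis(np.reshape(unfolded, [matrix.shape[0]] + other_dims), 0, mode)

        X = np.arange(24).reshape(2, 3, 4)
        print(matricize(X, 1).shape)                          # (3, 8)
        print(mode_n_product(X, np.ones((5, 3)), 1).shape)    # (2, 5, 4)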

  13. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication.

  14. Detection of fruit-fly infestation in olives using X-ray imaging: Algorithm development and prospects

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An algorithm using a Bayesian classifier was developed to automatically detect olive fruit fly infestations in x-ray images of olives. The data set consisted of 249 olives with various degrees of infestation and 161 non-infested olives. Each olive was x-rayed on film and digital images were acquired...

  15. Successive smoothing algorithm for constructing the semiempirical model developed at ONERA to predict unsteady aerodynamic forces. [aeroelasticity in helicopters

    NASA Technical Reports Server (NTRS)

    Petot, D.; Loiseau, H.

    1982-01-01

    Unsteady aerodynamic methods adopted for the study of aeroelasticity in helicopters are considered with focus on the development of a semiempirical model of unsteady aerodynamic forces acting on an oscillating profile at high incidence. The successive smoothing algorithm described leads to the model's coefficients in a very satisfactory manner.

  16. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    PubMed

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-01

    In this work, performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. The matrix with 423 descriptors was used as input, and QSRR models based on selected descriptors were built using partial least squares (PLS), whereas root mean square error of prediction (RMSEP) was used as a fitness function for their selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, whereas GA is superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external testing set out of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All the sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics. PMID:26346190

  17. Preliminary Development and Evaluation of Lightning Jump Algorithms for the Real-Time Detection of Severe Weather

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Previous studies have demonstrated that rapid increases in total lightning activity (intracloud + cloud-to-ground) are often observed tens of minutes in advance of the occurrence of severe weather at the ground. These rapid increases in lightning activity have been termed "lightning jumps." Herein, we document a positive correlation between lightning jumps and the manifestation of severe weather in thunderstorms occurring across the Tennessee Valley and Washington D.C. A total of 107 thunderstorms were examined in this study, with 69 of the 107 thunderstorms falling into the category of non-severe, and 38 into the category of severe. From the dataset of 69 isolated non-severe thunderstorms, an average peak 1 minute flash rate of 10 flashes/min was determined. A variety of severe thunderstorm types were examined for this study, including an MCS, MCV, tornadic outer rainbands of tropical remnants, supercells, and pulse severe thunderstorms. Of the 107 thunderstorms, 85 (47 non-severe, 38 severe) from the Tennessee Valley and Washington, D.C. were used to test six lightning jump algorithm configurations (Gatlin, Gatlin 45, 2-sigma, 3-sigma, Threshold 10, and Threshold 8). Performance metrics for each algorithm were then calculated, yielding encouraging results from the limited sample of 85 thunderstorms. The 2-sigma lightning jump algorithm had a high probability of detection (POD; 87%), a modest false alarm rate (FAR; 33%), and a solid Heidke Skill Score (HSS; 0.75). A second, more simplistic lightning jump algorithm named the Threshold 8 lightning jump algorithm also shows promise, with a POD of 81% and a FAR of 41%. Average lead times to severe weather occurrence for these two algorithms were 23 minutes and 20 minutes, respectively. The overall goal of this study is to advance the development of an operationally-applicable jump algorithm that can be used with either total lightning observations made from the ground, or in the near future from space using the
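
    A schematic version of a 2-sigma-style jump test, in which the time rate of change of the total flash rate is compared against twice the standard deviation of its recent history; the window lengths and the 10 flashes/min activation threshold below are illustrative choices, not the operational configuration:

        import numpy as np

        def two_sigma_jumps(flash_rate, dt_min=2.0):
            """Flag lightning jumps: DFRDT (flash-rate tendency) exceeding twice the
            standard deviation of the previous five DFRDT values, with a simple
            activation threshold on the flash rate itself. A sketch only."""
            rate = np.asarray(flash_rate, dtype=float)
            dfrdt = np.diff(rate) / dt_min
            jumps = []
            for i in range(5, dfrdt.size):
                history = dfrdt[i - 5:i]
                if dfrdt[i] > 2.0 * history.std() and rate[i + 1] >= 10.0:
                    jumps.append(i + 1)
            return jumps

        # Toy flash-rate series (flashes/min) sampled every 2 minutes
        rates = [2, 3, 3, 4, 5, 5, 6, 7, 20, 28, 30, 25]
        print(two_sigma_jumps(rates))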

  18. Characterizing the Preturbulence Environment for Sensor Development, New Hazard Algorithms and NASA Experimental Flight Planning

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Lin, Yuh-Lang

    2004-01-01

    During the grant period, several tasks were performed in support of the NASA Turbulence Prediction and Warning Systems (TPAWS) program. The primary focus of the research was on characterizing the preturbulence environment by developing predictive tools and simulating atmospheric conditions that preceded severe turbulence. The goal of the research was to provide both a dynamical understanding of the conditions that preceded turbulence and predictive tools in support of operational NASA B-757 turbulence research flights. The advancements in characterizing the preturbulence environment will be applied by NASA to sensor development for predicting turbulence onboard commercial aircraft. Numerical simulations with atmospheric models as well as multi-scale observational analyses provided insights into the environment organizing turbulence in a total of forty-eight specific case studies of severe, accident-producing turbulence on commercial aircraft. A paradigm was developed which diagnosed specific atmospheric circulation systems from the synoptic scale down to the meso-gamma scale that preceded turbulence in both clear air and in proximity to convection. The emphasis was primarily on convective turbulence, as that is what the TPAWS program is most focused on in terms of developing improved sensors for turbulence warning and avoidance. However, the dynamical paradigm also has applicability to clear air and mountain turbulence. This dynamical sequence of events was then employed to formulate and test new hazard prediction indices that were first tested in research simulation studies and then ultimately were further tested in support of the NASA B-757 turbulence research flights. The new hazard characterization algorithms were utilized in a Real Time Turbulence Model (RTTM) that was operationally employed to support the NASA B-757 turbulence research flights. Improvements in the RTTM were implemented in an

  19. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

    1985-01-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.
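    As a rough illustration of the kind of calculation such a calculator program performs, the sketch below estimates how far from the metering fix a constant-rate, constant-airspeed descent must begin, adjusted for an average tailwind. It is a generic kinematic estimate with made-up numbers, not the algorithm that was flight tested.

      # Generic top-of-descent distance estimate (not the flight-tested algorithm).
      def descent_distance_nm(alt_to_lose_ft, groundspeed_kt, descent_rate_fpm):
          """Distance (nautical miles) covered while losing the given altitude."""
          time_min = alt_to_lose_ft / descent_rate_fpm
          return groundspeed_kt * time_min / 60.0

      # Example: descend 30,000 ft at 2,000 ft/min with 280 kt true airspeed and a
      # 40 kt tailwind (illustrative numbers only).
      print(round(descent_distance_nm(30_000, 280.0 + 40.0, 2_000), 1), "nm before the fix")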

  20. Planning fuel-conservative descents in an airline environment using a small programmable calculator: algorithm development and flight test results

    SciTech Connect

    Knox, C.E.; Vicroy, D.D.; Simmon, D.A.

    1985-05-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  1. Challenges and Recent Developments in Hearing Aids: Part I. Speech Understanding in Noise, Microphone Technologies and Noise Reduction Algorithms

    PubMed Central

    Chung, King

    2004-01-01

    This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized. PMID:15678225

  2. Development of a dose algorithm for the modified panasonic UD-802 personal dosimeter used at three mile island

    SciTech Connect

    Miklos, J. A.; Plato, P.

    1988-01-01

    During the fall of 1981, the personnel dosimetry group at GPU Nuclear Corporation at Three Mile Island (TMI) requested assistance from The University of Michigan (UM) in developing a dose algorithm for use at TMI-2. The dose algorithm had to satisfy the specific needs of TMI-2, particularly the need to distinguish beta-particle emitters of different energies, as well as having the capability of satisfying the requirements of the American National Standards Institute (ANSI) N13.11-1983 standard. A standard Panasonic UD-802 dosimeter was modified by having the plastic filter over element 2 removed. The dosimeter and hanger consist of the elements with a 14 mg/cm² density thickness and the filtrations shown. The hanger on this dosimeter had a double open window to facilitate monitoring for low-energy beta particles. The dose algorithm was written to satisfy the requirements of the ANSI N13.11-1983 standard, to include ²⁰⁴Tl and mixtures of ²⁰⁴Tl with ⁹⁰Sr/⁹⁰Y and ¹³⁷Cs, and to include 81- and 200-keV average energy X-ray spectra. Stress tests were conducted to observe the algorithm's performance with respect to low doses, temperature, humidity, and the residual response following high-dose irradiations. The ability of the algorithm to determine dose from the beta particles of ¹⁴⁷Pm was also investigated.

  3. Development of an Evidence-Based Clinical Algorithm for Practice in Hypotonia Assessment: A Proposal

    PubMed Central

    2014-01-01

    Background Assessing muscle tone in children is an essential part of the neurological assessment and is often important in ensuring a more accurate diagnosis for appropriate management. While there have been advances in child neurology, there remains much contention around the subjectivity of the clinical assessment of hypotonia, which is often the first step in the diagnostic process. Objective In response to this challenge, the objective of the study is to develop and validate a prototype of a decision making process in the form of a clinical algorithm that will guide clinicians during this assessment process. Methods Design research within a pragmatic stance will be employed in this study. Multi-phase stages of assessment, prototyping and evaluation will occur. These will include a systematic review, processes of reflection and action, as well as validation methods. Given the mixed-methods nature of this study, NVIVO or ATLAS-ti will be used in the analysis of qualitative data and SPSS for quantitative data. Results Initial results from the systematic review revealed a paucity of scientific literature that documented the objective assessment of hypotonia in children. The review identified the need for more studies with greater methodological rigor in order to determine best practice with respect to the methods used in the assessment of low muscle tone in the paediatric population. Conclusions It is envisaged that this proposal will contribute to a more accurate clinical diagnosis of children with low muscle tone in the absence of a gold standard. We anticipate that the use of this tool will ultimately assist clinicians towards moving to evidence-based practice whilst upholding best practice in the care of children with hypotonia. PMID:25485571

  4. Diagnosis and treatment of acute ankle injuries: development of an evidence-based algorithm

    PubMed Central

    Polzer, Hans; Kanz, Karl Georg; Prall, Wolf Christian; Haasters, Florian; Ockert, Ben; Mutschler, Wolf; Grote, Stefan

    2011-01-01

    Acute ankle injuries are among the most common injuries in emergency departments. However, there are still no standardized examination procedures or evidence-based treatment. Therefore, the aim of this study was to systematically search the current literature, classify the evidence, and develop an algorithm for the diagnosis and treatment of acute ankle injuries. We systematically searched PubMed and the Cochrane Database for randomized controlled trials, meta-analyses, systematic reviews or, if applicable, observational studies and classified them according to their level of evidence. According to the currently available literature, the following recommendations have been formulated: i) the Ottawa Ankle/Foot Rule should be applied in order to rule out fractures; ii) physical examination is sufficient for diagnosing injuries to the lateral ligament complex; iii) classification into stable and unstable injuries is applicable and of clinical importance; iv) the squeeze, crossed-leg, and external rotation tests are indicative for injuries of the syndesmosis; v) magnetic resonance imaging is recommended to verify injuries of the syndesmosis; vi) stable ankle sprains have a good prognosis, while for unstable ankle sprains, conservative treatment is at least as effective as operative treatment without the related possible complications; vii) early functional treatment leads to the fastest recovery and the lowest rate of reinjury; viii) supervised rehabilitation reduces residual symptoms and re-injuries. Taking these recommendations into account, we present an applicable and evidence-based, step by step decision pathway for the diagnosis and treatment of acute ankle injuries, which can be implemented in any emergency department or doctor's practice. It provides quality assurance for the patient and promotes confidence in the attending physician. PMID:22577506

  5. Earlier Family Factors and Self-Silencing as Predictors of Depression in Late Adolescence: A Longitudinal Study.

    ERIC Educational Resources Information Center

    Fox, Barbara; And Others

    A construct labeled "self-silencing" or "loss of voice" as an aspect of female social development is emerging in current literature, and it is postulated that suppression of expression prescribes a more passive and muted role for women and lies at the root of depression in women. This study was a longitudinal project examining earlier parenting…

  6. Development of an Algorithm for MODIS and VIIRS Cloud Optical Property Data Record Continuity

    NASA Astrophysics Data System (ADS)

    Meyer, K.; Platnick, S. E.; Ackerman, S. A.; Heidinger, A. K.; Holz, R.; Wind, G.; Amarasinghe, N.; Marchant, B.

    2015-12-01

    The launch of Suomi NPP in the fall of 2011 began the next generation of U.S. operational polar orbiting environmental observations. Similar to MODIS, the VIIRS imager provides visible through IR observations at moderate spatial resolution with a 1330 LT equatorial crossing consistent with MODIS on the Aqua platform. However, unlike MODIS, VIIRS lacks key water vapor and CO2 absorbing channels used by the MODIS cloud algorithms for high cloud detection and cloud-top property retrievals. In addition, there is a significant change in the spectral location of the 2.1μm shortwave-infrared channel used by MODIS for cloud optical/microphysical retrievals. Given the instrument differences between MODIS EOS and VIIRS S-NPP/JPSS, we discuss our adopted method for merging the 15+ year MODIS observational record with VIIRS in order to generate cloud optical property data record continuity across the observing systems. The optical property retrieval code uses heritage algorithms that produce the existing MODIS cloud optical and microphysical properties product (MOD06). As explained in other presentations submitted to this session, the NOAA AWG/CLAVR-x cloud-top property algorithm and a common MODIS-VIIRS cloud mask feed into the optical property algorithm to account for the different channel sets of the two imagers. Data granule and aggregated examples for the current version of the algorithm will be shown.

  7. Kathu Townlands: A High Density Earlier Stone Age Locality in the Interior of South Africa

    PubMed Central

    Walker, Steven J. H.; Lukich, Vasa; Chazan, Michael

    2014-01-01

    Kathu Townlands is a high density Earlier Stone Age locality in the Northern Cape Province, South Africa. Here we present the first detailed information on this locality based on analysis of a sample of lithic material from excavations by P. Beaumont and field observations made in the course of fieldwork in 2013. The results confirm the remarkably high artefact density at Kathu Townlands and do not provide evidence consistent with high energy transport as a mechanism of site formation, suggesting that Kathu Townlands was the site of intensive exploitation of highly siliceous outcroppings of banded iron formation. The results presented here provide a first step towards understanding this complex locality and point to the need for further research and the importance of preserving this locality in the face of intensive and rapid development. PMID:25058317

  8. Towards developing robust algorithms for solving partial differential equations on MIMD machines

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Naik, Vijay K.

    1988-01-01

    Methods for efficient computation of numerical algorithms on a wide variety of MIMD machines are proposed. These techniques reorganize the data dependency patterns to improve the processor utilization. The model problem finds the time-accurate solution to a parabolic partial differential equation discretized in space and implicitly marched forward in time. The algorithms are extensions of Jacobi and SOR. The extensions consist of iterating over a window of several timesteps, allowing efficient overlap of computation with communication. The methods increase the degree to which work can be performed while data are communicated between processors. The effect of the window size and of domain partitioning on the system performance is examined by implementing the algorithm on a simulated multiprocessor system.
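    A serial sketch of the windowed-Jacobi idea is shown below for a 1-D heat equation with backward-Euler time stepping: iterates for several future timesteps are kept simultaneously and each sweep updates the whole window, with the right-hand side of each level taken from the current iterate of the level before it. On a real MIMD machine these sweeps are what overlap with interprocessor communication; the grid size, window length, and sweep count here are arbitrary.

      # Windowed Jacobi sweeps for backward-Euler steps of the 1-D heat equation.
      import numpy as np

      nx, dt, alpha = 50, 2e-4, 1.0
      dx = 1.0 / (nx - 1)
      r = alpha * dt / dx**2
      window, sweeps = 4, 200

      u = np.zeros(nx)
      u[nx // 2] = 1.0                                   # initial temperature spike

      def jacobi_sweep(unew, rhs):
          """One Jacobi sweep for (1 + 2r) u_i - r (u_{i-1} + u_{i+1}) = rhs_i (Dirichlet ends)."""
          out = unew.copy()
          out[1:-1] = (rhs[1:-1] + r * (unew[:-2] + unew[2:])) / (1.0 + 2.0 * r)
          return out

      for _ in range(3):                                 # march three windows forward in time
          levels = [u.copy() for _ in range(window)]     # iterates for timesteps n+1 .. n+window
          for _ in range(sweeps):
              prev = u
              for k in range(window):
                  levels[k] = jacobi_sweep(levels[k], prev)   # RHS from current iterate of level k-1
                  prev = levels[k]
          u = levels[-1]

      print("peak temperature after", 3 * window, "steps:", round(float(u.max()), 4))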

  9. Towards developing robust algorithms for solving partial differential equations on MIMD machines

    NASA Technical Reports Server (NTRS)

    Saltz, J. H.; Naik, V. K.

    1985-01-01

    Methods for efficient computation of numerical algorithms on a wide variety of MIMD machines are proposed. These techniques reorganize the data dependency patterns to improve the processor utilization. The model problem finds the time-accurate solution to a parabolic partial differential equation discretized in space and implicitly marched forward in time. The algorithms are extensions of Jacobi and SOR. The extensions consist of iterating over a window of several timesteps, allowing efficient overlap of computation with communication. The methods increase the degree to which work can be performed while data are communicated between processors. The effect of the window size and of domain partitioning on the system performance is examined by implementing the algorithm on a simulated multiprocessor system.

  10. Development of a Near Real-Time Hail Damage Swath Identification Algorithm for Vegetation

    NASA Technical Reports Server (NTRS)

    Bell, Jordan R.; Molthan, Andrew L.; Schultz, Kori A.; McGrath, Kevin M.; Burks, Jason E.

    2015-01-01

    Every year in the Midwest and Great Plains, widespread greenness forms in conjunction with the latter part of the spring-summer growing season. This prevalent greenness forms as a result of the high concentration of agricultural areas having their crops reach their maturity before the fall harvest. This time of year also coincides with an enhanced hail frequency for the Great Plains (Cintineo et al. 2012). These severe thunderstorms can bring damaging winds and large hail that can result in damage to the surface vegetation. The spatial extent of the damage can be a relatively small concentrated area or a vast swath of damage that is visible from space. These large areas of damage have been well documented over the years. In the late 1960s, aerial photography was used to evaluate crop damage caused by hail. As satellite remote sensing technology has evolved, the identification of these hail damage streaks has increased. Satellites have made it possible to view these streaks in additional spectral bands. Parker et al. (2005) documented two streaks in South Dakota using the Moderate Resolution Imaging Spectroradiometer (MODIS), noting the potential impact that these streaks had on the surface temperature and associated surface fluxes that are impacted by a change in temperature. Gallo et al. (2012) examined the correlation between radar signatures and ground observations from storms that produced a hail damage swath in Central Iowa, also using MODIS. Finally, Molthan et al. (2013) identified hail damage streaks through MODIS, Landsat-7, and SPOT observations of different resolutions for the development of potential near-real-time applications. The manual analysis of hail damage streaks in satellite imagery is both tedious and time consuming, and may be inconsistent from event to event. This study focuses on development of an objective and automatic algorithm to detect these areas of damage in a more efficient and timely manner. This study utilizes the

  11. THE DEVELOPMENT OF A PARAMETERIZED SCATTER REMOVAL ALGORITHM FOR NUCLEAR MATERIALS IDENTIFICATION SYSTEM IMAGING

    SciTech Connect

    Grogan, Brandon R

    2010-05-01

    This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the
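    The parameterization step described above, fitting a Gaussian to a simulated point scatter function so the fit can stand in for repeated Monte Carlo runs, can be sketched with scipy.optimize.curve_fit. The synthetic profile below is a stand-in for an NMIS Monte Carlo result; only the fitting step is illustrated.

      # Fit a Gaussian to a synthetic point-scatter-function profile.
      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian(x, amplitude, center, sigma):
          return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2)

      x = np.linspace(-10, 10, 81)                       # detector position (arbitrary units)
      rng = np.random.default_rng(1)
      counts = gaussian(x, 120.0, 0.0, 2.5) + rng.normal(scale=3.0, size=x.size)

      (amplitude, center, sigma), _ = curve_fit(gaussian, x, counts, p0=(counts.max(), 0.0, 1.0))
      print(f"fitted PScF: amplitude={amplitude:.1f}, center={center:.2f}, sigma={sigma:.2f}")
      # The parameterized Gaussian can then approximate the scatter contribution
      # for similar geometries without rerunning the simulation.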

  12. The Development of a Parameterized Scatter Removal Algorithm for Nuclear Materials Identification System Imaging

    SciTech Connect

    Grogan, Brandon R

    2010-03-01

    This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using

  13. Development of homotopy algorithms for fixed-order mixed H2/H(infinity) controller synthesis

    NASA Technical Reports Server (NTRS)

    Whorton, M.; Buschek, H.; Calise, A. J.

    1994-01-01

    A major difficulty associated with H-infinity and mu-synthesis methods is the order of the resulting compensator. Whereas model and/or controller reduction techniques are sometimes applied, performance and robustness properties are not preserved. By directly constraining compensator order during the optimization process, these properties are better preserved, albeit at the expense of computational complexity. This paper presents a novel homotopy algorithm to synthesize fixed-order mixed H2/H-infinity compensators. Numerical results are presented for a four-disk flexible structure to evaluate the efficiency of the algorithm.

  14. The development of a bearing spectral analyzer and algorithms to detect turbopump bearing wear from deflectometer and strain gage data

    NASA Astrophysics Data System (ADS)

    Martinez, Carol L.

    1992-07-01

    Over the last several years, Rocketdyne has actively developed condition and health monitoring techniques and their elements for rocket engine components, specifically high pressure turbopumps. Of key interest is the development of bearing signature analysis systems for real-time monitoring of the cryogen-cooled turbopump shaft bearings, which spin at speeds up to 36,000 RPM. These system elements include advanced bearing vibration sensors, signal processing techniques, wear mode algorithms, and integrated control software. Results of development efforts in the areas of signal processing and wear mode identification and quantification algorithms based on strain gage and deflectometer data are presented. Wear modes investigated include: inner race wear, cage pocket wear, outer race wear, differential ball wear, cracked inner race, and nominal wear.

  15. Development of an algorithm to measure defect geometry using a 3D laser scanner

    NASA Astrophysics Data System (ADS)

    Kilambi, S.; Tipton, S. M.

    2012-08-01

    Current fatigue life prediction models for coiled tubing (CT) require accurate measurements of the defect geometry. Three-dimensional (3D) laser imaging has shown promise toward becoming a nondestructive, non-contacting method of surface defect characterization. Laser imaging provides a detailed photographic image of a flaw, in addition to a detailed 3D surface map from which its critical dimensions can be measured. This paper describes algorithms to determine defect characteristics, specifically depth, width, length and projected cross-sectional area. Curve-fitting methods were compared, and implicit algebraic fits have a higher probability of convergence than explicit geometric fits. Among the algebraic fits, the Taubin circle fit has the least error. The algorithm was able to extract the dimensions of the flaw geometry from the scanned data of CT to within a tolerance of about 0.127 mm, close to the tolerance specified for the laser scanner itself, compared to measurements made using traveling microscopes. The algorithm computes the projected surface area of the flaw, which could previously only be estimated from the dimension measurements and the assumptions made about cutter shape. Although shadows compromised the accuracy of the shape characterization, especially for deep and narrow flaws, the results indicate that the algorithm with the laser scanner can be used for non-destructive evaluation of CT in the oil field industry. Further work is needed to improve accuracy, to eliminate shadow effects and to reduce radial deviation.
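    The circle-fitting step can be illustrated with a simple algebraic (Kåsa-style) least-squares fit to scanned cross-section points; the Taubin fit preferred in the study adds a normalization that reduces bias for partial arcs, which is omitted here. The points below are synthetic.

      # Algebraic least-squares circle fit (Kåsa-style) to noisy arc points.
      import numpy as np

      def fit_circle(x, y):
          """Solve a*x + b*y + c = -(x^2 + y^2) in the least-squares sense."""
          A = np.column_stack([x, y, np.ones_like(x)])
          rhs = -(x**2 + y**2)
          (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
          cx, cy = -a / 2.0, -b / 2.0
          return cx, cy, np.sqrt(cx**2 + cy**2 - c)      # centre and radius

      theta = np.linspace(0.2, 2.5, 60)                  # partial arc, as a scanned flaw would be
      rng = np.random.default_rng(2)
      x = 1.0 + 3.0 * np.cos(theta) + rng.normal(scale=0.02, size=theta.size)
      y = -2.0 + 3.0 * np.sin(theta) + rng.normal(scale=0.02, size=theta.size)
      print(fit_circle(x, y))                            # approximately (1.0, -2.0, 3.0)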

  16. Development and evaluation of multilead wavelet-based ECG delineation algorithms for embedded wireless sensor nodes.

    PubMed

    Rincón, Francisco; Recas, Joaquin; Khaled, Nadia; Atienza, David

    2011-11-01

    This work is devoted to the evaluation of multilead digital wavelet transform (DWT)-based electrocardiogram (ECG) wave delineation algorithms, which were optimized and ported to a commercial wearable sensor platform. More specifically, we investigate the use of a root-mean-squared (RMS)-based multilead stage followed by a single-lead online delineation algorithm, which is based on a state-of-the-art offline single-lead delineator. The algorithmic transformations and software optimizations necessary to enable embedded ECG delineation notwithstanding the limited processing and storage resources of the target platform are described, and the performance of the resulting implementations is analyzed in terms of delineation accuracy, execution time, and memory usage. Interestingly, RMS-based multilead delineation is shown to perform equivalently to the best single-lead delineation for the 2-lead QT database (QTDB), within a fraction of a sample duration of the Common Standards for Electrocardiography (CSE) committee tolerances. Finally, a comprehensive evaluation of the energy consumption entailed by the considered algorithms is proposed, which allows very relevant insights into the dominant energy-draining functionalities and which suggests suitable design guidelines for long-lasting wearable ECG monitoring systems. PMID:21827976

  17. Midlife adiposity predicts earlier onset of Alzheimer's dementia, neuropathology and presymptomatic cerebral amyloid accumulation.

    PubMed

    Chuang, Y-F; An, Y; Bilgel, M; Wong, D F; Troncoso, J C; O'Brien, R J; Breitner, J C; Ferruci, L; Resnick, S M; Thambisetty, M

    2016-07-01

    Understanding how midlife risk factors influence age at onset (AAO) of Alzheimer's disease (AD) may provide clues to delay disease expression. Although midlife adiposity predicts increased incidence of AD, it is unclear whether it affects AAO and severity of Alzheimer's neuropathology. Using a prospective population-based cohort, the Baltimore Longitudinal Study of Aging (BLSA), this study aims to examine the relationships between midlife body mass index (BMI) and (1) AAO of AD, (2) severity of Alzheimer's neuropathology, and (3) fibrillar brain amyloid deposition during aging. We analyzed data on 1394 cognitively normal individuals at baseline (8643 visits; average follow-up interval 13.9 years), among whom 142 participants developed incident AD. In two subsamples of the BLSA, 191 participants underwent autopsy and neuropathological assessment, and 75 non-demented individuals underwent brain amyloid imaging. Midlife adiposity was derived from BMI data at 50 years of age. We find that each unit increase in midlife BMI predicts earlier onset of AD by 6.7 months (P=0.013). Higher midlife BMI was associated with greater Braak neurofibrillary but not CERAD (Consortium to Establish a Registry for Alzheimer's Disease) neuritic plaque scores at autopsy overall. Associations between midlife BMI and brain amyloid burden approached statistical significance. Thus, higher midlife BMI was also associated with greater fibrillar amyloid measured by global mean cortical distribution volume ratio (P=0.075) and within the precuneus (left, P=0.061; right, P=0.079). In conclusion, midlife overweight predicts earlier onset of AD and greater burden of Alzheimer's neuropathology. A healthy BMI at midlife may delay the onset of AD. PMID:26324099

  18. Lightning Jump Algorithm Development for the GOES-R Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    Schultz, E.; Schultz, C.; Chronis, T.; Stough, S.; Carey, L.; Calhoun, K.; Ortega, K.; Stano, G.; Cecil, D.; Bateman, M.; Goodman, S.

    2014-01-01

    Current work on the lightning jump algorithm to be used in GOES-R Geostationary Lightning Mapper (GLM)'s data stream is multifaceted due to the intricate interplay between the storm tracking, GLM proxy data, and the performance of the lightning jump itself. This work outlines the progress of the last year, where analysis and performance of the lightning jump algorithm with automated storm tracking and GLM proxy data were assessed using over 700 storms from North Alabama. The cases analyzed coincide with previous semi-objective work performed using total lightning mapping array (LMA) measurements in Schultz et al. (2011). Analysis shows that key components of the algorithm (flash rate and sigma thresholds) have the greatest influence on the performance of the algorithm when validating using severe storm reports. Automated objective analysis using the GLM proxy data has shown probability of detection (POD) values around 60% with false alarm rates (FAR) around 73% using similar methodology to Schultz et al. (2011). However, when applying verification methods similar to those employed by the National Weather Service, POD values increase slightly (69%) and FAR values decrease (63%). The relationship between storm tracking and lightning jump has also been tested in a real-time framework at NSSL. This system includes fully automated tracking by radar alone, real-time LMA and radar observations and the lightning jump. Results indicate that the POD is strong at 65%. However, the FAR is significantly higher than in Schultz et al. (2011) (50-80% depending on various tracking/lightning jump parameters) when using storm reports for verification. Given known issues with Storm Data, the performance of the real-time jump algorithm is also being tested with high density radar and surface observations from the NSSL Severe Hazards Analysis & Verification Experiment (SHAVE).

  19. Passive microwave remote sensing of rainfall with SSM/I: Algorithm development and implementation

    NASA Technical Reports Server (NTRS)

    Ferriday, James G.; Avery, Susan K.

    1994-01-01

    A physically based algorithm sensitive to emission and scattering is used to estimate rainfall using the Special Sensor Microwave/Imager (SSM/I). The algorithm is derived from radiative transfer calculations through an atmospheric cloud model specifying vertical distributions of ice and liquid hydrometeors as a function of rain rate. The algorithm is structured in two parts: SSM/I brightness temperatures are screened to detect rainfall and are then used in rain-rate calculation. The screening process distinguishes between nonraining background conditions and emission and scattering associated with hydrometeors. Thermometric temperature and polarization thresholds determined from the radiative transfer calculations are used to detect rain, whereas the rain-rate calculation is based on a linear function fit to a linear combination of channels. Separate calculations for ocean and land account for different background conditions. The rain-rate calculation is constructed to respond to both emission and scattering, to reduce extraneous atmospheric and surface effects, and to correct for beam filling. The resulting SSM/I rain-rate estimates are compared to three precipitation radars as well as to a dynamically simulated rainfall event. Global estimates from the SSM/I algorithm are also compared to continental and shipboard measurements over a 4-month period. The algorithm is found to accurately describe both localized instantaneous rainfall events and global monthly patterns over both land and ocean. Over land, the 4-month mean difference between SSM/I and the Global Precipitation Climatology Center continental rain gauge database is less than 10%. Over the ocean, the mean difference between SSM/I and the Legates and Willmott global shipboard rain gauge climatology is less than 20%.
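    The two-part structure described above, a screening step on the brightness temperatures followed by a rain rate computed from a linear combination of channels, is sketched below for the ocean case. The thresholds and regression coefficients are placeholders chosen only to show the structure; they are not the published coefficients.

      # Schematic SSM/I-style retrieval: screen first, then apply a linear combination.
      def rain_rate_ocean(tb19v, tb19h, tb37v, tb85v):
          """Return a rain rate (mm/h), or 0.0 if the screening step finds no rain signature."""
          polarization = tb19v - tb19h            # strongly polarized scenes indicate calm ocean
          scattering = 255.0 - tb85v              # 85 GHz depression indicates ice scattering
          if polarization > 30.0 and scattering < 5.0:
              return 0.0                          # screened out: nonraining background
          # Placeholder linear combination of channels (illustrative coefficients only).
          rr = 0.05 * (260.0 - tb19h) + 0.08 * scattering - 0.02 * (tb37v - tb19v)
          return max(rr, 0.0)

      print(rain_rate_ocean(tb19v=220.0, tb19h=180.0, tb37v=230.0, tb85v=240.0))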

  20. Structured interview for mild traumatic brain injury after military blast: inter-rater agreement and development of diagnostic algorithm.

    PubMed

    Walker, William C; Cifu, David X; Hudak, Anne M; Goldberg, Gary; Kunz, Richard D; Sima, Adam P

    2015-04-01

    The existing gold standard for diagnosing a suspected previous mild traumatic brain injury (mTBI) is clinical interview. But it is prone to bias, especially for parsing the physical versus psychological effects of traumatic combat events, and its inter-rater reliability is unknown. Several standardized TBI interview instruments have been developed for research use but have similar limitations. Therefore, we developed the Virginia Commonwealth University (VCU) retrospective concussion diagnostic interview, blast version (VCU rCDI-B), and undertook this cross-sectional study aiming to 1) measure agreement among clinicians' mTBI diagnosis ratings, 2) using clinician consensus develop a fully structured diagnostic algorithm, and 3) assess accuracy of this algorithm in a separate sample. Two samples (n = 66; n = 37) of individuals within 2 years of experiencing blast effects during military deployment underwent semistructured interview regarding their worst blast experience. Five highly trained TBI physicians independently reviewed and interpreted the interview content and gave blinded ratings of whether or not the experience was probably an mTBI. Paired inter-rater reliability was extremely variable, with kappa ranging from 0.194 to 0.825. In sample 1, the physician consensus prevalence of probable mTBI was 84%. Using these diagnosis ratings, an algorithm was developed and refined from the fully structured portion of the VCU rCDI-B. The final algorithm considered certain symptom patterns more specific for mTBI than others. For example, an isolated symptom of "saw stars" was deemed sufficient to indicate mTBI, whereas an isolated symptom of "dazed" was not. The accuracy of this algorithm, when applied against the actual physician consensus in sample 2, was almost perfect (correctly classified = 97%; Cohen's kappa = 0.91). In conclusion, we found that highly trained clinicians often disagree on historical blast-related mTBI determinations. A fully structured interview

  1. Algorithm and simulation development in support of response strategies for contamination events in air and water systems.

    SciTech Connect

    Waanders, Bart Van Bloemen

    2006-01-01

    Chemical/Biological/Radiological (CBR) contamination events pose a considerable threat to our nation's infrastructure, especially in large internal facilities, external flows, and water distribution systems. Because physical security can only be enforced to a limited degree, deployment of early warning systems is being considered. However, to achieve reliable and efficient functionality, several complex questions must be answered: (1) where should sensors be placed, (2) how can sparse sensor information be efficiently used to determine the location of the original intrusion, (3) what are the model and data uncertainties, (4) how should these uncertainties be handled, and (5) how can our algorithms and forward simulations be sufficiently improved to achieve real time performance? This report presents the results of a three-year algorithm and application development effort to support the identification, mitigation, and risk assessment of CBR contamination events. The main thrust of this investigation was to develop (1) computationally efficient algorithms for strategically placing sensors, (2) an identification process for contamination events using sparse observations, (3) characterization of uncertainty through the development of accurate demand forecasts and the investigation of uncertain simulation model parameters, (4) risk assessment capabilities, and (5) reduced-order modeling methods. The development effort was focused on water distribution systems, large internal facilities, and outdoor areas.

  2. Decoding neural events from fMRI BOLD signal: A comparison of existing approaches and development of a new algorithm

    PubMed Central

    Bush, Keith; Cisler, Josh

    2013-01-01

    Neuroimaging methodology predominantly relies on the blood oxygenation level dependent (BOLD) signal. While the BOLD signal is a valid measure of neuronal activity, variance in fluctuations of the BOLD signal is not due solely to fluctuations in neural activity. Thus, a remaining problem in neuroimaging analyses is developing methods that ensure specific inferences about neural activity that are not confounded by unrelated sources of noise in the BOLD signal. Here, we develop and test a new algorithm for performing semi-blind (i.e., no knowledge of stimulus timings) deconvolution of the BOLD signal that treats the neural event as an observable, but intermediate, probabilistic representation of the system’s state. We test and compare this new algorithm against three other recent deconvolution algorithms under varied levels of autocorrelated and Gaussian noise, hemodynamic response function (HRF) misspecification, and observation sampling rate (i.e., TR). Further, we compare the algorithms’ performance using two models to simulate BOLD data: a convolution of neural events with a known (or misspecified) HRF versus a biophysically accurate balloon model of hemodynamics. We also examine the algorithms’ performance on real task data. The results demonstrated good performance of all algorithms, though the new algorithm generally outperformed the others (3.0% improvement) under simulated resting state experimental conditions exhibiting multiple, realistic confounding factors (as well as 10.3% improvement on a real Stroop task). The simulations also demonstrate that the greatest negative influence on deconvolution accuracy is observation sampling rate. Practical and theoretical implications of these results for improving inferences about neural activity from the fMRI BOLD signal are discussed. PMID:23602664

  3. TH-E-BRE-07: Development of Dose Calculation Error Predictors for a Widely Implemented Clinical Algorithm

    SciTech Connect

    Egan, A; Laub, W

    2014-06-15

    Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high-risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. And third, AAA overestimates dose in regions of very low density (< 0.2 g/cm³). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: Error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. Algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head and neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking, this method can be developed into a clinical tool that may be able to help estimate AAA dose calculation errors and when it might be advisable to use Monte Carlo calculations.
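    The abstract names the three error indices but does not spell out their definitions, so the sketch below shows only one plausible realization of a low-density index: the fraction of voxels sampled along a beamlet whose mass density falls below the 0.2 g/cm³ level at which AAA is said to overestimate dose. The density values are illustrative.

      # Hypothetical low-density index: fraction of ray samples below a density threshold.
      import numpy as np

      def low_density_index(densities_along_ray, threshold=0.2):
          """Fraction of sampled densities (g/cm^3) below the threshold."""
          d = np.asarray(densities_along_ray, dtype=float)
          return float(np.mean(d < threshold))

      ray = [1.0, 0.9, 0.15, 0.12, 0.18, 0.25, 0.95, 1.0]   # lung-like low-density stretch
      print(low_density_index(ray))                          # 3 of 8 samples below 0.2 -> 0.375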

  4. A Novel Hybrid Classification Model of Genetic Algorithms, Modified k-Nearest Neighbor and Developed Backpropagation Neural Network

    PubMed Central

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered as the most common and effective methods in classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained from combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strength of each algorithm, and is an approach to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, first, feature ranking techniques such as the Fisher's discriminant ratio and class separability criteria were used to prioritize features. Second, the obtained results that included arrays of the top-ranked features were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on optimum arrays of the features selected by genetic algorithms. The performance of the proposed model was compared with thirteen well-known classification models based on seven datasets. Furthermore, the statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best ones reported as the state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that performance of the

  5. A novel hybrid classification model of genetic algorithms, modified k-Nearest Neighbor and developed backpropagation neural network.

    PubMed

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered as the most common and effective methods in classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained from combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strength of each algorithm, and is an approach to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, first, feature ranking techniques such as the Fisher's discriminant ratio and class separability criteria were used to prioritize features. Second, the obtained results that included arrays of the top-ranked features were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on optimum arrays of the features selected by genetic algorithms. The performance of the proposed model was compared with thirteen well-known classification models based on seven datasets. Furthermore, the statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best ones reported as the state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that performance of the

  6. A simple algorithm to predict the development of radiological erosions in patients with early rheumatoid arthritis: prospective cohort study.

    PubMed Central

    Brennan, P.; Harrison, B.; Barrett, E.; Chakravarty, K.; Scott, D.; Silman, A.; Symmons, D.

    1996-01-01

    OBJECTIVE: To produce a practical algorithm to predict which patients with early rheumatoid arthritis will develop radiological erosions. DESIGN: Primary care based prospective cohort study. SETTING: All general practices in the Norwich Health Authority, Norfolk. SUBJECTS: 175 patients notified to the Norfolk Arthritis Register were visited by a metrologist soon after they had presented to their general practitioners with inflammatory polyarthritis, and again after a further 12 months. All the patients satisfied the American Rheumatism Association's 1987 criteria for rheumatoid arthritis and were seen by a metrologist within six months of the onset of symptoms. The study population was randomly split into a prediction sample (n = 105) for generating the algorithm and a validation sample (n = 70) for testing it. MAIN OUTCOME MEASURES: Predictor variables measured at baseline included rheumatoid factor status, swelling of specific joint areas, duration of morning stiffness, nodules, disability score, age, sex, and disease duration when the patient first presented. The outcome variable was the presence of radiological erosions in the hands or feet, or both, after 12 months. RESULTS: A simple algorithm based on a combination of three variables--a positive rheumatoid factor test, swelling of at least two large joints, and a disease duration of more than three months--was best able to predict erosions. When the accuracy of this algorithm was tested with the validation sample, the erosion status of 79% of patients was predicted correctly. CONCLUSIONS: A simple algorithm based on three easily measured items of information can predict which patients are at high risk and which are at low risk of developing radiological erosions. PMID:8776318
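    The abstract names the three baseline items but not the exact rule used to combine them, so the snippet below encodes one plausible reading, predicting erosions when all three markers are present, simply to make the inputs concrete.

      # One plausible encoding of the three-item rule (the exact combination rule is an assumption).
      def predicts_erosions(rheumatoid_factor_positive: bool,
                            swollen_large_joints: int,
                            disease_duration_months: float) -> bool:
          return (rheumatoid_factor_positive
                  and swollen_large_joints >= 2
                  and disease_duration_months > 3.0)

      print(predicts_erosions(True, swollen_large_joints=3, disease_duration_months=5))    # True
      print(predicts_erosions(False, swollen_large_joints=1, disease_duration_months=2))   # False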

  7. A survey of DNA motif finding algorithms

    PubMed Central

    Das, Modan K; Dai, Ho-Kwok

    2007-01-01

    Background Unraveling the mechanisms that regulate gene expression is a major challenge in biology. An important task in this challenge is to identify regulatory elements, especially the binding sites in deoxyribonucleic acid (DNA) for transcription factors. These binding sites are short DNA segments that are called motifs. Recent advances in genome sequence availability and in high-throughput gene expression analysis technologies have allowed for the development of computational methods for motif finding. As a result, a large number of motif finding algorithms have been implemented and applied to various motif models over the past decade. This survey reviews the latest developments in DNA motif finding algorithms. Results Earlier algorithms use promoter sequences of coregulated genes from a single genome and search for statistically overrepresented motifs. Recent algorithms are designed to use phylogenetic footprinting or orthologous sequences, as well as an integrated approach where promoter sequences of coregulated genes and phylogenetic footprinting are used. All the algorithms studied have been reported to correctly detect the motifs that have been previously detected by laboratory experimental approaches, and some algorithms were able to find novel motifs. However, most of these motif finding algorithms have been shown to work successfully in yeast and other lower organisms, but perform significantly worse in higher organisms. Conclusion Despite considerable efforts to date, DNA motif finding remains a complex challenge for biologists and computer scientists. Researchers have taken many different approaches in developing motif discovery tools and the progress made in this area of research is very encouraging. Performance comparison of different motif finding tools and identification of the best tools have proven to be a difficult task because tools are designed based on algorithms and motif models that are diverse and complex and our incomplete understanding of

  8. Algorithm Development and Validation of CDOM Properties for Estuarine and Continental Shelf Waters Along the Northeastern U.S. Coast

    NASA Technical Reports Server (NTRS)

    Mannino, Antonio; Novak, Michael G.; Hooker, Stanford B.; Hyde, Kimberly; Aurin, Dick

    2014-01-01

    An extensive set of field measurements has been collected throughout the continental margin of the northeastern U.S. from 2004 to 2011 to develop and validate ocean color satellite algorithms for the retrieval of the absorption coefficient of chromophoric dissolved organic matter (aCDOM) and CDOM spectral slopes for the 275:295 nm and 300:600 nm spectral ranges (S275:295 and S300:600). Remote sensing reflectance (Rrs) measurements computed from in-water radiometry profiles along with aCDOM(λ) data are applied to develop several types of algorithms for the SeaWiFS and MODIS-Aqua ocean color satellite sensors, which involve least squares linear regression of aCDOM(λ) with (1) Rrs band ratios, (2) quasi-analytical algorithm-based (QAA-based) products of total absorption coefficients, (3) multiple Rrs bands within a multiple linear regression (MLR) analysis, and (4) the diffuse attenuation coefficient (Kd). The relative error (mean absolute percent difference; MAPD) for the MLR retrievals of aCDOM(275), aCDOM(355), aCDOM(380), aCDOM(412) and aCDOM(443) for our study region ranges from 20.4% to 23.9% for MODIS-Aqua and from 27.3% to 30% for SeaWiFS. Because of the narrower range of CDOM spectral slope values, the MAPD values for the MLR S275:295 and QAA-based S300:600 algorithms are much lower: 9.9% and 8.3% for SeaWiFS, respectively, and 8.7% and 6.3% for MODIS, respectively. Seasonal and spatial MODIS-Aqua and SeaWiFS distributions of aCDOM, S275:295 and S300:600 processed with these algorithms are consistent with field measurements and the processes that impact CDOM levels along the continental shelf of the northeastern U.S. Several satellite data processing factors correlate with higher uncertainty in satellite retrievals of aCDOM, S275:295 and S300:600 within the coastal ocean, including solar zenith angle, sensor viewing angle, and atmospheric products applied for atmospheric corrections. Algorithms that include ultraviolet Rrs bands provide a better fit to field measurements than
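    The MLR flavour of the retrieval, regressing aCDOM at one wavelength on several Rrs bands, can be sketched with an ordinary least-squares fit. The band set, the log transforms, and the synthetic reflectances below are illustrative assumptions, not the coefficients or bands developed in the study.

      # Sketch of an MLR-style aCDOM retrieval from several Rrs bands.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 120
      rrs = rng.uniform(0.001, 0.012, size=(n, 4))          # synthetic Rrs at four bands (sr^-1)
      a_cdom_412 = 0.3 * (rrs[:, 3] / rrs[:, 0]) ** 0.8 * np.exp(rng.normal(scale=0.1, size=n))

      X = np.column_stack([np.ones(n), np.log10(rrs)])      # intercept + log10(Rrs) bands
      coef, *_ = np.linalg.lstsq(X, np.log10(a_cdom_412), rcond=None)

      pred = 10 ** (X @ coef)
      mapd = 100.0 * np.mean(np.abs(pred - a_cdom_412) / a_cdom_412)
      print("MLR coefficients:", np.round(coef, 3), f"MAPD = {mapd:.1f}%")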

  9. Development of a Real-Time Pulse Processing Algorithm for TES-Based X-Ray Microcalorimeters

    NASA Technical Reports Server (NTRS)

    Tan, Hui; Hennig, Wolfgang; Warburton, William K.; Doriese, W. Bertrand; Kilbourne, Caroline A.

    2011-01-01

    We report here a real-time pulse processing algorithm for superconducting transition-edge sensor (TES) based x-ray microcalorimeters. TES-based microcalorimeters offer ultra-high energy resolutions, but the small volume of each pixel requires that large arrays of identical microcalorimeter pixels be built to achieve sufficient detection efficiency. That in turn requires that as much pulse processing as possible be performed at the front end of the readout electronics to avoid transferring large amounts of data to a host computer for post-processing. Therefore, a real-time pulse processing algorithm that not only can be implemented in the readout electronics but also achieves satisfactory energy resolutions is desired. We have developed an algorithm that can be easily implemented in hardware. We then tested the algorithm offline using several data sets acquired with an 8 x 8 Goddard TES x-ray calorimeter array and a 2 x 16 NIST time-division SQUID multiplexer. We obtained an average energy resolution of close to 3.0 eV at 6 keV for the multiplexed pixels while preserving over 99% of the events in the data sets.

  10. Development of sleep apnea syndrome screening algorithm by using heart rate variability analysis and support vector machine.

    PubMed

    Nakayama, Chikao; Fujiwara, Koichi; Matsuo, Masahiro; Kano, Manabu; Kadotani, Hiroshi

    2015-08-01

    Although sleep apnea syndrome (SAS) is a common sleep disorder, most patients with sleep apnea are undiagnosed and untreated because it is difficult for patients themselves to notice SAS in daily living. Polysomnography (PSG) is a gold standard test for sleep disorder diagnosis, however PSG cannot be performed in many hospitals. This fact motivates us to develop an SAS screening system that can be used easily at home. The autonomic nervous function of a patient changes during apnea. Since changes in the autonomic nervous function affect fluctuation of the R-R interval (RRI) of an electrocardiogram (ECG), called heart rate variability (HRV), SAS can be detected through monitoring HRV. The present work proposes a new HRV-based SAS screening algorithm by utilizing support vector machine (SVM), which is a well-known pattern recognition method. In the proposed algorithm, various HRV features are derived from RRI data in both apnea and normal respiration periods of patients and healthy people, and an apnea/normal respiration (A/N) discriminant model is built from the derived HRV features by SVM. The result of applying the proposed SAS screening algorithm to clinical data demonstrates that it can discriminate patients with sleep apnea and healthy people appropriately. The sensitivity and the specificity of the proposed algorithm were 100% and 86%, respectively. PMID:26738189
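    The screening idea, deriving heart rate variability features from R-R intervals and classifying apnea versus normal respiration with an SVM, is sketched below with scikit-learn. The three time-domain features and the synthetic RRI segments are simplified stand-ins for the feature set and clinical data used in the study.

      # HRV features from RRI segments plus an SVM apnea/normal classifier (illustrative data).
      import numpy as np
      from sklearn.svm import SVC

      def hrv_features(rri_ms):
          """Simple time-domain HRV features for one segment of R-R intervals (milliseconds)."""
          rri = np.asarray(rri_ms, dtype=float)
          diffs = np.diff(rri)
          return [rri.mean(),                       # mean RRI
                  rri.std(),                        # SDNN
                  np.sqrt(np.mean(diffs ** 2))]     # RMSSD

      rng = np.random.default_rng(4)
      normal = [hrv_features(rng.normal(850, 30, size=120)) for _ in range(50)]
      apnea = [hrv_features(rng.normal(800, 80, size=120)) for _ in range(50)]   # larger swings
      X = np.array(normal + apnea)
      y = np.array([0] * 50 + [1] * 50)             # 0 = normal respiration, 1 = apnea

      clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
      print("training accuracy:", clf.score(X, y))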

  11. The development of a line-scan imaging algorithm for the detection of fecal contamination on leafy greens

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung

    2013-05-01

    This paper reports the development of a multispectral algorithm, using a line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, by use of the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the less-diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the more highly diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with a line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
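
    The two-waveband ratio rule lends itself to a very small sketch: pixels whose 666 nm / 688 nm fluorescence ratio exceeds a decision value are flagged as candidate fecal contamination. The threshold below is purely illustrative; in practice it would be derived from training images.

        import numpy as np

        def detect_fecal_contamination(f666, f688, threshold=1.05, eps=1e-6):
            """Flag pixels where the 666 nm / 688 nm fluorescence-intensity ratio exceeds a threshold.

            f666, f688 : 2-D arrays of fluorescence intensity at the two wavebands
            threshold  : illustrative decision value, to be tuned on training images
            returns    : boolean mask of suspected contamination
            """
            ratio = f666 / (f688 + eps)   # eps avoids division by zero on dark pixels
            return ratio > threshold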

  12. Towards earlier inclusion of Children in Tuberculosis (TB) drugs trials: Consensus statements from an Expert Panel

    PubMed Central

    Nachman, Sharon; Ahmed, Amina; Amanullah, Farhana; Becerra, Mercedes C; Botgros, Radu; Brigden, Grania; Browning, Renee; Gardiner, Elizabeth; Hafner, Richard; Hesseling, Anneke; How, Cleotilde; Jean-Philippe, Patrick; Lessem, Erica; Makhene, Mamodikoe; Mbelle, Nontombi; Marais, Ben; McIlleron, Helen; Mc Neeley, David F; Mendel, Carl; Murray, Stephen; Navarro, Eileen; Oramasionwu, Gloria E; Porcalla, Ariel R; Powell, Clydette; Powell, Mair; Rigaud, Mona; Rouzier, Vanessa; Samson, Pearl; Schaaf, H. Simon; Shah, Seema; Starke, Jeff; Swaminathan, Soumya; Wobudeya, Eric; Worrell, Carol

    2015-01-01

    Children represent a significant proportion of the global tuberculosis (TB) burden, and may be disproportionately more affected by its most severe clinical manifestations. Currently available treatments for pediatric drug-susceptible (DS) and drug-resistant (DR) TB, albeit generally effective, are hampered by high pill burden, long duration of treatment, coexistent toxicities, and an overall lack of suitable, child-friendly formulations. The complex and burdensome nature of administering the existing regimens to treat DS TB also contributes to the rise of DR TB strains. Despite the availability and use of these therapies for decades, a dearth of dosing evidence in children underscores the importance of sustained efforts for TB drug development to better meet the treatment needs of children with TB. Several new TB drugs and regimens with promising activity against both DS and DR TB strains have recently entered clinical development and are in various phases of clinical evaluation in adults or have received marketing authorization for adults. However, initiation of clinical trials to evaluate these drugs in children is often deferred, pending the availability of complete safety and efficacy data in adults or after drug approval. This document summarizes consensus statements from an international panel of childhood TB opinion leaders that support the initiation of evaluation of new TB drugs and regimens in children at earlier phases of the TB drug development cycle. PMID:25957923

  13. Development of the neural network algorithm projecting system Neural Architecture and its application in combining medical expert systems

    NASA Astrophysics Data System (ADS)

    Timofeew, Sergey; Eliseev, Vladimir; Tcherkassov, Oleg; Birukow, Valentin; Orbachevskyi, Leonid; Shamsutdinov, Uriy

    1998-04-01

    Some problems in the creation of medical expert systems and ways of overcoming them using artificial neural networks are discussed. The instrumental system for designing neural network algorithms, `Neural Architector', developed by the authors, is described. It allows effective modeling of artificial neural networks and analysis of their behavior. An example of the application of the `Neural Architector' system in composing an expert system for the diagnosis of pulmonary diseases is shown.

  14. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Conboy, B. (Technical Monitor)

    1999-01-01

    Significant accomplishments made during the present reporting period include: 1) Installed the spectral optimization algorithm in the SeaDas image processing environment and successfully processed SeaWiFS imagery. The results were superior to the standard SeaWiFS algorithm (the MODIS prototype) in a turbid atmosphere off the US East Coast, but similar in a clear (typical) oceanic atmosphere; 2) Inverted ACE-2 LIDAR measurements coupled with sun photometer-derived aerosol optical thickness to obtain the vertical profile of aerosol optical thickness. The profile was validated with simultaneous aircraft measurements; and 3) Obtained LIDAR and CIMEL measurements of typical maritime and mineral dust-dominated marine atmospheres in the U.S. Virgin Islands. Contemporaneous SeaWiFS imagery was also acquired.

  15. The design and development of signal-processing algorithms for an airborne x-band Doppler weather radar

    NASA Technical Reports Server (NTRS)

    Nicholson, Shaun R.

    1994-01-01

    Improved measurements of precipitation will aid our understanding of the role of latent heating on global circulations. Spaceborne meteorological sensors such as the planned precipitation radar and microwave radiometers on the Tropical Rainfall Measurement Mission (TRMM) provide for the first time a comprehensive means of making these global measurements. Pre-TRMM activities include development of precipitation algorithms using existing satellite data, computer simulations, and measurements from limited aircraft campaigns. Since the TRMM radar will be the first spaceborne precipitation radar, there is limited experience with such measurements, and only recently have airborne radars become available that can attempt to address the issue of the limitations of a spaceborne radar. There are many questions regarding how much attenuation occurs in various cloud types and the effect of cloud vertical motions on the estimation of precipitation rates. The EDOP program being developed by NASA GSFC will provide data useful for testing both rain-retrieval algorithms and the importance of vertical motions on the rain measurements. The purpose of this report is to describe the design and development of real-time embedded parallel algorithms used by EDOP to extract reflectivity and Doppler products (velocity, spectrum width, and signal-to-noise ratio) as the first step in the aforementioned goals.
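
    The report itself concerns the embedded implementation; as background, a common way to obtain the Doppler products named above (signal power, velocity, spectrum width) from pulsed returns is the pulse-pair (lag-1 autocovariance) estimator sketched below. The estimator choice and symbols are assumptions, not necessarily what EDOP used.

        import numpy as np

        def pulse_pair(iq, prf, wavelength):
            """Pulse-pair (lag-1 autocovariance) estimates for one range gate.

            iq         : 1-D complex I/Q samples, one per transmitted pulse
            prf        : pulse repetition frequency (Hz)
            wavelength : radar wavelength (m)
            returns    : (signal power, mean radial velocity m/s, spectrum width m/s)
            """
            ts = 1.0 / prf
            r0 = np.mean(np.abs(iq) ** 2)             # lag-0 autocorrelation (signal power)
            r1 = np.mean(iq[1:] * np.conj(iq[:-1]))   # lag-1 autocorrelation
            # sign of the velocity depends on the I/Q demodulation convention
            velocity = -wavelength / (4.0 * np.pi * ts) * np.angle(r1)
            ratio = np.clip(r0 / np.abs(r1), 1.0, None)
            width = wavelength / (4.0 * np.pi * ts * np.sqrt(2.0)) * np.sqrt(np.log(ratio))
            return r0, velocity, width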

  16. Signal processing algorithm of newly developed transmission-type extrinsic Fabry-Perot interferometric optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Kim, Sang-Hoon; Lee, Jung-Ju; Kwon, Il-Bum

    2000-06-01

    The newly developed TEFPI (transmission-type extrinsic Fabry-Perot interferometric) optical fiber sensor can distinguish the direction of measurement more simply and effectively than conventional reflection-type EFPI optical fiber sensors. The output signal of the TEFPI optical fiber sensor has the characteristic that the signal level of the fringes shows a negative slope for the tensile direction and a positive slope for the compressive direction. Based on this characteristic, the direction of measurement of the TEFPI optical fiber sensor can be distinguished with ease. In this paper, a signal processing algorithm suited to the TEFPI optical fiber sensor was developed. The algorithm processes the signal by recognizing the positions of peaks and valleys and the signal levels of the fringes; it can thus determine the measurement direction and the positions of direction changes from the trend of the signal levels. The developed algorithm makes both post-processing and real-time processing of the TEFPI optical fiber sensor signal possible.
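
    A minimal sketch of the idea just described: locate fringe peaks and valleys in the TEFPI output and use the trend of the fringe levels to infer the measurement direction. The peak-detection settings and the simple linear-trend test are simplified assumptions.

        import numpy as np
        from scipy.signal import find_peaks

        def fringe_direction(signal, distance=20):
            """Estimate measurement direction from the trend of fringe peak levels.

            signal   : sampled TEFPI interference signal
            distance : minimum sample spacing between fringes passed to the peak finder
            returns  : fringe count and a 'tensile'/'compressive' guess, using the sign
                       convention of the abstract (negative level slope -> tensile)
            """
            peaks, _ = find_peaks(signal, distance=distance)
            valleys, _ = find_peaks(-signal, distance=distance)
            # trend of the fringe levels: fit a line through the peak amplitudes
            slope = np.polyfit(peaks, signal[peaks], 1)[0] if len(peaks) > 1 else 0.0
            direction = "tensile" if slope < 0 else "compressive"
            return len(peaks) + len(valleys), direction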

  17. Development and characterization of an anthropomorphic breast software phantom based upon region-growing algorithm

    PubMed Central

    Bakic, Predrag R.; Zhang, Cuiping; Maidment, Andrew D. A.

    2011-01-01

    Purpose: We present a novel algorithm for computer simulation of breast anatomy for generation of anthropomorphic software breast phantoms. A realistic breast simulation is necessary for preclinical validation of volumetric imaging modalities. Methods: The anthropomorphic software breast phantom simulates the skin, regions of adipose and fibroglandular tissue, and the matrix of Cooper’s ligaments and adipose compartments. The adipose compartments are simulated using a seeded region-growing algorithm; compartments are grown from a set of seed points with specific orientation and growing speed. The resulting adipose compartments vary in shape and size similar to real breasts; the adipose region has a compact coverage by adipose compartments of various sizes, while the fibroglandular region has fewer, more widely separated adipose compartments. Simulation parameters can be selected to cover the breadth of variations in breast anatomy observed clinically. Results: When simulating breasts of the same glandularity with different numbers of adipose compartments, the average compartment volume was proportional to the phantom size and inversely proportional to the number of simulated compartments. The use of the software phantom in clinical image simulation is illustrated by synthetic digital breast tomosynthesis images of the phantom. The proposed phantom design was capable of simulating breasts of different size, glandularity, and adipose compartment distribution. The region-growing approach allowed us to simulate adipose compartments with various size and shape. Qualitatively, simulated x-ray projections of the phantoms, generated using the proposed algorithm, have a more realistic appearance compared to previous versions of the phantom. Conclusions: A new algorithm for computer simulation of breast anatomy has been proposed that improved the realism of the anthropomorphic software breast phantom. PMID:21815391
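
    A toy two-dimensional version of the seeded region growing at the core of the phantom is sketched below: each compartment claims neighbouring voxels layer by layer from its seed. Compartment-specific growth speeds, orientation, and the Cooper's-ligament matrix of the actual phantom are omitted.

        import numpy as np
        from collections import deque

        def grow_compartments(shape, seeds):
            """Toy multi-seed region growing on a 2-D grid.

            shape  : (rows, cols) of the domain
            seeds  : list of (row, col) seed points, one per compartment
            returns: label image; 0 = unassigned, i = compartment grown from seeds[i-1]
            """
            labels = np.zeros(shape, dtype=int)
            queue = deque()
            for i, (r, c) in enumerate(seeds, start=1):
                labels[r, c] = i
                queue.append((r, c))
            while queue:                                   # breadth-first growth front
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < shape[0] and 0 <= nc < shape[1] and labels[nr, nc] == 0:
                        labels[nr, nc] = labels[r, c]      # claim the neighbouring voxel
                        queue.append((nr, nc))
            return labels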

  18. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1996-01-01

    An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm is nearly complete. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. Simple algorithms such as subtracting the reflectance at 1380 nm from the visible and near infrared bands can significantly reduce the error; however, only if the diffuse transmittance of the aerosol layer is taken into account. The atmospheric correction code has been modified for use with absorbing aerosols. Tests of the code showed that, in contrast to non-absorbing aerosols, the retrievals were strongly influenced by the vertical structure of the aerosol, even when the candidate aerosol set was restricted to a set appropriate to the absorbing aerosol. This will further complicate the problem of atmospheric correction in an atmosphere with strongly absorbing aerosols. Our whitecap radiometer system and solar aureole camera were both tested at sea and performed well. Investigation of a technique to remove the effects of residual instrument polarization sensitivity was initiated and applied to an instrument possessing (approx.) 3-4 times the polarization sensitivity expected for MODIS. Preliminary results suggest that for such an instrument, elimination of the polarization effect is possible at the required level of accuracy by estimating the polarization of the top-of-atmosphere radiance to be that expected for a pure Rayleigh scattering atmosphere. This may be of significance for design of a follow-on MODIS instrument. W.M. Balch participated in two month-long cruises to the Arabian Sea, measuring coccolithophore abundance, production, and optical properties. A thorough understanding of the relationship between calcite abundance and light scatter, in situ, will provide the basis for a generic suspended calcite algorithm.

  19. A new augmentation based algorithm for extracting maximal chordal subgraphs

    SciTech Connect

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2014-10-18

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.

  20. A new augmentation based algorithm for extracting maximal chordal subgraphs

    DOE PAGESBeta

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2014-10-18

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.

  1. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs

    PubMed Central

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2014-01-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph. PMID:25767331
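
    A minimal serial sketch of the augment-a-spanning-chordal-subgraph idea (not the parallel algorithm of the paper): start from a spanning forest, which is trivially chordal, and repeatedly sweep over the remaining edges, keeping an edge only if the subgraph stays chordal. It leans on networkx's chordality test and is far slower than the algorithms discussed above.

        import networkx as nx

        def maximal_chordal_subgraph(G):
            """Grow a maximal chordal subgraph of G by augmenting a spanning forest.

            Starts from a spanning forest (chordal by construction), then repeatedly
            sweeps over the remaining edges, keeping an edge only if chordality is
            preserved.  Sweeping until no edge can be added ensures maximality.
            Illustration only; it performs O(|E|) chordality checks per sweep.
            """
            H = nx.Graph()
            H.add_nodes_from(G.nodes())
            H.add_edges_from(nx.minimum_spanning_edges(G, data=False))  # spanning forest
            changed = True
            while changed:
                changed = False
                for u, v in G.edges():
                    if H.has_edge(u, v):
                        continue
                    H.add_edge(u, v)
                    if nx.is_chordal(H):
                        changed = True
                    else:                 # the new edge created a long chordless cycle
                        H.remove_edge(u, v)
            return H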

  2. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    NASA Astrophysics Data System (ADS)

    Martin-Haugh, Stewart

    2015-12-01

    A description of the design and performance of the newly re-implemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is presented. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2. This new strategy will use a FastTrackFinder algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution, significantly faster execution times, and better efficiency than were achieved during Run 1. The timings of the algorithms for electron and tau track triggers are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. The online deployment and commissioning are also discussed.

  3. Selection and collection of multi parameter physiological data for cardiac rhythm diagnostic algorithm development

    NASA Astrophysics Data System (ADS)

    Bostock, J.; Weller, P.; Cooklin, M.

    2010-07-01

    Automated diagnostic algorithms are used in implantable cardioverter-defibrillators (ICDs) to detect abnormal heart rhythms. These algorithms can misdiagnose, and improved specificity is needed to prevent inappropriate therapy. Knowledge engineering (KE) and artificial intelligence (AI) could improve this. A pilot study of KE was performed with an artificial neural network (ANN) as the AI system. A case note review analysed arrhythmic events stored in patients' ICD memory. 13.2% of patients received inappropriate therapy. The best ICD algorithm had sensitivity 1.00 and specificity 0.69 (p<0.001 versus the gold standard). A subset of data was used to train and test an ANN. A feed-forward, back-propagation network with 7 inputs, a 4-node hidden layer and 1 output had sensitivity 1.00 and specificity 0.71 (p<0.001). A prospective study was performed using KE to list arrhythmias, factors and indicators, for which measurable parameters were evaluated and the results reviewed by a domain expert. Waveforms from electrodes in the heart and thoracic bio-impedance, temperature and motion data were collected from 65 patients during cardiac electrophysiological studies. 5 incomplete datasets were due to technical failures. We concluded that KE successfully guided the selection of parameters, that the ANN produced a usable system, and that complex data collection carries a greater risk of technical failure, leading to data loss.
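
    The network quoted above (7 inputs, a 4-node hidden layer, 1 output, trained by back-propagation) is small enough to sketch directly; the sketch below uses scikit-learn's MLPClassifier as a stand-in for the authors' implementation, with a hypothetical feature ordering.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import recall_score

        def train_rhythm_classifier(X, y):
            """Feed-forward network with the topology quoted in the abstract.

            X : (n_episodes, 7) features derived from intracardiac electrograms,
                bio-impedance, temperature and motion data (hypothetical ordering)
            y : 1 = rhythm requiring therapy, 0 = rhythm not requiring therapy
            """
            net = MLPClassifier(hidden_layer_sizes=(4,), activation="logistic",
                                solver="adam", max_iter=2000, random_state=0)
            net.fit(X, y)
            return net

        def sensitivity_specificity(net, X, y):
            pred = net.predict(X)
            return recall_score(y, pred, pos_label=1), recall_score(y, pred, pos_label=0)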

  4. The Development of a Factorizable Multigrid Algorithm for Subsonic and Transonic Flow

    NASA Technical Reports Server (NTRS)

    Roberts, Thomas W.

    2001-01-01

    The factorizable discretization of Sidilkover for the compressible Euler equations, previously demonstrated for channel flows, has been extended to external flows. The dissipation of the original scheme has been modified to maintain stability for moderately stretched grids. The discrete equations are solved by symmetric collective Gauss-Seidel relaxation and FAS multigrid. Unlike the earlier work, ordering the grid vertices in the flow direction has been found to be unnecessary. Solutions for essentially incompressible flow (Mach 0.01) and supercritical flows have been obtained for a Karman-Trefftz airfoil with its conformally mapped grid, as well as for a NACA 0012 on an algebraically generated grid. The current work demonstrates nearly O(n) convergence for subsonic and slightly transonic flows.

  5. Atmospheric Correction, Vicarious Calibration and Development of Algorithms for Quantifying Cyanobacteria Blooms from Oceansat-1 OCM Satellite Data

    NASA Astrophysics Data System (ADS)

    Dash, P.; Walker, N. D.; Mishra, D. R.; Hu, C.; D'Sa, E. J.; Pinckney, J. L.

    2011-12-01

    Cyanobacteria represent a major harmful algal group in fresh to brackish water environments. Lac des Allemands, a freshwater lake located southwest of New Orleans, Louisiana on the upper end of the Barataria Estuary, provides a natural laboratory for remote characterization of cyanobacteria blooms because of their seasonal occurrence. The Ocean Colour Monitor (OCM) sensor provides radiance measurements similar to SeaWiFS but with higher spatial resolution. However, OCM does not have a standard atmospheric correction procedure, and it is difficult to find a detailed description of the entire atmospheric correction procedure for ocean (or lake) in one place. Atmospheric correction of satellite data over small lakes and estuaries (Case 2 waters) is also challenging due to difficulties in estimation of aerosol scattering accurately in these areas. Therefore, an atmospheric correction procedure was written for processing OCM data, based on the extensive work done for SeaWiFS. Since OCM-retrieved radiances were abnormally low in the blue wavelength region, a vicarious calibration procedure was also developed. Empirical inversion algorithms were developed to convert the OCM remote sensing reflectance (Rrs) at bands centered at 510.6 and 556.4 nm to concentrations of phycocyanin (PC), the primary cyanobacterial pigment. A holistic approach was followed to minimize the influence of other optically active constituents on the PC algorithm. Similarly, empirical algorithms to estimate chlorophyll a (Chl a) concentrations were developed using OCM bands centered at 556.4 and 669 nm. The best PC algorithm (R2=0.7450, p<0.0001, n=72) yielded a root mean square error (RMSE) of 36.92 μg/L with a relative RMSE of 10.27% (PC from 2.75-363.50 μg/L, n=48). The best algorithm for Chl a (R2=0.7510, p<0.0001, n=72) produced an RMSE of 31.19 μg/L with a relative RMSE of 16.56% (Chl a from 9.46-212.76 μg/L, n=48). While more field data are required to further validate the long

  6. Development and validation of a segmentation-free polyenergetic algorithm for dynamic perfusion computed tomography.

    PubMed

    Lin, Yuan; Samei, Ehsan

    2016-07-01

    Dynamic perfusion imaging can provide the morphologic details of the scanned organs as well as the dynamic information of blood perfusion. However, due to the polyenergetic property of the x-ray spectra, beam hardening effect results in undesirable artifacts and inaccurate CT values. To address this problem, this study proposes a segmentation-free polyenergetic dynamic perfusion imaging algorithm (pDP) to provide superior perfusion imaging. Dynamic perfusion usually is composed of two phases, i.e., a precontrast phase and a postcontrast phase. In the precontrast phase, the attenuation properties of diverse base materials (e.g., in a thorax perfusion exam, base materials can include lung, fat, breast, soft tissue, bone, and metal implants) can be incorporated to reconstruct artifact-free precontrast images. If patient motions are negligible or can be corrected by registration, the precontrast images can then be employed as a priori information to derive linearized iodine projections from the postcontrast images. With the linearized iodine projections, iodine perfusion maps can be reconstructed directly without the influence of various influential factors, such as iodine location, patient size, x-ray spectrum, and background tissue type. A series of simulations were conducted on a dynamic iodine calibration phantom and a dynamic anthropomorphic thorax phantom to validate the proposed algorithm. The simulations with the dynamic iodine calibration phantom showed that the proposed algorithm could effectively eliminate the beam hardening effect and enable quantitative iodine map reconstruction across various influential factors. The error range of the iodine concentration factors ([Formula: see text]) was reduced from [Formula: see text] for filtered back-projection (FBP) to [Formula: see text] for pDP. The quantitative results of the simulations with the dynamic anthropomorphic thorax phantom indicated that the maximum error of iodine concentrations can be reduced from

  7. Development and validation of an automated operational modal analysis algorithm for vibration-based monitoring and tensile load estimation

    NASA Astrophysics Data System (ADS)

    Rainieri, Carlo; Fabbrocino, Giovanni

    2015-08-01

    In the last few decades large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure lifespan. However, the lack of automated modal identification and tracking procedures has been for long a relevant drawback to the extensive application of the above-mentioned techniques in the engineering practice. An increasing number of field applications of modal-based structural health and performance assessment are appearing after the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational efforts and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm based on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous

  8. Watershed model calibration framework developed using an influence coefficient algorithm and a genetic algorithm and analysis of pollutant discharge characteristics and load reduction in a TMDL planning area.

    PubMed

    Cho, Jae Heon; Lee, Jong Ho

    2015-11-01

    Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to manually calibrate the parameters individually and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity but shortcomings regarding understanding indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and genetic algorithm (WMCIG) to automatically calibrate the distributed models. The optimization problem used to minimize the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals of all iterations. The WMCIG was applied to a Gomakwoncheon watershed located in an area that presents a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall that occurs during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality in this area during the frequent heavy rainfall that occurs during the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction
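
    A schematic of the calibration loop described above: a small real-coded genetic algorithm searches the parameter space to minimize the sum of squares of the normalized residuals between observed and simulated values. The model runner, bounds, and GA settings are placeholders, not the WMCIG implementation.

        import numpy as np

        def objective(params, run_model, observed):
            """Sum of squares of normalized residuals, the criterion quoted in the abstract."""
            simulated = run_model(params)          # placeholder model runner (e.g. an HSPF wrapper)
            return np.sum(((observed - simulated) / observed) ** 2)

        def calibrate_ga(run_model, observed, bounds, pop=40, gens=100, rng=None):
            """Tiny real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
            rng = rng or np.random.default_rng(0)
            lo, hi = np.array(bounds).T
            population = rng.uniform(lo, hi, size=(pop, len(lo)))
            for _ in range(gens):
                fitness = np.array([objective(p, run_model, observed) for p in population])
                children = []
                for _ in range(pop):
                    i, j = rng.integers(pop, size=2)            # tournament of two (parent A)
                    a = population[i] if fitness[i] < fitness[j] else population[j]
                    k, l = rng.integers(pop, size=2)            # tournament of two (parent B)
                    b = population[k] if fitness[k] < fitness[l] else population[l]
                    w = rng.uniform(size=len(lo))               # blend crossover
                    child = w * a + (1 - w) * b
                    child += rng.normal(0, 0.05 * (hi - lo))    # Gaussian mutation
                    children.append(np.clip(child, lo, hi))
                population = np.array(children)
            fitness = np.array([objective(p, run_model, observed) for p in population])
            return population[np.argmin(fitness)]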

  9. Development and comparative assessment of Raman spectroscopic classification algorithms for lesion discrimination in stereotactic breast biopsies with microcalcifications

    PubMed Central

    Dingari, Narahara Chari; Barman, Ishan; Saha, Anushree; McGee, Sasha; Galindo, Luis H.; Liu, Wendy; Plecha, Donna; Klein, Nina; Dasari, Ramachandra Rao; Fitzmaurice, Maryann

    2014-01-01

    Microcalcifications are an early mammographic sign of breast cancer and a target for stereotactic breast needle biopsy. Here, we develop and compare different approaches for developing Raman classification algorithms to diagnose invasive and in situ breast cancer, fibrocystic change and fibroadenoma that can be associated with microcalcifications. In this study, Raman spectra were acquired from tissue cores obtained from fresh breast biopsies and analyzed using a constituent-based breast model. Diagnostic algorithms based on the breast model fit coefficients were devised using logistic regression, C4.5 decision tree classification, k-nearest neighbor (k-NN) and support vector machine (SVM) analysis, and subjected to leave-one-out cross validation. The best performing algorithm was based on SVM analysis (with radial basis function), which yielded a positive predictive value of 100% and negative predictive value of 96% for cancer diagnosis. Importantly, these results demonstrate that Raman spectroscopy provides adequate diagnostic information for lesion discrimination even in the presence of microcalcifications, which to the best of our knowledge has not been previously reported. Raman spectroscopy and multivariate classification provide accurate discrimination among lesions in stereotactic breast biopsies, irrespective of microcalcification status. PMID:22815240

  10. Development and comparative assessment of Raman spectroscopic classification algorithms for lesion discrimination in stereotactic breast biopsies with microcalcifications.

    PubMed

    Dingari, Narahara Chari; Barman, Ishan; Saha, Anushree; McGee, Sasha; Galindo, Luis H; Liu, Wendy; Plecha, Donna; Klein, Nina; Dasari, Ramachandra Rao; Fitzmaurice, Maryann

    2013-04-01

    Microcalcifications are an early mammographic sign of breast cancer and a target for stereotactic breast needle biopsy. Here, we develop and compare different approaches for developing Raman classification algorithms to diagnose invasive and in situ breast cancer, fibrocystic change and fibroadenoma that can be associated with microcalcifications. In this study, Raman spectra were acquired from tissue cores obtained from fresh breast biopsies and analyzed using a constituent-based breast model. Diagnostic algorithms based on the breast model fit coefficients were devised using logistic regression, C4.5 decision tree classification, k-nearest neighbor (k -NN) and support vector machine (SVM) analysis, and subjected to leave-one-out cross validation. The best performing algorithm was based on SVM analysis (with radial basis function), which yielded a positive predictive value of 100% and negative predictive value of 96% for cancer diagnosis. Importantly, these results demonstrate that Raman spectroscopy provides adequate diagnostic information for lesion discrimination even in the presence of microcalcifications, which to the best of our knowledge has not been previously reported. PMID:22815240

  11. Drowsiness/alertness algorithm development and validation using synchronized EEG and cognitive performance to individualize a generalized model

    PubMed Central

    Johnson, Robin R.; Popovic, Djordje P.; Olmstead, Richard E.; Stikic, Maja; Levendowski, Daniel J.; Berka, Chris

    2011-01-01

    A great deal of research over the last century has focused on drowsiness/alertness detection, as fatigue-related physical and cognitive impairments pose a serious risk to public health and safety. Available drowsiness/alertness detection solutions are unsatisfactory for a number of reasons: 1) lack of generalizability, 2) failure to address individual variability in generalized models, and/or 3) lack of a portable, untethered application. The current study aimed to address these issues, and determine if an individualized electroencephalography (EEG) based algorithm could be defined to track performance decrements associated with sleep loss, as this is the first step in developing a field-deployable drowsiness/alertness detection system. The results indicated that an EEG-based algorithm, individualized using a series of brief "identification" tasks, was able to effectively track performance decrements associated with sleep deprivation. Future development will address the need for the algorithm to predict performance decrements due to sleep loss, and provide field applicability. PMID:21419826

  12. Development of a remote sensing algorithm for cyanobacterial phycocyanin pigment in the Baltic Sea using neural network approach

    NASA Astrophysics Data System (ADS)

    Riha, Stefan; Krawczyk, Harald

    2011-11-01

    Water quality monitoring in the Baltic Sea is of high ecological importance for all its neighbouring countries. They are highly interested in regular monitoring of the water quality parameters of their regional zones. Special attention is paid to the occurrence and spread of algae blooms. Among these blooms, potentially toxic or otherwise harmful cyanobacteria cultures are a special case for investigation because of their specific optical properties and their negative influence on the ecological state of the aquatic system. Satellite remote sensing, with its high temporal and spatial resolution, allows frequent observation of large areas of the Baltic Sea with special focus on its two seasonal algae blooms. For better monitoring of the cyanobacteria-dominated summer blooms, adapted algorithms are needed that take into account the special optical properties of blue-green algae. Standard chlorophyll-a algorithms typically fail to recognize these occurrences correctly. To significantly improve the observation and tracking of cyanobacteria blooms, the Marine Remote Sensing group of DLR has started the development of a model-based inversion algorithm that includes a four-component bio-optical water model for Case-2 waters, which extends the commonly calculated parameter set (chlorophyll, suspended matter and CDOM) with an additional parameter for the estimation of phycocyanin absorption. It was necessary to carry out detailed optical laboratory measurements with different cyanobacteria cultures occurring in the Baltic Sea for the generation of a specific bio-optical model. The inversion of satellite remote sensing data is based on an artificial neural network technique, a model-based multivariate non-linear inversion approach. The specifically designed neural network is trained with a comprehensive dataset of simulated reflectance values taking into account the laboratory obtained specific optical

  13. Study report on interfacing major physiological subsystem models: An approach for developing a whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Fitzjerrell, D. G.; Grounds, D. J.; Leonard, J. I.

    1975-01-01

    Using a whole-body algorithm simulation model, a wide variety and large number of stresses, as well as different stress levels, were simulated, including environmental disturbances, metabolic changes, and special experimental situations. Simulation of short-term stresses resulted in simultaneous and integrated responses from the cardiovascular, respiratory, and thermoregulatory subsystems, and the accuracy of a large number of responding variables was verified. The capability of simulating significantly longer responses was demonstrated by validating a four-week bed rest study. In this case, the long-term subsystem model was found to reproduce many experimentally observed changes in circulatory dynamics, body fluid-electrolyte regulation, and renal function. The value of systems analysis and the selected design approach for developing a whole-body algorithm was demonstrated.

  14. Some computational challenges of developing efficient parallel algorithms for data-dependent computations in thermal-hydraulics supercomputer applications

    SciTech Connect

    Woodruff, S.B.

    1992-05-01

    The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data-parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses why developers should consider algorithms, such as a neural net representation, that do not exhibit load-balancing problems.

  15. Data and software tools for gamma radiation spectral threat detection and nuclide identification algorithm development and evaluation

    NASA Astrophysics Data System (ADS)

    Portnoy, David; Fisher, Brian; Phifer, Daniel

    2015-06-01

    The detection of radiological and nuclear threats is extremely important to national security. The federal government is spending significant resources developing new detection systems and attempting to increase the performance of existing ones. The detection of illicit radionuclides that may pose a radiological or nuclear threat is a challenging problem complicated by benign radiation sources (e.g., cat litter and medical treatments), shielding, and large variations in background radiation. Although there is a growing acceptance within the community that concentrating efforts on algorithm development (independent of the specifics of fully assembled systems) has the potential for significant overall system performance gains, there are two major hindrances to advancements in gamma spectral analysis algorithms under the current paradigm: access to data and common performance metrics along with baseline performance measures. Because many of the signatures collected during performance measurement campaigns are classified, dissemination to algorithm developers is extremely limited. This leaves developers no choice but to collect their own data if they are lucky enough to have access to material and sensors. This is often combined with their own definition of metrics for measuring performance. These two conditions make it all but impossible for developers and external reviewers to make meaningful comparisons between algorithms. Without meaningful comparisons, performance advancements become very hard to achieve and (more importantly) recognize. The objective of this work is to overcome these obstacles by developing and freely distributing real and synthetically generated gamma-spectra data sets as well as software tools for performance evaluation with associated performance baselines to national labs, academic institutions, government agencies, and industry. At present, datasets for two tracks, or application domains, have been developed: one that includes temporal

  16. Development of an apnea detection algorithm based on temporal analysis of thoracic respiratory effort signal

    NASA Astrophysics Data System (ADS)

    Dell’Aquila, C. R.; Cañadas, G. E.; Correa, L. S.; Laciar, E.

    2016-04-01

    This work describes the design of an algorithm for detecting apnea episodes based on analysis of the thoracic respiratory effort signal. Inspiration and expiration times, and the amplitude range of the respiratory cycle, were evaluated. For the range analysis, the standard deviation was computed over temporal windows of the respiratory signal. Performance was validated on 8 records of the Apnea-ECG database, which has annotations of apnea episodes. The results are: sensitivity (Se) 73%, specificity (Sp) 83%. These values could be improved by eliminating artifacts from the signal records.
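
    The range analysis described above (standard deviation over temporal windows of the thoracic effort signal) can be sketched as follows; the window length and the low-amplitude threshold are illustrative values.

        import numpy as np

        def detect_apnea(effort, fs, window_s=10, threshold=0.15):
            """Flag windows of the thoracic respiratory effort signal with low amplitude range.

            effort    : 1-D respiratory effort signal
            fs        : sampling frequency (Hz)
            window_s  : window length in seconds
            threshold : fraction of the whole-record standard deviation below which a
                        window is flagged as a candidate apnea (illustrative value)
            """
            n = int(window_s * fs)
            n_windows = len(effort) // n
            windows = effort[: n_windows * n].reshape(n_windows, n)
            window_sd = windows.std(axis=1)
            return window_sd < threshold * effort.std()     # boolean flag per window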

  17. Use of a Stochastic Joint Inversion Modeling Algorithm to Develop a Hydrothermal Flow Model at a Geothermal Prospect

    NASA Astrophysics Data System (ADS)

    Tompson, A. F. B.; Mellors, R. J.; Dyer, K.; Yang, X.; Chen, M.; Trainor Guitton, W.; Wagoner, J. L.; Ramirez, A. L.

    2014-12-01

    A stochastic joint inverse algorithm is used to analyze diverse geophysical and hydrologic data associated with a geothermal prospect. The approach uses a Markov Chain Monte Carlo (MCMC) global search algorithm to develop an ensemble of hydrothermal groundwater flow models that are most consistent with the observations. The algorithm utilizes an initial conceptual model descriptive of structural (geology), parametric (permeability) and hydrothermal (saturation, temperature) characteristics of the geologic system. Initial (a-priori) estimates of uncertainty in these characteristics are used to drive simulations of hydrothermal fluid flow and related geophysical processes in a large number of random realizations of the conceptual geothermal system spanning these uncertainties. The process seeks to improve the conceptual model by developing a ranked subset of model realizations that best match all available data within a specified norm or tolerance. Statistical (posterior) characteristics of these solutions reflect reductions in the a-priori uncertainties. The algorithm has been tested on a geothermal prospect located at Superstition Mountain, California and has been successful in creating a suite of models compatible with available temperature, surface resistivity, and magnetotelluric (MT) data. Although the MCMC method is highly flexible and capable of accommodating multiple and diverse datasets, a typical inversion may require the evaluation of thousands of possible model runs whose sophistication and complexity may evolve with the magnitude of data considered. As a result, we are testing the use of sensitivity analyses to better identify critical uncertain variables, lower order surrogate models to streamline computational costs, and value of information analyses to better assess optimal use of related data. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL
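
    The MCMC search accepts or rejects candidate realizations according to how well their simulated observables match the data. A generic random-walk Metropolis sketch is given below; the forward model, misfit norm, and step sizes are placeholders rather than the implementation used in this study.

        import numpy as np

        def mcmc_search(forward_model, data, sigma, x0, step, n_iter=5000, rng=None):
            """Random-walk Metropolis sampler over uncertain model parameters.

            forward_model : maps a parameter vector to simulated observables
                            (e.g. temperatures, resistivities, MT responses)
            data, sigma   : observations and their assumed uncertainties
            x0, step      : starting parameter vector and proposal step sizes
            returns       : list of sampled parameter vectors (the model ensemble)
            """
            rng = rng or np.random.default_rng(0)
            def log_like(x):
                return -0.5 * np.sum(((forward_model(x) - data) / sigma) ** 2)
            x = np.asarray(x0, dtype=float)
            lx = log_like(x)
            ensemble = []
            for _ in range(n_iter):
                cand = x + rng.normal(0.0, step, size=x.shape)
                lc = log_like(cand)
                if np.log(rng.uniform()) < lc - lx:        # Metropolis acceptance rule
                    x, lx = cand, lc
                ensemble.append(x.copy())
            return ensemble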

  18. Robust integration schemes for generalized viscoplasticity with internal-state variables. Part 2: Algorithmic developments and implementation

    NASA Technical Reports Server (NTRS)

    Li, Wei; Saleeb, Atef F.

    1995-01-01

    This two-part report is concerned with the development of a general framework for the implicit time-stepping integrators for the flow and evolution equations in generalized viscoplastic models. The primary goal is to present a complete theoretical formulation, and to address in detail the algorithmic and numerical analysis aspects involved in its finite element implementation, as well as to critically assess the numerical performance of the developed schemes in a comprehensive set of test cases. On the theoretical side, the general framework is developed on the basis of the unconditionally-stable, backward-Euler difference scheme as a starting point. Its mathematical structure is of sufficient generality to allow a unified treatment of different classes of viscoplastic models with internal variables. In particular, two specific models of this type, which are representative of the present state-of-the-art in metal viscoplasticity, are considered in applications reported here; i.e., fully associative (GVIPS) and non-associative (NAV) models. The matrix forms developed for both these models are directly applicable for both initially isotropic and anisotropic materials, in general (three-dimensional) situations as well as subspace applications (i.e., plane stress/strain, axisymmetric, generalized plane stress in shells). On the computational side, issues related to efficiency and robustness are emphasized in developing the (local) iterative algorithm. In particular, closed-form expressions for residual vectors and (consistent) material tangent stiffness arrays are given explicitly for both GVIPS and NAV models, with their maximum sizes 'optimized' to depend only on the number of independent stress components (but independent of the number of viscoplastic internal state parameters). Significant robustness of the local iterative solution is provided by complementing the basic Newton-Raphson scheme with a line-search strategy for convergence. In the present second part of

  19. Stream-reach Identification for New Run-of-River Hydropower Development through a Merit Matrix Based Geospatial Algorithm

    SciTech Connect

    Pasha, M. Fayzul K.; Yeasmin, Dilruba; Kao, Shih-Chieh; Hadjerioua, Boualem; Wei, Yaxing; Smith, Brennan T

    2014-01-01

    Even after a century of development, the total hydropower potential from undeveloped rivers is still considered to be abundant in the United States. However, unlike evaluating hydropower potential at existing hydropower plants or non-powered dams, locating a feasible new hydropower plant involves many unknowns, and hence the total undeveloped potential is harder to quantify. In light of the rapid development of multiple national geospatial datasets for topography, hydrology, and environmental characteristics, a merit matrix based geospatial algorithm is proposed to help identify possible hydropower stream-reaches for future development. These hydropower stream-reaches, i.e., sections of natural streams with suitable head, flow, and slope for possible future development, are identified and compared using three different scenarios. A case study was conducted in the Alabama-Coosa-Tallapoosa (ACT) and Apalachicola-Chattahoochee-Flint (ACF) hydrologic subregions. It was found that a merit matrix based algorithm, which is based on the product of hydraulic head, annual mean flow, and average channel slope, can help effectively identify stream-reaches with high power density and small surface inundation. The identified stream-reaches can then be efficiently evaluated for their potential environmental impact, land development cost, and other competing water usage in detailed feasibility studies. Given that the selected datasets are available nationally (at least within the conterminous US), the proposed methodology will have wide applicability across the country.
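
    Under the stated assumption that the merit score is the product of hydraulic head, annual mean flow, and average channel slope, ranking candidate reaches reduces to a few lines; the array names below are illustrative.

        import numpy as np

        def rank_stream_reaches(head_m, flow_m3s, slope):
            """Rank candidate stream-reaches by the merit product quoted in the abstract.

            head_m, flow_m3s, slope : 1-D arrays of hydraulic head (m), annual mean
                                      flow (m^3/s) and average channel slope per reach
            returns : reach indices sorted from highest to lowest merit score
            """
            merit = np.asarray(head_m) * np.asarray(flow_m3s) * np.asarray(slope)
            return np.argsort(merit)[::-1]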

  20. Biological master games: using biologists' reasoning to guide algorithm development for integrated functional genomics.

    PubMed

    Breitling, Rainer; Herzyk, Pawel

    2005-01-01

    We review some powerful new algorithms that build on the intuitive biological interpretation techniques for statistical analysis of functional genomics experiments. Although they were originally designed for transcriptomics, we argue that these algorithms are applicable to any type of -omics study (transcriptomics, proteomics, metabolomics). Rank Products (RP) is a strictly non-parametric test statistic to detect differentially regulated elements (genes, proteins, metabolites) in genome-wide screens. RP is particularly powerful for noisy data and low numbers of replicates and makes full use of the availability of a large number of parallel measurements that is typical of modern large-scale experiments. Iterative Group Analysis (iGA) is a statistical method that makes the transition from regulated single elements to significant classes of elements, and thus provides an automatic functional annotation of an experiment. Graph-based iGA (GiGA) is an extension of iGA that combines experimental data with a broad variety of biological annotations to highlight physiologically relevant regions in a given "evidence graph" (e.g., metabolic networks, signaling pathway diagrams, protein interaction maps). The sequential application of these techniques yields an increasingly abstract interpretation of experimental data that is at the same time quantitative, statistically rigorous, and biologically significant. The results can be used either as helpful tools to guide data visualization and exploration, or as the input for downstream computational applications in a systems biology framework. PMID:16209637
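
    Of the three methods, Rank Products is the simplest to sketch: rank each replicate separately and take the geometric mean of an element's ranks, so elements consistently near the top of every replicate receive small rank products. Permutation-based significance estimation is omitted here.

        import numpy as np
        from scipy.stats import rankdata

        def rank_products(fold_changes):
            """Rank Products statistic for an (n_elements, n_replicates) matrix of
            per-replicate fold changes (larger value = more up-regulated).

            Each replicate is ranked so that the most up-regulated element has rank 1;
            the statistic is the geometric mean of an element's ranks across replicates.
            Smaller values indicate more consistent up-regulation.
            """
            fc = np.asarray(fold_changes, dtype=float)
            ranks = np.column_stack([rankdata(-fc[:, j]) for j in range(fc.shape[1])])
            return np.exp(np.mean(np.log(ranks), axis=1))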

  1. SeaWiFS Technical Report Series. Volume 42; Satellite Primary Productivity Data and Algorithm Development: A Science Plan for Mission to Planet Earth

    NASA Technical Reports Server (NTRS)

    Falkowski, Paul G.; Behrenfeld, Michael J.; Esaias, Wayne E.; Balch, William; Campbell, Janet W.; Iverson, Richard L.; Kiefer, Dale A.; Morel, Andre; Yoder, James A.; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor)

    1998-01-01

    Two issues regarding primary productivity, as it pertains to the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Program and the National Aeronautics and Space Administration (NASA) Mission to Planet Earth (MTPE) are presented in this volume. Chapter 1 describes the development of a science plan for deriving primary production for the world ocean using satellite measurements, by the Ocean Primary Productivity Working Group (OPPWG). Chapter 2 presents discussions by the same group, of algorithm classification, algorithm parameterization and data availability, algorithm testing and validation, and the benefits of a consensus primary productivity algorithm.

  2. Earlier onset of motor deficits in mice with double mutations in Dyt1 and Sgce

    PubMed Central

    Yokoi, Fumiaki; Yang, Guang; Li, JinDong; DeAndrade, Mark P.; Zhou, Tong; Li, Yuqing

    2010-01-01

    DYT1 early-onset generalized torsion dystonia is an inherited movement disorder caused by mutations in DYT1 coding for torsinA with ∼30% penetrance. Most of the DYT1 dystonia patients exhibit symptoms during childhood and adolescence. On the other hand, DYT1 mutation carriers without symptoms during these periods mostly do not exhibit symptoms later in their life. Little is known about what controls the timing of the onset, a critical issue for DYT1 mutation carriers. DYT11 myoclonus-dystonia is caused by mutations in SGCE coding for ε-sarcoglycan. Two dystonia patients from a single family with double mutations in DYT1 and SGCE exhibited more severe symptoms. A recent study suggested that torsinA contributes to the quality control of ε-sarcoglycan. Here, we derived mice carrying mutations in both Dyt1 and Sgce and found that these double mutant mice showed earlier onset of motor deficits in beam-walking test. A novel monoclonal antibody against mouse ε-sarcoglycan was developed by using Sgce knock-out mice to avoid the immune tolerance. Western blot analysis suggested that functional deficits of torsinA and ε-sarcoglycan may independently cause motor deficits. Examining additional mutations in other dystonia genes may be beneficial to predict the onset in DYT1 mutation carriers. PMID:20627944

  3. Development of an effective lidar retrieval algorithm using lidar measurements during 2008 China-US joined dust field campaign

    NASA Astrophysics Data System (ADS)

    Huang, Z.; Huang, J.

    2009-12-01

    In this study, an effective algorithm was developed to retrieve aerosol optical properties and vertical profiles using ground-based lidar measurements. The advantage of this algorithm is that retrieving aerosol optical depth from lidar measurements does not require the so-called lidar ratio to reach the same retrieval quality as the Sun photometers of AERONET. Also, errors were apparently reduced when retrieving other optical properties using the obtained AOD as a constraint. This effective algorithm was applied to retrieve the dust aerosol vertical profiles measured by three MPL-net Micro-Pulse Lidar systems, which are located at one permanent site (the Semi-Arid Climate & Environment Observatory of Lanzhou University (SACOL), located in Yuzhong, 35.95N/104.1E), one SACOL Mobile Facility (SMF, deployed in Jintai, 37.57N/104.23E) and the U.S. Department of Energy Atmospheric Radiation Measurement (ARM) Ancillary Facility (AAF mobile laboratories, SMART-COMMIT, deployed in Zhangye, 39.08N/100.27E), during the 2008 China-US joint dust field campaign (March-June 2008). A dust storm case that widely influenced Northwest China on 2 May 2008 was studied using the three ground-based lidars and satellite-borne instrument measurements. The results show the different aerosol vertical structures at each site. Characteristics of the aerosol vertical structure in spring over Northwest China were also investigated using the new method.

  4. Development and validation of an algorithm to identify patients newly diagnosed with HIV infection from electronic health records.

    PubMed

    Goetz, Matthew Bidwell; Hoang, Tuyen; Kan, Virginia L; Rimland, David; Rodriguez-Barradas, Maria

    2014-07-01

    An algorithm was developed that identifies patients with new diagnoses of HIV infection from electronic health records. It was based on the sequence of HIV diagnostic tests, entry of ICD-9-CM diagnostic codes, and measurement of HIV-1 plasma RNA levels in persons undergoing HIV testing from 2006 to 2012 at four large urban Veterans Health Administration (VHA) facilities. Source data were obtained from the VHA National Corporate Data Warehouse. Chart review was done by a single trained abstractor to validate site-level data regarding new diagnoses. We identified 1,153 patients as having a positive HIV diagnostic test within the VHA. Of these, 57% were determined to have prior knowledge of their HIV status from testing at non-VHA facilities. An algorithm based on the sequence and results of available laboratory tests and ICD-9-CM entries identified new HIV diagnoses with a sensitivity of 83%, specificity of 86%, positive predictive value of 85%, and negative predictive value of 90%. There were no meaningful demographic or clinical differences between newly diagnosed patients who were correctly or incorrectly classified by the algorithm. We have validated a method to identify cases of newly diagnosed HIV infection in large administrative datasets. This method, which has a sensitivity of 83%, specificity of 86%, positive predictive value of 85%, and negative predictive value of 90%, can be used in analyses of the epidemiology of newly diagnosed HIV infection. PMID:24564256
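
    As a rough illustration of how such record-based case finding can work, the Python sketch below flags a patient as newly diagnosed when no HIV-related ICD-9 entry or plasma RNA measurement precedes the first positive diagnostic test. The field names and the single rule are hypothetical simplifications; the validated algorithm combines the sequence and results of several test types in ways the abstract does not fully specify.

      from datetime import date
      from typing import List

      def is_new_diagnosis(first_positive_test: date,
                           prior_hiv_icd9_dates: List[date],
                           prior_rna_dates: List[date]) -> bool:
          """Illustrative rule only: call the diagnosis 'new' when no HIV ICD-9
          entry or HIV-1 RNA measurement predates the first positive test.
          The published algorithm's exact rules are not reproduced here."""
          earliest_prior_evidence = min(prior_hiv_icd9_dates + prior_rna_dates,
                                        default=None)
          if earliest_prior_evidence is None:
              return True
          return earliest_prior_evidence >= first_positive_test

      # A positive test in 2010 preceded by an RNA level in 2008 suggests the
      # infection was already known, so it is not counted as a new diagnosis.
      print(is_new_diagnosis(date(2010, 3, 1), [], [date(2008, 5, 20)]))  # False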

  5. A dike-groyne algorithm in a terrain-following coordinate ocean model (FVCOM): Development, validation and application

    NASA Astrophysics Data System (ADS)

    Ge, Jianzhong; Chen, Changsheng; Qi, Jianhua; Ding, Pingxing; Beardsley, Robert C.

    A dike-groyne module is developed and implemented into the unstructured-grid, three-dimensional primitive equation finite-volume coastal ocean model (FVCOM) for the study of hydrodynamics around human-made structures in coastal areas. The unstructured-grid finite-volume flux discrete algorithm makes this module capable of realistically including narrow-width dikes and groynes with free exchange in the upper column and solid blocking in the lower column in a terrain-following coordinate system. The algorithm used in the module is validated for idealized cases with emerged and/or submerged dikes and a coastal seawall, for which either analytical solutions or laboratory experiments are available for comparison. As an example, this module is applied to the Changjiang Estuary, where a dike-groyne structure was constructed in the Deep Waterway channel in the inner shelf of the East China Sea (ECS). Driven by the same forcing under given initial and boundary conditions, model-predicted flow and salinity were compared against observations for both the dike-groyne and bed-conforming slope algorithms. The results show that, by realistically resolving water transport above and below the dike-groyne structures, the new method provides more accurate results. FVCOM with this MPI-parallelized dike-groyne module provides a new tool for ocean engineering and inundation applications in coastal regions with dike, seawall and/or dam structures.

  6. Detection of surface algal blooms using the newly developed algorithm surface algal bloom index (SABI)

    NASA Astrophysics Data System (ADS)

    Alawadi, Fahad

    2010-10-01

    Quantifying ocean colour properties has evolved over the past two decades from merely detecting biological activity to estimating chlorophyll concentration using optical satellite sensors such as MODIS and MERIS. The production of chlorophyll spatial distribution maps is a good indicator of plankton biomass (primary production) and is useful for tracing oceanographic currents, jets and blooms, including harmful algal blooms (HABs). Depending on the type of HAB involved and the environmental conditions, a concentration rising above a critical threshold can impact the flora and fauna of the aquatic habitat through the so-called "red tide" phenomenon. The estimation of chlorophyll concentration is derived from quantifying the spectral relationship between the blue and the green bands reflected from the water column. This spectral relationship is employed in the standard ocean colour chlorophyll-a (Chlor-a) product, but it is incapable of detecting certain macro-algal species that float near or at the water surface in the form of dense filaments or mats. The ability to accurately identify algal formations that sometimes appear as oil spill look-alikes in satellite imagery contributes towards reducing false-positive incidents arising from oil spill monitoring operations. Such algal formations occurring in relatively high concentrations may experience, as in land vegetation, what is known as the "red-edge" effect. This phenomenon occurs at the highest reflectance slope between the maximum absorption in the red, due to the surrounding ocean water, and the maximum reflectance in the infra-red, due to the photosynthetic pigments present in the surface algae. A new algorithm termed the surface algal bloom index (SABI) has been proposed to delineate the spatial distributions of floating micro-algal species, for example cyanobacteria, or exposed inter-tidal vegetation such as seagrass. This algorithm was
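
    Since the abstract is cut off before the index itself is defined, the Python sketch below only illustrates the general idea of a red-edge-based band ratio for surface blooms; the (NIR - red)/(blue + green) combination used here is an assumption, not necessarily the published SABI formulation.

      import numpy as np

      def sabi(nir, red, blue, green, eps=1e-6):
          """Band-ratio index exploiting the red-edge contrast between red
          absorption and near-infrared reflectance of floating vegetation.
          The (NIR - R)/(B + G) form is assumed; the truncated abstract does
          not state the exact band combination."""
          nir, red, blue, green = (np.asarray(b, dtype=float)
                                   for b in (nir, red, blue, green))
          return (nir - red) / (blue + green + eps)

      # Pixels with strongly positive values are candidate surface-bloom or
      # floating-vegetation pixels; open water tends toward negative values.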

  7. MUlti-Dimensional Spline-Based Estimator (MUSE) for Motion Estimation: Algorithm Development and Initial Results

    PubMed Central

    Viola, Francesco; Coe, Ryan L.; Owen, Kevin; Guenther, Drake A.; Walker, William F.

    2008-01-01

    Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of its central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minima of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE) that allows accurate and precise estimation of multidimensional displacements/strain components from multidimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits maximum bias of 2.6 × 10−4 samples in range and 2.2 × 10−3 samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 × 10−3 samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz. While our validation of the algorithm was performed using ultrasound data, MUSE
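
    The one-dimensional Python sketch below illustrates the underlying idea of spline-based sub-sample delay estimation: model the reference signal with a cubic spline and search continuously for the shift that best matches the delayed signal. MUSE itself is multidimensional and uses an analytical matching function rather than a numerical search, so this is only a simplified analogue.

      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.optimize import minimize_scalar

      def spline_delay(reference, delayed, search=(-2.0, 2.0)):
          """Sub-sample delay between two sampled signals: a cubic spline gives
          a continuous model of the reference, and the delay minimising the
          sum-of-squared differences to the delayed signal is found by a
          bounded continuous search (a 1-D analogue of the spline idea)."""
          n = np.arange(len(reference))
          spline = CubicSpline(n, reference)
          inner = n[5:-5]  # avoid evaluating the spline near the ends

          def cost(tau):
              return np.sum((spline(inner + tau) - delayed[inner]) ** 2)

          return minimize_scalar(cost, bounds=search, method="bounded").x

      # Synthetic check: a band-limited pulse shifted by 0.3 samples.
      t = np.arange(128)
      ref = np.exp(-0.5 * ((t - 64.0) / 6.0) ** 2) * np.cos(0.8 * (t - 64.0))
      dly = np.exp(-0.5 * ((t - 64.3) / 6.0) ** 2) * np.cos(0.8 * (t - 64.3))
      print(spline_delay(ref, dly))  # close to -0.3 under this sign convention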

  8. Evidence-Based Skin Care: A Systematic Literature Review and the Development of a Basic Skin Care Algorithm.

    PubMed

    Lichterfeld, Andrea; Hauss, Armin; Surber, Christian; Peters, Tina; Blume-Peytavi, Ulrike; Kottner, Jan

    2015-01-01

    Patients in acute and long-term care settings receive daily routine skin care, including washing, bathing, and showering, often followed by application of lotions, creams, and/or ointments. These personal hygiene and skin care activities are integral parts of nursing practice, but little is known about their benefits or clinical efficacy. The aim of this article was to summarize the empirical evidence supporting basic skin care procedures and interventions and to develop a clinical algorithm for basic skin care. The electronic databases MEDLINE, EMBASE, and CINAHL were searched, and a forward search was then conducted using Scopus and Web of Science. In order to evaluate a broad range of basic skin care interventions, systematic reviews, intervention studies, guidelines, consensus statements, and best practice standards were included in the analysis. One hundred twenty-one articles were read in full text; 41 documents were included in this report about skin care for prevention of dry skin, prevention of incontinence-associated dermatitis, and prevention of skin injuries. The methodological quality of the included publications was variable. Review results and expert input were used to create a clinical algorithm for basic skin care. A 2-step approach is proposed, including general and special skin care. Interventions focus primarily on skin that is either too dry or too moist. The target groups for the algorithm are adult patients or residents with intact or preclinically damaged skin in care settings. The algorithm is a first attempt to provide guidance for practitioners to improve basic skin care in clinical settings in order to maintain or increase skin health. PMID:26165590

  9. Development of an Interval Management Algorithm Using Ground Speed Feedback for Delayed Traffic

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Swieringa, Kurt A.; Underwood, Matthew C.; Abbott, Terence; Leonard, Robert D.

    2016-01-01

    One of the goals of NextGen is to enable frequent use of Optimized Profile Descents (OPD) for aircraft, even during periods of peak traffic demand. NASA is currently testing three new technologies that enable air traffic controllers to use speed adjustments to space aircraft during arrival and approach operations. This will allow an aircraft to remain close to its OPD. During the integration of these technologies, it was discovered that, due to a lack of accurate trajectory information for the leading aircraft, Interval Management aircraft were exhibiting poor behavior. NASA's Interval Management algorithm was modified to address the impact of inaccurate trajectory information, and a series of studies was performed to assess the impact of this modification. These studies show that the modification provided some improvement when the Interval Management system lacked accurate trajectory information for the leading aircraft.

  10. Development of a Multiview Time Domain Imaging Algorithm (MTDI) with a Fermat Correction

    SciTech Connect

    Fisher, K A; Lehman, S K; Chambers, D H

    2004-09-22

    An imaging algorithm is presented based on the standard assumption that the total scattered field can be separated into an elastic component with monopole-like dependence and an inertial component with dipole-like dependence. The resulting inversion generates two separate image maps corresponding to the monopole and dipole terms of the forward model. The complexity of imaging flaws and defects in layered elastic media is further compounded by the existence of high-contrast gradients in sound speed and/or density from layer to layer. To compensate for these gradients, we have incorporated Fermat's method of least time into our forward model to determine the appropriate delays between individual source-receiver pairs. Preliminary numerical and experimental results are in good agreement with each other.

  11. Development of a SiPM-based PET detector using a digital positioning algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Jin Hyung; Lee, Seung-Jae; An, Su Jung; Kim, Hyun-Il; Chung, Yong Hyun

    2016-05-01

    A readout method with a reduced number of channels is investigated here to provide precise pixel information for small-animal positron emission tomography (PET). The small-animal PET system consists of eight modules, each composed of a 3 × 3 array of 2 mm × 2 mm × 20 mm lutetium yttrium orthosilicate (LYSO) crystals optically coupled to a 2 × 2 array of 3 mm × 3 mm silicon photomultipliers (SiPMs). The number of readout channels is reduced to one-quarter of that of the conventional method by applying a simplified pixel-determination algorithm. The performance of the PET system and detector module was evaluated experimentally. All pixels of the 3 × 3 LYSO array were decoded well, and the performance of the PET detector module was measured.
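
    The abstract does not spell out the simplified pixel-determination algorithm, so the Python sketch below shows a generic light-sharing (Anger-type) decoder that maps four SiPM signals onto a 3 × 3 crystal grid; treat the ratio definitions and the rounding rule as illustrative assumptions rather than the authors' method.

      import numpy as np

      def decode_pixel(s_tl, s_tr, s_bl, s_br, n=3):
          """Generic light-sharing decoder for a 2x2 SiPM readout: compute
          normalised x/y ratios from the four signals and map them onto an
          n x n crystal grid. The paper's digital positioning algorithm may
          differ in detail; this is only an illustration."""
          total = s_tl + s_tr + s_bl + s_br
          x = (s_tr + s_br - s_tl - s_bl) / total   # -1 .. +1 across the block
          y = (s_tl + s_tr - s_bl - s_br) / total
          col = int(np.clip(np.round((x + 1) / 2 * (n - 1)), 0, n - 1))
          row = int(np.clip(np.round((1 - (y + 1) / 2) * (n - 1)), 0, n - 1))
          return row, col

      # A scintillation flash seen mostly by the top-right SiPM maps to the
      # top-right crystal of the 3 x 3 array.
      print(decode_pixel(0.05, 1.0, 0.05, 0.1))  # (0, 2)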

  12. System design and algorithmic development for computational steering in distributed environments

    SciTech Connect

    Wu, Qishi; Zhu, Mengxia; Gu, Yi; Rao, Nageswara S

    2010-03-01

    Supporting visualization pipelines over wide-area networks is critical to enabling large-scale scientific applications that require visual feedback to interactively steer online computations. We propose a remote computational steering system that employs analytical models to estimate the cost of computing and communication components and optimizes the overall system performance in distributed environments with heterogeneous resources. We formulate and categorize the visualization pipeline configuration problems for maximum frame rate into three classes according to the constraints on node reuse or resource sharing, namely no, contiguous, and arbitrary reuse. We prove all three problems to be NP-complete and present heuristic approaches based on a dynamic programming strategy. The superior performance of the proposed solution is demonstrated with extensive simulation results in comparison with existing algorithms and is further evidenced by experimental results collected on a prototype implementation deployed over the Internet.

  13. Development of a prototype algorithm for the operational retrieval of height-resolved products from GOME

    NASA Technical Reports Server (NTRS)

    Spurr, Robert J. D.

    1997-01-01

    Global ozone monitoring experiment (GOME) level 2 products of total ozone column amounts have been generated on a routine operational basis since July 1996. These products and the level 1 radiance products are the major outputs from the ERS-2 ground segment GOME data processor (GDP) at DLR in Germany. Off-line scientific work has already shown the feasibility of ozone profile retrieval from GOME. It is demonstrated how the retrievals can be performed in an operational context. Height-resolved retrieval is based on the optimal estimation technique, and cloud-contaminated scenes are treated in an equivalent reflecting surface approximation. The prototype must be able to handle GOME measurements routinely on a global basis. Requirements for the major components of the algorithm are described: this incorporates an overall strategy for operational height-resolved retrieval from GOME.

  14. Development of a Genetic Algorithm to Automate Clustering of a Dependency Structure Matrix

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Korte, John J.; Bilardo, Vincent J.

    2006-01-01

    Much technology assessment and organization design data exists in Microsoft Excel spreadsheets. Tools are needed to put this data into a form that can be used by design managers to make design decisions. One need is to cluster data that is highly coupled. Tools such as the Dependency Structure Matrix (DSM) and a Genetic Algorithm (GA) can be of great benefit. However, no tool currently combines the DSM and a GA to solve the clustering problem. This paper describes a new software tool that interfaces a GA written as an Excel macro with a DSM in spreadsheet format. The results of several test cases are included to demonstrate how well this new tool works.

  15. Developing an Algorithm for Finding Deep-Sea Corals on Seamounts Using Bathymetry and Photographic Data

    NASA Astrophysics Data System (ADS)

    Fernandez, D. P.; Adkins, J. F.; Scheirer, D. P.

    2006-12-01

    Over the last three years we have conducted several cruises on seamounts in the North Atlantic to sample and characterize the distribution of deep-sea corals in space and time. Using the deep submergence vehicle Alvin and the ROV Hercules we have spent over 80 hours on the seafloor. With the autonomous vehicle ABE and a towed camera sled, we collected over 10,000 bottom photographs and over 60 hours of micro-bathymetry over 120 km of seafloor. While there are very few living scleractinia (Desmophyllum dianthus, Solenosmilia sp., and Lophilia sp.), we recovered over 5,000 fossil D. dianthus and over 60 kg of fossil Solenosmilia sp. The large number of fossil corals means that a perceived lack of material does not have to limit the use of this new archive of the deep ocean. However, we need a better strategy for finding and returning samples to the lab. Corals clearly prefer to grow on steep slopes and at the tops of scarps of all scales. They are preferentially found along ridges and on small knolls flanking a larger edifice. There is also a clear preference for D. dianthus to recruit onto carbonate substrate. Overall, our sample collection, bathymetry and bottom photographs allow us to create an algorithm for finding corals based only on knowledge of the seafloor topography. We can test this algorithm against known sampling locations and visual surveys of the seafloor. Similar to the way seismic data are used to locate ideal coring locations, we propose that high-resolution bathymetry can be used to predict the most likely locations for finding fossil deep-sea corals.
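
    A bathymetry-only predictor of the kind described could, in its simplest form, flag steep cells in a gridded depth model; the Python sketch below does exactly that, with the slope threshold chosen arbitrarily rather than calibrated against the authors' sampling locations or visual surveys.

      import numpy as np

      def coral_candidates(depth, cell_size_m, slope_thresh_deg=25.0):
          """Toy bathymetry-only predictor: flag grid cells whose local slope
          exceeds a threshold, consistent with the observation that corals
          favour steep scarps and ridge crests. The 25-degree threshold and
          the use of slope alone are assumptions."""
          dzdy, dzdx = np.gradient(depth, cell_size_m)
          slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
          return slope_deg >= slope_thresh_deg

      # depth: 2-D array of depths (metres) on a regular grid; the returned
      # boolean mask can be compared against known coral sampling locations.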

  16. Toward an Earlier Diagnosis of Primary Ciliary Dyskinesia. Which Patients Should Undergo Detailed Diagnostic Testing?

    PubMed

    Kuehni, Claudia E; Lucas, Jane S

    2016-08-01

    Primary ciliary dyskinesia (PCD) is a rare, heterogeneous, recessive, genetic disorder of motile cilia, leading to chronic upper and lower respiratory symptoms. Prevalence is estimated at around 1:10,000, but many patients remain undiagnosed, whereas others receive the label incorrectly. Proper diagnosis is complicated by the fact that the key symptoms, such as wet cough, chronic rhinitis, and recurrent upper and lower respiratory infection, are common and nonspecific. There is no single gold standard test to diagnose PCD. Currently, the diagnosis is made in patients with a compatible medical history after a demanding combination of tests including nasal nitric oxide, high-speed video microscopy, and transmission electron microscopy and genetic and ciliary culture testing. These tests are costly and need sophisticated equipment and experienced staff, restricting use to highly specialized centers. Therefore, it would be desirable to have a screening test for identifying those patients who should undergo detailed diagnostic testing. Three recent studies focused on potential screening tools: one study assessed the validity of nasal nitric oxide for screening, and two studies developed new symptom-based screening tools. These simple tools are welcome, and it is hoped that they will assist physicians in determining whom to refer for definitive testing. However, they have been developed in tertiary care settings, where 10 to 50% of tested patients have PCD. The sensitivity and specificity of the tools are reasonable, but positive and negative predictive values may be poor in primary or secondary care settings. Although these studies are an important step toward an earlier diagnosis of PCD, more remains to be done before we have tools tailored to different health care settings. PMID:27258773

  17. Development of a Pedestrian Indoor Navigation System Based on Multi-Sensor Fusion and Fuzzy Logic Estimation Algorithms

    NASA Astrophysics Data System (ADS)

    Lai, Y. C.; Chang, C. C.; Tsai, C. M.; Lin, S. Y.; Huang, S. C.

    2015-05-01

    This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed system is a self-contained dead-reckoning navigation system, meaning that no external signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. It uses low-cost micro-electro-mechanical system (MEMS) inertial sensors, an accelerometer and a gyroscope. There are two types of IMU module, handheld and waist-mounted. Low-cost MEMS sensors suffer from various errors due to manufacturing imperfections and other effects, so a sensor calibration procedure based on scalar calibration and least squares methods was introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. The estimated step count and per-step strength are then fed into the proposed fuzzy logic estimation algorithm to estimate the step lengths of the user. Since both step length and walking direction are required for dead-reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the step length and direction are calculated on the IMU module and transmitted to a smartphone over Bluetooth, where the dead-reckoning navigation runs in a self-developed app. Because the errors of dead-reckoning navigation accumulate, a particle filter and a pre-loaded map of the indoor environment have been applied in the app of the proposed navigation system to extend its
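
    The core dead-reckoning update described above is easy to state in code: each detected step advances the position by its estimated length along the current gyro-derived heading. In the Python sketch below the step lengths are simply inputs, standing in for the fuzzy-logic estimates; the particle-filter and map-matching stages are omitted.

      import math

      def dead_reckoning(start_xy, step_lengths, headings_rad):
          """Dead-reckoning position update: each step advances the position
          by its estimated length along the current heading (e.g. the
          gyro-integrated yaw). Step lengths come from an external estimator."""
          x, y = start_xy
          track = [(x, y)]
          for step, psi in zip(step_lengths, headings_rad):
              x += step * math.cos(psi)
              y += step * math.sin(psi)
              track.append((x, y))
          return track

      # Four 0.7 m steps while turning gently to the left:
      print(dead_reckoning((0.0, 0.0), [0.7] * 4, [0.0, 0.1, 0.2, 0.3]))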

  18. User's Manual for the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA)

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Cheatwood, F. McNeil

    1996-01-01

    This user's manual provides detailed instructions for the installation and application of version 4.1 of the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA). LAURA simulates flow fields in thermochemical nonequilibrium around vehicles traveling at hypersonic velocities through the atmosphere. Earlier versions of LAURA were predominantly research codes with minimal (or no) documentation. This manual describes UNIX-based utilities for customizing the code for special applications that also minimize system resource requirements. The algorithm is reviewed, and the various program options are related to specific equations and variables in the theoretical development.

  19. Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases in product cost and delays in development time, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and a tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find the optimal decoupling schemes. Firstly, the tearing approach and inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to carry out the problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness. PMID:25431584
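
    For readers unfamiliar with the artificial bee colony metaheuristic, the compact Python sketch below shows its employed-bee, onlooker-bee and scout phases on a generic continuous objective. The paper applies ABC to a combinatorial decoupling problem on a design structure matrix, so this continuous version only illustrates the algorithm itself, not the authors' formulation.

      import numpy as np

      def abc_minimize(f, bounds, n_food=20, limit=30, iters=200, seed=0):
          """Compact artificial bee colony (ABC) minimiser for a continuous
          objective f over box bounds [(lo, hi), ...]."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          dim = len(lo)
          foods = rng.uniform(lo, hi, (n_food, dim))
          fit = np.array([f(x) for x in foods])
          trials = np.zeros(n_food, dtype=int)

          def neighbour(i):
              k = rng.integers(n_food - 1)
              k += k >= i                              # pick a partner != i
              j = rng.integers(dim)
              cand = foods[i].copy()
              cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
              return np.clip(cand, lo, hi)

          def try_improve(i):
              cand = neighbour(i)
              fc = f(cand)
              if fc < fit[i]:
                  foods[i], fit[i], trials[i] = cand, fc, 0
              else:
                  trials[i] += 1

          for _ in range(iters):
              for i in range(n_food):                       # employed bees
                  try_improve(i)
              prob = fit.max() - fit + 1e-12                # prefer better foods
              prob /= prob.sum()
              for i in rng.choice(n_food, n_food, p=prob):  # onlooker bees
                  try_improve(i)
              worst = trials.argmax()                       # scout bee
              if trials[worst] > limit:
                  foods[worst] = rng.uniform(lo, hi)
                  fit[worst] = f(foods[worst])
                  trials[worst] = 0
          best = fit.argmin()
          return foods[best], fit[best]

      # Example: minimise the 2-D sphere function on [-5, 5]^2.
      x_best, f_best = abc_minimize(lambda x: float(np.sum(x ** 2)),
                                    [(-5, 5), (-5, 5)])
      print(x_best, f_best)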

  20. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    SciTech Connect

    Cheung, Howard; Braun, James E.

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  1. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    PubMed Central

    2014-01-01

    Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases in product cost and delays in development time, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and a tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find the optimal decoupling schemes. Firstly, the tearing approach and inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to carry out the problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness. PMID:25431584

  2. Development of a phantom to validate high-dose-rate brachytherapy treatment planning systems with heterogeneous algorithms

    SciTech Connect

    Moura, Eduardo S.; Rostelato, Maria Elisa C. M.; Zeituni, Carlos A.

    2015-04-15

    Purpose: This work presents the development of a phantom to verify the treatment planning system (TPS) algorithms used for high-dose-rate (HDR) brachytherapy. It is designed to measure the relative dose in heterogeneous media. The experimental details used, simulation methods, and comparisons with a commercial TPS are also provided. Methods: To simulate heterogeneous conditions, four materials were used: Virtual Water™ (VM), BR50/50™, cork, and aluminum. The materials were arranged in 11 heterogeneity configurations. Three dosimeters were used to measure the relative response from an HDR ¹⁹²Ir source: TLD-100™, Gafchromic® EBT3 film, and an Exradin™ A1SL ionization chamber. To compare the results from the experimental measurements, the various configurations were modeled in the PENELOPE/penEasy Monte Carlo code. Images of each setup geometry were acquired from a CT scanner and imported into the BrachyVision™ TPS software, which includes the grid-based Boltzmann solver Acuros™. The results of the measurements performed in the heterogeneous setups were normalized to the dose values measured in the homogeneous Virtual Water™ setup, and the respective differences due to the heterogeneities were considered. Additionally, dose values calculated based on the American Association of Physicists in Medicine Task Group 43 formalism were compared to dose values calculated with the Acuros™ algorithm in the phantom. Calculated doses were compared at the same points where measurements had been performed. Results: Differences in the relative response as high as 11.5% from the homogeneous setup were found when the heterogeneous materials were inserted into the experimental phantom. The aluminum and cork materials produced larger differences than the plastic materials, with the BR50/50™ material producing results similar to the Virtual Water™ results. Our experimental methods agree with the PENELOPE/penEasy simulations for most setups and dosimeters. The

  3. Development of hybrid particle tracking algorithms and their applications in airflow measurement within an aircraft cabin mock-up

    NASA Astrophysics Data System (ADS)

    Yan, Wei

    Obtaining reliable experimental airflow data within an indoor environment is a challenging task and critical for studying and solving indoor air quality problems. The Hybrid Particle Tracking Velocimetry (HPTV) system is aimed at fulfilling this need. It was developed based on existing Particle Tracking Velocimetry (PTV) and Volumetric Particle Tracking Velocimetry (VPTV) techniques. The HPTV system requires three charge-coupled device (CCD) cameras to view the illuminated flow field and capture the trajectories of the seeded particles. By adopting hybrid spatial matching and object tracking algorithms, the system can acquire the three-dimensional velocity components within a large volume with relatively high spatial and temporal resolution. Synthetic images were employed to validate the performance of the three components of the system: image processing, camera calibration, and 3D velocity reconstruction. These three components are also the main error sources. The accuracy of the whole algorithm was analyzed and discussed through a back-projection approach. The results showed that the algorithms performed effectively and accurately. The reconstructed 3D trajectories and streaks agreed well with the simulated streamlines of the particles. As an overall test and application of the system, HPTV was applied to measure the airflow pattern within a full-scale, five-row section of a Boeing 767-300 aircraft cabin mockup. A complete experimental procedure was developed and strictly followed throughout the experiment. Both the global flow field at the whole-cabin scale and the local flow field at the breathing zone of one passenger were studied. Each test case was also simulated numerically using a commercial computational fluid dynamics (CFD) package. Through comparison between the results from the numerical simulation and the experimental measurement, the potential model validation capability of the system was demonstrated. Possible reasons explaining the difference between

  4. Initial development of a temporal-envelope-preserving nonlinear hearing aid prescription using a genetic algorithm.

    PubMed

    Sabin, Andrew T; Souza, Pamela E

    2013-06-01

    Most hearing aid prescriptions focus on the optimization of a metric derived from the long-term average spectrum of speech, and do not consider how the prescribed values might distort the temporal envelope shape. A growing body of evidence suggests that such distortions can lead to systematic errors in speech perception, and therefore hearing aid prescriptions might benefit by including preservation of the temporal envelope shape in their rationale. To begin to explore this possibility, we designed a genetic algorithm (GA) to find the multiband compression settings that preserve the shape of the original temporal envelope while placing that envelope in the listener's audiometric dynamic range. The resulting prescription had a low compression threshold, short attack and release times, and a combination of compression ratio and gain that placed the output signal within the listener's audiometric dynamic range. Initial behavioral tests of individuals with impaired hearing revealed no difference in speech-in-noise perception between the GA and the NAL-NL2 prescription. However, gap detection performance was superior with the GA in comparison to NAL-NL2. Overall, this work is a proof of concept that consideration of temporal envelope distortions can be incorporated into hearing aid prescriptions. PMID:24028890

  5. Development and applications of various optimization algorithms for diesel engine combustion and emissions optimization

    NASA Astrophysics Data System (ADS)

    Ogren, Ryan M.

    For this work, Hybrid PSO-GA and Artificial Bee Colony Optimization (ABC) algorithms are applied to the optimization of experimental diesel engine performance, to meet Environmental Protection Agency off-road diesel engine standards. This work is the first to apply ABC optimization to experimental engine testing. All trials were conducted at partial load on a four-cylinder, turbocharged, John Deere engine using neat biodiesel for PSO-GA and regular pump diesel for ABC. Key variables were altered throughout the experiments, including fuel pressure, intake gas temperature, exhaust gas recirculation flow, fuel injection quantity for two injections, pilot injection timing and main injection timing. Both forms of optimization proved effective for optimizing engine operation. The PSO-GA hybrid was able to find a superior solution to that of ABC within fewer engine runs. Both solutions call for high exhaust gas recirculation to reduce oxides of nitrogen (NOx) emissions while also moving pilot and main fuel injections to near top dead center for improved tradeoffs between NOx and particulate matter.

  6. Later endogenous circadian temperature nadir relative to an earlier wake time in older people

    NASA Technical Reports Server (NTRS)

    Duffy, J. F.; Dijk, D. J.; Klerman, E. B.; Czeisler, C. A.

    1998-01-01

    The contribution of the circadian timing system to the age-related advance of sleep-wake timing was investigated in two experiments. In a constant routine protocol, we found that the average wake time and endogenous circadian phase of 44 older subjects were earlier than that of 101 young men. However, the earlier circadian phase of the older subjects actually occurred later relative to their habitual wake time than it did in young men. These results indicate that an age-related advance of circadian phase cannot fully account for the high prevalence of early morning awakening in healthy older people. In a second study, 13 older subjects and 10 young men were scheduled to a 28-h day, such that they were scheduled to sleep at many circadian phases. Self-reported awakening from scheduled sleep episodes and cognitive throughput during the second half of the wake episode varied markedly as a function of circadian phase in both groups. The rising phase of both rhythms was advanced in the older subjects, suggesting an age-related change in the circadian regulation of sleep-wake propensity. We hypothesize that under entrained conditions, these age-related changes in the relationship between circadian phase and wake time are likely associated with self-selected light exposure at an earlier circadian phase. This earlier exposure to light could account for the earlier clock hour to which the endogenous circadian pacemaker is entrained in older people and thereby further increase their propensity to awaken at an even earlier time.

  7. Improving our understanding of flood forecasting using earlier hydro-meteorological intelligence

    NASA Astrophysics Data System (ADS)

    Shih, Dong-Sin; Chen, Cheng-Hsin; Yeh, Gour-Tsyh

    2014-05-01

    In recent decades, Taiwan has suffered from severe bouts of torrential rain, and typhoon-induced floods have become the major natural threat to Taiwan. In order to warn the public of potential risks, the authorities are considering establishing an early warning system derived from an integrated hydro-meteorological estimation process. This study addresses the development and accuracy of such a warning system, so it is first necessary to understand the distinctive features of flood forecasting in integrated rainfall-runoff simulations. Additionally, the adequacy of a warning system based on extracting useful intelligence from earlier, possibly imperfect, numerical simulation results is discussed. In order to precisely model flooding, hydrological simulations based upon spot-measured rainfall data have been utilized in prior studies to calibrate model parameters. Here, precipitation inputs from an ensemble of almost 20 different realizations of rainfall fields have been used to derive flood forecasts. The flood warning system therefore integrates rainfall-runoff calculations, field observations and data assimilation. Simulation results indicate that the ensemble precipitation estimates generated by a Weather Research and Forecasting (WRF) mesoscale model produce divergent estimates. Considerable flooding is often shown in the simulated hydrographs, but the results as to the peak time and peak stage are not always in agreement with the observations. In brief, such forecasts can be good for warning against potentially damaging floods in the near future, but the meteorological inputs are not good enough to forecast the time and magnitude of the peaks. The key to such a warning system is not to expect highly accurate rainfall predictions, but to improve our understanding from individual ensemble flood forecasts.

  8. NPH Log: Validation of a New Assessment Tool Leading to Earlier Diagnosis of Normal Pressure Hydrocephalus

    PubMed Central

    Lu, Jennifer; Robison, Jamie; Hoffberger, Jamie B; Hulbert, Alicia; Sanyal, Abanti; Wemmer, Jan; Elder, Benjamin D; Rigamonti, Daniele

    2016-01-01

    Introduction: Early treatment of normal pressure hydrocephalus (NPH) yields better postoperative outcomes. Our current tests often fail to detect significant changes at early stages. We developed a new scoring system (LP log score) to determine whether this tool is more sensitive in detecting clinical differences than current tests. Material and Methods: Sixty-two consecutive new patients with suspected idiopathic NPH were studied. Secondary, previously treated, and obstructive cases were not included. We collected age, pre- and post-lumbar puncture (LP) Tinetti, Timed Up and Go (TUG) Test, European NPH scale, and LP log scores. The LP log score is recorded at baseline and for seven consecutive days after removing 40 cc of cerebrospinal fluid (CSF) via LP. We studied the diagnostic accuracy of the tests for surgical indication. Results: The post-LP log showed improvement in 90% of people with good baseline gait tests and in 93% of people who did not show any pre-LP to post-LP change in gait tests. For a positive post-LP improvement, the sensitivity, specificity, and accuracy to detect intention to treat were 4%, 100%, and 24%, respectively, for TUG; 21%, 86%, and 34%, respectively, for the Tinetti Mobility Test; 66%, 29%, and 58%, respectively, for Medical College of Virginia (MCV) grade; and 98%, 33%, and 85%, respectively, for the LP log score. Pre-LP to post-LP TUG improvement and pre-LP to post-LP Tinetti improvement were not associated with a surgical indication (p > 0.05). LP log improvement was associated with surgical indication (odds ratio [OR] 24.5, 95% CI 2.4-248.12; p = 0.007). Conclusions: The LP log showed better sensitivity, diagnostic accuracy, and association with surgical indication than the current diagnostic approach. An LP log may be useful for detecting NPH patients at earlier stages and, therefore, yield better surgical outcomes. PMID:27489752

  9. Facilitation of Third-party Development of Advanced Algorithms for Explosive Detection Using Workshops and Grand Challenges

    SciTech Connect

    Martz, H E; Crawford, C R; Beaty, J S; Castanon, D

    2011-02-15

    The US Department of Homeland Security (DHS) has requirements for future explosive detection scanners that include dealing with a larger number of threats, higher probability of detection, lower false alarm rates and lower operating costs. One tactic that DHS is pursuing to achieve these requirements is to augment the capabilities of the established security vendors with third-party algorithm developers. The purposes of this presentation are to review DHS's objectives for involving third parties in the development of advanced algorithms and then to discuss how these objectives are achieved using workshops and grand challenges. Terrorists are still trying and they are getting more sophisticated. There is a need to increase the number of smart people working on homeland security. Augmenting capabilities and capacities of system vendors with third-parties is one tactic. Third parties can be accessed via workshops and grand challenges. Successes have been achieved to date. There are issues that need to be resolved to further increase third party involvement.

  10. Development of a computer algorithm for the detection of phase singularities and initial application to analyze simulations of atrial fibrillation.

    PubMed

    Zou, Renqiang; Kneller, James; Leon, L. Joshua; Nattel, Stanley

    2002-09-01

    Atrial fibrillation (AF) is a common cardiac arrhythmia, but its mechanisms are incompletely understood. The identification of phase singularities (PSs) has been used to define spiral waves involved in maintaining the arrhythmia, as well as daughter wavelets. In the past, PSs have often been identified manually. Automated PS detection algorithms have been described previously, but when we attempted to apply a previously developed algorithm we experienced problems with false positives that made the results difficult to use directly. We therefore developed a tool for PS identification that uses multiple strategies incorporating both image analysis and mathematical convolution for automated detection with optimized sensitivity and specificity, followed by manual verification. The tool was then applied to analyze PS behavior in simulations of AF maintained in the presence of spatially distributed acetylcholine effects in cell grids of varying size. These analyses indicated that in almost all cases, a single PS lasted throughout the simulation, corresponding to the central-core tip of a single spiral wave that maintained AF. The sustained PS always localized to an area of low acetylcholine concentration. When the grid became very small and no area of low acetylcholine concentration was surrounded by zones of higher concentration, AF could not be sustained. The behavior of PSs and the mechanisms of AF were qualitatively constant over an 11.1-fold range of atrial grid size, suggesting that the classical emphasis on tissue size as a primary determinant of fibrillatory behavior may be overstated. (c) 2002 American Institute of Physics. PMID:12779605

  11. Analysis and Classification of Stride Patterns Associated with Children Development Using Gait Signal Dynamics Parameters and Ensemble Learning Algorithms

    PubMed Central

    Wu, Meihong; Liao, Lifang; Luo, Xin; Ye, Xiaoquan; Yao, Yuchen; Chen, Pinnan; Shi, Lei; Huang, Hui

    2016-01-01

    Measuring stride variability and dynamics in children is useful for the quantitative study of gait maturation and neuromotor development in childhood and adolescence. In this paper, we computed the sample entropy (SampEn) and average stride interval (ASI) parameters to quantify the stride series of 50 gender-matched children participants in three age groups. We also normalized the SampEn and ASI values by leg length and body mass for each participant, respectively. Results show that both the original and normalized SampEn values decrease significantly (Mann-Whitney U test, p < 0.01) across children aged 3–14 years, which indicates that stride irregularity is significantly reduced as the body grows. The original and normalized ASI values also change significantly between any two of the young (aged 3–5 years), middle (aged 6–8 years), and older (aged 10–14 years) groups. Such results suggest that healthy children may better modulate their gait cadence rhythm as their musculoskeletal and neurological systems develop. In addition, the AdaBoost.M2 and Bagging algorithms were used to effectively distinguish the children's gait patterns. These ensemble learning algorithms both provided excellent gait classification results in terms of overall accuracy (≥90%), recall (≥0.8), and precision (≥0.8077). PMID:27034952
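
    Sample entropy, the irregularity measure used above, can be computed directly from a stride-interval series; the Python sketch below follows the usual SampEn definition with the common (assumed) choices m = 2 and r = 0.2 times the series standard deviation.

      import numpy as np

      def sample_entropy(x, m=2, r_factor=0.2):
          """Sample entropy (SampEn) of a 1-D series such as stride intervals:
          -ln(A/B), where B counts pairs of length-m templates within tolerance
          r (Chebyshev distance) and A counts the same pairs extended to length
          m+1. Self-matches are excluded."""
          x = np.asarray(x, dtype=float)
          r = r_factor * x.std()
          n = len(x)

          def count_matches(mm):
              templates = np.array([x[i:i + mm] for i in range(n - mm)])
              count = 0
              for i in range(len(templates)):
                  dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                  count += np.sum(dist <= r)
              return count

          b = count_matches(m)
          a = count_matches(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      # A noisier stride series yields a larger SampEn (greater irregularity).
      rng = np.random.default_rng(1)
      print(sample_entropy(1.0 + 0.02 * rng.standard_normal(300)))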

  12. Development of a Reduction Algorithm of GEO Satellite Optical Observation Data for Optical Wide Field Patrol (OWL)

    NASA Astrophysics Data System (ADS)

    Park, Sun-youp; Choi, Jin; Jo, Jung Hyun; Son, Ju Young; Park, Yung-Sik; Yim, Hong-Suh; Moon, Hong-Kyu; Bae, Young-Ho; Choi, Young-Jun; Park, Jang-Hyun

    2015-09-01

    An algorithm to automatically extract coordinate and time information from optical observation data of geostationary orbit (GEO) satellites or geosynchronous orbit satellites (GOS) is developed. The optical wide-field patrol system is capable of automatic observation using a pre-arranged schedule; therefore, if this type of automatic analysis algorithm is available, daily unmanned monitoring of GEO satellites becomes possible. For data acquisition during development, the COMS1 satellite was observed with a 1-s exposure time and 1-m interval. The images were grouped and processed in terms of "actions", each action being composed of six or nine successive images. First, a reference image with the best quality in one action was selected. Next, the rest of the images in the action were geometrically transformed to fit the horizontal coordinate system (expressed in azimuth angle and elevation) of the reference image. Then, these images were median-combined to retain only the possible non-moving GEO candidates. By reverting the coordinate transformation of the positions of these GEO satellite candidates, the final coordinates could be calculated.
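
    The central median-combination step is easy to illustrate: once the frames of an action are registered in horizontal coordinates, geostationary objects stay put while stars drift, so a per-pixel median suppresses the stars. The Python sketch below shows only this step, with an assumed k-sigma detection threshold; the geometric transformation and time-extraction stages are omitted.

      import numpy as np

      def geo_candidates(frames, k_sigma=5.0):
          """Median-combine a stack of frames already registered in az/el
          coordinates and return pixel positions brighter than a simple
          k-sigma threshold above the median background. The threshold is an
          illustrative choice, not the published detection criterion."""
          stack = np.median(np.asarray(frames, dtype=float), axis=0)
          background, noise = np.median(stack), np.std(stack)
          mask = stack > background + k_sigma * noise
          return np.argwhere(mask)   # pixel coordinates of candidate objects

      # frames: list of 2-D arrays from one "action" (six or nine exposures);
      # the returned positions would then be transformed back to the original
      # frame coordinates to produce time-tagged astrometric measurements.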

  13. Development of an algorithm to regulate pump output for a closed air-loop type pneumatic biventricular assist device.

    PubMed

    Nam, Kyoung Won; Lee, Jung Joo; Hwang, Chang Mo; Choi, Jaesoon; Choi, Hyuk; Choi, Seong Wook; Sun, Kyung

    2009-12-01

    The closed-air-space type of extracorporeal pneumatic ventricular assist device (VAD) developed by the Korea Artificial Organ Center utilizes a bellows-transforming mechanism to generate the air pressure required to pump blood. This operating mechanism can reduce the size and weight of the driving unit; however, the output of the blood pump can be affected by the pressure loading conditions of the blood sac. Therefore, to guarantee a proper pump output level regardless of pressure loading conditions that vary over time, automatic regulation of the blood pump output is required. We describe herein a pump output regulation algorithm that was developed to maintain pump output around a reference level against various afterload pressures, and we verified the pump performance in vitro. Based on actual operating conditions in animal experiments, the pumping rate was limited to 40-84 beats per minute, and the afterload pressure was limited to 80-150 mm Hg. The tested reference pump output was 4.0 L/min. During the experiments, the pump output was successfully and automatically regulated within the preset range regardless of the varying afterload conditions. The results of this preliminary experiment can be used as the basis for an automatic control algorithm that can enhance the stability and reliability of the applied VAD. PMID:19604228
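
    The abstract gives the operating envelope (40-84 beats per minute, 4.0 L/min reference output) but not the control law itself, so the Python sketch below uses a simple proportional adjustment of the pumping rate purely as an illustration; the gain and the proportional form are assumptions, not the published regulator.

      def regulate_rate(rate_bpm, measured_lpm, target_lpm=4.0,
                        kp=5.0, min_bpm=40, max_bpm=84):
          """One step of an illustrative proportional adjustment of the pumping
          rate toward a target output. Only the 40-84 bpm range and the
          4.0 L/min reference come from the abstract; kp is assumed."""
          error = target_lpm - measured_lpm          # L/min short of the target
          new_rate = rate_bpm + kp * error           # raise the rate if output is low
          return max(min_bpm, min(max_bpm, new_rate))

      # If a rising afterload drops the output to 3.5 L/min at 60 bpm, the rate
      # is nudged up to 62.5 bpm (clamped to the allowed range).
      print(regulate_rate(60, 3.5))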

  14. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordan, Howard R.

    1996-01-01

    Several significant accomplishments were made during the present reporting period. We have completed our basic study of using the 1.38 micron MODIS band for removal of the effects of thin cirrus clouds and stratospheric aerosol. The results suggest that it should be possible to correct imagery for thin cirrus clouds with optical thicknesses as large as 0.5 to 1.0. We have also acquired reflectance data for oceanic whitecaps during a cruise on the RV Malcolm Baldrige in the Gulf of Mexico. The reflectance spectrum of whitecaps was found to be similar to that for breaking waves in the surf zone measured by Frouin, Schwindling and Deschamps. We installed a CIMEL sun photometer at Fort Jefferson on the Dry Tortugas off Key West in the Gulf of Mexico. The instrument has yielded a continuous stream of data since February. It shows that the aerosol optical thickness at 669 nm is often less than 0.1 in winter. This suggests that the Southern Gulf of Mexico will be an excellent winter site for vicarious calibration. In addition, we completed a study of the effect of vicarious calibration, i.e., the accuracy with which the radiance at the top of the atmosphere (TOA) can be predicted from measurement of the sky radiance at the bottom of the atmosphere (BOA). The results suggest that the neglect of polarization in the aerosol optical property inversion algorithm and in the prediction code for the TOA radiances is the largest error associated with the radiative transfer process. Overall, the study showed that the accuracy of the TOA radiance prediction is now limited by the radiometric calibration error in the sky radiometer. Finally, considerable coccolith light scattering data were obtained in the Gulf of Maine with a flow-through instrument, along with data relating to calcite concentration and the rate of calcite production.

  15. Layered analytical radiative transfer model for simulating water color of coastal waters and algorithm development

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R., Jr.; Huddleston, Lisa H.

    2000-12-01

    A remote sensing reflectance model, which describes the transfer of irradiant light within a homogeneous water column, has previously been used by Bostater, et al. to simulate the nadir-viewing reflectance just above or below the water surface. Wavelength-dependent features in the water surface reflectance depend upon the nature of the downwelling irradiance, the bottom reflectance, and the water absorption and backscatter coefficients. The latter are very important coefficients; they depend upon the constituents in the water, and both vary as functions of water depth and wavelength in actual water bodies. This paper describes a preliminary approach to the analytical solution of the radiative transfer equations in a two-stream representation of the irradiance field with variable coefficients due to the depth-dependent concentrations of substances such as chlorophyll pigments, dissolved organic matter and suspended particulate matter. The analytical model formulation makes use of analytically based solutions to the 2-flow equations. However, in this paper we describe the use of the unique Cauchy boundary conditions previously used, along with a matrix solution, to allow for the prediction of synthetic water surface reflectance signatures within a nonhomogeneous medium. Observed reflectance signatures as well as model-derived 'synthetic signatures' are processed using efficient algorithms, which demonstrate that the error induced by the layered matrix approach is much less than 1 percent when compared to the analytical homogeneous water column solution. The influence of vertical gradients of water constituents may be extremely important in remote sensing of coastal water constituents as well as in remote sensing of submerged targets and different bottom types such as corals, sea grasses and sand.

  16. Development of a Massively Parallel Particle-Mesh Algorithm for Simulations of Galaxy Dynamics and Plasmas

    NASA Astrophysics Data System (ADS)

    Wallin, John

    1996-01-01

    Particle-mesh calculations treat forces and potentials as field quantities which are represented approximately on a mesh. A system of particles is mapped onto this mesh as a density distribution of mass or charge. The Fourier transform is used to convolve this distribution with the Green's function of the potential, and a finite difference scheme is used to calculate the forces acting on the particles. The computation time scales as Ng log Ng, where Ng is the size of the computational grid. In contrast, the particle-particle method relies on direct summation, so the time for each calculation scales as Np², where Np is the number of particles. The particle-mesh method is best suited for simulations with a fixed minimum resolution and for collisionless systems, while hierarchical tree codes have proven to be superior for collisional systems where two-body interactions are important. Particle-mesh methods still dominate in plasma physics, where collisionless systems are modeled. The CM-200 Connection Machine produced by Thinking Machines Corp. is a data-parallel system. On this system, the front-end computer controls the timing and execution of the parallel processing units. The programming paradigm is Single-Instruction, Multiple-Data (SIMD). The processors on the CM-200 are connected in an N-dimensional hypercube; the largest number of links a message will ever have to make is N. As in all parallel computing, the efficiency of an algorithm is primarily determined by the fraction of the time spent communicating compared to that spent computing. Because of the topology of the processors, nearest-neighbor communication is more efficient than general communication.
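
    The particle-mesh cycle summarised above (deposit particles on a mesh, solve for the potential with an FFT, difference the potential and gather forces back) can be shown compactly in one dimension; the Python sketch below uses nearest-grid-point assignment and a periodic box, whereas production codes use higher-order assignment schemes and 3-D transforms.

      import numpy as np

      def pm_accelerations(positions, masses, n_grid, box_size, G=1.0):
          """Minimal 1-D periodic particle-mesh step: deposit mass on the grid
          (nearest grid point), solve the Poisson equation in Fourier space,
          central-difference the potential to get the mesh force, and gather
          the force back to the particles."""
          dx = box_size / n_grid
          cells = np.floor(positions / dx).astype(int) % n_grid
          density = np.bincount(cells, weights=masses, minlength=n_grid) / dx

          k = 2 * np.pi * np.fft.fftfreq(n_grid, d=dx)
          rho_k = np.fft.fft(density - density.mean())   # remove the uniform mode
          phi_k = np.zeros_like(rho_k)
          phi_k[1:] = -4 * np.pi * G * rho_k[1:] / (k[1:] ** 2)
          phi = np.real(np.fft.ifft(phi_k))

          # central-difference force on the mesh, then gather back to particles
          accel_mesh = -(np.roll(phi, -1) - np.roll(phi, 1)) / (2 * dx)
          return accel_mesh[cells]

      # Two equal masses one unit apart attract each other (equal and opposite
      # accelerations) in a periodic box of length 8.
      print(pm_accelerations(np.array([2.0, 3.0]), np.array([1.0, 1.0]), 64, 8.0))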

  17. Genetic algorithm guided population pharmacokinetic model development for simvastatin, concurrently or non-concurrently co-administered with amlodipine.

    PubMed

    Chaturvedula, Ayyappa; Sale, Mark E; Lee, Howard

    2014-02-01

    An automated model development was performed for simvastatin, co-administered with amlodipine concurrently or non-concurrently (i.e., 4 hours later) in 17 patients with coexisting hyperlipidemia and hypertension. The single objective hybrid genetic algorithm (SOHGA) was implemented in the NONMEM software by defining the search space for structural, statistical and covariate models. Candidate models obtained from the SOHGA runs were further assessed for biological plausibility and the precision of parameter estimates, followed by traditional backward elimination process for model refinement. The final population pharmacokinetic model shows that the elimination rate constant for simvastatin acid, the active form by hydrolysis of its lactone prodrug (i.e., simvastatin), is only 44% in the concurrent amlodipine administration group compared with the non-concurrent group. The application of SOHGA for automated model selection, combined with traditional model selection strategies, appears to save time for model development, which also can generate new hypotheses that are biologically more plausible. PMID:24114976

  18. Preliminary evaluation of the Environmental Research Institute of Michigan crop calendar shift algorithm for estimation of spring wheat development stage. [North Dakota, South Dakota, Montana, and Minnesota

    NASA Technical Reports Server (NTRS)

    Phinney, D. E. (Principal Investigator)

    1980-01-01

    An algorithm for estimating spectral crop calendar shifts of spring small grains was applied to 1978 spring wheat fields. The algorithm provides estimates of the date of peak spectral response by maximizing the cross correlation between a reference profile and the observed multitemporal pattern of Kauth-Thomas greenness for a field. A methodology was developed for estimation of crop development stage from the date of peak spectral response. Evaluation studies showed that the algorithm provided stable estimates with no geographical bias. Crop development stage estimates had a root mean square error near 10 days. The algorithm was recommended for comparative testing against other models which are candidates for use in AgRISTARS experiments.
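
    A minimal version of the cross-correlation idea described above is sketched in Python below: the reference greenness profile is shifted over a window of candidate offsets, and the shift with the highest correlation against the observed multitemporal greenness is retained. The ±30-day search window and the linear interpolation of the reference profile are assumptions, not details from the cited report.

      import numpy as np

      def estimate_shift(obs_days, obs_greenness, ref_days, ref_greenness,
                         shifts=range(-30, 31)):
          """Estimate the spectral crop-calendar shift (in days) by maximising
          the correlation between a field's observed greenness profile and a
          shifted reference profile."""
          obs_days = np.asarray(obs_days, dtype=float)
          best_shift, best_corr = 0, -np.inf
          for s in shifts:
              ref_at_obs = np.interp(obs_days,
                                     np.asarray(ref_days, dtype=float) + s,
                                     ref_greenness)
              corr = np.corrcoef(obs_greenness, ref_at_obs)[0, 1]
              if corr > best_corr:
                  best_shift, best_corr = s, corr
          return best_shift

      # The date of peak spectral response is then the reference peak date plus
      # the estimated shift, from which the development stage can be derived.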

  19. 75 FR 47316 - National Science Board; Sunshine Act Meetings; Notice (Subject Matter Revised From Earlier Notice)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-05

    ... National Science Board; Sunshine Act Meetings; Notice (Subject Matter Revised From Earlier Notice) The... National Science Board business and other matters specified, as follows: Date and Time: August 12, 2010, at 3 p.m. EDT. Subject Matter: Review and Discussion of Current Mid-Scale Research Funding Support...

  20. Genetic change for earlier migration timing in a pink salmon population.

    PubMed

    Kovach, Ryan P; Gharrett, Anthony J; Tallmon, David A

    2012-09-22

    To predict how climate change will influence populations, it is necessary to understand the mechanisms, particularly microevolution and phenotypic plasticity, that allow populations to persist in novel environmental conditions. Although evidence for climate-induced phenotypic change in populations is widespread, evidence documenting that these phenotypic changes are due to microevolution is exceedingly rare. In this study, we use 32 years of genetic data (17 complete generations) to determine whether there has been a genetic change towards earlier migration timing in a population of pink salmon that shows phenotypic change; average migration time occurs nearly two weeks earlier than it did 40 years ago. Experimental genetic data support the hypothesis that there has been directional selection for earlier migration timing, resulting in a substantial decrease in the late-migrating phenotype (from more than 30% to less than 10% of the total abundance). From 1983 to 2011, there was a significant decrease--over threefold--in the frequency of a genetic marker for late-migration timing, but there were minimal changes in allele frequencies at other neutral loci. These results demonstrate that there has been rapid microevolution for earlier migration timing in this population. Circadian rhythm genes, however, did not show any evidence for selective changes from 1993 to 2009. PMID:22787027

  1. Left ventricular pseudoaneurysm found after mitral valve replacement performed 30 years earlier.

    PubMed

    Castilla, Elena; Gato, Manuel; Ruiz, José Ramón

    2010-03-01

    Pseudoaneurysm of the left ventricle (LV) is a rare cardiac disease that occurs after myocardial infarction or cardiac surgery. Because patients frequently present with nonspecific symptoms, a high index of suspicion is needed to make the diagnosis. This report describes an unusual case demonstrating a large LV pseudoaneurysm after mitral valve replacement performed 30 years earlier. PMID:20197588

  2. Children Learning to Read Later Catch up to Children Reading Earlier

    ERIC Educational Resources Information Center

    Suggate, Sebastian P.; Schaughency, Elizabeth A.; Reese, Elaine

    2013-01-01

    Two studies from English-speaking samples investigated the methodologically difficult question of whether the later reading achievement of children learning to read earlier or later differs. Children (n = 287) from predominantly state-funded schools were selected and they differed in whether the reading instruction age (RIA) was either five or…

  3. Development of a Geant4 based Monte Carlo Algorithm to evaluate the MONACO VMAT treatment accuracy.

    PubMed

    Fleckenstein, Jens; Jahnke, Lennart; Lohr, Frank; Wenz, Frederik; Hesser, Jürgen

    2013-02-01

    A method is presented to evaluate the dosimetric accuracy of volumetric modulated arc therapy (VMAT) treatment plans generated with the MONACO™ (version 3.0) treatment planning system in realistic CT data, using an independent Geant4-based dose calculation algorithm. For this purpose, a model of an Elekta Synergy linear accelerator treatment head with an MLCi2 multileaf collimator was implemented in Geant4. The time-dependent linear accelerator components were modeled by importing either log files of an actual plan delivery or a DICOM-RT plan sequence. Absolute dose calibration, based on a reference measurement, was applied. Both the MONACO and the Geant4 treatment head models were commissioned with lateral profiles and depth dose curves of square fields in water and with film measurements in inhomogeneous phantoms. A VMAT treatment plan for a patient with a thoracic tumor and a VMAT treatment plan of a patient who received treatment in the thoracic spine region, including metallic implants, were used for evaluation. For both MONACO and Geant4, depth dose curves and lateral profiles of square fields had a mean local gamma (2%, 2 mm) agreement of more than 95% for all fields. Film measurements in inhomogeneous phantoms with a global gamma of (3%, 3 mm) showed a pass rate above 95% in all voxels receiving more than 25% of the maximum dose. A dose-volume-histogram comparison of the VMAT patient treatment plans showed mean deviations between Geant4 and MONACO of -0.2% (first patient) and 2.0% (second patient) for the PTVs, and (0.5±1.0)% and (1.4±1.1)% for the organs at risk, relative to the prescription dose. The presented method can be used to validate VMAT dose distributions generated by a large number of small segments in regions with high electron density gradients. The MONACO dose distributions showed good agreement with Geant4 and film measurements within the simulation and measurement errors. PMID:22921843
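
    The (2%, 2 mm) and (3%, 3 mm) gamma criteria quoted above can be made concrete with a small one-dimensional gamma-index sketch; a clinical implementation works on 3D dose grids with interpolation and search-radius limits, so this is only the basic idea, and the dose profiles are made-up numbers.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm,
                    dose_tol=0.03, dist_tol_mm=3.0, threshold=0.25):
    """1D global gamma analysis: for each reference point above the dose
    threshold, search the evaluated profile for the minimum gamma value."""
    x = np.arange(len(dose_ref)) * spacing_mm
    norm = dose_ref.max()                          # global normalization dose
    passed, total = 0, 0
    for xi, dref in zip(x, dose_ref):
        if dref < threshold * norm:                # ignore the low-dose region
            continue
        total += 1
        dd = (dose_eval - dref) / (dose_tol * norm)   # dose-difference term
        dx = (x - xi) / dist_tol_mm                   # distance-to-agreement term
        gamma = np.sqrt(dd**2 + dx**2).min()
        passed += int(gamma <= 1.0)
    return passed / total if total else float("nan")

# made-up profiles: evaluated dose slightly shifted and scaled relative to reference
x = np.arange(0, 100, 1.0)
ref = 2.0 * np.exp(-0.5 * ((x - 50) / 15.0) ** 2)
ev = 2.02 * np.exp(-0.5 * ((x - 51) / 15.0) ** 2)
print(gamma_pass_rate(ref, ev, spacing_mm=1.0))    # pass rate in [0, 1]
```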

  4. Development and validation of a MODIS colored dissolved organic matter (CDOM) algorithm in northwest Florida estuaries

    EPA Science Inventory

    Satellite remote sensing provides synoptic and frequent monitoring of water quality parameters that aids in determining the health of aquatic ecosystems and the development of effective management strategies. Northwest Florida estuaries are classified as optically-complex, or wat...

  5. Thermal algorithms analysis. [programming tasks supporting the development of a thermal model of the Earth's surface

    NASA Technical Reports Server (NTRS)

    Lien, T.

    1981-01-01

    The programming and analysis methods to support the development of a thermal model of the Earth's surface from detailed analysis of day/night registered data sets from the Heat Capacity Mapping Mission satellite are briefly described.

  6. Mathematical algorithm development and parametric studies with the GEOFRAC three-dimensional stochastic model of natural rock fracture systems

    NASA Astrophysics Data System (ADS)

    Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.

    2014-06-01

    This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.

  7. Preliminary results of real-time PPP-RTK positioning algorithm development for moving platforms and its performance validation

    NASA Astrophysics Data System (ADS)

    Won, Jihye; Park, Kwan-Dong

    2015-04-01

    Real-time PPP-RTK positioning algorithms were developed for the purpose of obtaining precise coordinates of moving platforms. In this implementation, corrections for the satellite orbit and satellite clock were taken from the IGS-RTS products, while the ionospheric delay was removed through the ionosphere-free combination and the tropospheric delay was either corrected with the Global Pressure and Temperature (GPT) model or estimated as a stochastic parameter. To improve the convergence speed, all the available GPS and GLONASS measurements were used and the Extended Kalman Filter parameters were optimized. To validate our algorithms, we collected GPS and GLONASS data from a geodetic-quality receiver installed on the roof of a moving vehicle in an open-sky environment and used IGS final products of satellite orbits and clock offsets. The horizontal positioning error fell below 10 cm within 5 minutes and stayed below 10 cm even after the vehicle started moving. When the IGS-RTS product and the GPT model were used instead of the IGS precise products, the positioning accuracy of the moving vehicle was maintained at better than 20 cm once convergence was achieved at around 6 minutes.
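
    One ingredient mentioned above, removal of the first-order ionospheric delay through the ionosphere-free combination of dual-frequency observations, is simple enough to sketch directly. The GPS L1/L2 frequencies are standard constants; the pseudorange values are placeholders.

```python
# Ionosphere-free (IF) combination of dual-frequency GPS observations.
# The first-order ionospheric delay scales as 1/f^2, so the combination
# below cancels it (higher-order terms remain).
F1 = 1575.42e6   # GPS L1 frequency [Hz]
F2 = 1227.60e6   # GPS L2 frequency [Hz]

def ionosphere_free(obs_l1, obs_l2):
    """Return the ionosphere-free combination of two observations
    (pseudorange or carrier phase expressed in metres)."""
    return (F1**2 * obs_l1 - F2**2 * obs_l2) / (F1**2 - F2**2)

# placeholder pseudoranges [m] for one satellite at one epoch
p1, p2 = 21_345_678.12, 21_345_681.45
print(ionosphere_free(p1, p2))
```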

  8. The mass-action law based algorithm for cost-effective approach for cancer drug discovery and development.

    PubMed

    Chou, Ting-Chao

    2011-01-01

    The mass-action law based system analysis, via mathematical induction and deduction, leads to a generalized theory and algorithm that allows computerized simulation of dose-effect dynamics with small-size experiments using a small number of data points in vitro, in animals, and in humans. The median-effect equation of the mass-action law, deduced from over 300 mechanism-specific equations, has been shown to be the unified theory that serves as the common link for complicated biomedical systems. When the median-effect principle is used as the common denominator, its applications are mechanism-independent, drug unit-independent, and dynamic order-independent, and it can be used generally for single drug analysis or for multiple drug combinations at constant or non-constant ratios. Since the "median" is the common link and universal reference point in biological systems, this generality enables computerized quantitative bio-informatics for econo-green bio-research in broad disciplines. Specific applications of the theory, especially relevant to drug discovery, drug combination, and clinical trials, have been cited or illustrated in terms of algorithms, experimental design and computerized simulation for data analysis. Lessons learned from cancer research during the past fifty years provide a valuable opportunity to reflect, to improve the conventional divergent approach, and to introduce a new convergent avenue, based on the mass-action law principle, for efficient cancer drug discovery and low-cost drug development. PMID:22016837
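
    For reference, the median-effect equation referred to above, and its linearized median-effect-plot form, can be written out; here D is the dose, Dm the median-effect dose, fa and fu the fractions affected and unaffected, and m the sigmoidicity coefficient.

```latex
\frac{f_a}{f_u} = \left(\frac{D}{D_m}\right)^{m}, \qquad f_u = 1 - f_a,
\qquad \log\!\left(\frac{f_a}{1-f_a}\right) = m\,\log D - m\,\log D_m .
```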

  9. Continued Research into Characterizing the Preturbulence Environment for Sensor Development, New Hazard Algorithms and Experimental Flight Planning

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Lin, Yuh-Lang

    2005-01-01

    The purpose of the research was to develop and test improved hazard algorithms that could result in the development of sensors that are better able to anticipate potentially severe atmospheric turbulence, which affects aircraft safety. The research focused on employing numerical simulation models to develop improved algorithms for the prediction of aviation turbulence. This involved producing both research simulations and real-time simulations of environments predisposed to moderate and severe aviation turbulence. The research resulted in the following fundamental advancements toward the aforementioned goal: 1) very high resolution simulations of turbulent environments indicated how predictive hazard indices could be improved, resulting in a candidate hazard index that indicated the potential for improvement over existing operational indices; 2) a real-time turbulence hazard numerical modeling system was improved by correcting deficiencies in its simulation of moist convection; and 3) the same real-time predictive system was tested by running the code twice daily while the hazard prediction indices were updated and improved. Additionally, a simple validation study was undertaken to determine how well a real-time hazard predictive index performed when compared to commercial pilot observations of aviation turbulence. Simple statistical analyses were performed in this validation study, indicating potential skill in employing the hazard prediction index to predict regions of varying intensities of aviation turbulence. Data sets from a research numerical model were provided to NASA for use in a large eddy simulation numerical model. A NASA contractor report and several refereed journal articles were prepared and submitted for publication during the course of this research.

  10. An Algorithm for Determining Potential Rill Areas from Gridded Elevation Data: Development and Integration in a Vegetated Filter Dimensioning Model

    NASA Astrophysics Data System (ADS)

    Rousseau, A. N.; Martin, M. N. M.; Savary, S.; Gumiere, S.

    2014-12-01

    Vegetated filter strips (VFSs) have been recognized as an effective and environmentally-friendly beneficial management practice for preventing sediments from cropland from entering surface water. Their trapping efficiency depends on many parameters (characteristics of the filter, vegetation, flow and sediments) and may be balanced with other factors such as implementation, management and opportunity costs. The Vegetated Filter Dimensioning Model (VFDM) is a mathematical tool coupled with PHYSITEL and HYDROTEL that was successfully developed to determine the optimal dimensions and/or efficiency of vegetated filters with respect to vegetation characteristics and topographical and hydrological parameters. The number of rills in a relatively homogeneous hydrological unit (RHHU, i.e. hillslope), the basic computational unit of VFDM, is a key parameter for the calculation of the surface runoff velocity, and therefore, the design of the vegetative filter. Until now, this parameter had been treated as a constant based on the plan shape of each individual hillslope. The main objective of this project was to develop an algorithm for determining a priori the potential areas where rills could develop and to implement the ensuing procedure in VFDM. With the proposed algorithm, the number of rills is determined using physical and hydrological criteria. To study the influence of concentrated flow and diffuse flow in the filter dimensioning calculations, surface runoff is apportioned according to the way it reaches the primary stream network (concentrated, if through the aforementioned rills, or diffuse otherwise, that is, if discharging laterally). By considering actual land covers and identifying areas likely to generate sediments, results can lead to the building of hotspot maps that may be used as decision-making tools to guide the implementation of vegetated filter strips at the watershed scale. The potential use of the algorithm was tested using data from a small (2.5 km2) agricultural

  11. An approach to the development of numerical algorithms for first order linear hyperbolic systems in multiple space dimensions: The constant coefficient case

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1995-01-01

    Two methods for developing high order single step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth order accuracy in both space and time in one space dimension, and up to sixth order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kowalevskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high order accuracy and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.
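
    As the simplest member of the family described above, a single-step explicit scheme for the 1D convection equation u_t + a u_x = 0 can be built from the Taylor expansion in time using u_t = -a u_x and u_tt = a² u_xx; truncating at second order gives the classical Lax-Wendroff update sketched below (the paper's algorithms extend this idea to much higher order, to systems, and to multiple dimensions).

```python
import numpy as np

def lax_wendroff_step(u, a, dt, dx):
    """One single-step, explicit, second-order update for u_t + a u_x = 0 on a
    periodic grid, from the time Taylor expansion u(t+dt) = u + dt u_t + dt^2/2 u_tt
    with u_t = -a u_x and u_tt = a^2 u_xx."""
    c = a * dt / dx                         # Courant number
    up = np.roll(u, -1)                     # u_{j+1}
    um = np.roll(u, 1)                      # u_{j-1}
    return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)

# advect a Gaussian pulse once around a periodic domain of length 1
nx, a = 200, 1.0
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx / a                           # CFL number 0.5
u = np.exp(-200.0 * (x - 0.5) ** 2)
for _ in range(int(round(1.0 / (a * dt)))):
    u = lax_wendroff_step(u, a, dt, dx)
print(float(np.max(u)))                     # peak slightly below 1 due to numerical error
```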

  12. Development of algorithms for using satellite meteorological data sets to study global transport of stratospheric aerosols and ozone

    NASA Technical Reports Server (NTRS)

    Want, P. H.; Deepak, A.

    1985-01-01

    The utilization of stratospheric aerosol and ozone measurements obtained from the NASA-developed SAM II and SAGE satellite instruments was investigated for studying their global scale transports. The stratospheric aerosol data showed that during the stratospheric warming of the winter of 1978 to 1979, the distribution of the zonal mean aerosol extinction ratio in the northern high latitudes exhibited distinct changes. Dynamic processes might have played an important role in the maintenance of this zonal mean distribution. As to the stratospheric ozone, large poleward ozone transports are shown to occur in the altitude region from 24 km to 38 km near 55N during this warming. This altitude region is shown to be a transition region of the phase relationship between ozone and temperature waves from an in-phase one above 38 km. It is shown that the ozone solar heating in the upper stratosphere might lead to enhancement of the damping rate of the planetary waves due to infrared radiation alone, in agreement with theoretical analyses and an earlier observational study.

  13. HEAVY DUTY DIESEL VEHICLE LOAD ESTIMATION: DEVELOPMENT OF VEHICLE ACTIVITY OPTIMIZATION ALGORITHM

    EPA Science Inventory

    The Heavy-Duty Vehicle Modal Emission Model (HDDV-MEM) developed by the Georgia Institute of Technology (Georgia Tech) has the capability to model link-specific second-by-second emissions using speed/acceleration matrices. To estimate emissions, engine power demand calculated usin...

  14. Development of a Water Treatment Plant Operation Manual Using an Algorithmic Approach.

    ERIC Educational Resources Information Center

    Counts, Cary A.

    This document describes the steps to be followed in the development of a prescription manual for training of water treatment plant operators. Suggestions on how to prepare both flow and narrative prescriptions are provided for a variety of water treatment systems, including: raw water, flocculation, rapid sand filter, caustic soda feed, alum feed,…

  15. Development of a Raman chemical image detection algorithm for authenticating dry milk

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This research developed a Raman chemical imaging method for detecting multiple adulterants in skim milk powder. Ammonium sulfate, dicyandiamide, melamine, and urea were mixed into the milk powder as chemical adulterants in the concentration range of 0.1–5.0%. A Raman imaging system using a 785-nm la...

  16. Scheduling language and algorithm development study. Volume 1: Study summary and overview

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A high level computer programming language and a program library were developed to be used in writing programs for scheduling complex systems such as the space transportation system. The objectives and requirements of the study are summarized and unique features of the specified language and program library are described and related to the why of the objectives and requirements.

  17. Development of a Raman chemical image detection algorithm for authenticating dry milk

    NASA Astrophysics Data System (ADS)

    Qin, Jianwei; Chao, Kuanglin; Kim, Moon S.

    2013-05-01

    This research developed a Raman chemical imaging method for detecting multiple adulterants in skim milk powder. Ammonium sulfate, dicyandiamide, melamine, and urea were mixed into the milk powder as chemical adulterants in the concentration range of 0.1-5.0%. A Raman imaging system using a 785-nm laser acquired hyperspectral images in the wavenumber range of 102-2538 cm-1 for a 25×25 mm2 area of each mixture. A polynomial curve-fitting method was used to correct the fluorescence background in the Raman images. An image classification method was developed based on single-band fluorescence-free images at unique Raman peaks of the adulterants. Raman chemical images were created to visualize the identification and distribution of the multiple adulterant particles in the milk powder. A linear relationship was found between adulterant pixel number and adulterant concentration, demonstrating the potential of Raman chemical imaging for quantitative analysis of the adulterants in the milk powder.
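
    The fluorescence-correction step described above, fitting a low-order polynomial to each spectrum and subtracting it before thresholding single-band images at the adulterant Raman peaks, can be sketched as follows. The iterative clipping, polynomial order, and synthetic spectrum are illustrative assumptions, not the published parameters.

```python
import numpy as np

def polynomial_baseline(wavenumber, spectrum, order=5, n_iter=20):
    """Estimate a fluorescence baseline by repeatedly fitting a polynomial and
    clipping the spectrum to the current fit, so that Raman peaks are
    progressively excluded from the background estimate."""
    x = (wavenumber - wavenumber.mean()) / wavenumber.std()   # scale for conditioning
    work = spectrum.copy()
    for _ in range(n_iter):
        coeffs = np.polyfit(x, work, order)
        baseline = np.polyval(coeffs, x)
        work = np.minimum(work, baseline)      # clip points above the current fit
    return baseline

# placeholder spectrum: broad fluorescence background plus two Raman peaks
wn = np.linspace(200.0, 2500.0, 1200)
background = 0.5 + 1e-3 * wn - 2e-7 * wn**2
peaks = 0.8 * np.exp(-0.5 * ((wn - 985.0) / 6.0) ** 2) \
      + 0.5 * np.exp(-0.5 * ((wn - 713.0) / 6.0) ** 2)
spectrum = background + peaks
corrected = spectrum - polynomial_baseline(wn, spectrum)   # fluorescence-free spectrum
```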

  18. Tropical Forest Tree Height Retrieval With Tandem-X: Algorithm Development And Accuracy Analysis

    NASA Astrophysics Data System (ADS)

    Antropov, Oleg; Rauste, Yrjo; Hame, Tuomas; de Jong, Ben

    2013-12-01

    Two semi-empirical approaches suitable for forest tree height retrieval from interferometric SAR images were developed in this study. The methods developed are mainly intended for cases where a reference elevation model is missing. Spaceborne interferometric SAR data from the TanDEM-X mission were used in the study. The study site was located in the south-eastern part of the state of Chiapas, Mexico. The TanDEM-X images were acquired during spring and summer 2012. The height estimates obtained in the study varied between 15 and 35 meters in the area of interest covered by the TanDEM-X images. The Pearson's correlation coefficients between estimated tree heights and reference ground plots from the National Forest Inventory were 0.25 and 0.32 for maximum and average tree height measures, respectively. This work on tropical forest tree height retrieval from interferometric TanDEM-X images was performed within the EU FP7 project ReCover.

  19. Development of an algorithm to model an aircraft equipped with a generic CDTI display

    NASA Technical Reports Server (NTRS)

    Driscoll, W. C.; Houck, J. A.

    1986-01-01

    A model of human pilot performance of a tracking task using a generic Cockpit Display of Traffic Information (CDTI) display is developed from experimental data. The tracking task is to use CDTI in tracking a leading aircraft at a nominal separation of three nautical miles over a prescribed trajectory in space. The analysis of the data resulting from a factorial design of experiments reveals that the tracking task performance depends on the pilot and his experience at performing the task. Performance was not strongly affected by the type of control system used (velocity vector control wheel steering versus 3D automatic flight path guidance and control). The model that is developed and verified results in state trajectories whose difference from the experimental state trajectories is small compared to the variation due to the pilot and experience factors.

  20. The development of flux-split algorithms for flows with non-equilibrium thermodynamics and chemical reactions

    NASA Technical Reports Server (NTRS)

    Grossman, B.; Cinella, P.

    1988-01-01

    A finite-volume method for the numerical computation of flows with nonequilibrium thermodynamics and chemistry is presented. A thermodynamic model is described which simplifies the coupling between the chemistry and thermodynamics and also results in the retention of the homogeneity property of the Euler equations (including all the species continuity and vibrational energy conservation equations). Flux-splitting procedures are developed for the fully coupled equations involving fluid dynamics, chemical production and thermodynamic relaxation processes. New forms of flux-vector split and flux-difference split algorithms are embodied in a fully coupled, implicit, large-block structure, including all the species conservation and energy production equations. Several numerical examples are presented, including high-temperature shock tube and nozzle flows. The methodology is compared to other existing techniques, including spectral and central-differenced procedures, and favorable comparisons are shown regarding accuracy, shock-capturing and convergence rates.
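
    The homogeneity property mentioned above is what makes flux-vector splitting possible. In its standard form (stated here generically, not as the paper's specific nonequilibrium extension), a flux that is homogeneous of degree one in the conserved variables satisfies F(U) = A(U)U with A the flux Jacobian, and the splitting follows from the sign decomposition of the eigenvalues:

```latex
F(U) = A(U)\,U, \qquad A = \frac{\partial F}{\partial U} = R\,\Lambda\,R^{-1},
\qquad \Lambda^{\pm} = \tfrac{1}{2}\left(\Lambda \pm |\Lambda|\right), \\
A^{\pm} = R\,\Lambda^{\pm}R^{-1}, \qquad F^{\pm} = A^{\pm}U, \qquad F = F^{+} + F^{-},
```

    with F⁺ upwind-differenced from the left and F⁻ from the right in the finite-volume update.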

  1. Development of a computer algorithm for the analysis of variable-frequency AC drives: Case studies included

    NASA Technical Reports Server (NTRS)

    Kankam, M. David; Benjamin, Owen

    1991-01-01

    The development of computer software for performance prediction and analysis of voltage-fed, variable-frequency AC drives for space power applications is discussed. The AC drives discussed include the pulse width modulated inverter (PWMI), a six-step inverter and the pulse density modulated inverter (PDMI), each individually connected to a wound-rotor induction motor. Various d-q transformation models of the induction motor are incorporated for user-selection of the most applicable model for the intended purpose. Simulation results of selected AC drives correlate satisfactorily with published results. Future additions to the algorithm are indicated. These improvements should enhance the applicability of the computer program to the design and analysis of space power systems.
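
    The d-q transformation models referred to above rest on projecting the three-phase machine quantities onto a rotating two-axis reference frame. A minimal sketch of the amplitude-invariant Park transform is given below; the electrical angle and current samples are placeholder values.

```python
import numpy as np

def park_transform(i_a, i_b, i_c, theta):
    """Amplitude-invariant Park transform: project three-phase quantities (a, b, c)
    onto a frame rotating at electrical angle theta, returning the direct (d),
    quadrature (q) and zero-sequence components."""
    T = (2.0 / 3.0) * np.array([
        [np.cos(theta), np.cos(theta - 2*np.pi/3), np.cos(theta + 2*np.pi/3)],
        [-np.sin(theta), -np.sin(theta - 2*np.pi/3), -np.sin(theta + 2*np.pi/3)],
        [0.5, 0.5, 0.5],
    ])
    return T @ np.array([i_a, i_b, i_c])

# balanced three-phase currents sampled at one instant (placeholder values)
theta = 0.3
i_a = np.cos(theta)
i_b = np.cos(theta - 2*np.pi/3)
i_c = np.cos(theta + 2*np.pi/3)
print(park_transform(i_a, i_b, i_c, theta))   # approximately [1, 0, 0] for this balanced set
```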

  2. A Study on Development of a New Algorithm for Predicting the Process Variables in GMA Welding Processes

    NASA Astrophysics Data System (ADS)

    Kim, Ill-Soo; Park, Chang-Eun; Cha, Yong Hoon; Jeong, Young-Jae; Kim, In Kwon; Kim, Jae Yoe; Son, Joon-Sik

    Gas Metal Arc (GMA) welding is extensively employed in the metal industries to weld a variety of ferrous and non-ferrous metals because of its potential for increasing the productivity and quality of welding, which is controlled by the process parameters. The objective of this paper is to develop an algorithm that enables the determination of process variables from the optimized bead geometry for robotic GMA welding. It depends on the inversion of empirical equations derived from multiple regression analysis of the relationships between the process variables and the bead dimensions using the least squares method. The method directly determines those variables which will give the desired set of bead geometry. This avoids the need to iterate through a succession of guesses, as in the Finite Element Method (FEM). These results suggest that process variables obtained from the empirical equations for robotic GMA welding may be employed to monitor and control the bead geometry in real time.

  3. Doppler Imaging with a Clean-Like Approach - Part One - a Newly Developed Algorithm Simulations and Tests

    NASA Astrophysics Data System (ADS)

    Kurster, M.

    1993-07-01

    A newly developed method for the Doppler imaging of star spot distributions on active late-type stars is presented. It comprises an algorithm particularly adapted to the (discrete) Doppler imaging problem (including eclipses) and is very efficient in determining the positions and shapes of star spots. A variety of tests demonstrates the capabilities as well as the limitations of the method by investigating the effects that uncertainties in various stellar parameters have on the image reconstruction. Any systematic errors within the reconstructed image are found to be a result of the ill-posed nature of the Doppler imaging problem and not a consequence of the adopted approach. The largest uncertainties are found with respect to the dynamical range of the image (brightness or temperature contrast). This kind of uncertainty is of little effect for studies of star spot migrations with the objectives of determining differential rotation and butterfly diagrams for late-type stars.

  4. Development of real-time diagnostics and feedback algorithms for JET in view of the next step

    NASA Astrophysics Data System (ADS)

    Murari, A.; Joffrin, E.; Felton, R.; Mazon, D.; Zabeo, L.; Albanese, R.; Arena, P.; Ambrosino, G.; Ariola, M.; Barana, O.; Bruno, M.; Laborde, L.; Moreau, D.; Piccolo, F.; Sartori, F.; Crisanti, F.; de la Luna, E.; Sanchez, J.; Contributors, EFDA-JET

    2005-03-01

    Real-time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of next step tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real-time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. Some of the signals now routinely provided in real time at JET are: (i) the internal inductance and the main confinement quantities obtained by calculating the Shafranov integrals from the pick-up coils with 2 ms time resolution; (ii) the electron temperature profile, from electron cyclotron emission every 10 ms; (iii) the ion temperature and plasma toroidal velocity profiles, from charge exchange recombination spectroscopy, provided every 50 ms; and (iv) the safety factor profile, derived from the inversion of the polarimetric line integrals every 2 ms. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. With these new tools, several real-time schemes were implemented, among which the most significant is the simultaneous control of the safety factor and the plasma pressure profiles using the additional heating systems (LH, NBI, ICRH) as actuators. The control strategy adopted in this case consists of a multi-variable model-based technique, which was implemented as a truncated singular value decomposition of an integral operator. This approach is considered essential for systems like tokamak machines, characterized by a strong mutual dependence of the various parameters and the distributed nature of the quantities, the plasma profiles, to be controlled. First encouraging results were also obtained using non-algorithmic
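
    The profile controller described above ultimately inverts a linear response relating the heating actuators to changes in the controlled profiles, and a truncated singular value decomposition is one standard way to make that inversion robust to noise. The response matrix and profile error below are random placeholders, not JET data.

```python
import numpy as np

def truncated_svd_gain(K, n_modes):
    """Pseudo-inverse of the actuator-to-profile response matrix K, keeping only
    the n_modes largest singular values so that poorly controllable directions
    do not amplify noise in the feedback command."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:n_modes] = 1.0 / s[:n_modes]
    return Vt.T @ np.diag(s_inv) @ U.T

# placeholder example: 12 profile points controlled with 3 actuators (e.g. LH, NBI, ICRH)
rng = np.random.default_rng(0)
K = rng.normal(size=(12, 3))                  # linearized response matrix
profile_error = rng.normal(size=12)           # measured minus requested profile
command = truncated_svd_gain(K, n_modes=2) @ profile_error
print(command)                                # incremental actuator power request
```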

  5. A vantage from space can detect earlier drought onset: an approach using relative humidity.

    PubMed

    Farahmand, Alireza; AghaKouchak, Amir; Teixeira, Joao

    2015-01-01

    Each year, droughts cause significant economic and agricultural losses across the world. The early warning and onset detection of drought is of particular importance for effective agriculture and water resource management. Previous studies show that the Standard Precipitation Index (SPI), a measure of precipitation deficit, detects drought onset earlier than other indicators. Here we show that satellite-based near surface air relative humidity data can further improve drought onset detection and early warning. This paper introduces the Standardized Relative Humidity Index (SRHI) based on the NASA Atmospheric Infrared Sounder (AIRS) observations. The results indicate that the SRHI typically detects the drought onset earlier than the SPI. While the AIRS mission was not originally designed for drought monitoring, we show that its relative humidity data offers a new and unique avenue for drought monitoring and early warning. We conclude that the early warning aspects of SRHI may have merit for integration into current drought monitoring systems. PMID:25711500
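
    The record does not spell out the SRHI computation, but it follows the same standardization logic as the SPI: accumulate the variable over a chosen time scale, build a climatology for each calendar period, and map the cumulative probability to a standard normal deviate. A minimal distribution-free sketch (empirical plotting positions instead of a fitted parametric distribution, synthetic data, SciPy assumed available for the normal quantile) is shown below.

```python
import numpy as np
from scipy.stats import norm

def standardized_index(series, window=3):
    """Map a monthly series to a standardized index (SPI/SRHI-style): accumulate
    over `window` months, then convert each month-of-year's climatology to
    standard-normal deviates via empirical probabilities."""
    x = np.convolve(series, np.ones(window) / window, mode="valid")
    idx = np.full_like(x, np.nan)
    months = (np.arange(len(series)) % 12)[window - 1:]   # calendar month of each window end
    for m in range(12):
        sel = months == m
        vals = x[sel]
        ranks = vals.argsort().argsort() + 1              # 1..n within this calendar month
        prob = (ranks - 0.44) / (len(vals) + 0.12)        # Gringorten plotting position
        idx[sel] = norm.ppf(prob)
    return idx

# synthetic 30-year monthly relative-humidity record (placeholder values)
rng = np.random.default_rng(2)
rh = 60 + 10 * np.sin(2 * np.pi * np.arange(360) / 12) + rng.normal(0, 5, 360)
srhi_like = standardized_index(rh, window=3)
```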

  6. A Vantage from Space Can Detect Earlier Drought Onset: An Approach Using Relative Humidity

    PubMed Central

    Farahmand, Alireza; AghaKouchak, Amir; Teixeira, Joao

    2015-01-01

    Each year, droughts cause significant economic and agricultural losses across the world. The early warning and onset detection of drought is of particular importance for effective agriculture and water resource management. Previous studies show that the Standard Precipitation Index (SPI), a measure of precipitation deficit, detects drought onset earlier than other indicators. Here we show that satellite-based near surface air relative humidity data can further improve drought onset detection and early warning. This paper introduces the Standardized Relative Humidity Index (SRHI) based on the NASA Atmospheric Infrared Sounder (AIRS) observations. The results indicate that the SRHI typically detects the drought onset earlier than the SPI. While the AIRS mission was not originally designed for drought monitoring, we show that its relative humidity data offers a new and unique avenue for drought monitoring and early warning. We conclude that the early warning aspects of SRHI may have merit for integration into current drought monitoring systems. PMID:25711500

  7. Observed shift towards earlier spring discharge in the main Alpine rivers.

    PubMed

    Zampieri, Matteo; Scoccimarro, Enrico; Gualdi, Silvio; Navarra, Antonio

    2015-01-15

    In this study, we analyse the observed long-term discharge time-series of the Rhine, the Danube, the Rhone and the Po rivers. These rivers are characterised by different seasonal cycles reflecting the diverse climates and morphologies of the Alpine basins. However, despite the intensive and varied water management adopted in the four basins, we found common features in the trend and low-frequency variability of the spring discharge timings. All the discharge time-series display a tendency towards earlier spring peaks of more than two weeks per century. These results can be explained in terms of snowmelt, total precipitation (i.e. the sum of snowfall and rainfall) and rainfall variability. The relative importance of these factors might be different in each basin. However, we show that the change of seasonality of total precipitation plays a major role in the earlier spring runoff over most of the Alps. PMID:25005239

  8. Peak tornado activity is occurring earlier in the heart of "Tornado Alley"

    NASA Astrophysics Data System (ADS)

    Long, John A.; Stoy, Paul C.

    2014-09-01

    Tornado frequency may increase as the factors that contribute to severe convection are altered by a changing climate. Attributing changes in tornado frequency to observed global climate change is complicated because observational effort has increased over time, but studies of the seasonal distribution of tornado activity may avoid sampling biases. We demonstrate that peak tornado activity has shifted 7 days earlier in the year over the past six decades in the central and southern US Great Plains, the area with the highest global incidence of tornado activity. Results are largely unrelated to large-scale climate oscillations, and observed climate trends cannot fully account for observations, which suggest that changes to regional climate dynamics should be further investigated. Tornado preparedness efforts at individual to national levels should be cognizant of the trend toward earlier peak tornado activity across the heart of "Tornado Alley".

  9. Compulsive Buying: Earlier Illicit Drug Use, Impulse Buying, Depression, and Adult ADHD Symptoms

    PubMed Central

    Brook, Judith S.; Zhang, Chenshu; Brook, David W.; Leukefeld, Carl G.

    2015-01-01

    This longitudinal study examined the association between psychosocial antecedents, including illicit drug use, and adult compulsive buying (CB) across a 29-year time period from mean age 14 to mean age 43. Participants originally came from a community-based random sample of residents in two upstate New York counties. Multivariate linear regression analysis was used to study the relationship between the participant’s earlier psychosocial antecedents and adult CB in the fifth decade of life. The results of the multivariate linear regression analyses showed that gender (female), earlier adult impulse buying (IB), depressive mood, illicit drug use, and concurrent ADHD symptoms were all significantly associated with adult CB at mean age 43. It is important that clinicians treating CB in adults should consider the role of drug use, symptoms of ADHD, IB, depression, and family factors in CB. PMID:26165963

  10. Compulsive buying: Earlier illicit drug use, impulse buying, depression, and adult ADHD symptoms.

    PubMed

    Brook, Judith S; Zhang, Chenshu; Brook, David W; Leukefeld, Carl G

    2015-08-30

    This longitudinal study examined the association between psychosocial antecedents, including illicit drug use, and adult compulsive buying (CB) across a 29-year time period from mean age 14 to mean age 43. Participants originally came from a community-based random sample of residents in two upstate New York counties. Multivariate linear regression analysis was used to study the relationship between the participant's earlier psychosocial antecedents and adult CB in the fifth decade of life. The results of the multivariate linear regression analyses showed that gender (female), earlier adult impulse buying (IB), depressive mood, illicit drug use, and concurrent ADHD symptoms were all significantly associated with adult CB at mean age 43. It is important that clinicians treating CB in adults should consider the role of drug use, symptoms of ADHD, IB, depression, and family factors in CB. PMID:26165963

  11. Major depression associated with earlier alcohol relapse in treated teens with AUD.

    PubMed

    Cornelius, Jack R; Maisto, Stephen A; Martin, Christopher S; Bukstein, Oscar G; Salloum, Ihsan M; Daley, Dennis C; Wood, D Scott; Clark, Duncan B

    2004-07-01

    This study evaluated whether the common comorbid diagnosis of major depressive disorder (MDD) is associated with an earlier relapse to alcohol use among adolescents with an alcohol use disorder (AUD). The study sample consisted of 116 adolescents between the ages of 14 and 18 with an AUD recruited from treatment facilities in the Pittsburgh area, 50 of whom demonstrated a current MDD. An extensive baseline interview was conducted, followed by monthly interviews of alcohol use conducted by telephone for the following year. Those with current comorbid MDD demonstrated a median survival time of only 19 days until the first drink, while those without MDD demonstrated a median survival time of 45 days, which was a significant difference (Kaplan-Meier survival analysis, Breslow Test Statistic=4.27, df=1, P=.039). These results suggest that the comorbid presence of MDD is associated with an earlier relapse to alcohol use among adolescents with an AUD. PMID:15219354

  12. A Review of Quality of Life after Predictive Testing for and Earlier Identification of Neurodegenerative Diseases

    PubMed Central

    Paulsen, Jane S.; Nance, Martha; Kim, Ji-In; Carlozzi, Noelle E.; Panegyres, Peter K.; Erwin, Cheryl; Goh, Anita; McCusker, Elizabeth; Williams, Janet K.

    2013-01-01

    The past decade has witnessed an explosion of evidence suggesting that many neurodegenerative diseases can be detected years, if not decades, earlier than previously thought. To date, these scientific advances have not provoked any parallel translational or clinical improvements. There is an urgency to capitalize on this momentum so earlier detection of disease can be more readily translated into improved health-related quality of life for families at risk for, or suffering with, neurodegenerative diseases. In this review, we discuss health-related quality of life (HRQOL) measurement in neurodegenerative diseases and the importance of these “patient reported outcomes” for all clinical research. Next, we address HRQOL following early identification or predictive genetic testing in some neurodegenerative diseases: Huntington disease, Alzheimer's disease, Parkinson's disease, Dementia with Lewy bodies, frontotemporal dementia, amyotrophic lateral sclerosis, prion diseases, hereditary ataxias, Dentatorubral-pallidoluysian atrophy and Wilson's disease. After a brief report of available direct-to-consumer genetic tests, we address the juxtaposition of earlier disease identification with assumed reluctance towards predictive genetic testing. Forty-one studies examining health related outcomes following predictive genetic testing for neurodegenerative disease suggested that (a) extreme or catastrophic outcomes are rare; (b) consequences commonly include transiently increased anxiety and/or depression; (c) most participants report no regret; (d) many persons report extensive benefits to receiving genetic information; and (e) stigmatization and discrimination for genetic diseases are poorly understood and policy and laws are needed. Caution is appropriate for earlier identification of neurodegenerative diseases but findings suggest further progress is safe, feasible and likely to advance clinical care. PMID:24036231

  13. Trends toward an earlier peak of the growing season in Northern Hemisphere mid-latitudes.

    PubMed

    Xu, Chongyang; Liu, Hongyan; Williams, A Park; Yin, Yi; Wu, Xiuchen

    2016-08-01

    Changes in peak photosynthesis timing (PPT) could substantially change the seasonality of the terrestrial carbon cycle. Spring PPT in dry regions has been documented for some individual plant species on a stand scale, but both the spatio-temporal pattern of shifting PPT on a continental scale and its determinants remain unclear. Here, we use satellite measurements of vegetation greenness to find that the majority of the Northern Hemisphere, mid-latitude vegetated area experienced a trend toward earlier PPT during 1982-2012, with significant trends averaging 0.61 day per year across 19.4% of the area. These shifts correspond to increased annual accumulation of growing degree days (GDD) due to warming and are most highly concentrated in the eastern United States and Europe. Earlier mean PPT is generally a trait common among areas with summer temperatures higher than 27.6 ± 2.9 °C, summer precipitation lower than 84.2 ± 41.5 mm, and a fraction of cold-season precipitation greater than 89.2 ± 1.5%. The trends toward earlier PPT discovered here have co-occurred with overall increases in vegetation greenness throughout the growing season, suggesting that summer drought is not a dominant driver of these trends. These results imply that continued warming may facilitate continued shifts toward earlier PPT and cause these trends to become more pervasive, with important implications for terrestrial carbon, water, nutrient, and energy budgets. PMID:26752300
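
    The growing degree day (GDD) accumulation that the abstract links to earlier PPT is a simple running sum of daily mean warmth above a base temperature; a minimal sketch with placeholder temperatures and an assumed 5 °C base follows.

```python
import numpy as np

def growing_degree_days(t_max, t_min, t_base=0.0):
    """Cumulative growing degree days: daily mean temperature above a base
    temperature, negative contributions clipped to zero, summed over time."""
    daily = np.maximum((np.asarray(t_max) + np.asarray(t_min)) / 2.0 - t_base, 0.0)
    return np.cumsum(daily)

# placeholder spring temperature record (degrees C) for days of year 60-151
days = np.arange(60, 152)
t_max = 5 + 0.2 * (days - 60) + 3 * np.sin(days / 7.0)
t_min = t_max - 8.0
gdd = growing_degree_days(t_max, t_min, t_base=5.0)   # accumulated GDD by day
```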

  14. Development of an Outdoor Temperature Based Control Algorithm for Residential Mechanical Ventilation Control

    SciTech Connect

    Less, Brennan; Walker, Iain; Tang, Yihuan

    2014-08-01

    The Incremental Ventilation Energy (IVE) model developed in this study combines the output of simple air exchange models with a limited set of housing characteristics to estimate the associated change in energy demand of homes. The IVE model was designed specifically to enable modellers to use existing databases of housing characteristics to determine the impact of ventilation policy change on a population scale. The IVE model estimates of energy change when applied to US homes with limited parameterisation are shown to be comparable to the estimates of a well-validated, complex residential energy model.

  15. Development of signal processing algorithms for ultrasonic detection of coal seam interfaces

    NASA Technical Reports Server (NTRS)

    Purcell, D. D.; Ben-Bassat, M.

    1976-01-01

    A pattern recognition system is presented for determining the thickness of coal remaining on the roof and floor of a coal seam. The system was developed to recognize reflected pulse-echo signals that are generated by an acoustical transducer and reflected from the coal seam interface. The flexibility of the system, however, should enable it to identify pulse-echo signals generated by radar or other techniques, the main difference being the specific features extracted from the recorded data as the basis for pattern recognition.

  16. Development of Corrections for Biomass Burning Effects in Version 2 of GEWEX/SRB Algorithm

    NASA Technical Reports Server (NTRS)

    Pinker, Rachel T.; Laszlo, I.; Dicus, Dennis L. (Technical Monitor)

    1999-01-01

    The objectives of this project were: (1) To incorporate into an existing version of the University of Maryland Surface Radiation Budget (SRB) model the optical parameters of forest fire aerosols, using the best available information, as well as the optical properties of other aerosols identified as significant. (2) To run the model on regional scales with the new parametrization and information on forest fire occurrence and plume advection, as available from NASA LARC, and to test improvements in inferring surface fluxes against daily values of measured fluxes. (3) To develop a strategy for incorporating the new parametrization on a global scale and for transferring the modified model to NASA LARC.

  17. Earlier-Season Vegetation Has Greater Temperature Sensitivity of Spring Phenology in Northern Hemisphere

    PubMed Central

    Shen, Miaogen; Tang, Yanhong; Chen, Jin; Yang, Xi; Wang, Cong; Cui, Xiaoyong; Yang, Yongping; Han, Lijian; Li, Le; Du, Jianhui; Zhang, Gengxin; Cong, Nan

    2014-01-01

    In recent decades, the satellite-derived start of vegetation growing season (SOS) has advanced in many northern temperate and boreal regions. Both the magnitude of temperature increase and the sensitivity of the greenness phenology to temperature (the phenological change per unit temperature) can contribute to this advancement. To determine the temperature sensitivity, we examined the satellite-derived SOS and the potentially effective pre-season temperature (Teff) from 1982 to 2008 for vegetated land between 30°N and 80°N. Earlier-season vegetation types, i.e., the vegetation types with earlier SOSmean (mean SOS for 1982–2008), showed greater advancement of SOS during 1982–2008. The advancing rate of SOS against year was also greater in the vegetation with earlier SOSmean, even when the Teff increase was the same. These results suggest that the spring phenology of vegetation may have high temperature sensitivity in a warmer area. Therefore it is important to consider temperature sensitivity in assessing broad-scale phenological responses to climatic warming. Further studies are needed to explore the mechanisms and ecological consequences of the temperature sensitivity of the start of growing season in a warming climate. PMID:24505418

  18. Variations in subject pool as a function of earlier or later participation.

    PubMed

    Bernard, L

    2000-04-01

    Data were obtained from 113 participants in a university subject pool during a 16-wk. semester. Without knowing the purpose of the study, participants self-selected to participate earlier (Weeks 3 and 4: n = 63) or later (Weeks 15 or 16: n = 50). Variations in scores on the NEO Personality Inventory--Revised, the Crowne-Marlowe Social Desirability Scale, the General Expectancy of Success Scale, the Shipley Institute of Living Scale, self-reported SATs and GPAs, and a measure of academic self-efficacy as a function of earlier or later participation were examined. Multivariate analysis of variance indicated that early participants differed significantly from later participants, but not in predicted ways. Earlier participants scored higher on NEO PI-R Neuroticism; specifically, men (n = 15) and women (n = 48) scored higher on Hostility, and women scored higher on Depression and Self-consciousness. An additional significant difference occurred for self-reported SAT Verbal scores for men, which were significantly higher for later participants. These temporal variations may represent confounds in research using university subject pools. PMID:10840925

  19. Floodplains within reservoirs promote earlier spawning of white crappies Pomoxis annularis

    USGS Publications Warehouse

    Miranda, Leandro E.; Dagel, Jonah D.; Kaczka, Levi J.; Mower, Ethan; Wigen, S. L.

    2015-01-01

    Reservoirs impounded over floodplain rivers are unique because they may include within their upper reaches extensive shallow water stored over preexistent floodplains. Because of their relatively flat topography and riverine origin, floodplains in the upper reaches of reservoirs provide broad expanses of vegetation within a narrow range of reservoir water levels. Elsewhere in the reservoir, topography creates a band of shallow water along the contour of the reservoir where vegetation often does not grow. Thus, as water levels rise, floodplains may be the first vegetated habitats inundated within the reservoir. We hypothesized that shallow water in reservoir floodplains would attract spawning white crappies Pomoxis annularis earlier than reservoir embayments. Crappie relative abundance over five years in floodplains and embayments of four reservoirs increased as spawning season approached, peaked, and decreased as fish exited shallow water. Relative abundance peaked earlier in floodplains than embayments, and the difference was magnified with higher water levels. Early access to suitable spawning habitat promotes earlier spawning and may increase population fitness. Recognition of the importance of reservoir floodplains, an understanding of how reservoir water levels can be managed to provide timely connectivity to floodplains, and conservation of reservoir floodplains may be focal points of environmental management in reservoirs.

  20. Development of Decision Making Algorithm for Control of Sea Cargo Containers by "TAGGED" Neutron Method

    NASA Astrophysics Data System (ADS)

    Anan'ev, A. A.; Belichenko, S. G.; Bogolyubov, E. P.; Bochkarev, O. V.; Petrov, E. V.; Polishchuk, A. M.; Udaltsov, A. Yu.

    2009-12-01

    Nowadays in Russia and abroad there are several groups of scientists, engaged in development of systems based on "tagged" neutron method (API method) and intended for detection of dangerous materials, including high explosives (HE). Particular attention is paid to possibility of detection of dangerous objects inside a sea cargo container. Energy gamma-spectrum, registered from object under inspection is used for determination of oxygen/carbon and nitrogen/carbon chemical ratios, according to which dangerous object is distinguished from not dangerous one. Material of filled container, however, gives rise to additional effects of rescattering and moderation of 14 MeV primary neutrons of generator, attenuation of secondary gamma-radiation from reactions of inelastic neutron scattering on objects under inspection. These effects lead to distortion of energy gamma-response from examined object and therefore prevent correct recognition of chemical ratios. These difficulties are taken into account in analytical method, presented in the paper. Method has been validated against experimental data, obtained by the system for HE detection in sea cargo, based on API method and developed in VNIIA. Influence of shielding materials on results of HE detection and identification is considered. Wood and iron were used as shielding materials. Results of method application for analysis of experimental data on HE simulator measurement (tetryl, trotyl, hexogen) are presented.

  1. Three-dimensional graphics simulator for testing mine machine computer-controlled algorithms -- phase 1 development

    SciTech Connect

    Ambrose, D.H.

    1993-01-01

    Using three-dimensional (3-D) graphics computing to evaluate new technologies for computer-assisted mining systems illustrates how these visual techniques can redefine the way researchers look at raw scientific data. The US Bureau of Mines is using 3-D graphics computing to obtain cheaply, easily, and quickly information about the operation and design of current and proposed mechanical coal and metal-nonmetal mining systems. Bureau engineers developed a graphics simulator for a continuous miner that enables a realistic test for experimental software that controls the functions of a machine. Some of the specific simulated functions of the continuous miner are machine motion, appendage motion, machine position, and machine sensors. The simulator uses data files generated in the laboratory or mine using a computer-assisted mining machine. The data file contains information from a laser-based guidance system and a data acquisition system that records all control commands given to a computer-assisted mining machine. This report documents the first phase in developing the simulator and discusses simulator requirements, features of the initial simulator, and several examples of its application. During this endeavor, Bureau engineers discovered and appreciated the simulator's potential to assist their investigations of machine controls and navigation systems.

  2. Development of Subspace-based Hybrid Monte Carlo-Deterministic Algorithms for Reactor Physics Calculations

    SciTech Connect

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-05-20

    The development of hybrid Monte Carlo-Deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3 - 10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  3. DEVELOPMENT OF DECISION MAKING ALGORITHM FOR CONTROL OF SEA CARGO CONTAINERS BY 'TAGGED' NEUTRON METHOD

    SciTech Connect

    Anan'ev, A. A.; Belichenko, S. G.; Bogolyubov, E. P.; Bochkarev, O. V.; Petrov, E. V.; Polishchuk, A. M.; Udaltsov, A. Yu.

    2009-12-02

    Nowadays in Russia and abroad there are several groups of scientists, engaged in development of systems based on 'tagged' neutron method (API method) and intended for detection of dangerous materials, including high explosives (HE). Particular attention is paid to possibility of detection of dangerous objects inside a sea cargo container. Energy gamma-spectrum, registered from object under inspection is used for determination of oxygen/carbon and nitrogen/carbon chemical ratios, according to which dangerous object is distinguished from not dangerous one. Material of filled container, however, gives rise to additional effects of rescattering and moderation of 14 MeV primary neutrons of generator, attenuation of secondary gamma-radiation from reactions of inelastic neutron scattering on objects under inspection. These effects lead to distortion of energy gamma-response from examined object and therefore prevent correct recognition of chemical ratios. These difficulties are taken into account in analytical method, presented in the paper. Method has been validated against experimental data, obtained by the system for HE detection in sea cargo, based on API method and developed in VNIIA. Influence of shielding materials on results of HE detection and identification is considered. Wood and iron were used as shielding materials. Results of method application for analysis of experimental data on HE simulator measurement (tetryl, trotyl, hexogen) are presented.

  4. A Crowdsourcing Approach to Developing and Assessing Prediction Algorithms for AML Prognosis.

    PubMed

    Noren, David P; Long, Byron L; Norel, Raquel; Rrhissorrakrai, Kahn; Hess, Kenneth; Hu, Chenyue Wendy; Bisberg, Alex J; Schultz, Andre; Engquist, Erik; Liu, Li; Lin, Xihui; Chen, Gregory M; Xie, Honglei; Hunter, Geoffrey A M; Boutros, Paul C; Stepanov, Oleg; Norman, Thea; Friend, Stephen H; Stolovitzky, Gustavo; Kornblau, Steven; Qutub, Amina A

    2016-06-01

    Acute Myeloid Leukemia (AML) is a fatal hematological cancer. The genetic abnormalities underlying AML are extremely heterogeneous among patients, making prognosis and treatment selection very difficult. While clinical proteomics data has the potential to improve prognosis accuracy, thus far, the quantitative means to do so have yet to be developed. Here we report the results and insights gained from the DREAM 9 Acute Myeloid Prediction Outcome Prediction Challenge (AML-OPC), a crowdsourcing effort designed to promote the development of quantitative methods for AML prognosis prediction. We identify the most accurate and robust models in predicting patient response to therapy, remission duration, and overall survival. We further investigate patient response to therapy, a clinically actionable prediction, and find that patients that are classified as resistant to therapy are harder to predict than responsive patients across the 31 models submitted to the challenge. The top two performing models, which held a high sensitivity to these patients, substantially utilized the proteomics data to make predictions. Using these models, we also identify which signaling proteins were useful in predicting patient therapeutic response. PMID:27351836

  5. Development and applications of algorithms for calculating the transonic flow about harmonically oscillating wings

    NASA Technical Reports Server (NTRS)

    Ehlers, F. E.; Weatherill, W. H.; Yip, E. L.

    1984-01-01

    A finite difference method to solve the unsteady transonic flow about harmonically oscillating wings was investigated. The procedure is based on separating the velocity potential into steady and unsteady parts and linearizing the resulting unsteady differential equation for small disturbances. The differential equation for the unsteady velocity potential is linear with spatially varying coefficients and with the time variable eliminated by assuming harmonic motion. An alternating direction implicit procedure was investigated, and a pilot program was developed for both two- and three-dimensional wings. This program provides a relatively efficient relaxation solution without previously encountered solution instability problems. Pressure distributions for two rectangular wings are calculated. Conjugate gradient techniques were developed for the asymmetric, indefinite problem. The conjugate gradient procedure is evaluated for applications to the unsteady transonic problem. Different equations for the alternating direction procedure are derived using a coordinate transformation for swept and tapered wing planforms. Pressure distributions for swept, untapered wings of vanishing thickness are correlated with linear results for sweep angles up to 45 degrees.
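
    The conjugate gradient idea mentioned above can be illustrated with a textbook implementation for a symmetric positive-definite system; the unsteady transonic problem discussed in the report is asymmetric and indefinite, so it would require a specialized variant rather than this plain form. A minimal sketch:

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
          """Textbook conjugate gradient for a symmetric positive-definite system Ax = b."""
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs_old = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs_old / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs_old) * p
              rs_old = rs_new
          return x

      # Small SPD test problem
      A = np.array([[4.0, 1.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0])
      print(conjugate_gradient(A, b))  # ~[0.0909, 0.6364]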

  6. A Crowdsourcing Approach to Developing and Assessing Prediction Algorithms for AML Prognosis

    PubMed Central

    Noren, David P.; Long, Byron L.; Norel, Raquel; Rrhissorrakrai, Kahn; Hess, Kenneth; Hu, Chenyue Wendy; Bisberg, Alex J.; Schultz, Andre; Engquist, Erik; Liu, Li; Lin, Xihui; Chen, Gregory M.; Xie, Honglei; Hunter, Geoffrey A. M.; Norman, Thea; Friend, Stephen H.; Stolovitzky, Gustavo; Kornblau, Steven; Qutub, Amina A.

    2016-01-01

    Acute Myeloid Leukemia (AML) is a fatal hematological cancer. The genetic abnormalities underlying AML are extremely heterogeneous among patients, making prognosis and treatment selection very difficult. While clinical proteomics data have the potential to improve prognosis accuracy, the quantitative means to do so have yet to be developed. Here we report the results and insights gained from the DREAM 9 Acute Myeloid Leukemia Outcome Prediction Challenge (AML-OPC), a crowdsourcing effort designed to promote the development of quantitative methods for AML prognosis prediction. We identify the most accurate and robust models in predicting patient response to therapy, remission duration, and overall survival. We further investigate patient response to therapy, a clinically actionable prediction, and find that patients classified as resistant to therapy are harder to predict than responsive patients across the 31 models submitted to the challenge. The top two performing models, which achieved high sensitivity for these patients, substantially utilized the proteomics data to make predictions. Using these models, we also identify which signaling proteins were useful in predicting patient therapeutic response. PMID:27351836

  7. Development of an Experimental Phased Array Feed System and Algorithms for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Landon, Jonathan C.

    . Results are given for simulated and experimental data, demonstrating deeper beampattern nulls by 6 to 30dB. To increase the system bandwidth toward the hundreds of MHz bandwidth required by astronomers for a fully science-ready instrument, an FPGA digital backend is introduced using a 64-input analog-to-digital converter running at 50 Msamp/sec and the ROACH processing board developed at the University of California, Berkeley. International efforts to develop digital back ends for large antenna arrays are considered, and a road map is proposed for development of a hardware correlator/beamformer at BYU using three ROACH boards communicating over 10 gigabit Ethernet.

  8. A fuzzy hill-climbing algorithm for the development of a compact associative classifier

    NASA Astrophysics Data System (ADS)

    Mitra, Soumyaroop; Lam, Sarah S.

    2012-02-01

    Classification, a data mining technique, has widespread applications including medical diagnosis, targeted marketing, and others. Knowledge discovery from databases in the form of association rules is one of the important data mining tasks. An integrated approach, classification based on association rules, has drawn the attention of the data mining community over the last decade. While attention has mainly focused on increasing classifier accuracy, little effort has been devoted to building interpretable and less complex models. This paper discusses the development of a compact associative classification model using a hill-climbing approach and fuzzy sets. The proposed methodology builds the rule base by selecting rules which contribute towards increasing training accuracy, thus balancing classification accuracy with the number of classification association rules. The results indicated that the proposed associative classification model can achieve competitive accuracies on benchmark datasets with continuous attributes and lend better interpretability, when compared with other rule-based systems.
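
    A minimal sketch of the greedy, accuracy-driven rule selection described above is given below. The rule representation, the matching helper, and the data structures are illustrative assumptions, not the authors' implementation (which additionally uses fuzzy sets for continuous attributes).

      # Greedy hill-climbing selection of class-association rules: a rule is kept only
      # if adding it to the rule base improves training accuracy. Illustrative sketch.

      def rule_matches(rule, record):
          """A rule is a (conditions, label) pair; conditions is a dict of attribute tests."""
          conditions, _ = rule
          return all(record.get(attr) == value for attr, value in conditions.items())

      def classify(rule_base, record, default_label):
          for rule in rule_base:
              if rule_matches(rule, record):
                  return rule[1]
          return default_label

      def accuracy(rule_base, records, labels, default_label):
          hits = sum(classify(rule_base, r, default_label) == y for r, y in zip(records, labels))
          return hits / len(records)

      def hill_climb(candidate_rules, records, labels, default_label):
          """Add one candidate rule at a time, keeping it only if training accuracy improves."""
          rule_base, best = [], accuracy([], records, labels, default_label)
          for rule in candidate_rules:
              trial = accuracy(rule_base + [rule], records, labels, default_label)
              if trial > best:
                  rule_base.append(rule)
                  best = trial
          return rule_base, best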

  9. An approach to the development and analysis of wind turbine control algorithms

    SciTech Connect

    Wu, K.C.

    1998-03-01

    The objective of this project is to develop the capability of symbolically generating an analytical model of a wind turbine for studies of control systems. This report focuses on a theoretical formulation of the symbolic equations of motion (EOMs) modeler for horizontal axis wind turbines. In addition to the power train dynamics, a generic 7-axis rotor assembly is used as the base model from which the EOMs of various turbine configurations can be derived. A systematic approach to generate the EOMs is presented using d'Alembert's principle and Lagrangian dynamics. A Matlab M file was implemented to generate the EOMs of a two-bladed, free yaw wind turbine. The EOMs will be compared in the future to those of a similar wind turbine modeled with the YawDyn code for verification. This project was sponsored by Sandia National Laboratories as part of the Adaptive Structures and Control Task. This is the final report of Sandia Contract AS-0985.
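
    The Lagrangian machinery described above can be illustrated on a single degree of freedom. The sketch below symbolically derives the equation of motion of a simple pendulum; it stands in for, and is far simpler than, the report's 7-axis rotor and power-train model.

      import sympy as sp

      # One-degree-of-freedom illustration of symbolic Euler-Lagrange derivation
      # (simple pendulum), not the report's wind-turbine model.
      t = sp.symbols('t')
      m, g, l = sp.symbols('m g l', positive=True)
      theta = sp.Function('theta')(t)

      T = sp.Rational(1, 2) * m * (l * theta.diff(t))**2   # kinetic energy
      V = -m * g * l * sp.cos(theta)                       # potential energy
      L = T - V

      # Euler-Lagrange equation: d/dt(dL/dq') - dL/dq = 0
      eom = sp.simplify(sp.diff(L.diff(theta.diff(t)), t) - L.diff(theta))
      print(sp.Eq(eom, 0))   # m*l**2*theta'' + m*g*l*sin(theta) = 0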

  10. Issues in the development of a general design algorithm for reliable failure detection

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Willsky, A. S.

    1980-01-01

    The design of residual-generation processes is briefly discussed, the goal being to develop a methodology for designing robust processes of this type. It is noted that analytical redundancy forms the basis for residual generation, representing the relationships between the outputs of sensors and the inputs of actuators via the dynamics of the system. It is because of this relationship that sensor outputs (even those of dissimilar sensors and at different times) can, in effect, be compared to ascertain whether they are consistent with normal system behavior. The residuals constitute the discrepancies resulting from such comparisons; they should display noise-like characteristics only in the normal mode. Failures in the system would lead to a discrepancy between the observed and expected behavior of the sensor outputs and hence to abnormal characteristics (failure signatures) in the residuals.
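
    A minimal sketch of observer-based residual generation, in the spirit of the analytical redundancy described above, is shown below. The system matrices and observer gain are illustrative assumptions, not taken from the report.

      import numpy as np

      # Compare measured outputs against those predicted by a Luenberger observer.
      # Matrices and gains are illustrative.
      A = np.array([[0.9, 0.1], [0.0, 0.8]])   # discrete-time system matrix
      C = np.array([[1.0, 0.0]])               # sensor output matrix
      L_gain = np.array([[0.5], [0.2]])        # observer gain (assumed)

      def residuals(y_measured, x0=np.zeros(2)):
          """Return the residual sequence r_k = y_k - C x_hat_k."""
          x_hat, r_seq = x0.copy(), []
          for y in y_measured:
              y_hat = (C @ x_hat)[0]
              r = y - y_hat                              # discrepancy between observed and expected output
              r_seq.append(r)
              x_hat = A @ x_hat + L_gain[:, 0] * r       # observer update
          return np.array(r_seq)

      # In the no-failure case the residuals stay noise-like around zero;
      # a sensor or actuator failure leaves a persistent signature in them.
      y = 0.05 * np.random.randn(50)                     # stand-in measurement sequence
      print(residuals(y)[:5])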

  11. Development of automatic image analysis algorithms for protein localization studies in budding yeast

    NASA Astrophysics Data System (ADS)

    Logg, Katarina; Kvarnström, Mats; Diez, Alfredo; Bodvard, Kristofer; Käll, Mikael

    2007-02-01

    Microscopy of fluorescently labeled proteins has become a standard technique for live cell imaging. However, it is still a challenge to systematically extract quantitative data from large sets of images in an unbiased fashion, which is particularly important in high-throughput or time-lapse studies. Here we describe the development of a software package aimed at automatic quantification of abundance and spatio-temporal dynamics of fluorescently tagged proteins in vivo in the budding yeast Saccharomyces cerevisiae, one of the most important model organisms in proteomics. The image analysis methodology is based on first identifying cell contours from bright field images, and then using this information to measure and statistically analyse protein abundance in specific cellular domains from the corresponding fluorescence images. The applicability of the procedure is exemplified for two nuclear localized GFP-tagged proteins, Mcm4p and Nrm1p.
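
    A highly simplified stand-in for the two-step workflow described above (segment cells in bright field, then quantify fluorescence per cell) might look like the following. Real bright-field contour detection is considerably harder than a single Otsu threshold, and this is not the authors' software.

      import numpy as np
      from skimage import filters, measure

      def quantify_cells(bright_field, fluorescence):
          """Segment cells in the bright-field image and measure mean GFP signal per cell."""
          thresh = filters.threshold_otsu(bright_field)
          mask = bright_field < thresh                  # assume cells appear darker than background
          labels = measure.label(mask)
          cells = []
          for region in measure.regionprops(labels, intensity_image=fluorescence):
              if region.area > 50:                      # discard small debris (arbitrary cutoff)
                  cells.append({"label": region.label,
                                "area_px": region.area,
                                "mean_gfp": region.mean_intensity})
          return cells

      # Usage with synthetic images of matching shape:
      bf = np.random.rand(256, 256)
      gfp = np.random.rand(256, 256)
      print(len(quantify_cells(bf, gfp)))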

  12. Connecting the dots : analysis, development and applications of the SimpleX algorithm

    NASA Astrophysics Data System (ADS)

    Kruip, Chael

    2011-11-01

    The SimpleX radiative transfer method is based on the interpretation of photons as particles interacting on a natural scale: the local mean free path. In our method, light is transported along the lines of an unstructured Delaunay mesh that encodes this natural distance and represents the physical medium. The SimpleX method is fast, highly adaptive and its computational cost does not scale with the number of sources. It is therefore well-suited for cosmological applications where it is essential to cover many orders of magnitude in resolution and where millions of sources can exist within a single simulation. We describe the code, its recent developments and apply it to several relevant astrophysical problems. In particular, we perform radiative transfer calculations of cosmological reionization and of the wind-wind interaction region of the Eta Carinae binary system.

  13. Runway Exit Designs for Capacity Improvement Demonstrations. Phase 1: Algorithm Development

    NASA Technical Reports Server (NTRS)

    Trani, A. A.; Hobeika, A. G.; Sherali, H.; Kim, B. J.; Sadam, C. K.

    1990-01-01

    A description and results are presented of a study to locate and design rapid runway exits under realistic airport conditions. The study developed a PC-based computer simulation-optimization program called REDIM (runway exit design interactive model) to help future airport designers and planners to locate optimal exits under various airport conditions. The model addresses three sets of problems typically arising during runway exit design evaluations. These are the evaluation of existing runway configurations, the addition of new rapid runway turnoffs, and the design of new runway facilities. The model is highly interactive and allows a quick estimation of the expected value of runway occupancy time. Aircraft populations and airport environmental conditions are among the multiple inputs used by the model to produce a viable runway exit location and geometric design solution. The results presented suggest that reductions in runway occupancy time (ROT) can be achieved with the use of optimally tailored rapid runway exit designs for a given aircraft population. Reductions of 6 to 9 seconds are possible with the implementation of 30 m/sec variable-geometry exits.

  14. Development of a Data Reduction algorithm for Optical Wide Field Patrol

    NASA Astrophysics Data System (ADS)

    Park, Sun-youp; Keum, Kang-Hoon; Lee, Seong-Whan; Jin, Ho; Park, Yung-Sik; Hong-Suh; Jo, Jung Hyun; Moon, Hong-Kyu; Bae, Young-Ho; Choi, Jin; Choi, Young-Jun; Park, Jang-Hyun; Lee, Jung-Ho

    2013-09-01

    The detector subsystem of the Optical Wide-field Patrol (OWL) network efficiently acquires the position and time information of moving objects such as artificial satellites through its chopper system, which consists of 4 blades in front of the CCD camera. Using this system, it is possible to get more position data with the same exposure time by breaking the streaks of the moving objects into many pieces with the fast-rotating blades during sidereal tracking. At the same time, the time data from the rotating chopper can be acquired by the time tagger connected to the photo diode. To analyze the orbits of the targets detected in the image data of such a system, a sequential procedure was developed: determining the positions of the separated streak lines, calculating the World Coordinate System (WCS) solution to transform those positions into equatorial coordinates, and finally combining the time log records from the time tagger with the transformed position data. We introduce this procedure and the preliminary results of applying it to the test observation images.
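
    The coordinate-transformation step of the procedure can be sketched with astropy, assuming the frame already carries a valid WCS plate solution; the file name and streak positions below are placeholders, not data from the OWL system.

      from astropy.io import fits
      from astropy.wcs import WCS

      # Convert streak-segment pixel positions to equatorial coordinates using the
      # frame's WCS solution; these sky positions can then be matched to the chopper time log.
      header = fits.getheader("owl_frame.fits")        # placeholder file with a WCS solution
      wcs = WCS(header)

      streak_pixels = [(512.3, 498.7), (530.1, 471.2)] # (x, y) centroids of streak segments (placeholders)
      sky_coords = wcs.pixel_to_world([x for x, _ in streak_pixels],
                                      [y for _, y in streak_pixels])
      for (x, y), coord in zip(streak_pixels, sky_coords):
          print(f"({x:.1f}, {y:.1f}) -> RA={coord.ra.deg:.5f} deg, Dec={coord.dec.deg:.5f} deg")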

  15. Methane emissions from tropical wetlands in LPX: Algorithm development and validation using atmospheric measurements

    NASA Astrophysics Data System (ADS)

    Houweling, S.; Ringeval, B.; Basu, A.; Van Beek, L. P.; Van Bodegom, P.; Spahni, R.; Gatti, L.; Gloor, M.; Roeckmann, T.

    2013-12-01

    Tropical wetlands are an important and highly uncertain term in the global budget of methane. Unlike wetlands in higher latitudes, which are dominated by waterlogged peatlands, tropical wetlands consist primarily of inundated river floodplains responding seasonally to variations in river discharge. Despite the fact that the hydrology of these systems is obviously very different, process models used for estimating methane emissions from wetlands commonly lack a dedicated parameterization for the tropics. This study is a first attempt to develop such a parameterization for use in the global dynamical vegetation model LPX. The required floodplain extents and water depth are calculated offline using the global hydrological model PCR-GLOBWB, which includes a sophisticated river routing scheme. LPX itself has been extended with a dedicated floodplain land unit and flood-tolerant PFTs. The simulated species competition and productivity have been verified using GLC2000 and MODIS, pointing to directions for further model improvement regarding vegetation dynamics and hydrology. LPX-simulated methane fluxes have been compared with available in situ measurements from tropical America. Finally, estimates for the Amazon basin have been implemented in the TM5 atmospheric transport model and compared with aircraft-measured vertical profiles. The first results that will be presented demonstrate that, despite the limited availability of measurements, useful constraints on the magnitude and seasonality of Amazonian methane emissions can be derived.

  16. Design and development of a new micro-beam treatment planning system: effectiveness of algorithms of optimization and dose calculations and potential of micro-beam treatment.

    PubMed

    Tachibana, Hidenobu; Kojima, Hiroyuki; Yusa, Noritaka; Miyajima, Satoshi; Tsuda, Akihisa; Yamashita, Takashi

    2012-07-01

    A new treatment planning system (TPS) was designed and developed for a new treatment system, which consisted of a micro-beam-enabled linac with robotics and a real-time tracking system. We also evaluated the effectiveness of the implemented algorithms of optimization and dose calculation in the TPS for the new treatment system. In the TPS, the optimization procedure consisted of the pseudo Beam's-Eye-View method for finding the optimized beam directions and the steepest-descent method for determination of beam intensities. We used the superposition-/convolution-based (SC-based) algorithm and Monte Carlo-based (MC-based) algorithm to calculate dose distributions using CT image data sets. In the SC-based algorithm, dose density scaling was applied for the calculation of inhomogeneity corrections. The MC-based algorithm was implemented with the Geant4 toolkit and a phase-based approach using network-parallel computing. From the evaluation of the TPS, the system can optimize the direction and intensity of individual beams. The accuracy of the dose calculated by the SC-based algorithm was better than 1% on average with a calculation time of 15 s for one beam. However, the MC-based algorithm needed 72 min for one beam using the phase-based approach, even though network-parallel computing reduced the cost of multiple-beam calculations and provided an 18.4-fold speedup. The SC-based algorithm could be practically acceptable for the dose calculation in terms of accuracy and computation time. Additionally, we have found a dosimetric advantage of proton Bragg peak-like dose distribution in micro-beam treatment. PMID:22544809

  17. A baseline algorithm for face detection and tracking in video

    NASA Astrophysics Data System (ADS)

    Manohar, Vasant; Soundararajan, Padmanabhan; Korzhova, Valentina; Boonstra, Matthew; Goldgof, Dmitry; Kasturi, Rangachar

    2007-10-01

    Establishing benchmark datasets, performance metrics and baseline algorithms have considerable research significance in gauging the progress in any application domain. These primarily allow both users and developers to compare the performance of various algorithms on a common platform. In our earlier works, we focused on developing performance metrics and establishing a substantial dataset with ground truth for object detection and tracking tasks (text and face) in two video domains -- broadcast news and meetings. In this paper, we present the results of a face detection and tracking algorithm on broadcast news videos with the objective of establishing a baseline performance for this task-domain pair. The detection algorithm uses a statistical approach that was originally developed by Viola and Jones and later extended by Lienhart. The algorithm uses a feature set that is Haar-like and a cascade of boosted decision tree classifiers as a statistical model. In this work, we used the Intel Open Source Computer Vision Library (OpenCV) implementation of the Haar face detection algorithm. The optimal values for the tunable parameters of this implementation were found through an experimental design strategy commonly used in statistical analyses of industrial processes. Tracking was accomplished as continuous detection with the detected objects in two frames mapped using a greedy algorithm based on the distances between the centroids of bounding boxes. Results on the evaluation set containing 50 sequences (~ 2.5 mins.) using the developed performance metrics show good performance of the algorithm reflecting the state-of-the-art which makes it an appropriate choice as the baseline algorithm for the problem.
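
    A minimal sketch of this baseline pipeline, combining OpenCV's Haar cascade detector with greedy centroid-distance association between frames, is given below. The parameter values and the distance threshold are generic placeholders, not the tuned values found through the experimental design mentioned above.

      import cv2
      import numpy as np

      # Haar cascade face detection plus greedy frame-to-frame association by centroid distance.
      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      def detect_faces(frame):
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
          return [(x + w / 2.0, y + h / 2.0, (x, y, w, h)) for (x, y, w, h) in boxes]

      def greedy_match(prev, curr, max_dist=60.0):
          """Pair detections in consecutive frames by smallest centroid distance."""
          pairs, used = [], set()
          for i, (px, py, _) in enumerate(prev):
              dists = [(np.hypot(px - cx, py - cy), j) for j, (cx, cy, _) in enumerate(curr)
                       if j not in used]
              if dists:
                  d, j = min(dists)
                  if d <= max_dist:
                      pairs.append((i, j))
                      used.add(j)
          return pairs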

  18. Developing and evaluating an automated appendicitis risk stratification algorithm for pediatric patients in the emergency department

    PubMed Central

    Deleger, Louise; Brodzinski, Holly; Zhai, Haijun; Li, Qi; Lingren, Todd; Kirkendall, Eric S; Alessandrini, Evaline; Solti, Imre

    2013-01-01

    Objective To evaluate a proposed natural language processing (NLP) and machine-learning based automated method to risk stratify abdominal pain patients by analyzing the content of the electronic health record (EHR). Methods We analyzed the EHRs of a random sample of 2100 pediatric emergency department (ED) patients with abdominal pain, including all with a final diagnosis of appendicitis. We developed an automated system to extract relevant elements from ED physician notes and lab values and to automatically assign a risk category for acute appendicitis (high, equivocal, or low), based on the Pediatric Appendicitis Score. We evaluated the performance of the system against a manually created gold standard (chart reviews by ED physicians) for recall, specificity, and precision. Results The system achieved an average F-measure of 0.867 (0.869 recall and 0.863 precision) for risk classification, which was comparable to physician experts. Recall/precision were 0.897/0.952 in the low-risk category, 0.855/0.886 in the high-risk category, and 0.854/0.766 in the equivocal-risk category. The information that the system required as input to achieve high F-measure was available within the first 4 h of the ED visit. Conclusions Automated appendicitis risk categorization based on EHR content, including information from clinical notes, shows comparable performance to physician chart reviewers as measured by their inter-annotator agreement and represents a promising new approach for computerized decision support to promote application of evidence-based medicine at the point of care. PMID:24130231

  19. Evaluation of Carbapenemase Screening and Confirmation Tests with Enterobacteriaceae and Development of a Practical Diagnostic Algorithm

    PubMed Central

    Maurer, Florian P.; Castelberg, Claudio; Quiblier, Chantal; Bloemberg, Guido V.

    2014-01-01

    Reliable identification of carbapenemase-producing members of the family Enterobacteriaceae is necessary to limit their spread. This study aimed to develop a diagnostic flow chart using phenotypic screening and confirmation tests that is suitable for implementation in different types of clinical laboratories. A total of 334 clinical Enterobacteriaceae isolates genetically characterized with respect to carbapenemase, extended-spectrum β-lactamase (ESBL), and AmpC genes were analyzed. A total of 142/334 isolates (42.2%) were suspected of carbapenemase production, i.e., intermediate or resistant to ertapenem (ETP) and/or meropenem (MEM) and/or imipenem (IPM) according to EUCAST clinical breakpoints (CBPs). A group of 193/334 isolates (57.8%) showing susceptibility to ETP, MEM, and IPM was considered the negative-control group in this study. CLSI and EUCAST carbapenem CBPs and the new EUCAST MEM screening cutoff were evaluated as screening parameters. ETP, MEM, and IPM with or without aminophenylboronic acid (APBA) or EDTA combined-disk tests (CDTs) and the Carba NP-II test were evaluated as confirmation assays. EUCAST temocillin cutoffs were evaluated for OXA-48 detection. The EUCAST MEM screening cutoff (<25 mm) showed a sensitivity of 100%. The ETP APBA CDT on Mueller-Hinton agar containing cloxacillin (MH-CLX) displayed 100% sensitivity and specificity for class A carbapenemase confirmation. ETP and MEM EDTA CDTs showed 100% sensitivity and specificity for class B carbapenemases. Temocillin zone diameters/MIC testing on MH-CLX was highly specific for OXA-48 producers. The overall sensitivity, specificity, positive predictive value, and negative predictive value of the Carba NP-II test were 78.9, 100, 100, and 98.7%, respectively. Combining the EUCAST MEM carbapenemase screening cutoff (<25 mm), ETP (or MEM), APBA, and EDTA CDTs, and temocillin disk diffusion on MH-CLX promises excellent performance for carbapenemase detection. PMID:25355766
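
    The combined flow suggested by these results can be sketched as a simple decision function. The field names are illustrative, and any breakpoints beyond the <25 mm MEM screening cutoff quoted above should be taken from the EUCAST/CLSI documents rather than from this sketch.

      # Illustrative decision flow: screen by MEM zone diameter, then use combined-disk
      # and temocillin results to suggest a carbapenemase class. Not a validated protocol.
      def classify_isolate(mem_zone_mm, apba_cdt_positive, edta_cdt_positive,
                           temocillin_resistant_on_mh_clx):
          if mem_zone_mm >= 25:
              return "no carbapenemase suspected"
          if apba_cdt_positive:
              return "class A carbapenemase suspected"
          if edta_cdt_positive:
              return "class B metallo-beta-lactamase suspected"
          if temocillin_resistant_on_mh_clx:
              return "OXA-48-like carbapenemase suspected"
          return "carbapenemase not confirmed; consider molecular testing"

      print(classify_isolate(18, False, True, False))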

  20. Development of a Screening Algorithm for Alzheimer's Disease Using Categorical Verbal Fluency

    PubMed Central

    Jeong, Hyeon; Park, Jae Young; Kim, Tae Hui; Lee, Jung Jae; Lee, Seok Bum; Park, Joon Hyuk; Yoon, Jong Chul; Kim, Jeong Lan; Ryu, Seung-Ho; Jhoo, Jin Hyeong; Lee, Dong Young; Kim, Ki Woong

    2014-01-01

    We developed a weighted composite score of the categorical verbal fluency test (CVFT) that can more easily and widely screen Alzheimer's disease (AD) than the mini-mental status examination (MMSE). We administered the CVFT using the animal category and the MMSE to 423 community-dwelling mild probable AD patients and their age- and gender-matched cognitively normal controls. To enhance the diagnostic accuracy for AD of the CVFT, we obtained a weighted composite score from subindex scores of the CVFT using a logistic regression model: logit(case) = 1.160 + 0.474 × gender + 0.003 × age + 0.226 × education level - 0.089 × first-half score - 0.516 × switching score - 0.303 × clustering score + 0.534 × perseveration score. The area under the receiver operating characteristic curve (AUC) for AD of this composite score was 0.903 (95% CI = 0.883-0.923), and was larger than that of the age-, gender- and education-adjusted total score of the CVFT (p<0.001). In 100 bootstrapped re-samples, the composite score consistently showed better diagnostic accuracy, sensitivity and specificity for AD than the total score. Although the AUC for AD of the CVFT composite score was slightly smaller than that of the MMSE (0.930, p = 0.006), the CVFT composite score may be a good alternative to the MMSE for screening AD since it is much briefer, cheaper, and more easily applicable over phone or internet than the MMSE. PMID:24392109
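
    The composite score above can be transcribed directly into code. The coding of gender and the scales of the subindex scores are not restated in the abstract, so the following is an illustrative sketch rather than a validated screening tool.

      import math

      # Transcription of the published logistic-regression composite score quoted above.
      def cvft_logit(gender, age, education_level, first_half, switching, clustering,
                     perseveration):
          return (1.160 + 0.474 * gender + 0.003 * age + 0.226 * education_level
                  - 0.089 * first_half - 0.516 * switching - 0.303 * clustering
                  + 0.534 * perseveration)

      def cvft_probability(**kwargs):
          """Convert the logit to a probability of 'case' (AD) via the logistic function."""
          return 1.0 / (1.0 + math.exp(-cvft_logit(**kwargs)))

      # Example with assumed input coding (gender=1, age in years, scores on their native scales)
      print(cvft_probability(gender=1, age=75, education_level=6, first_half=8,
                             switching=4, clustering=2, perseveration=1))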

  1. Evaluation of carbapenemase screening and confirmation tests with Enterobacteriaceae and development of a practical diagnostic algorithm.

    PubMed

    Maurer, Florian P; Castelberg, Claudio; Quiblier, Chantal; Bloemberg, Guido V; Hombach, Michael

    2015-01-01

    Reliable identification of carbapenemase-producing members of the family Enterobacteriaceae is necessary to limit their spread. This study aimed to develop a diagnostic flow chart using phenotypic screening and confirmation tests that is suitable for implementation in different types of clinical laboratories. A total of 334 clinical Enterobacteriaceae isolates genetically characterized with respect to carbapenemase, extended-spectrum β-lactamase (ESBL), and AmpC genes were analyzed. A total of 142/334 isolates (42.2%) were suspected of carbapenemase production, i.e., intermediate or resistant to ertapenem (ETP) and/or meropenem (MEM) and/or imipenem (IPM) according to EUCAST clinical breakpoints (CBPs). A group of 193/334 isolates (57.8%) showing susceptibility to ETP, MEM, and IPM was considered the negative-control group in this study. CLSI and EUCAST carbapenem CBPs and the new EUCAST MEM screening cutoff were evaluated as screening parameters. ETP, MEM, and IPM with or without aminophenylboronic acid (APBA) or EDTA combined-disk tests (CDTs) and the Carba NP-II test were evaluated as confirmation assays. EUCAST temocillin cutoffs were evaluated for OXA-48 detection. The EUCAST MEM screening cutoff (<25 mm) showed a sensitivity of 100%. The ETP APBA CDT on Mueller-Hinton agar containing cloxacillin (MH-CLX) displayed 100% sensitivity and specificity for class A carbapenemase confirmation. ETP and MEM EDTA CDTs showed 100% sensitivity and specificity for class B carbapenemases. Temocillin zone diameters/MIC testing on MH-CLX was highly specific for OXA-48 producers. The overall sensitivity, specificity, positive predictive value, and negative predictive value of the Carba NP-II test were 78.9, 100, 100, and 98.7%, respectively. Combining the EUCAST MEM carbapenemase screening cutoff (<25 mm), ETP (or MEM), APBA, and EDTA CDTs, and temocillin disk diffusion on MH-CLX promises excellent performance for carbapenemase detection. PMID:25355766

  2. Development, refinement, and testing of a short term solar flare prediction algorithm

    NASA Technical Reports Server (NTRS)

    Smith, Jesse B., Jr.

    1993-01-01

    During the period included in this report, the expenditure of time and effort, and progress toward performance of the tasks and accomplishing the goals set forth in the two year research grant proposal, consisted primarily of calibration and analysis of selected data sets. The heliographic limits of 30 degrees from central meridian were continued. As previously reported, all analyses are interactive and are performed by the Principal Investigator. It should also be noted that the analysis time involved by the Principal Investigator during this reporting period was limited, partially due to illness and partially resulting from other uncontrollable factors. The calibration technique (as developed by MSFC solar scientists) incorporates sets of constants which vary according to the wave length of the observation data set. One input constant is then varied interactively to correct for observing conditions, etc., to result in a maximum magnetic field strength (in the calibrated data), based on a separate analysis. There is some uncertainty in the methodology and in the selection of variables needed to yield the most self-consistent results for variable maximum field strengths and for variable observing/atmospheric conditions. Several data sets were analyzed using differing constant sets, and separate analyses to differing maximum field strength - toward standardizing methodology and technique for the most self-consistent results for the large number of cases. It may be necessary to recalibrate some of the analyses, but the sc analyses are retained on the optical disks and can still be used with recalibration where necessary. Only the extracted parameters will be changed.

  3. 3D–2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation

    PubMed Central

    Otake, Yoshito; Wang, Adam S; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Aygun, Nafi; Lo, Sheng-fu L; Wolinsky, Jean-Paul; Gokaslan, Ziya L; Siewerdsen, Jeffrey H

    2015-01-01

    An image-based 3D–2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely ‘LevelCheck’) to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior–anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE = 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687 701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of the surgical
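
    A generic illustration of a gradient-correlation similarity metric of the kind used by this method (normalized cross-correlation of image gradients, averaged over the two directions) is sketched below; it is not the authors' GPU implementation.

      import numpy as np

      def ncc(a, b):
          """Normalized cross-correlation of two arrays."""
          a = a - a.mean()
          b = b - b.mean()
          denom = np.sqrt((a * a).sum() * (b * b).sum())
          return (a * b).sum() / denom if denom > 0 else 0.0

      def gradient_correlation(fixed, moving):
          """Average NCC of the row- and column-direction image gradients."""
          gy_f, gx_f = np.gradient(fixed)
          gy_m, gx_m = np.gradient(moving)
          return 0.5 * (ncc(gx_f, gx_m) + ncc(gy_f, gy_m))

      # Identical images give a score of 1.0
      img = np.random.rand(128, 128)
      print(gradient_correlation(img, img))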

  4. Development, Implementation and Evaluation of Segmentation Algorithms for the Automatic Classification of Cervical Cells

    NASA Astrophysics Data System (ADS)

    Macaulay, Calum Eric

    Cancer of the uterine cervix is one of the most common cancers in women. An effective screening program for pre-cancerous and cancerous lesions can dramatically reduce the mortality rate for this disease. In British Columbia where such a screening program has been in place for some time, 2500 to 3000 slides of cervical smears need to be examined daily. More than 35 years ago, it was recognized that an automated pre-screening system could greatly assist people in this task. Such a system would need to find and recognize stained cells, segment the images of these cells into nucleus and cytoplasm, numerically describe the characteristics of the cells, and use these features to discriminate between normal and abnormal cells. The thrust of this work was (1) to research and develop new segmentation methods and compare their performance to those in the literature, (2) to determine dependence of the numerical cell descriptors on the segmentation method used, (3) to determine the dependence of cell classification accuracy on the segmentation used, and (4) to test the hypothesis that using numerical cell descriptors one can correctly classify the cells. The segmentation accuracies of 32 different segmentation procedures were examined. It was found that the best nuclear segmentation procedure was able to correctly segment 98% of the nuclei of a 1000 and a 3680 image database. Similarly the best cytoplasmic segmentation procedure was found to correctly segment 98.5% of the cytoplasm of the same 1000 image database. Sixty-seven different numerical cell descriptors (features) were calculated for every segmented cell. On a database of 800 classified cervical cells these features when used in a linear discriminant function analysis could correctly classify 98.7% of the normal cells and 97.0% of the abnormal cells. While some features were found to vary a great deal between segmentation procedures, the classification accuracy of groups of features was found to be independent of the

  5. 3D-2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Aygun, Nafi; Lo, Sheng-fu L.; Wolinsky, Jean-Paul; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2015-03-01

    An image-based 3D-2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely ‘LevelCheck’) to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior-anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE = 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687 701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of

  6. Factors associated with late diagnosis of HIV infection and missed opportunities for earlier testing.

    PubMed

    Gullón, Alejandra; Verdejo, José; de Miguel, Rosa; Gómez, Ana; Sanz, Jesús

    2016-10-01

    Late diagnosis (LD) of human immunodeficiency virus (HIV) infection continues to be a significant problem that increases disease burden both for patients and for the public health system. Guidelines have been updated in order to facilitate earlier HIV diagnosis, introducing "indicator condition-guided HIV testing". In this study, we analysed the frequency of LD and associated risk factors. We retrospectively identified those cases that could be considered missed opportunities for an earlier diagnosis. All patients newly diagnosed with HIV infection who attended Hospital La Princesa, Madrid (Spain) between 2007 and 2014 were analysed. We collected epidemiological, clinical and immunological data. We also reviewed electronic medical records from the year before the HIV diagnosis to search for medical consultations due to clinical indicators. HIV infection was diagnosed in 354 patients. The median CD4 count at presentation was 352 cells/mm(3). Overall, 158 patients (50%) met the definition of LD, and 97 (30.7%) the diagnosis of advanced disease. LD was associated with older age and was more frequent amongst immigrants. Heterosexual relations and injection drug use were more likely to be the reasons for LD than relations between men who have sex with men. During the year preceding the diagnosis, 46.6% of the patients had sought medical advice owing to the presence of clinical indicators that should have led to HIV testing. Of those, 24 cases (14.5%) were classified as missed opportunities for earlier HIV diagnosis because testing was not performed. According to these results, all health workers should pursue early HIV diagnosis through the proper implementation of HIV testing guidelines. Such an approach would prove directly beneficial to the patient and indirectly beneficial to the general population through the reduction in the risk of transmission. PMID:27144427

  7. Control Algorithms and Simulated Environment Developed and Tested for Multiagent Robotics for Autonomous Inspection of Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Wong, Edmond

    2005-01-01

    The NASA Glenn Research Center and academic partners are developing advanced multiagent robotic control algorithms that will enable the autonomous inspection and repair of future propulsion systems. In this application, on-wing engine inspections will be performed autonomously by large groups of cooperative miniature robots that will traverse the surfaces of engine components to search for damage. The eventual goal is to replace manual engine inspections that require expensive and time-consuming full engine teardowns and allow the early detection of problems that would otherwise result in catastrophic component failures. As a preliminary step toward the long-term realization of a practical working system, researchers are developing the technology to implement a proof-of-concept testbed demonstration. In a multiagent system, the individual agents are generally programmed with relatively simple controllers that define a limited set of behaviors. However, these behaviors are designed in such a way that, through the localized interaction among individual agents and between the agents and the environment, they result in self-organized, emergent group behavior that can solve a given complex problem, such as cooperative inspection. One advantage to the multiagent approach is that it allows for robustness and fault tolerance through redundancy in task handling. In addition, the relatively simple agent controllers demand minimal computational capability, which in turn allows for greater miniaturization of the robotic agents.

  8. To Achieve an Earlier IFN-γ Response Is Not Sufficient to Control Mycobacterium tuberculosis Infection in Mice

    PubMed Central

    Marzo, Elena; Barril, Carles; Vegué, Marina; Diaz, Jorge; Valls, Joaquim; López, Daniel; Cardona, Pere-Joan

    2014-01-01

    The temporo-spatial relationship between the three organs (lung, spleen and lymph node) involved during the initial stages of Mycobacterium tuberculosis infection has been poorly studied. As such, we performed an experimental study to evaluate the bacillary load in each organ after aerosol or intravenous infection and developed a mathematical approach using the data obtained in order to extract conclusions. The results showed that higher bacillary doses result in an earlier IFN-γ response, that a certain bacillary load (BL) needs to be reached to trigger the IFN-γ response, and that control of the BL is not immediate after onset of the IFN-γ response, which might be a consequence of the spatial dimension. This study may have an important impact when it comes to designing new vaccine candidates as it suggests that triggering an earlier IFN-γ response might not guarantee good infection control, and therefore that additional properties should be considered for these candidates. PMID:24959669

  9. Continuous measurements of water surface height and width along a 6.5km river reach for discharge algorithm development

    NASA Astrophysics Data System (ADS)

    Tuozzolo, S.; Durand, M. T.; Pavelsky, T.; Pentecost, J.

    2015-12-01

    The upcoming Surface Water and Ocean Topography (SWOT) satellite will provide measurements of river width and water surface elevation and slope along continuous swaths of world rivers. Understanding water surface slope and width dynamics in river reaches is important for both developing and validating discharge algorithms to be used on future SWOT data. We collected water surface elevation and river width data along a 6.5 km stretch of the Olentangy River in Columbus, Ohio from October to December 2014. Continuous measurements of water surface height were supplemented with periodic river width measurements at twenty sites along the study reach. The water surface slope of the entire reach ranged from 41.58 cm/km at baseflow to 45.31 cm/km after a storm event. The study reach was also broken into sub-reaches roughly 1 km in length to study smaller scale slope dynamics. The furthest upstream sub-reaches are characterized by free-flowing riffle-pool sequences, while the furthest downstream sub-reaches were directly affected by two low-head dams. In the sub-reaches immediately upstream of each dam, baseflow slope is as low as 2 cm/km, while the furthest upstream free-flowing sub-reach has a baseflow slope of 100 cm/km. During high flow events the backwater effect of the dams was observed to propagate upstream: sub-reaches impounded by the dams had increased water surface slopes, while free-flowing sub-reaches had decreased water surface slopes. During the largest observed flow event, a stage change of 0.40 m affected sub-reach slopes by as much as 30 cm/km. Further analysis will examine height-width relationships within the study reach and relate cross-sectional flow area to river stage. These relationships can be used in conjunction with slope data to estimate discharge using a modified Manning's equation, and are a core component of discharge algorithms being developed for the SWOT mission.
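
    The discharge relation referred to above, Manning's equation Q = (1/n) A R^(2/3) S^(1/2), can be sketched with a rectangular-channel approximation; the roughness coefficient and channel geometry below are illustrative, not values from the Olentangy study.

      # Manning's equation with a rectangular cross-section approximation (illustrative values).
      def manning_discharge(width_m, depth_m, slope, n=0.035):
          """Estimate discharge (m^3/s) from width, mean depth, water-surface slope, and roughness n."""
          area = width_m * depth_m                       # cross-sectional flow area
          wetted_perimeter = width_m + 2.0 * depth_m
          hydraulic_radius = area / wetted_perimeter
          return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

      # Example: a 30 m wide, 1.2 m deep reach with a slope of 45 cm/km (4.5e-4)
      print(manning_discharge(30.0, 1.2, 4.5e-4))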

  10. Light pollution is associated with earlier tree budburst across the United Kingdom.

    PubMed

    Ffrench-Constant, Richard H; Somers-Yeates, Robin; Bennie, Jonathan; Economou, Theodoros; Hodgson, David; Spalding, Adrian; McGregor, Peter K

    2016-06-29

    The ecological impact of night-time lighting is of concern because of its well-demonstrated effects on animal behaviour. However, the potential of light pollution to change plant phenology and its corresponding knock-on effects on associated herbivores are less clear. Here, we test if artificial lighting can advance the timing of budburst in trees. We took a UK-wide 13 year dataset of spatially referenced budburst data from four deciduous tree species and matched it with both satellite imagery of night-time lighting and average spring temperature. We find that budburst occurs up to 7.5 days earlier in brighter areas, with the relationship being more pronounced for later-budding species. Excluding large urban areas from the analysis showed an even more pronounced advance of budburst, confirming that the urban 'heat-island' effect is not the sole cause of earlier urban budburst. Similarly, the advance in budburst across all sites is too large to be explained by increases in temperature alone. This dramatic advance of budburst illustrates the need for further experimental investigation into the impact of artificial night-time lighting on plant phenology and subsequent species interactions. As light pollution is a growing global phenomenon, the findings of this study are likely to be applicable to a wide range of species interactions across the world. PMID:27358370

  11. Light pollution is associated with earlier tree budburst across the United Kingdom

    PubMed Central

    Somers-Yeates, Robin; Bennie, Jonathan; Hodgson, David; Spalding, Adrian

    2016-01-01

    The ecological impact of night-time lighting is of concern because of its well-demonstrated effects on animal behaviour. However, the potential of light pollution to change plant phenology and its corresponding knock-on effects on associated herbivores are less clear. Here, we test if artificial lighting can advance the timing of budburst in trees. We took a UK-wide 13 year dataset of spatially referenced budburst data from four deciduous tree species and matched it with both satellite imagery of night-time lighting and average spring temperature. We find that budburst occurs up to 7.5 days earlier in brighter areas, with the relationship being more pronounced for later-budding species. Excluding large urban areas from the analysis showed an even more pronounced advance of budburst, confirming that the urban ‘heat-island’ effect is not the sole cause of earlier urban budburst. Similarly, the advance in budburst across all sites is too large to be explained by increases in temperature alone. This dramatic advance of budburst illustrates the need for further experimental investigation into the impact of artificial night-time lighting on plant phenology and subsequent species interactions. As light pollution is a growing global phenomenon, the findings of this study are likely to be applicable to a wide range of species interactions across the world. PMID:27358370

  12. Economic Costs Avoided by Diagnosing Melanoma Six Months Earlier Justify >100 Benign Biopsies.

    PubMed

    Aires, Daniel J; Wick, Jo; Shaath, Tarek S; Rajpara, Anand N; Patel, Vikas; Badawi, Ahmed H; Li, Cicy; Fraga, Garth R; Doolittle, Gary; Liu, Deede Y

    2016-05-01

    New melanoma drugs bring enormous benefits but do so at significant costs. Because melanoma grows deeper and deadlier over time, deeper lesions are costlier due to increased sentinel lymph node biopsy, chemotherapy, and disease-associated income loss. Prior studies have justified pigmented lesion biopsies on a "value per life" basis; by contrast we sought to assess how many biopsies are justified per melanoma found on a purely economic basis. We modeled how melanomas in the United States would behave if diagnosis were delayed by 6 months, eg, not biopsied, only observed until the next surveillance visit. Economic loss from delayed biopsy is the obverse of economic benefit of performing biopsy earlier. Growth rates were based on Liu et al. The results of this study can be applied to all patients presenting to dermatologists with pigmented skin lesions suspicious for melanoma. In-situ melanomas were excluded because no studies to date have modeled growth rates analogous to those for invasive melanoma. We assume conservatively that all melanomas not biopsied initially will be biopsied and treated 6 months later. Major modeled costs are (1) increased sentinel lymph node biopsy, (2) increased chemotherapy for metastatic lesions using increased 5-yr death as metastasis marker, and (3) income loss per melanoma death at $413,370 as previously published. Costs avoided by diagnosing melanoma earlier justify 170 biopsies per melanoma found. Efforts to penalize "unnecessary" biopsies may be economically counterproductive.

    J Drugs Dermatol. 2016;15(5):527-532. PMID:27168261

  13. Empirical corroboration of an earlier theoretical resolution to the UV paradox of insect polarized skylight orientation.

    PubMed

    Wang, Xin; Gao, Jun; Fan, Zhiguo

    2014-02-01

    It is surprising that many insect species use only the ultraviolet (UV) component of the polarized skylight for orientation and navigation purposes, while both the intensity and the degree of polarization of light from the clear sky are lower in the UV than at longer (blue, green, red) wavelengths. Why have these insects chosen the UV part of the polarized skylight? This strange phenomenon is called the "UV-sky-pol paradox". Although several earlier speculations tried to resolve this paradox, they did so without any quantitative data. A theoretical and computational model has convincingly explained why it is advantageous for certain animals to detect celestial polarization in the UV. We performed a sky-polarimetric approach and built a polarized skylight sensor that models the processing of polarization signals by insect photoreceptors. Using this model sensor, we carried out measurements under clear and cloudy sky conditions. Our results showed that light from the cloudy sky has a maximal degree of polarization in the UV. Furthermore, under both clear and cloudy skies the angle of polarization of skylight can be detected with higher accuracy. By this, we corroborated empirically the soundness of the earlier computational resolution of the UV-sky-pol paradox. PMID:24402685

  14. Empirical corroboration of an earlier theoretical resolution to the UV paradox of insect polarized skylight orientation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Gao, Jun; Fan, Zhiguo

    2014-02-01

    It is surprising that many insect species use only the ultraviolet (UV) component of the polarized skylight for orientation and navigation purposes, while both the intensity and the degree of polarization of light from the clear sky are lower in the UV than at longer (blue, green, red) wavelengths. Why have these insects chosen the UV part of the polarized skylight? This strange phenomenon is called the "UV-sky-pol paradox". Although several earlier speculations tried to resolve this paradox, they did so without any quantitative data. A theoretical and computational model has convincingly explained why it is advantageous for certain animals to detect celestial polarization in the UV. We performed a sky-polarimetric approach and built a polarized skylight sensor that models the processing of polarization signals by insect photoreceptors. Using this model sensor, we carried out measurements under clear and cloudy sky conditions. Our results showed that light from the cloudy sky has a maximal degree of polarization in the UV. Furthermore, under both clear and cloudy skies the angle of polarization of skylight can be detected with higher accuracy. By this, we corroborated empirically the soundness of the earlier computational resolution of the UV-sky-pol paradox.

  15. Association of Family History of Epilepsy with Earlier Age Onset of Juvenile Myoclonic Epilepsy